kgreenewald committed
Commit a817b96 · verified · 1 Parent(s): a124b8e

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -43,13 +43,15 @@ This percentage is *calibrated* in the following sense: given a set of answers a
 
 Answering a question and obtaining a certainty score proceeds as follows.
 
-1. Prompt the model with a system and/or user prompt.
+1. Prompt the model with a system prompt followed by the user prompt. The model is calibrated with the system prompt below.
 2. Use the model to generate a response as normal (via the `assistant` role).
 3. Prompt the model to generate a certainty score by generating in the `certainty` role (by appending `<|start_of_role|>certainty<|end_of_role|>` and generating).
 4. The model will respond with a certainty percentage, quantized with steps of 10% (i.e. 5%, 15%, 25%,...95%).
 
 When not given the certainty generation prompt `<|start_of_role|>certainty<|end_of_role|>`, the model's behavior should mimic that of the base model [ibm-granite/granite-3.0-8b-instruct](https://huggingface.co/ibm-granite/granite-3.0-8b-instruct).
 
+**System prompt** The model was calibrated with the system prompt `You are an AI language model developed by IBM Research. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.`
+It is recommended to prepend this to any other desired system prompts.
 
 
 
@@ -73,7 +75,7 @@ tokenizer = AutoTokenizer.from_pretrained(BASE_NAME,padding_side='left',trust_re
 model_base = AutoModelForCausalLM.from_pretrained(BASE_NAME,device_map="auto")
 model_UQ = PeftModel.from_pretrained(model_base, LORA_NAME)
 
-system_prompt = "You are an AI language model developed by IBM Research. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior." #NOTE: this is generic, it can be changed
+system_prompt = "You are an AI language model developed by IBM Research. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
 question = "What is IBM?"
 print("Question:" + question)
 question_chat = [
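
For readers who want to see the four steps from the updated list strung together, the sketch below loads the base model plus the LoRA adapter, answers with the calibration system prompt, then appends `<|start_of_role|>certainty<|end_of_role|>` and generates the score. This is a minimal sketch, not the repo's official snippet: `LORA_NAME` is a hypothetical placeholder for this adapter's Hub id, the decoding settings are arbitrary, and the exact special-token handling of the Granite chat template may differ from the full example further down the README.

```python
# Minimal sketch of the two-pass flow (answer, then certainty), under the
# assumptions stated above; not the repository's official example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_NAME = "ibm-granite/granite-3.0-8b-instruct"  # base model named in the README
LORA_NAME = "<this-certainty-lora-repo-id>"        # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(BASE_NAME, padding_side="left", trust_remote_code=True)
model_base = AutoModelForCausalLM.from_pretrained(BASE_NAME, device_map="auto")
model_UQ = PeftModel.from_pretrained(model_base, LORA_NAME)

# Step 1: calibration system prompt (from the README) followed by the user question.
system_prompt = (
    "You are an AI language model developed by IBM Research. You are a cautious assistant. "
    "You carefully follow instructions. You are helpful and harmless and you follow ethical "
    "guidelines and promote positive behavior."
)
chat = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is IBM?"},
]

# Step 2: generate the answer in the assistant role.
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model_base.device)
with torch.no_grad():
    answer_ids = model_UQ.generate(**inputs, max_new_tokens=256)
print("Answer:", tokenizer.decode(answer_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))

# Steps 3-4: append the certainty role header to the full transcript and generate
# the quantized certainty percentage (5%, 15%, ..., 95%).
transcript = tokenizer.decode(answer_ids[0], skip_special_tokens=False)
certainty_prompt = transcript + "<|start_of_role|>certainty<|end_of_role|>"
cert_inputs = tokenizer(certainty_prompt, return_tensors="pt", add_special_tokens=False).to(model_base.device)
with torch.no_grad():
    cert_ids = model_UQ.generate(**cert_inputs, max_new_tokens=4)
print("Certainty:", tokenizer.decode(cert_ids[0, cert_inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Reusing the decoded transcript for the second pass keeps the certainty score conditioned on exactly the answer the model just produced, which is what the calibration described in the README assumes.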