kgreenewald committed · verified
Commit 4ea02e8 · Parent(s): 7fe4272

Update README.md

Files changed (1)
  1. README.md +18 -1
README.md CHANGED
@@ -43,7 +43,24 @@ with the added ability to generate certainty scores for answers to questions whe
  **Certainty score definition** The model will respond with a certainty percentage, quantized to 10 possible values (i.e. 5%, 15%, 25%, ..., 95%).
  This percentage is *calibrated* in the following sense: given a set of answers assigned a certainty score of X%, approximately X% of these answers should be correct. See the eval experiment below for out-of-distribution verification of this behavior.
 
- **Important note** Certainty is inherently an intrinsic property of a model and its abilities. **Granite Uncertainty 3.0 8b** is not intended to predict the certainty of responses generated by any other models besides itself or [ibm-granite/granite-3.0-8b-instruct](https://huggingface.co/ibm-granite/granite-3.0-8b-instruct).
+ **Certainty score interpretation** Certainty scores calibrated as defined above may at times seem biased towards moderate values, for the following reasons. Firstly, as humans we tend to be overconfident in
+ our evaluation of what we know and don't know; a calibrated model, in contrast, is less likely to output very high or very low confidence scores, as these imply certainty of correctness or incorrectness.
+ You might see very low confidence scores on answers where the model's response was something to the effect of "I don't know", which is easy to evaluate as not
+ being the correct answer to the question (though it is the appropriate one). Secondly, remember that the model
+ is evaluating itself: correctness or incorrectness that may be obvious to us or to larger models may be less obvious to an 8b model. Finally, teaching a model every fact it knows
+ and doesn't know is not possible, hence it must generalize to questions of wildly varying difficulty (some of which may be trick questions!) and to settings where its outputs have not been judged.
+ Intuitively, it does this by extrapolating from related questions
+ it was evaluated on during training; this is an inherently inexact process and leads to some hedging.
+
+ **Possible downstream use cases (not implemented)**
+ * Human usage: Certainty scores give human users an indication of when to trust answers from the model (which should be augmented by their own knowledge).
+ * Model routing/guards: If the model has low certainty (below a chosen threshold), it may be worth sending the request to a larger, more capable model, or simply choosing not to show the response to the user.
+ * RAG: **Granite Uncertainty 3.0 8b** is calibrated on document-based question answering datasets, hence it can be applied to giving certainty scores for answers created using RAG.
+   * This certainty will be a prediction of overall correctness based on both the documents given and the model's own knowledge (e.g. if the model is correct but the answer is not in the documents, the certainty should still be high).
+
+ **Important note** Certainty is inherently an intrinsic property of a model and its abilities. **Granite Uncertainty 3.0 8b** is not intended to predict the certainty of responses generated by any other models besides itself or [ibm-granite/granite-3.0-8b-instruct](https://huggingface.co/ibm-granite/granite-3.0-8b-instruct).
+ Additionally, certainty scores are *distributional* quantities, and so will do well on realistic questions in aggregate, but may in principle produce surprising scores on individual
+ red-teamed examples.
 
  **Usage steps** Answering a question and obtaining a certainty score proceeds as follows.
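
As a rough, non-authoritative illustration of the calibration definition quoted above, the sketch below bins answers by the certainty score the model reported and compares each bin's reported certainty with its empirical accuracy. The `(score, correct)` pairs are made-up placeholders, not real evaluation results.

```python
# Minimal sketch of the calibration property described above (illustrative data only):
# among answers reported at certainty X%, roughly X% should turn out to be correct.
from collections import defaultdict

# Placeholder (reported_certainty, was_correct) pairs -- not real results.
results = [
    (0.85, True), (0.85, True), (0.85, True), (0.85, False),
    (0.45, True), (0.45, False), (0.45, False),
    (0.15, False), (0.15, False), (0.15, True),
]

bins = defaultdict(list)
for score, correct in results:
    bins[score].append(correct)

for score in sorted(bins):
    outcomes = bins[score]
    accuracy = sum(outcomes) / len(outcomes)
    print(f"reported certainty {score:.0%} -> empirical accuracy {accuracy:.0%}")
```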
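The model routing/guards use case in the added list could look roughly like the following sketch. The threshold value, the regular expression for pulling the quantized percentage out of the certainty response, and the `small_model`/`large_model` callables are all assumptions made for illustration; they are not part of the model card.

```python
import re

CERTAINTY_THRESHOLD = 0.65  # illustrative cutoff; tune per application

def route(question, small_model, large_model):
    """Answer with the small model, escalating to a larger model on low certainty."""
    answer, certainty_text = small_model(question)      # e.g. ("... the answer ...", "85%")
    match = re.search(r"(\d{1,2})%", certainty_text)    # quantized scores: 5%, 15%, ..., 95%
    certainty = int(match.group(1)) / 100 if match else 0.0
    if certainty >= CERTAINTY_THRESHOLD:
        return answer
    return large_model(question)                        # low certainty: defer to larger model
```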