MoritzLaurer (HF staff) committed
Commit c870d00 · verified · 1 parent: 63cfe59

Upload prompt template grounding_nli_json.yaml

Files changed (1): grounding_nli_json.yaml (+77, -0)
grounding_nli_json.yaml ADDED
@@ -0,0 +1,77 @@
+ prompt:
+   template: |-
+     You are a helpful and harmless AI assistant. You will be provided with a textual context and a model-generated response.
+     Your task is to analyze the response sentence by sentence and classify each sentence according to its relationship with the provided context.
+
+     **Instructions:**
+
+     1. **Decompose the response into individual sentences.**
+     2. **For each sentence, assign one of the following labels:**
+         * **`supported`**: The sentence is entailed by the given context. Provide a supporting excerpt from the context. The supporting excerpt must *fully* entail the sentence. If you need to cite multiple supporting excerpts, simply concatenate them.
+         * **`unsupported`**: The sentence is not entailed by the given context. No excerpt is needed for this label.
+         * **`contradictory`**: The sentence is falsified by the given context. Provide a contradicting excerpt from the context.
+         * **`no_rad`**: The sentence does not require factual attribution (e.g., opinions, greetings, questions, disclaimers). No excerpt is needed for this label.
+     3. **For each label, provide a short rationale explaining your decision.** The rationale should be separate from the excerpt.
+     4. **Be very strict with your `supported` and `contradictory` decisions.** Unless you can find straightforward, indisputable evidence excerpts *in the context* that a sentence is `supported` or `contradictory`, consider it `unsupported`. You should not employ world knowledge unless it is truly trivial.
+
+     **Input Format:**
+
+     The input will consist of two parts, clearly separated:
+
+     * **Context:** The textual context used to generate the response.
+     * **Response:** The model-generated response to be analyzed.
+
+     **Output Format:**
+
+     For each sentence in the response, output a JSON object with the following fields:
+
+     * `"sentence"`: The sentence being analyzed.
+     * `"label"`: One of `supported`, `unsupported`, `contradictory`, or `no_rad`.
+     * `"rationale"`: A brief explanation for the assigned label.
+     * `"excerpt"`: A relevant excerpt from the context. Only required for `supported` and `contradictory` labels.
+
+     Output each JSON object on a new line.
+
+     **Example:**
+
+     **Input:**
+
+     ```
+     Context: Apples are red fruits. Bananas are yellow fruits.
+
+     Response: Apples are red. Bananas are green. Bananas are cheaper than apples. Enjoy your fruit!
+     ```
+
+     **Output:**
+
+     {"sentence": "Apples are red.", "label": "supported", "rationale": "The context explicitly states that apples are red.", "excerpt": "Apples are red fruits."}
+     {"sentence": "Bananas are green.", "label": "contradictory", "rationale": "The context states that bananas are yellow, not green.", "excerpt": "Bananas are yellow fruits."}
+     {"sentence": "Bananas are cheaper than apples.", "label": "unsupported", "rationale": "The context does not mention the price of bananas or apples.", "excerpt": null}
+     {"sentence": "Enjoy your fruit!", "label": "no_rad", "rationale": "This is a general expression and does not require factual attribution.", "excerpt": null}
+
+     **Now, please analyze the following context and response:**
+
+     **User Query:**
+     {{user_request}}
+
+     **Context:**
+     {{context_document}}
+
+     **Response:**
+     {{response}}
+   template_variables:
+     - user_request
+     - context_document
+     - response
+   metadata:
+     description: "An evaluation prompt from the paper 'The FACTS Grounding Leaderboard: Benchmarking LLMs’ Ability to Ground
+       Responses to Long-Form Input' by Google DeepMind.\n The prompt was copied from the evaluation_prompts.csv file from
+       Kaggle.\n This specific prompt elicits an NLI-style sentence-by-sentence checker outputting JSON for each sentence."
+     evaluation_method: json
+     tags:
+       - fact-checking
+     version: 1.0.0
+     author: Google DeepMind
+     source: https://www.kaggle.com/datasets/deepmind/FACTS-grounding-examples?resource=download&select=evaluation_prompts.csv
+   client_parameters: {}
+   custom_data: {}
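For readers who want to try the template, here is a minimal sketch of how it could be loaded and used. It assumes the file is saved locally as `grounding_nli_json.yaml`, that the `{{...}}` placeholders are rendered with Jinja2 (as the `template_variables` list suggests), and it uses the template's own example verdicts as a stand-in for a real judge model's reply; the rendering step, the sample query, and the "share of supported sentences" aggregation are all illustrative choices, not part of the upload.

```python
# Minimal sketch: render the grounding prompt and parse JSONL verdicts.
# Assumptions (not from the uploaded file): local filename, Jinja2-style
# placeholders, and a hard-coded stand-in for the judge model's output.
import json

import yaml
from jinja2 import Template

with open("grounding_nli_json.yaml") as f:
    spec = yaml.safe_load(f)

# Fill in the three template variables declared under template_variables.
prompt = Template(spec["prompt"]["template"]).render(
    user_request="Which fruits are mentioned?",
    context_document="Apples are red fruits. Bananas are yellow fruits.",
    response="Apples are red. Bananas are green. Enjoy your fruit!",
)
# `prompt` would now be sent to a judge LLM of your choice.

# The template asks for one JSON object per line, so the reply parses as
# JSON Lines. `judge_output` is a stand-in for a real model response.
judge_output = "\n".join([
    '{"sentence": "Apples are red.", "label": "supported", "rationale": "Stated in the context.", "excerpt": "Apples are red fruits."}',
    '{"sentence": "Bananas are green.", "label": "contradictory", "rationale": "Context says yellow.", "excerpt": "Bananas are yellow fruits."}',
    '{"sentence": "Enjoy your fruit!", "label": "no_rad", "rationale": "No factual claim.", "excerpt": null}',
])

verdicts = [json.loads(line) for line in judge_output.splitlines() if line.strip()]
supported = sum(v["label"] == "supported" for v in verdicts)
print(f"{supported}/{len(verdicts)} sentences supported")
```

One convenient property of the one-object-per-line format the template requests: each verdict can be parsed independently, so a single malformed line from the judge can be skipped or retried without discarding the rest of the evaluation.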