MoritzLaurer (HF staff) committed
Commit 85b1348 · verified · 1 Parent(s): 1b33aa5

Upload prompt template grounding_accuracy_implicit_span_level.yaml

grounding_accuracy_implicit_span_level.yaml ADDED
prompt:
  template: |-
    Your task is to check if the Response is accurate to the Evidence.
    Generate 'Accurate' if the Response is accurate when verified according to the Evidence, or 'Inaccurate' if the Response is inaccurate (contradicts the evidence) or cannot be verified.

    **Query**:

    {{user_request}}

    **End of Query**

    **Evidence**

    {{context_document}}

    **End of Evidence**

    **Response**:

    {{response}}

    **End of Response**


    Break down the Response into sentences and classify each one separately, then give the final answer: If even one of the sentences is inaccurate, then the Response is inaccurate.

    For example, your output should be of this format:
    Sentence 1: <Sentence 1>
    Sentence 1 label: Accurate/Inaccurate (choose 1)
    Sentence 2: <Sentence 2>
    Sentence 2 label: Accurate/Inaccurate (choose 1)
    Sentence 3: <Sentence 3>
    Sentence 3 label: Accurate/Inaccurate (choose 1)
    [...]
    Final Answer: Accurate/Inaccurate (choose 1)
  template_variables:
    - user_request
    - context_document
    - response
  metadata:
    description: "An evaluation prompt from the paper 'The FACTS Grounding Leaderboard: Benchmarking LLMs’ Ability to Ground Responses to Long-Form Input' by Google DeepMind.\n The prompt was copied from the evaluation_prompts.csv file from Kaggle.\n This specific prompt elicits a binary accurate/inaccurate classification for the entire response after generating and classifying each sentence separately."
    evaluation_method: implicit_span_level
    tags:
      - fact-checking
    version: 1.0.0
    author: Google DeepMind
    source: https://www.kaggle.com/datasets/deepmind/FACTS-grounding-examples?resource=download&select=evaluation_prompts.csv
  client_parameters: {}
  custom_data: {}
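The `{{...}}` placeholders in the template correspond to the names listed under `template_variables`. A minimal sketch of how such a template might be filled and how the grader's declared output format might be parsed, using plain regex substitution rather than a full Jinja2 engine (the variable values and the sample grader output below are hypothetical, for illustration only):

```python
import re

# Example values for the three variables declared under template_variables
# (hypothetical content, not from the FACTS dataset).
variables = {
    "user_request": "What year was Acme Corp founded?",
    "context_document": "Acme Corp was founded in 1999 in Austin, Texas.",
    "response": "Acme Corp was founded in 1999.",
}

# A shortened stand-in for the template string; the full text lives in the
# YAML file's `prompt.template` field above.
template = (
    "**Query**:\n{{user_request}}\n\n"
    "**Evidence**\n{{context_document}}\n\n"
    "**Response**:\n{{response}}\n"
)

def render(template: str, variables: dict) -> str:
    # Replace each {{name}} placeholder with its value. This is simple
    # substitution, not a full Jinja2 engine.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: variables[m.group(1)],
        template,
    )

def parse_final_answer(grader_output: str) -> str:
    # The prompt instructs the grader to end with
    # "Final Answer: Accurate/Inaccurate (choose 1)".
    match = re.search(r"Final Answer:\s*(Accurate|Inaccurate)", grader_output)
    if match is None:
        raise ValueError("no Final Answer line found in grader output")
    return match.group(1)

prompt_text = render(template, variables)

# Hypothetical grader output following the format the prompt requests.
grader_output = (
    "Sentence 1: Acme Corp was founded in 1999.\n"
    "Sentence 1 label: Accurate\n"
    "Final Answer: Accurate"
)
verdict = parse_final_answer(grader_output)
```

Substituting only declared variable names (and failing loudly on a missing `Final Answer` line) keeps a pipeline built on this template from silently passing unfilled placeholders or malformed grader output downstream.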