MoritzLaurer (HF staff) committed
Commit 1b33aa5 · verified
1 Parent(s): 9bb2f79

Upload prompt template grounding_accuracy_span_level.yaml

Files changed (1)
  1. grounding_accuracy_span_level.yaml +53 -0
grounding_accuracy_span_level.yaml ADDED
@@ -0,0 +1,53 @@
+ prompt:
+   template: |-
+     Your task is to check if a specific Span is accurate to the Evidence.
+     Generate 'Accurate' if the Span is accurate when verified according to the Evidence or when there is nothing to verify in the Span.
+     Generate 'Inaccurate' if the Span is inaccurate (contradicts the evidence), or cannot be verified.
+
+     **Query**:
+
+     {{user_request}}
+
+     **End of Query**
+
+     **Evidence**
+
+     {{context_document}}
+
+     **End of Evidence**
+
+     **Response**:
+
+     {{response}}
+
+     **End of Response**
+
+
+     You are currently verifying **Span {{ix}}** from the Response.
+     **Span {{ix}}**:
+
+     {{span}}
+
+     **End of Span {{ix}}**
+
+
+     Is Span {{ix}} accurate or inaccurate when verified according to the Evidence? Point to where in the evidence justifies your answer.
+   template_variables:
+     - user_request
+     - context_document
+     - response
+     - ix
+     - span
+   metadata:
+     description: "An evaluation prompt from the paper 'The FACTS Grounding Leaderboard: Benchmarking LLMs’ Ability to Ground
+       Responses to Long-Form Input' by Google DeepMind.\n The prompt was copied from the evaluation_prompts.csv file from
+       Kaggle.\n This specific prompt elicits a binary accurate/non-accurate classifier on a span level.\n Note that
+       {{ix+1}} in the original template was changed to {{ix}} for simplicity."
+     evaluation_method: span_level
+     tags:
+       - fact-checking
+     version: 1.0.0
+     author: Google DeepMind
+     source: https://www.kaggle.com/datasets/deepmind/FACTS-grounding-examples?resource=download&select=evaluation_prompts.csv
+   client_parameters: {}
+   custom_data: {}
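
For reference, a minimal sketch of how a template in this format could be loaded and rendered, assuming PyYAML plus Jinja2 for the {{...}} placeholders; the file path and sample inputs below are hypothetical and not part of the committed file:

```python
# Sketch: load the span-level grounding template and fill in its variables.
# Assumes PyYAML and Jinja2 are installed; the inputs are illustrative only.
import yaml
from jinja2 import Template

with open("grounding_accuracy_span_level.yaml") as f:
    data = yaml.safe_load(f)

prompt = data["prompt"]
template = Template(prompt["template"])

# Provide one value per declared template variable (hypothetical example).
rendered = template.render(
    user_request="What is the capital of France?",
    context_document="Paris is the capital and largest city of France.",
    response="The capital of France is Paris.",
    ix=1,
    span="The capital of France is Paris.",
)
print(rendered)
```

The rendered string can then be passed to whichever judge model is used for the accurate/inaccurate classification; the YAML itself stays model-agnostic.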