Commit 4e2ade3
Parent(s): b187bd0
Upload prompt template grounding_accuracy_span_level.yaml
grounding_accuracy_span_level.yaml CHANGED
@@ -41,8 +41,7 @@ prompt:
 metadata:
   description: "An evaluation prompt from the paper 'The FACTS Grounding Leaderboard: Benchmarking LLMs’ Ability to Ground
     Responses to Long-Form Input' by Google DeepMind.\n The prompt was copied from the evaluation_prompts.csv file from
-    Kaggle.\n This specific prompt elicits a binary accurate/non-accurate classifier on a span level
-    {{ix+1}} in the original template was changed to {{ix}} for simplicity."
+    Kaggle.\n This specific prompt elicits a binary accurate/non-accurate classifier on a span level."
   evaluation_method: span_level
   tags:
   - fact-checking
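
For context, a minimal sketch of how a consumer might render the updated template, assuming the file's top-level prompt: key holds a Jinja2-style template that takes a span index variable ix (as the earlier wording of the description implied); the file path, variable name, and value below are illustrative, not part of this repository.

import yaml
from jinja2 import Template

# Load the prompt template from this file (path assumed to be local).
with open("grounding_accuracy_span_level.yaml") as f:
    data = yaml.safe_load(f)

# Render the evaluation prompt for a single span. Passing ix, and which other
# context variables (e.g. source document, model response) the template expects,
# are assumptions about the template, not documented fields.
rendered = Template(data["prompt"]).render(ix=0)
print(rendered)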