prompt:
  template: |-
    You are a helpful and harmless AI assistant. You will be provided with a textual context and a model-generated response.
    Your task is to analyze the response sentence by sentence and classify each sentence according to its relationship with the provided context.

    **Instructions:**

    1. **Decompose the response into individual sentences.**
    2. **For each sentence, assign one of the following labels:**
    * **`supported`**: The sentence is entailed by the given context. Provide a supporting excerpt from the context.
    * **`unsupported`**: The sentence is not entailed by the given context. Provide an excerpt that is close but does not fully support the sentence.
    * **`contradictory`**: The sentence is falsified by the given context. Provide a contradicting excerpt from the context.
    * **`no_rad`**: The sentence does not require factual attribution (e.g., opinions, greetings, questions, disclaimers). No excerpt is needed for this label.

    3. **For each label, provide a short rationale explaining your decision.** The rationale should be separate from the excerpt.

    **Input Format:**

    The input will consist of two parts, clearly separated:

    * **Context:** The textual context used to generate the response.
    * **Response:** The model-generated response to be analyzed.

    **Output Format:**

    For each sentence in the response, output a JSON object with the following fields:

    * `"sentence"`: The sentence being analyzed.
    * `"label"`: One of `supported`, `unsupported`, `contradictory`, or `no_rad`.
    * `"rationale"`: A brief explanation for the assigned label.
    * `"excerpt"`: A relevant excerpt from the context. Only required for `supported`, `unsupported`, and `contradictory` labels.

    Output each JSON object on a new line.

    **Example:**

    **Input:**

    ```
    Context: Apples are red fruits. Bananas are yellow fruits.

    Response: Apples are red. Bananas are green. Enjoy your fruit!
    ```

    **Output:**

    {"sentence": "Apples are red.", "label": "supported", "rationale": "The context explicitly states that apples are red.", "excerpt": "Apples are red fruits."}
    {"sentence": "Bananas are green.", "label": "contradictory", "rationale": "The context states that bananas are yellow, not green.", "excerpt": "Bananas are yellow fruits."}
    {"sentence": "Enjoy your fruit!", "label": "no_rad", "rationale": "This is a general expression and does not require factual attribution.", "excerpt": null}

    **Now, please analyze the following context and response:**

    **User Query:**
    {{user_request}}

    **Context:**
    {{context_document}}

    **Response:**
    {{response}}
  template_variables:
    - user_request
    - context_document
    - response
  metadata:
    description: "An evaluation prompt from the paper 'The FACTS Grounding Leaderboard: Benchmarking LLMs’ Ability to Ground
      Responses to Long-Form Input' by Google DeepMind.\n The prompt was copied from the evaluation_prompts.csv file from
      Kaggle.\n This specific prompt elicits an NLI-style sentence-by-sentence checker outputting JSON for each sentence."
    evaluation_method: json_alt
    tags:
      - fact-checking
    version: 1.0.0
    author: Google DeepMind
    source: https://www.kaggle.com/datasets/deepmind/FACTS-grounding-examples?resource=download&select=evaluation_prompts.csv
  client_parameters: {}
  custom_data: {}
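# Usage sketch (not part of the upstream prompt): one way to render this
# template and parse the checker's per-line JSON output with Python. The file
# name and the example inputs below are hypothetical placeholders.
#
#   import json
#   import yaml
#
#   with open("facts_grounding_eval.yaml") as f:  # hypothetical file name
#       spec = yaml.safe_load(f)["prompt"]
#
#   inputs = {  # example values for the three declared template variables
#       "user_request": "What color are bananas?",
#       "context_document": "Apples are red fruits. Bananas are yellow fruits.",
#       "response": "Bananas are yellow.",
#   }
#
#   # The template uses double-brace placeholders, so plain string replacement
#   # is sufficient; a templating engine such as Jinja2 would also work.
#   filled = spec["template"]
#   for var in spec["template_variables"]:
#       filled = filled.replace("{{" + var + "}}", inputs[var])
#
#   # `judge_output` stands for the evaluator model's reply to `filled`;
#   # per the prompt, it is expected to contain one JSON object per line.
#   judge_output = '{"sentence": "Bananas are yellow.", "label": "supported", "rationale": "Stated in the context.", "excerpt": "Bananas are yellow fruits."}'
#   verdicts = [json.loads(line) for line in judge_output.splitlines() if line.strip()]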
|
|