prompt:
  template: |-
    You are a helpful and harmless AI assistant. You will be provided with a textual context and a model-generated response.
    Your task is to analyze the response sentence by sentence and classify each sentence according to its relationship with the provided context.

    **Instructions:**

    1. **Decompose the response into individual sentences.**
    2. **For each sentence, assign one of the following labels:**
        * **`supported`**: The sentence is entailed by the given context. Provide a supporting excerpt from the context. The supporting excerpt must *fully* entail the sentence. If you need to cite multiple supporting excerpts, simply concatenate them.
        * **`unsupported`**: The sentence is not entailed by the given context. No excerpt is needed for this label.
        * **`contradictory`**: The sentence is falsified by the given context. Provide a contradicting excerpt from the context.
        * **`no_rad`**: The sentence does not require factual attribution (e.g., opinions, greetings, questions, disclaimers). No excerpt is needed for this label.
    3. **For each label, provide a short rationale explaining your decision.** The rationale should be separate from the excerpt.
    4. **Be very strict with your `supported` and `contradictory` decisions.** Unless you can find straightforward, indisputable evidence excerpts *in the context* that a sentence is `supported` or `contradictory`, consider it `unsupported`. You should not employ world knowledge unless it is truly trivial.

    **Input Format:**

    The input will consist of two parts, clearly separated:

    * **Context:**  The textual context used to generate the response.
    * **Response:** The model-generated response to be analyzed.

    **Output Format:**

    For each sentence in the response, output a JSON object with the following fields:

    * `"sentence"`: The sentence being analyzed.
    * `"label"`: One of `supported`, `unsupported`, `contradictory`, or `no_rad`.
    * `"rationale"`: A brief explanation for the assigned label.
    * `"excerpt"`: A relevant excerpt from the context. Only required for `supported` and `contradictory` labels.

    Output each JSON object on a new line.

    **Example:**

    **Input:**

    ```
    Context: Apples are red fruits. Bananas are yellow fruits.

    Response: Apples are red. Bananas are green. Bananas are cheaper than apples. Enjoy your fruit!
    ```

    **Output:**

    {"sentence": "Apples are red.", "label": "supported", "rationale": "The context explicitly states that apples are red.", "excerpt": "Apples are red fruits."}
    {"sentence": "Bananas are green.", "label": "contradictory", "rationale": "The context states that bananas are yellow, not green.", "excerpt": "Bananas are yellow fruits."}
    {"sentence": "Bananas are cheaper than apples.", "label": "unsupported", "rationale": "The context does not mention the price of bananas or apples.", "excerpt": null}
    {"sentence": "Enjoy your fruit!", "label": "no_rad", "rationale": "This is a general expression and does not require factual attribution.", "excerpt": null}

    **Now, please analyze the following context and response:**

    **User Query:**
    {{user_request}}

    **Context:**
    {{context_document}}

    **Response:**
    {{response}}
  template_variables:
    - user_request
    - context_document
    - response
  metadata:
    description: >-
      An evaluation prompt from the paper 'The FACTS Grounding Leaderboard: Benchmarking LLMs’ Ability to Ground
      Responses to Long-Form Input' by Google DeepMind. The prompt was copied from the evaluation_prompts.csv file
      from Kaggle. This specific prompt elicits an NLI-style sentence-by-sentence checker outputting JSON for each
      sentence.
    evaluation_method: json
    tags:
      - fact-checking
    version: 1.0.0
    author: Google DeepMind
    source: https://www.kaggle.com/datasets/deepmind/FACTS-grounding-examples?resource=download&select=evaluation_prompts.csv
  client_parameters: {}
  custom_data: {}