prompt:
  template: "You are a helpful and harmless AI assistant. You will be provided with a textual context and a model-generated
    response.\nYour task is to analyze the response sentence by sentence and classify each sentence according to its relationship
    with the provided context.\n\n**Instructions:**\n\n1. **Decompose the response into individual sentences.**\n2. **For
    each sentence, assign one of the following labels:**\n    * **`supported`**: The sentence is entailed by the given context.\
    \  Provide a supporting excerpt from the context.\n    * **`unsupported`**: The sentence is not entailed by the given
    context. Provide an excerpt that is close but does not fully support the sentence.\n    * **`contradictory`**: The sentence
    is falsified by the given context. Provide a contradicting excerpt from the context.\n    * **`no_rad`**: The sentence
    does not require factual attribution (e.g., opinions, greetings, questions, disclaimers).  No excerpt is needed for this
    label.\n\n3. **For each label, provide a short rationale explaining your decision.**  The rationale should be separate
    from the excerpt.\n\n**Input Format:**\n\nThe input will consist of two parts, clearly separated:\n\n* **Context:**  The
    textual context used to generate the response.\n* **Response:** The model-generated response to be analyzed.\n\n**Output
    Format:**\n\nFor each sentence in the response, output a JSON object with the following fields:\n\n* `\"sentence\"`: The
    sentence being analyzed.\n* `\"label\"`: One of `supported`, `unsupported`, `contradictory`, or `no_rad`.\n* `\"rationale\"\
    `: A brief explanation for the assigned label.\n* `\"excerpt\"`:  A relevant excerpt from the context. Only required for
    `supported`, `unsupported`, and `contradictory` labels.\n\nOutput each JSON object on a new line.\n\n**Example:**\n\n
    **Input:**\n\n```\nContext: Apples are red fruits. Bananas are yellow fruits.\n\nResponse: Apples are red. Bananas are
    green.  Enjoy your fruit!\n```\n\n**Output:**\n\n{\"sentence\": \"Apples are red.\", \"label\": \"supported\", \"rationale\"\
    : \"The context explicitly states that apples are red.\", \"excerpt\": \"Apples are red fruits.\"}\n{\"sentence\": \"
    Bananas are green.\", \"label\": \"contradictory\", \"rationale\": \"The context states that bananas are yellow, not green.\"\
    , \"excerpt\": \"Bananas are yellow fruits.\"}\n{\"sentence\": \"Enjoy your fruit!\", \"label\": \"no_rad\", \"rationale\"\
    : \"This is a general expression and does not require factual attribution.\", \"excerpt\": null}\n\n**Now, please analyze
    the following context and response:**\n\n**User Query:**\n{{user_request}}\n\n**Context:**\n{{context_document}}\n\n**Response:**\n\
    {{response}}"
  template_variables:
    - user_request
    - context_document
    - response
  metadata:
    description: "An evaluation prompt from the paper 'The FACTS Grounding Leaderboard: Benchmarking LLMs’ Ability to Ground
      Responses to Long-Form Input' by Google DeepMind.\n    The prompt was copied from the evaluation_prompts.csv file from
      Kaggle.\n    This specific prompt elicits an NLI-style sentence-by-sentence checker outputting JSON for each sentence."
    evaluation_method: json_alt
    tags:
      - fact-checking
    version: 1.0.0
    author: Google DeepMind
    source: https://www.kaggle.com/datasets/deepmind/FACTS-grounding-examples?resource=download&select=evaluation_prompts.csv
  client_parameters: {}
  custom_data: {}
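# --- Usage sketch (not part of the original prompt definition) ---
# A minimal example, assuming Python and plain string substitution for the
# {{...}} placeholders declared in template_variables (a Jinja-style renderer
# would also work). The names `prompt_template`, `fill`, and `parse_judgement`
# are hypothetical, introduced only for illustration.
#
#   import json
#
#   def fill(prompt_template: str, user_request: str, context_document: str, response: str) -> str:
#       # Substitute the three declared template_variables into the prompt text.
#       return (prompt_template
#               .replace("{{user_request}}", user_request)
#               .replace("{{context_document}}", context_document)
#               .replace("{{response}}", response))
#
#   def parse_judgement(raw_output: str) -> list[dict]:
#       # The judge is instructed to emit one JSON object per line (see the
#       # Output Format section of the template), so parse it as JSON Lines.
#       return [json.loads(line) for line in raw_output.splitlines() if line.strip()]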