---
license: mit
dataset_info:
  features:
  - name: clean
    dtype: string
  - name: corrupted
    dtype: string
  - name: year
    dtype: string
  splits:
  - name: train
    num_bytes: 1234280
    num_examples: 10000
  download_size: 204638
  dataset_size: 1234280
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- en
---

# Dataset Card for the Greater-Than Task Dataset

This is a dataset of examples from the greater-than circuit task, in a model-agnostic form (full sentences with complete end years).

## Dataset Details

### Dataset Description

- **Curated by:** Michael Hanna
- **Language(s) (NLP):** English
- **License:** MIT

### Dataset Sources

- **Repository:** [https://github.com/hannamw/gpt2-greater-than](https://github.com/hannamw/gpt2-greater-than)
- **Paper:** [How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model](https://openreview.net/forum?id=p4PckNQR8k)

## Uses

This dataset is intended to be a model-agnostic version of the greater-than task. 
The original task consisted of examples like `The war lasted from the year 1742 to the year 17`, relying on the fact that GPT-2 small tokenizes 4-digit years into two two-digit tokens.
One would then compute model performance as the probability assigned to years greater than 42, minus that assigned to years less than or equal to 42.
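
As an illustration, here is a minimal sketch of that computation for GPT-2 small using Hugging Face `transformers`; it assumes, as in the paper, that every two-digit decade string is a single GPT-2 token.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The war lasted from the year 1742 to the year 17"
start_decade = 42

inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits[0, -1].softmax(-1)  # next-token distribution

# Probability mass on each two-digit continuation "00".."99"
# (assumed, as in the paper, to be single tokens in GPT-2's vocabulary).
decade_ids = [tok.encode(f"{d:02d}")[0] for d in range(100)]
decade_probs = probs[decade_ids]

# p(decade > 42) - p(decade <= 42)
prob_diff = decade_probs[start_decade + 1:].sum() - decade_probs[:start_decade + 1].sum()
print(prob_diff.item())
```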

Newer models tokenize years differently: Llama 3 tokenizes 1742 as `[174][2]`, and Gemma 2 tokenizes it as `[1][7][4][2]`.
You can still compute the probability assigned to good and bad end years; for example:
 - For Llama 3, let y1 be the end year's `[174]` token and y2 its `[2]` token; compute p(y1 > 174) + p(y1 = 174) * p(y2 > 2) - (p(y1 < 174) + p(y1 = 174) * p(y2 <= 2)).
 - For Gemma 2, let y1 be the end year's `[4]` token and y2 its `[2]` token; compute p(y1 > 4) + p(y1 = 4) * p(y2 > 2) - (p(y1 < 4) + p(y1 = 4) * p(y2 <= 2)).

For these purposes, it's easier to have the full string, i.e. `The war lasted from the year 1742 to the year 1743`, rather than the shortened version `The war lasted from the year 1742 to the year 17`.
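
Below is a hedged sketch of this two-token computation, written against the full-string examples in this dataset. The checkpoint name is a placeholder, the last two tokens of `clean` are assumed to be the end year's y1 and y2, and the comparison is restricted to same-century candidates (as in the original task); verify these assumptions against your tokenizer before relying on it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; any causal LM whose tokenizer splits the end year into
# a prefix token y1 (e.g. "174") plus a final-digit token y2 (e.g. "2") works.
MODEL = "meta-llama/Meta-Llama-3-8B"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)


def greater_than_prob_diff(clean: str, year: str) -> float:
    """p(end year > start year) - p(end year <= start year), decomposed over
    y1 and y2 as in the formulas above. `year` is the dataset's start year."""
    enc = tok(clean, return_tensors="pt")
    ids = enc.input_ids[0]
    with torch.no_grad():
        logits = model(**enc).logits[0]

    # Assumption: the last two tokens of `clean` are the end year's y1 and y2.
    p_y1 = logits[-3].softmax(-1)  # distribution over y1 (the prefix slot)
    p_y2 = logits[-2].softmax(-1)  # distribution over y2, given the actual y1

    # Thresholds come from the start year, e.g. "1742" -> 4 (for y1) and 2 (for y2).
    t1, t2 = int(year[2]), int(year[3])

    # Candidate tokens: reuse the actual y1/y2 token strings (preserving any
    # leading-space marker) and swap in each trailing digit 0-9. This restricts
    # the comparison to the same century, as in the original task.
    y1_tok, y2_tok = tok.convert_ids_to_tokens([ids[-2].item(), ids[-1].item()])
    y1_cands = tok.convert_tokens_to_ids([y1_tok[:-1] + str(d) for d in range(10)])
    y2_cands = tok.convert_tokens_to_ids([y2_tok[:-1] + str(d) for d in range(10)])

    p_eq = p_y1[y1_cands[t1]]
    p_gt = sum(p_y1[t] for t in y1_cands[t1 + 1:]) + p_eq * sum(p_y2[t] for t in y2_cands[t2 + 1:])
    p_le = sum(p_y1[t] for t in y1_cands[:t1]) + p_eq * sum(p_y2[t] for t in y2_cands[:t2 + 1])
    return (p_gt - p_le).item()
```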

## Dataset Structure

- `clean`: The original greater-than example sentence, with a full end year.
- `corrupted`: The corrupted version of the corresponding `clean` sentence, with the start-year decade set to `01`.
- `year`: The start year from the corresponding `clean` sentence.
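
For example, the fields can be loaded and inspected with the `datasets` library; the repository path below is a placeholder for this dataset's actual Hub id.

```python
from datasets import load_dataset

# Placeholder path; replace with this dataset's actual repository id on the Hub.
ds = load_dataset("username/greater-than", split="train")

example = ds[0]
print(example["clean"])      # e.g. "The war lasted from the year 1742 to the year 1743"
print(example["corrupted"])  # same sentence with the start-year decade set to 01
print(example["year"])       # the start year as a string, e.g. "1742"
```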

## Dataset Creation

### Source Data

As described in the paper, this dataset was automatically generated using the template `The [event] lasted from the year [XX][YY] to the year [XX]` (here completed with a full end year).
Michael Hanna and Ollie Liu developed the list of nouns used as `[event]`.
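
A rough sketch of that procedure is below; the noun list, year ranges, and end-year choice are illustrative stand-ins, not the exact settings used to build this dataset.

```python
import random

# Illustrative event nouns; the real list was curated by Michael Hanna and Ollie Liu.
EVENTS = ["war", "expedition", "dynasty", "voyage", "famine"]


def make_example(rng: random.Random) -> dict:
    """Generate one clean/corrupted pair from the template."""
    event = rng.choice(EVENTS)
    century = rng.randint(11, 17)                  # the [XX] part, e.g. 17
    decade = rng.randint(2, 98)                    # the [YY] part, e.g. 42
    start_year = century * 100 + decade
    # Illustrative choice: pick a later end year within the same century.
    end_year = rng.randint(start_year + 1, century * 100 + 99)
    clean = f"The {event} lasted from the year {start_year} to the year {end_year}"
    # Corruption: set the start-year decade to 01, leaving everything else unchanged.
    corrupted = clean.replace(str(start_year), f"{century}01", 1)
    return {"clean": clean, "corrupted": corrupted, "year": str(start_year)}


print(make_example(random.Random(0)))
```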

## Citation

[How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model](https://openreview.net/forum?id=p4PckNQR8k)

**BibTeX:**
```
@inproceedings{hanna2023how,
  title={How does {GPT}-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model},
  author={Michael Hanna and Ollie Liu and Alexandre Variengien},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023},
  url={https://openreview.net/forum?id=p4PckNQR8k}
}
```

## Dataset Card Authors

Michael Hanna

## Dataset Card Contact

[email protected]