DISCLAIMER
This represents only a subset of the final dataset (taking into account the new HuggingFace LFS storage limits). The complete dataset will be released following the camera-ready submission of our paper.
The Heap Dataset
We develop The Heap, a new contamination-free multilingual code dataset comprising 57 languages, which facilitates LLM evaluation reproducibility. The reproduction package can be found here.
Is your code in The Heap?
An opt-out mechanism will be provided for the final release of the dataset.
Collection
We collect up to 50,000 public repositories using the GitHub API, focusing on license type, star count, and creation date. Repositories with non-permissive licenses are prioritized to reduce contamination, as the public code datasets we deduplicate against focus primarily on permissive or no-license repositories. We select repositories created before August 2024 in decreasing order of their star counts. To handle GitHub rate limits, we use timeouts and pagination during the scraping process.
Copyleft licenses included in The Heap
License | Family |
---|---|
CECILL-1.0, CECILL-1.1, CECILL-2.0, CECILL-2.1, CECILL-C, EPL-1.0, EPL-2.0, LGPL-2.1, LGPL-3.0, MS-RL, MPL-2.0 | Weak Copyleft |
GPL-2.0, GPL-3.0 | Strong Copyleft |
AGPL-3.0, EUPL-1.1, EUPL-1.2, OSL-3.0 | Network Copyleft |
The features we extract for each repository are illustrated in the example below.
{
"id": 126178683,
"full_name": "halo-dev/halo",
"html_url": "https://github.com/halo-dev/halo",
"stargazers_count": 29115,
"forks_count": 8985,
"watchers_count": 29115,
"open_issues_count": 278,
"language": "Java",
"created_at": "2018-03-21T12:56:52Z",
"pushed_at": "2023-10-28T16:29:39Z",
"license": {
"key": "gpl-3.0",
"name": "GNU General Public License v3.0",
"spdx_id": "GPL-3.0",
"url": "https://api.github.com/licenses/gpl-3.0",
"node_id": "MDc6TGljZW5zZTk="
},
"retrieval_date": "10/30/2023, 3:24:57 PM (Europe/Amsterdam)"
}
Repository Fields
- id: unique id of the repo
- full_name: complete name of the repo
- html_url: URL to the repo
- stargazers_count: number of stars of the repo
- forks_count: number of forks of the repo
- watchers_count: number of watchers of the repo
- open_issues_count: number of open issues of the repo at the extraction time
- language: main language of the repo
- created_at: creation date of the repo
- pushed_at: date of the most recent push to the repo as of the extraction date
- license: license type of the repo
- retrieval_date: date when the repo was scraped from GitHub
We start by retrieving repositories with more than 900 stars using two-month tumbling windows. If we hit the 1,000-repository limit per window (for a personal GitHub account), we shrink the search window to one month and restart the iteration; otherwise, the window advances by two months. Once the entire timeframe (up to August 2024) is covered, we reduce the star search space: between 900 and 100 stars we decrease the interval by 50 (e.g., searching [850, 900]), between 100 and 10 stars we decrease it by 10, and for the last 10 stars we decrease it by 1.

Since most repositories fall within the 0-100 star range (e.g., Figure 1 shows the distribution of repositories with up to 500 stars for Java), using the creation date and star count filters helps us avoid API limits and scrape more data by narrowing the search space. The creation date window can be reduced even further (to week or day level) to extract more data. We remove any repositories duplicated by the pagination process. Lastly, we extract all the files corresponding to each language. We extend the programming language extension list used for The Stack with four languages: EJS, Raku, Starlark, and WebAssembly.
Figure 1: Distribution of scraped repositories with at most 500 stars for Java
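The windowed search can be sketched as follows using the GitHub REST search API via `requests`. The query syntax, pagination parameters, and 1,000-result cap are GitHub's; the helper name, token placeholder, and the example license filter are illustrative assumptions rather than our exact scraping code.

```python
import requests

SEARCH_URL = "https://api.github.com/search/repositories"
HEADERS = {"Authorization": "token <YOUR_GITHUB_TOKEN>"}  # personal access token

def search_window(stars: str, created: str, per_page: int = 100):
    """Yield repositories for one star interval and one creation-date window."""
    page = 1
    while True:
        params = {
            "q": f"stars:{stars} created:{created} license:gpl-3.0",  # example license filter
            "sort": "stars",
            "order": "desc",
            "per_page": per_page,
            "page": page,
        }
        resp = requests.get(SEARCH_URL, params=params, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        items = resp.json().get("items", [])
        if not items:
            break
        yield from items
        if page * per_page >= 1000:  # the search API returns at most 1,000 results per query,
            break                    # so the caller must narrow the window and search again
        page += 1

# Example: repositories with 850-900 stars created in one two-month window
repos = list(search_window("850..900", "2018-01-01..2018-02-28"))
```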
Cleaning
The next stage in our dataset pipeline is the cleaning procedure. We exclude any files larger than 10 MB and those with fewer than 10 words.
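As an illustration, such a filter could look like the function below (a minimal sketch; the function name and the UTF-8 size assumption are ours):

```python
MAX_FILE_SIZE = 10 * 1024 * 1024  # 10 MB
MIN_WORDS = 10

def keep_file(content: str) -> bool:
    """Keep a file only if it is at most 10 MB and contains at least 10 words."""
    within_size = len(content.encode("utf-8")) <= MAX_FILE_SIZE
    enough_words = len(content.split()) >= MIN_WORDS
    return within_size and enough_words
```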
Deduplication
The final stage of our dataset pipeline is the deduplication process. We apply both exact and near deduplication against open code datasets listed in the table below.
Open code datasets used for deduplication
Dataset | Size |
---|---|
The Stack V2 | 67.5 TB |
The Stack | 6.4 TB |
Red Pajama | 2.67 TB |
GitHub Code | 1 TB |
CodeParrot | 180 GB |
Exact Deduplication
We remove exact duplicates within our dataset itself, and then we apply exact deduplication against the open datasets. For that, we use the SHA-256 hash function to generate a hash for each file. We choose this function because it distributes values uniformly across the hash space and minimizes collisions.
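A minimal sketch of this step (the helper names are illustrative, not our exact pipeline code):

```python
import hashlib

def file_sha256(content: str) -> str:
    """Hex SHA-256 digest of a file's content, used as its exact-duplicate key."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

seen_hashes: set[str] = set()

def is_first_occurrence(content: str) -> bool:
    """Return True only the first time a given content hash is encountered."""
    digest = file_sha256(content)
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)
    return True
```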
Near Deduplication
We apply the MinHashLSH algorithm using the datasketch library. To calculate the minhashes, we use the same hash function as above, but we extract the first 16 bytes to generate 128-bit hash values. This approach balances the need for a strong hash function with the efficiency of a shorter hash length.
Additionally, we use 128 permutations for LSH, with weights of 0.4 for precision and 0.6 for recall. We generate 7-character shingles after lowercasing the file content and removing whitespace. We find that 7-shingles provide a reasonable trade-off between the number of shingles and the data processed, being small enough to keep the number of unique shingles manageable yet large enough to provide meaningful comparisons.

It has been shown that the shingle length should be large enough to ensure a low probability of a given shingle appearing across unrelated documents, with k = 5 suggested for smaller documents such as emails. However, code files usually contain a larger dictionary of characters than emails, including arithmetic and comparison operators, which are less frequent in emails. Thus, given the increased complexity and size of code files, we consider 7-shingles appropriate for capturing sufficient context, ensuring uniqueness and reducing false positives, which smaller shingles such as k = 5 might fail to achieve. Furthermore, while k = 9 was shown to be a safe choice for large research articles, for our needs 7-shingles strike a balance between accuracy and computational efficiency, which is crucial for handling the extensive size of the datasets. This choice provides better computational efficiency by reducing the number of comparisons while maintaining a manageable shingle space. Lastly, we use a Jaccard similarity threshold of 0.7, which proved effective for both the SantaCoder and StarCoder models. A high threshold reduces false positives, leading to fewer unnecessary comparisons and lower computational overhead, and this standard threshold value has been shown to be robust for duplicate detection.
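The configuration described above can be sketched with datasketch as follows. The parameters (128 permutations, weights of 0.4/0.6, 7-character shingles, a 0.7 threshold) follow the description; the helper names and toy strings are illustrative, and the hash helper keeps only the first 8 bytes of the SHA-256 digest so that the values fit datasketch's 64-bit arithmetic, whereas our pipeline uses 128-bit values.

```python
import hashlib
from datasketch import MinHash, MinHashLSH

def sha256_trunc(data: bytes) -> int:
    # Truncated SHA-256 digest used as the MinHash hash function (8 bytes here;
    # the pipeline described above keeps the first 16 bytes).
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def shingles(content: str, k: int = 7) -> set[str]:
    # Lowercase, strip all whitespace, then take overlapping k-character shingles.
    text = "".join(content.lower().split())
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def minhash(content: str) -> MinHash:
    m = MinHash(num_perm=128, hashfunc=sha256_trunc)
    for sh in shingles(content):
        m.update(sh.encode("utf-8"))
    return m

# Index the reference (public) dataset, then query each file in The Heap.
lsh = MinHashLSH(threshold=0.7, num_perm=128, weights=(0.4, 0.6))
lsh.insert("stack/Font.java", minhash("package org.example; class Font { int size = 12; }"))

candidate = "package org.example;  class Font { int size = 12; }"
near_duplicate = bool(lsh.query(minhash(candidate)))  # True -> flag as a near duplicate
```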
Instead of removing exact and near duplicates found against other open datasets, we add a boolean mask to our dataset. This approach enhances reproducibility by allowing researchers to filter the dataset for unique files, according to their specific requirements.
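As a toy illustration of this flagging approach (the hash values and the single reference dataset are placeholders):

```python
from datasets import Dataset

# Hashes computed from a public dataset, e.g. The Stack (placeholder values)
public_hashes = {"a3f5...", "9c1d..."}

ds = Dataset.from_dict({"file_name": ["Font.java"], "sha": ["a3f5..."]})

# Flag, rather than drop, files whose SHA also appears in the public dataset
ds = ds.map(lambda x: {"exact_duplicates_stackv1": x["sha"] in public_hashes})
print(ds[0]["exact_duplicates_stackv1"])  # True, but the row itself is kept
```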
The final dataset structure is shown in the example below.
{
"file_name": "Font.java",
"file_path": ".../lateralgm/resources/Font.java",
"content": "*/ package org.lateralgm.resources; import java.util.EnumMap; import org.lateralgm.main.Prefs; ...",
"file_size": 1985,
"language": "Java",
"extension": ".java",
"repo_name": "lwizchz/GameMaker-HTML5-Player",
"repo_stars": 22,
"repo_forks": 9,
"repo_open_issues": 0,
"repo_created_at": "2011-09-10T16:05:20Z",
"repo_pushed_at": "2013-05-06T23:00:17Z",
"sha": "00046809b218b2c058f4be7...",
"exact_duplicates_stackv1": false,
"exact_duplicates_stackv2": true,
"near_duplicates_stackv1": true,
"near_duplicates_stackv2": false,
...
}
Dataset Fields
- file_name: name of the file extracted from its repo
- file_path: path to the file in its repo
- content: content of the file
- file_size: size of the file
- language: language of the file
- extension: language extension of the file
- repo_name: complete name of the file's repo
- repo_stars: number of stars of the file's repo
- repo_forks: number of forks of the file's repo
- repo_open_issues: number of open issues of the file's repo at the extraction date
- repo_created_at: creation date of the file's repo
- repo_pushed_at: date of the most recent push to the file's repo as of the extraction date
- sha: sha value of the file's content
- exact_duplicates_pubdataset: boolean flag indicating whether the file has exact duplicates in the given public dataset (The Stack V2, The Stack, RedPajama, GitHub Code, CodeParrot)
- near_duplicates_pubdataset: boolean flag indicating whether the file has near duplicates in the given public dataset (The Stack V2, The Stack, RedPajama, GitHub Code, CodeParrot)
The distribution of the languages in The Heap is presented in the table below. The third column shows the number of files collected after filtering based on file size and word count. The last column indicates the number of files remaining after removing exact duplicates within the dataset itself; among these remaining files, exact and near duplicates with respect to the other datasets are only flagged, not removed.
Programming languages included in The Heap
Language | Repositories | Raw Files | Unique Files |
---|---|---|---|
Ada | 676 | 41,367 | 35,425 |
Agda | 142 | 5,483 | 5,113 |
ANTLR | 101 | 564 | 541 |
Apex | 254 | 17,833 | 13,641 |
Assembly | 7,100 | 208,896 | 104,911 |
Awk | 1,191 | 16,586 | 15,620 |
C# | 50,000 | 5,906,716 | 3,770,829 |
C++ | 50,000 | 14,891,856 | 8,341,620 |
Clojure | 27,107 | 380,567 | 273,181 |
COBOL | 241 | 2,242 | 1,208 |
Common Lisp | 796 | 45,083 | 16,968 |
Cuda | 11,907 | 54,137 | 26,175 |
Crystal | 368 | 11,606 | 7,300 |
D | 1,191 | 26,048 | 13,359 |
Dart | 1,185 | 185,630 | 128,111 |
Elixir | 11,907 | 484,935 | 413,203 |
Elm | 1,477 | 15,511 | 12,384 |
Erlang | 2,475 | 453,856 | 127,910 |
F# | 277 | 8,260 | 7,963 |
Forth | 1,240 | 55,932 | 32,049 |
Fortran | 876 | 22,152 | 16,015 |
Groovy | 222 | 28,287 | 7,932 |
Hack | 2,198 | 60,299 | 48,353 |
Haskell | 1,279 | 84,916 | 37,405 |
JavaScript | 8,023 | 122,788 | 111,234 |
Julia | 50,000 | 6,989,601 | 3,757,338 |
Kotlin | 2,959 | 46,284 | 38,381 |
Less | 21,665 | 1,467,343 | 1,389 |
Lisp | 433 | 17,276 | 7,389 |
Lua | 42,241 | 4,605,230 | 912,898 |
Mathematica | 1,528 | 164,498 | 89,853 |
MATLAB | 20,828 | 1,051,354 | 665,659 |
NetLogo | 332 | 900 | 863 |
NewLisp | 35 | 5,819 | 5,148 |
Nix | 1,892 | 75,093 | 71,199 |
Objective-C | 7,700 | 1,899,714 | 698,137 |
OCaml | 1,761 | 321,989 | 95,171 |
Pascal | 5,218 | 130,832 | 225,749 |
Perl | 50,000 | 1,798,520 | 269,760 |
PHP | 50,000 | 12,707,727 | 3,363,040 |
Processing | 2,950 | 24,723 | 20,343 |
Prolog | 1,071 | 38,995 | 20,279 |
Python | 50,000 | 2,290,182 | 1,792,451 |
R | 44,993 | 389,139 | 374,812 |
Racket | 158 | 1,384 | 1,306 |
Ruby | 13,378 | 1,579,635 | 794,364 |
Rust | 42,847 | 2,496,177 | 844,258 |
Scala | 5,893 | 749,370 | 224,021 |
Scheme | 1,878 | 106,620 | 53,226 |
Shell | 150 | 47,531 | 1,084 |
SQL | 130 | 47,185 | 41,178 |
Swift | 13,924 | 633,819 | 439,565 |
Vue | 14,858 | 834 | 498 |
WebAssembly | 68 | 834 | 587 |
Total | 733,663 | 96,990,250 | 38,681,690 |
Usage
Using the Datasets API, our dataset can be used as follows:
from datasets import load_dataset

dataset_name = 'redpajama'
language = 'Python'

ds = load_dataset(
    "WizzF/Heap-Forge",
    language,
    split="train",
    num_proc=16
)

# Keep only files with no exact or near duplicates in the chosen public dataset
ds = ds.filter(
    lambda x: not x[f'exact_duplicates_{dataset_name}']
    and not x[f'near_duplicates_{dataset_name}']
)