---
pretty_name: Common Crawl Statistics
configs:
- config_name: Charsets
  data_files: "charsets.csv"
- config_name: Duplicates
  data_files: "crawlduplicates.txt"
  sep: \s+
  header: 0
  names:
  - id
  - crawl
  - page
  - url
  - digest estim.
  - 1-(urls/pages)
  - 1-(digests/pages)
- config_name: Crawlmetrics
  data_files: "crawlmetrics.csv"
- config_name: Crawl metrics by type
  data_files: "crawlmetricsbytype.csv"
- config_name: Crawl overlaps digest
  data_files: "crawloverlap_digest.csv"
- config_name: Crawl overlaps URL
  data_files: "crawloverlap_url.csv"
- config_name: Crawl Similarity Digest
  data_files: "crawlsimilarity_digest.csv"
- config_name: Crawl Similarity URL
  data_files: "crawlsimilarity_url.csv"
- config_name: Crawl Size
  data_files: "crawlsize.csv"
- config_name: Crawl Size by Type
  data_files: "crawlsizebytype.csv"
- config_name: Domains top 500
  data_files: "domains-top-500.csv"
- config_name: Languages
  data_files: "languages.csv"
- config_name: MIME types detected
  data_files: "mimetypes_detected.csv"
- config_name: MIME Types
  data_files: "mimetypes.csv"
- config_name: Top-level domains
  data_files: "tlds.csv"
---
# Common Crawl Statistics
Number of pages, distribution of top-level domains, crawl overlaps, etc.: basic metrics about the Common Crawl Monthly Crawl Archives. For more detailed information and graphs, please visit our [official statistics page](https://commoncrawl.github.io/cc-crawl-statistics/). Here you can find the following statistics files:
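Most of the files are plain CSV, but the `Duplicates` config declares a whitespace-separated layout (`sep: \s+`) with the column names listed in the metadata above. A minimal stdlib sketch of parsing one such row (the sample values are illustrative, not real data):

```python
# Columns as declared for the Duplicates config in the YAML metadata.
COLUMNS = ["id", "crawl", "page", "url", "digest estim.",
           "1-(urls/pages)", "1-(digests/pages)"]

def parse_duplicates_row(line: str) -> dict:
    """Split one data row on runs of whitespace (sep: \\s+)."""
    return dict(zip(COLUMNS, line.split()))

# Illustrative row, not taken from the actual file.
row = parse_duplicates_row(
    "0 CC-MAIN-2024-26 2500000000 2400000000 2300000000 0.04 0.08")
print(row["crawl"])  # -> CC-MAIN-2024-26
```

The same columns can of course be read with any CSV/table library by passing the `\s+` separator.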
## Charsets
The [character set or encoding](https://en.wikipedia.org/wiki/Character_encoding) is identified, for HTML pages only, by [Tika](https://tika.apache.org/)'s [AutoDetectReader](https://tika.apache.org/1.25/api/org/apache/tika/detect/AutoDetectReader.html). The table shows the percentages of character sets used to encode the HTML pages crawled by the latest monthly crawls.
## Crawl Metrics
Crawler-related metrics are extracted from the crawler log files and include
- the size of the URL database (CrawlDb)
- the fetch list size (number of URLs scheduled for fetching)
- the response status of the fetch:
  - success
  - redirect
  - denied (forbidden by HTTP 403 or robots.txt)
  - failed (404, host not found, etc.)
- usage of http/https URL protocols (schemes)
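Aggregating the response statuses amounts to a simple tally. A hedged sketch, using hypothetical fetch outcomes (the labels mirror the categories above, not actual log syntax):

```python
from collections import Counter

# Hypothetical per-URL fetch outcomes, as one might extract them
# from crawler logs; these values are illustrative only.
fetches = ["success", "success", "redirect", "denied", "failed", "success"]

status_counts = Counter(fetches)
total = sum(status_counts.values())
shares = {status: count / total for status, count in status_counts.items()}
print(shares["success"])  # -> 0.5
```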
## Crawl Overlaps
Overlaps between monthly crawl archives are calculated and plotted as the [Jaccard similarity](https://en.wikipedia.org/wiki/Jaccard_index) of unique URLs or content digests. The cardinalities of the monthly crawls and of the union of two crawls are [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog) estimates.
Note that the content overlaps are small and of the same order of magnitude as the 1% error rate of the HyperLogLog cardinality estimates.
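On small sets the Jaccard similarity can be computed exactly; the production statistics instead use HyperLogLog estimates of |A|, |B| and |A ∪ B| (with |A ∩ B| = |A| + |B| − |A ∪ B|), since the real URL sets contain billions of entries. A minimal exact sketch:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B| of two sets."""
    union = a | b
    if not union:
        return 0.0
    return len(a & b) / len(union)

# Illustrative URL sets standing in for two monthly crawls.
crawl_a = {"http://example.com/", "http://example.com/a", "http://example.org/"}
crawl_b = {"http://example.com/", "http://example.org/", "http://example.net/"}
print(jaccard(crawl_a, crawl_b))  # 2 shared of 4 distinct URLs -> 0.5
```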
## Crawl Size
The number of released pages per month varies over time due to changes in the number of available seeds, the scheduling policy for page revisits, and crawler operational issues. Because of duplicates, the numbers of unique URLs and unique content digests (here HyperLogLog estimates) are lower than the number of page captures.
The size at various aggregation levels (host, domain, top-level domain / public suffix) is shown in the next plot. Note that the scale differs per level of aggregation; see the exponential notation behind the labels.
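The aggregation levels can be sketched by reducing each page capture to its host, registered domain, and top-level domain, then counting distinct values. The URLs below are illustrative, and the domain/TLD splitting is deliberately naive (a real pipeline would consult the public suffix list):

```python
from urllib.parse import urlsplit

# Illustrative page captures; real counts come from HyperLogLog sketches.
captures = [
    "https://www.example.com/a",
    "https://www.example.com/b",
    "https://blog.example.com/",
    "https://example.org/",
]

hosts = {urlsplit(u).hostname for u in captures}
# Naive label splitting; does NOT handle public suffixes like "co.uk".
domains = {".".join(h.split(".")[-2:]) for h in hosts}
tlds = {h.rsplit(".", 1)[-1] for h in hosts}
print(len(captures), len(hosts), len(domains), len(tlds))  # -> 4 3 2 2
```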
## Domains Top 500
The table shows the top 500 registered domains (in terms of page captures) of the last main/monthly crawl.
Note that the ranking by page captures only partially corresponds to the importance of domains, as the crawler respects robots.txt and tries hard not to overload web servers. Highly ranked domains tend to be underrepresented. If you're looking for a list of domain or host names ranked by page rank or harmonic centrality, consider using one of the [webgraph datasets](https://github.com/commoncrawl/cc-webgraph#exploring-webgraph-data-sets) instead.
## Languages
The language of a document is identified by [Compact Language Detector 2 (CLD2)](https://github.com/CLD2Owners/cld2). It is able to identify 160 different languages and up to 3 languages per document. The table lists the percentage covered by the primary language of a document (returned first by CLD2). So far, only HTML pages are passed to the language detector.
## MIME Types
The crawled content is dominated by HTML pages and contains only a small percentage of other document formats. The tables show the percentages of the top 100 media or MIME types in the latest monthly crawls.
While the first table is based on the `Content-Type` HTTP header, the second uses the MIME type detected by [Apache Tika](https://tika.apache.org/) from the actual content.
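The difference between the two tables comes from servers sending a wrong or generic `Content-Type` header. A hedged sketch of tallying both views over hypothetical (header, detected) pairs:

```python
from collections import Counter

# Hypothetical (Content-Type header, Tika-detected type) pairs;
# values are illustrative, not taken from the actual tables.
records = [
    ("text/html", "text/html"),
    ("text/html", "text/html"),
    ("application/octet-stream", "application/pdf"),
    ("text/html", "application/xhtml+xml"),
]

header_counts = Counter(header for header, _ in records)
detected_counts = Counter(detected for _, detected in records)
mismatches = sum(1 for header, detected in records if header != detected)
print(header_counts.most_common(1))  # -> [('text/html', 3)]
print(mismatches)                    # -> 2
```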
## Top-level Domains
[Top-level domains](https://en.wikipedia.org/wiki/Top-level_domain) (abbrev. "TLD"/"TLDs") are a significant indicator of the representativeness of the data, i.e. whether the data set or a particular crawl is biased towards certain countries, regions or languages.
Note that the top-level domain is defined here as the right-most element of a host name (`com` in `www.example.com`). [Country-code second-level domains](https://en.wikipedia.org/wiki/Second-level_domain#Country-code_second-level_domains) ("ccSLD") and [public suffixes](https://en.wikipedia.org/wiki/Public_Suffix_List) are not covered by these metrics.
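Reducing a host name to its final label is a one-liner; the sketch below makes the stated caveat concrete in that public suffixes such as `co.uk` are deliberately not resolved:

```python
def tld(host: str) -> str:
    """Last dot-separated label of a host name, as used for these stats."""
    return host.rsplit(".", 1)[-1]

print(tld("www.example.com"))  # -> com
print(tld("news.bbc.co.uk"))   # -> uk (not the public suffix "co.uk")
```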