---
pretty_name: Common Crawl Statistics

configs:
- config_name: Charsets
  data_files: "charsets.csv"
- config_name: Duplicates
  data_files: "crawlduplicates.txt"
  sep: \s+
  header: 0
  names:
  - id
  - crawl
  - page
  - url
  - digest estim.
  - 1-(urls/pages)
  - 1-(digests/pages)
- config_name: Crawlmetrics
  data_files: "crawlmetrics.csv"
- config_name: Crawl metrics by type
  data_files: "crawlmetricsbytype.csv"
- config_name: Crawl overlaps digest
  data_files: "crawloverlap_digest.csv"
- config_name: Crawl overlaps URL
  data_files: "crawloverlap_url.csv"
- config_name: Crawl Similarity Digest
  data_files: "crawlsimilarity_digest.csv"
- config_name: Crawl Similarity URL
  data_files: "crawlsimilarity_url.csv"
- config_name: Crawl Size
  data_files: "crawlsize.csv"
- config_name: Crawl Size by Type
  data_files: "crawlsizebytype.csv"
- config_name: Domains top 500
  data_files: "domains-top-500.csv"
- config_name: Languages
  data_files: "languages.csv"
- config_name: MIME types detected
  data_files: "mimetypes_detected.csv"
- config_name: MIME Types
  data_files: "mimetypes.csv"
- config_name: Top-level domains
  data_files: "tlds.csv"
---

# Common Crawl Statistics

Number of pages, distribution of top-level domains, crawl overlaps, etc. - basic metrics about Common Crawl Monthly Crawl Archives. For more detailed information and graphs, please visit our [official statistics page](https://commoncrawl.github.io/cc-crawl-statistics/). Here you can find the statistics files described in the sections below.
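
Each file is also exposed as a dataset configuration (see the YAML header above). A minimal loading sketch using the 🤗 `datasets` library; the repository ID `commoncrawl/statistics` is an assumption, so substitute the actual path of this dataset on the Hub:

```python
# Minimal sketch, assuming this dataset lives at "commoncrawl/statistics"
# on the Hugging Face Hub; adjust the repository ID if it differs.
from datasets import load_dataset

crawl_size = load_dataset("commoncrawl/statistics", "Crawl Size")
print(crawl_size["train"][0])
```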

## Charsets

The [character set or encoding](https://en.wikipedia.org/wiki/Character_encoding) of HTML pages is identified by [Tika](https://tika.apache.org/)'s [AutoDetectReader](https://tika.apache.org/1.25/api/org/apache/tika/detect/AutoDetectReader.html); only HTML pages are analyzed. The table shows the percentages of character sets used to encode the HTML pages crawled by the latest monthly crawls.
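
Tika's AutoDetectReader is a Java component; as a rough Python analogue (a different detector, so its guesses may disagree with the published table), the `chardet` package infers an encoding from raw bytes:

```python
# Rough analogue of charset detection: chardet, not Tika's
# AutoDetectReader, so results can differ from the table.
import chardet

raw = "<html><body>héllo wörld</body></html>".encode("iso-8859-1")
guess = chardet.detect(raw)
print(guess["encoding"], guess["confidence"])
```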

## Crawl Metrics

Crawler-related metrics are extracted from the crawler log files and include (see the sketch after this list)

- the size of the URL database (CrawlDb)
- the fetch list size (number of URLs scheduled for fetching)
- the response status of the fetch:
  - success
  - redirect
  - denied (forbidden by HTTP 403 or robots.txt)
  - failed (404, host not found, etc.)
- usage of http/https URL protocols (schemes)
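
As a toy illustration of the response-status breakdown listed above (the column names and counts are hypothetical, not the actual schema of `crawlmetrics.csv`):

```python
# Toy tabulation of the fetch status categories listed above.
# Column names and counts are hypothetical, not crawlmetrics.csv's schema.
import pandas as pd

df = pd.DataFrame({
    "status": ["success", "redirect", "denied", "failed"],
    "pages":  [2_500_000, 300_000, 50_000, 150_000],
})
df["share"] = df["pages"] / df["pages"].sum()
print(df)
```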

## Crawl Overlaps

Overlaps between monthly crawl archives are calculated and plotted as [Jaccard similarity](https://en.wikipedia.org/wiki/Jaccard_index) of unique URLs or content digests. The cardinality of the monthly crawls and the union of two crawls are [Hyperloglog](https://en.wikipedia.org/wiki/HyperLogLog) estimates.

Note that the content overlaps are small and of the same order of magnitude as the 1% error rate of the Hyperloglog cardinality estimates.
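
A small sketch of the estimation idea, using the `datasketch` library's HyperLogLog (an assumption for illustration; the published statistics are computed by Common Crawl's own tooling). The intersection is derived from the union via inclusion-exclusion:

```python
# Jaccard similarity from HyperLogLog cardinality estimates,
# using datasketch (not Common Crawl's actual tooling).
from datasketch import HyperLogLog

def hll(items, p=14):
    h = HyperLogLog(p=p)
    for item in items:
        h.update(item.encode("utf-8"))
    return h

crawl_a = hll(f"https://example.com/page/{i}" for i in range(100_000))
crawl_b = hll(f"https://example.com/page/{i}" for i in range(50_000, 150_000))

union = HyperLogLog(p=14)
union.merge(crawl_a)
union.merge(crawl_b)

# |A ∩ B| ≈ |A| + |B| - |A ∪ B|  (inclusion-exclusion on the estimates)
inter = crawl_a.count() + crawl_b.count() - union.count()
print("Jaccard ≈", inter / union.count())
```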

## Crawl Size

The number of released pages per month varies over time due to changes in the number of available seeds, the scheduling policy for page revisits, and crawler operation issues. Because of duplicates, the numbers of unique URLs or unique content digests (here Hyperloglog estimates) are lower than the number of page captures.

The size at various aggregation levels (host, domain, top-level domain / public suffix) is shown in the next plot. Note that the scale differs per aggregation level; see the exponential notation after the labels.
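
The per-crawl duplicate ratios behind this note ship in `crawlduplicates.txt`. Following the `Duplicates` config declared in the YAML header (whitespace-separated, one header row, the column names listed there), a minimal pandas sketch:

```python
# Parse crawlduplicates.txt as declared in the Duplicates config above:
# whitespace-separated, first row is a header row (header: 0),
# columns renamed to the names from the YAML.
import pandas as pd

df = pd.read_csv(
    "crawlduplicates.txt",
    sep=r"\s+",
    header=0,
    names=["id", "crawl", "page", "url",
           "digest estim.", "1-(urls/pages)", "1-(digests/pages)"],
)
print(df.head())
```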

## Domains Top 500

This table shows the top 500 registered domains (in terms of page captures) of the last main/monthly crawl.

Note that the ranking by page captures only partially corresponds to the importance of domains, as the crawler respects the robots.txt and tries hard not to overload web servers. Highly ranked domains tend to be underrepresented. If you're looking for a list of domain or host names ranked by page rank or harmonic centrality, consider using one of the [webgraph datasets](https://github.com/commoncrawl/cc-webgraph#exploring-webgraph-data-sets) instead.
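
For readers who want to group page captures by registered domain themselves, a sketch using the `tldextract` package (an assumption for illustration; it is not part of the statistics pipeline):

```python
# Illustrative only: derive the registered domain of a host name with
# tldextract (public-suffix aware; not part of the statistics pipeline).
import tldextract

ext = tldextract.extract("https://www.example.co.uk/path")
print(ext.registered_domain)  # -> "example.co.uk"
```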

## Languages

The language of a document is identified by [Compact Language Detector 2 (CLD2)](https://github.com/CLD2Owners/cld2). It is able to identify 160 different languages and up to 3 languages per document. The table lists the percentage covered by the primary language of a document (returned first by CLD2). So far, only HTML pages are passed to the language detector.
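
CLD2 is available to Python via bindings such as the `pycld2` package (an assumption as to packaging; the table itself is produced by Common Crawl's pipeline). A minimal detection sketch:

```python
# Minimal sketch using pycld2, a Python binding for CLD2.
import pycld2 as cld2

is_reliable, bytes_found, details = cld2.detect(
    "Common Crawl veröffentlicht monatliche Webarchive."
)
# `details` holds up to three (name, code, percent, score) tuples;
# the first entry is the primary language counted in the table.
print(is_reliable, details[0])
```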

## MIME Types

The crawled content is dominated by HTML pages and contains only a small percentage of other document formats. The tables show the percentage of the top 100 media or MIME types of the latest monthly crawls.

While the first table is based on the `Content-Type` HTTP header, the second uses the MIME type detected by [Apache Tika](https://tika.apache.org/) based on the actual content.
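
A rough analogue of the content-based detection, using `python-magic` (which wraps libmagic rather than Tika, so its verdicts can differ from the table):

```python
# Compare a claimed Content-Type header with a content-based guess.
# python-magic (libmagic) stands in for Tika here and may disagree.
import magic

body = b"%PDF-1.7 ..."             # bytes as served
header_type = "text/html"          # Content-Type claimed by the server
detected = magic.from_buffer(body, mime=True)
print(header_type, "vs", detected)  # e.g. text/html vs application/pdf
```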

## Top-level Domains

[Top-level domains](https://en.wikipedia.org/wiki/Top-level_domain) (abbrev. "TLD"/"TLDs") are a significant indicator for the representativeness of the data, i.e. whether the data set or a particular crawl is biased towards certain countries, regions, or languages.

Note that the top-level domain is defined here as the right-most element of a host name (`com` in `www.example.com`). [Country-code second-level domains](https://en.wikipedia.org/wiki/Second-level_domain#Country-code_second-level_domains) ("ccSLD") and [public suffixes](https://en.wikipedia.org/wiki/Public_Suffix_List) are not covered by this metric.
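
Under that definition, extracting the TLD is a plain string split (sketch):

```python
# TLD as defined above: the right-most element of the host name.
# Public suffixes like "co.uk" are deliberately NOT resolved here.
def tld(host: str) -> str:
    return host.rsplit(".", 1)[-1]

print(tld("www.example.com"))    # -> "com"
print(tld("www.example.co.uk"))  # -> "uk", not the public suffix "co.uk"
```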