Datasets: Lino-Urdaneta-Mammut

Commit 856ee15 (parent: 2155124): Update README.md
README structure in titles.

README.md (changed):
# 1. How to use

How to load this dataset directly with the datasets library:

`>>> from datasets import load_dataset`
`>>> dataset = load_dataset("mammut-corpus-venezuela")`
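Once loaded, the returned object maps split names to example records. The snippet below sketches that access pattern with a plain-dict stand-in (field names and values are taken from the example record in this card), so it runs without downloading the corpus; the real call returns a `datasets.DatasetDict` with the same indexing behavior.

```python
# Stand-in for the object returned by load_dataset("mammut-corpus-venezuela"):
# a mapping from split name to a list of records. The record is the example
# from this dataset card; the real object is a datasets.DatasetDict.
dataset = {
    "train": [
        {
            "AUTHOR": "author in title",
            "TITLE": "Luis Alberto Buttó: Hecho en socialismo",
            "SENTENCE": "Históricamente, siempre fue así.",
            "DATE": "2021-07-04 07:18:46.918253",
            "SOURCE": "la patilla",
            "TOKENS": "4",
            "TYPE": "opinion/news",
        }
    ],
    "test": [],
}

# Select a split, then a record, then a field.
first = dataset["train"][0]
print(first["SOURCE"])  # la patilla
print(first["TYPE"])    # opinion/news
```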
# 2. Dataset Summary

**mammut-corpus-venezuela** is a dataset for Spanish language modeling. It comprises a large number of Venezuelan and Latin-American Spanish texts, manually selected and collected in 2021. The data was collected by web scraping different portals, downloading Telegram group chats' history, and selecting Venezuelan and Latin-American Spanish corpora available online. The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers. Social biases may be present, and a percentage of the texts may be fake or contain misleading or offensive language.

The dataset has a train split and a test split.
# 3. Supported Tasks and Leaderboards

This dataset can be used for language modeling.

# 4. Languages

The dataset contains Venezuelan and Latin-American Spanish.
# 5. Dataset Structure

## 5.1 Data Instances

An example from the dataset:

```json
{
  "AUTHOR": "author in title",
  "TITLE": "Luis Alberto Buttó: Hecho en socialismo",
  "SENTENCE": "Históricamente, siempre fue así.",
  "DATE": "2021-07-04 07:18:46.918253",
  "SOURCE": "la patilla",
  "TOKENS": "4",
  "TYPE": "opinion/news"
}
```
39 |
|
40 |
The average word token count are provided below:
|
|
|
45 |
Test
|
46 |
4,876,739
|
47 |
|
48 |
## 5.2 Data Fields

The data has several fields:

AUTHOR: author of the text. It is anonymized for conversation authors.
TITLE: title of the text from which SENTENCE originates.
TOKENS: number of tokens (excluding punctuation marks) of SENTENCE.
TYPE: linguistic register of the text.
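When consuming the dataset, a lightweight schema check can catch records with missing fields early. The helper below is a hypothetical consumer-side convenience, not part of the dataset itself; the field names come from the list above and from the example record in section 5.1.

```python
# Expected field names, taken from this dataset card.
EXPECTED_FIELDS = {"AUTHOR", "TITLE", "SENTENCE", "DATE", "SOURCE", "TOKENS", "TYPE"}

def missing_fields(record: dict) -> set:
    """Return the expected field names that are absent from a record."""
    return EXPECTED_FIELDS - record.keys()

record = {
    "AUTHOR": "author in title",
    "TITLE": "Luis Alberto Buttó: Hecho en socialismo",
    "SENTENCE": "Históricamente, siempre fue así.",
    "DATE": "2021-07-04 07:18:46.918253",
    "SOURCE": "la patilla",
    "TOKENS": "4",
    "TYPE": "opinion/news",
}
print(missing_fields(record))           # set() for a complete record
print(missing_fields({"AUTHOR": "x"}))  # every other expected field
```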
## 5.3 Data Splits

Size of downloaded dataset files:
Size of the generated dataset:
Total amount of disk used:

The mammut-corpus-venezuela dataset has two splits: train and test. Below are the statistics:

Dataset Split
Test: 157,011
# 6. Dataset Creation

## 6.1 Curation Rationale

The purpose of the mammut-corpus-venezuela dataset is language modeling. It can be used for pre-training a model from scratch or for fine-tuning an already pre-trained model.

## 6.2 Source Data

### 6.2.1 Initial Data Collection and Normalization
The data consists of opinion articles and text messages. It was collected by web scraping different portals, downloading Telegram group chats' history, and selecting Venezuelan and Latin-American Spanish corpora available online.

An Arrow parquet file was created.

Text sources: El Estímulo (website), cinco8 (website), csm_1990 (oral speaking corpus), "El atajo más largo" (blog), El Pitazo (website), La Patilla (website), Venezuelan movie subtitles, Preseea Mérida (oral speaking corpus), Prodavinci (website), Runrunes (website), and Telegram group chats.
### 6.2.2 Who are the source language producers?

The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers.

## 6.3 Annotations

### 6.3.1 Annotation process

At the moment the dataset does not contain any additional annotations.

### 6.3.2 Who are the annotators?

Not applicable.

## 6.4 Personal and Sensitive Information

The data is partially anonymized. It also contains messages from Telegram selling chats; some percentage of these messages may be fake or contain misleading or offensive language.
# 7. Considerations for Using the Data

## 7.1 Social Impact of Dataset

The purpose of this dataset is to support the development of language models (pre-training or fine-tuning) in Venezuelan Spanish.

## 7.2 Discussion of Biases

Most of the content comes from political, economic, and sociological opinion articles. Social biases may be present.

## 7.3 Other Known Limitations

Not applicable.
# 8. Additional Information

## 8.1 Dataset Curators

The data was originally collected by Lino Urdaneta and Miguel Riveros from Mammut.io.

## 8.2 Licensing Information

Not applicable.

## 8.3 Citation Information

Not applicable.

## 8.4 Contributions

Not applicable.