parquet-converter committed on
Commit 902325e · 1 Parent(s): b2c20f0

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,164 +0,0 @@
- ---
- annotations_creators:
- - no-annotation
- language_creators:
- - expert-generated
- language:
- - es
- language_bcp47:
- - es-VE
- license:
- - cc-by-nc-nd-4.0
- multilinguality:
- - monolingual
- pretty_name: mammut-corpus-venezuela
- size_categories:
- - unknown
- source_datasets:
- - original
- task_categories:
- - sequence-modeling
- task_ids:
- - language-modeling
- ---
-
- # mammut-corpus-venezuela
-
- A Hugging Face dataset for testing purposes. The train dataset is `mammut/mammut-corpus-venezuela`.
-
- ## 1. How to use
-
- Load this dataset directly with the `datasets` library:
-
- `>>> from datasets import load_dataset`
- `>>> dataset = load_dataset("mammut/mammut-corpus-venezuela")`
-
- ## 2. Dataset Summary
-
- **mammut-corpus-venezuela** is a dataset for Spanish language modeling. It comprises a large number of Venezuelan and Latin-American Spanish texts, manually selected and collected in 2021. The data was gathered by web scraping different portals, downloading the history of Telegram group chats, and selecting Venezuelan and Latin-American Spanish corpora available online. The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers. Social biases may be present, and a percentage of the texts may be fake or contain misleading or offensive language.
-
- Each record in the dataset contains the author of the text (anonymized for conversation authors), the date on which the text entered the corpus, the text itself (automatically tokenized at the sentence level for sources other than conversations), the source of the text, the title of the text, the number of tokens (excluding punctuation marks) in the text, and the linguistic register of the text.
-
- This is the test set for the `mammut/mammut-corpus-venezuela` dataset.
-
- ## 3. Supported Tasks and Leaderboards
-
- This dataset can be used for language modeling testing.
-
- ## 4. Languages
-
- The dataset contains Venezuelan and Latin-American Spanish.
-
- ## 5. Dataset Structure
-
- Dataset structure features.
-
- ### 5.1 Data Instances
-
- An example from the dataset:
-
- {
-   "AUTHOR": "author in title",
-   "TITLE": "Luis Alberto Buttó: Hecho en socialismo",
-   "SENTENCE": "Históricamente, siempre fue así.",
-   "DATE": "2021-07-04 07:18:46.918253",
-   "SOURCE": "la patilla",
-   "TOKENS": "4",
-   "TYPE": "opinion/news"
- }
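As a sketch of how records with this schema might be handled downstream (the records below are invented for illustration and show only a subset of the fields, but the field names and the string-typed `TOKENS` value follow the example record above):

```python
# Hypothetical records mirroring the schema of the example record above
records = [
    {"SENTENCE": "Históricamente, siempre fue así.", "TOKENS": "4", "TYPE": "opinion/news"},
    {"SENTENCE": "Se vende repuesto original.", "TOKENS": "4", "TYPE": "conversation"},
]

# TOKENS is stored as a string, so cast it before aggregating
total_tokens = sum(int(r["TOKENS"]) for r in records)
print(total_tokens)  # 8

# Group sentences by linguistic register (the TYPE field)
by_type = {}
for r in records:
    by_type.setdefault(r["TYPE"], []).append(r["SENTENCE"])
print(sorted(by_type))  # ['conversation', 'opinion/news']
```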
-
- The token counts are provided below:
-
- ### 5.2 Total of tokens (excluding punctuation marks)
-
- Test: 4,876,739.
-
- ### 5.3 Data Fields
-
- Each record has several fields:
-
- AUTHOR: author of the text. Anonymized for conversation authors.
- DATE: date on which the text entered the corpus.
- SENTENCE: the text. Automatically tokenized for sources other than conversations.
- SOURCE: source of the text.
- TITLE: title of the text from which SENTENCE originates.
- TOKENS: number of tokens (excluding punctuation marks) in SENTENCE.
- TYPE: linguistic register of the text.
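The TOKENS field excludes punctuation marks. A minimal sketch of one way such a count could be computed (an assumption — the actual tokenizer used to build the corpus is not documented here):

```python
import string

def count_tokens(sentence: str) -> int:
    """Count whitespace-separated tokens, ignoring tokens that are only punctuation.

    Note: string.punctuation covers ASCII punctuation only; inverted marks
    such as ¡ and ¿ would need extra handling.
    """
    return sum(1 for tok in sentence.split() if tok.strip(string.punctuation))

print(count_tokens("Históricamente, siempre fue así."))  # 4, matching the example record
```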
-
- ### 5.4 Data Splits
-
- The mammut-corpus-venezuela dataset has two splits: train and test. Below are the statistics:
-
- Number of Instances in Split.
-
- Test: 157,011.
-
- ## 6. Dataset Creation
-
- ### 6.1 Curation Rationale
-
- The purpose of the mammut-corpus-venezuela dataset is language modeling. It can be used for pre-training a model from scratch or for fine-tuning another pre-trained model.
-
- ### 6.2 Source Data
-
- **6.2.1 Initial Data Collection and Normalization**
-
- The data consists of opinion articles and text messages. It was collected by web scraping different portals, downloading the history of Telegram group chats, and selecting Venezuelan and Latin-American Spanish corpora available online.
-
- The text from the web scraping process was separated into sentences and automatically tokenized for sources other than conversations.
-
- An Arrow Parquet file was created.
-
- Text sources: El Estímulo (website), cinco8 (website), csm-1990 (oral speaking corpus), "El atajo más largo" (blog), El Pitazo (website), La Patilla (website), Venezuelan movie subtitles, Preseea Mérida (oral speaking corpus), Prodavinci (website), Runrunes (website), and Telegram group chats.
-
- **6.2.2 Who are the source language producers?**
-
- The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers.
-
- ### 6.3 Annotations
-
- **6.3.1 Annotation process**
-
- At the moment the dataset does not contain any additional annotations.
-
- **6.3.2 Who are the annotators?**
-
- Not applicable.
-
- ### 6.4 Personal and Sensitive Information
-
- The data is partially anonymized. It also includes messages from Telegram selling chats; some of these messages may be fake or contain misleading or offensive language.
-
- ## 7. Considerations for Using the Data
-
- ### 7.1 Social Impact of Dataset
-
- The purpose of this dataset is to help the development of language modeling (pre-training or fine-tuning) in Venezuelan Spanish.
-
- ### 7.2 Discussion of Biases
-
- Most of the content comes from political, economic, and sociological opinion articles. Social biases may be present.
-
- ### 7.3 Other Known Limitations
-
- Not applicable.
-
- ## 8. Additional Information
-
- ### 8.1 Dataset Curators
-
- The data was originally collected by Lino Urdaneta and Miguel Riveros from Mammut.io.
-
- ### 8.2 Licensing Information
-
- Not applicable.
-
- ### 8.3 Citation Information
-
- Not applicable.
-
- ### 8.4 Contributions
-
- Not applicable.
mammut-corpus-venezuela-test-set.parquet → mammut--mammut-corpus-venezuela-test-set/parquet-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5447d9d772b34008e50b52b901616ea30beee7a264d34a73ebe85d6970d4d8f0
- size 26259768

  version https://git-lfs.github.com/spec/v1
+ oid sha256:d918ceb34b30a79a26047b720bc04ebdaf8c3661cad1ed9bace21d519f0f0a0d
+ size 28667091