This dataset has been collected from Twitter and contains more than 41 GB of clean data.

Arabic

### Source Data

Twitter
### Example of loading the data with streaming:

```py
from datasets import load_dataset

dataset = load_dataset("pain/Arabic-Tweets", split="train", streaming=True)
print(next(iter(dataset)))
```
### Example of loading the data without streaming (it will be downloaded locally):

```py
from datasets import load_dataset

dataset = load_dataset("pain/Arabic-Tweets", split="train")
print(dataset[0])
```

#### Initial Data Collection and Normalization

The collected data comprises 100 GB of raw Twitter data; only tweets containing Arabic characters were crawled. The crawl contained a large number of Persian tweets as well as many Arabic words with repeated characters, so the raw data was processed as follows to improve its quality: hashtags, mentions, and links were removed; tweets containing Persian characters, three consecutive repeated characters, or a single-character word were dropped; and Arabic letters were normalized.