Update README.md
README.md
CHANGED
@@ -8,10 +8,10 @@ A dataset containing novels, epics and essays.
 The files are as follows:
 - main.txt, a file with all the texts, every text on a newline, all English
 - vocab.txt, a file with the trained (BERT) vocab, a newline a new word
-- train.csv, a file with length 129 sequences of tokens, csv of ints, containing 48,758 samples (6,289,782 tokens)
-- test.csv, the test split in the same way, 5,417 samples (698,793 tokens)
 - DatasetDistribution.png, a file with all the texts and a plot with character length
 
+There are some 7 million tokens in total.
+
 ## Texts
 The texts used are these:
 - Wuthering Heights
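
For context on the listed file formats, here is a minimal loading sketch in Python. The file names (main.txt, vocab.txt) and their one-item-per-line layout come from the listing above; the UTF-8 encoding and the variable names are assumptions, not part of the dataset's own tooling.

```python
# Minimal sketch for reading the files kept in the README listing.
# File names and the one-item-per-line layout come from the README;
# UTF-8 encoding is an assumption.

# main.txt: every text on its own line, all English
with open("main.txt", encoding="utf-8") as f:
    texts = f.read().splitlines()

# vocab.txt: one trained (BERT) vocabulary entry per line
with open("vocab.txt", encoding="utf-8") as f:
    vocab = f.read().splitlines()

print(f"{len(texts)} texts, {len(vocab)} vocab entries")
```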