pretty_name: Drama Llama dataset
size_categories:
- 10K<n<100K
---

# DramaLlama dataset

![title.png](title.png)

This is the dataset repository for DramaLlama. It contains the scripts used to gather and prepare the dataset.

_Note: This repository builds upon the findings of https://github.com/molbal/llm-text-completion-finetune_

## Step 1: Getting novels

We will use Project Gutenberg again to gather novels. Let's get some drama-related categories; I will aim for a larger dataset size this time.

I'm running the following commands:
```bash
pip install requests

python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "detective fiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "crime nonfiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "mystery fiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "detective fiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "gothic fiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "horror" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "romantic fiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "short stories" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "western" --num_records 10000
```
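
The acquisition script is not reproduced in this README. As a rough sketch of the idea (searching the Project Gutenberg catalogue by topic and saving the plain-text files), something like the snippet below would work; the Gutendex endpoint, field names, and `{id}.txt` file naming are assumptions for illustration, not necessarily what `step1-acquire.py` does.

```python
# Illustrative sketch only, not the actual step1-acquire.py:
# query the (assumed) Gutendex catalogue for a topic and save plain-text books.
import argparse
import os
import requests

def download_topic(output_dir: str, topic: str, num_records: int) -> None:
    os.makedirs(output_dir, exist_ok=True)
    url = "https://gutendex.com/books"                       # unofficial Gutenberg catalogue API
    params = {"topic": topic, "mime_type": "text/plain"}
    saved = 0
    while url and saved < num_records:
        page = requests.get(url, params=params, timeout=30).json()
        params = None                                        # the "next" URL already carries the query
        for book in page.get("results", []):
            text_url = next((u for mime, u in book.get("formats", {}).items()
                             if mime.startswith("text/plain")), None)
            if not text_url:
                continue
            path = os.path.join(output_dir, f"{book['id']}.txt")
            with open(path, "wb") as f:
                f.write(requests.get(text_url, timeout=60).content)
            saved += 1
            if saved >= num_records:
                break
        url = page.get("next")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--output_dir", required=True)
    parser.add_argument("--topic", required=True)
    parser.add_argument("--num_records", type=int, default=10000)
    args = parser.parse_args()
    download_topic(args.output_dir, args.topic, args.num_records)
```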

## Step 2: Preprocessing

### Step 2/a: Stripping header and footer

Now we need to strip the headers and footers of the files. I noticed that some files failed to download; those are the ones without a file extension. This might be caused by a bug in the downloader script, but it was only ~200 errors out of ~4000 downloads for me, so I did not worry about it.

```bash
python .\pipeline\step2a-strip.py --input_dir "./training-data/0_raw/" --output_dir "./training-data/2a_stripped/"
```
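
The stripping script is not shown here either. A minimal sketch of the usual approach, cutting at the standard Project Gutenberg `*** START OF ... ***` and `*** END OF ... ***` marker lines, could look like this; the `.txt` glob and the marker strings are assumptions, not necessarily what `step2a-strip.py` does.

```python
# Illustrative sketch only, not the actual step2a-strip.py:
# keep only the text between the standard Project Gutenberg START/END markers.
import argparse
import pathlib

START_MARK = "*** START OF"   # e.g. "*** START OF THE PROJECT GUTENBERG EBOOK ... ***"
END_MARK = "*** END OF"

def strip_gutenberg(text: str) -> str:
    start = text.find(START_MARK)
    if start != -1:
        line_end = text.find("\n", start)
        start = line_end + 1 if line_end != -1 else start    # drop the marker line itself
    else:
        start = 0
    end = text.find(END_MARK, start)
    if end == -1:
        end = len(text)
    return text[start:end].strip()

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--input_dir", required=True)
    parser.add_argument("--output_dir", required=True)
    args = parser.parse_args()
    out_dir = pathlib.Path(args.output_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    # files that failed to download have no extension, so a .txt glob skips them
    for path in pathlib.Path(args.input_dir).glob("*.txt"):
        cleaned = strip_gutenberg(path.read_text(encoding="utf-8", errors="ignore"))
        (out_dir / path.name).write_text(cleaned, encoding="utf-8")

if __name__ == "__main__":
    main()
```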

### Step 2/b: Cleaning

We do a bit more cleaning here, driven by two list files: a blacklist and a junklist. The blacklist contains expressions that we do not want included in the training data; I filled it with phrases typical of ChatGPT output. (We do not really need to worry, as our source texts come from well **before** ChatGPT, but still.) The junklist contains strings that are simply removed from the text, mostly distribution notes.

Here we chunk the text into small pieces (~250), and if a chunk contains a blacklisted sentence, it is sent to our local LLM to be rephrased.

_Note: Ollama must be installed in the local environment for this step._
```bash
ollama pull mistral
pip install nltk ollama
python .\pipeline\step2b-clean.py --input_dir "./training-data/2a_stripped/" --output_dir "./training-data/2b_cleaned/" --llm "mistral"
```

After this, the script puts the files back together in the output directory.
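
The cleaning logic described above could be sketched roughly as below: drop junklist strings, split the text with NLTK, and ask the local Ollama model to rephrase chunks that match the blacklist. The function name, prompt wording, and word-count-based chunking are assumptions for illustration, not the repository's actual code.

```python
# Illustrative sketch only, not the actual step2b-clean.py.
import nltk
import ollama
from nltk.tokenize import sent_tokenize

for resource in ("punkt", "punkt_tab"):        # sentence tokenizer models
    nltk.download(resource, quiet=True)

def clean_text(text, blacklist, junklist, llm="mistral", chunk_words=250):
    for junk in junklist:                      # junklist entries are removed outright
        text = text.replace(junk, "")

    # build chunks of roughly chunk_words words from whole sentences
    chunks, current, length = [], [], 0
    for sentence in sent_tokenize(text):
        current.append(sentence)
        length += len(sentence.split())
        if length >= chunk_words:
            chunks.append(" ".join(current))
            current, length = [], 0
    if current:
        chunks.append(" ".join(current))

    cleaned = []
    for chunk in chunks:
        if any(expr.lower() in chunk.lower() for expr in blacklist):
            reply = ollama.chat(model=llm, messages=[{
                "role": "user",
                "content": "Rephrase the following passage, keeping its meaning "
                           "and style, without the flagged wording:\n\n" + chunk,
            }])
            chunk = reply["message"]["content"]
        cleaned.append(chunk)
    return " ".join(cleaned)                   # put the pieces back together
```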

## Step 3: Chunking

We chunk the dataset now and save it into a parquet file.

```bash
pip install pandas pyarrow
python .\pipeline\step3-chunking.py --source_dir "./training-data/2b_cleaned/" --output_file "./training-data/data.parquet"
```
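
The chunk-to-parquet step could look roughly like the following with pandas and pyarrow; the `.txt` glob, the 250-word chunk size, and the column names are assumptions for illustration, not necessarily what `step3-chunking.py` does.

```python
# Illustrative sketch only, not the actual step3-chunking.py:
# collect the cleaned text files into fixed-size chunks and write one parquet file.
import argparse
import pathlib
import pandas as pd

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--source_dir", required=True)
    parser.add_argument("--output_file", required=True)
    parser.add_argument("--chunk_words", type=int, default=250)   # assumed chunk size
    args = parser.parse_args()

    rows = []
    for path in sorted(pathlib.Path(args.source_dir).glob("*.txt")):
        words = path.read_text(encoding="utf-8", errors="ignore").split()
        for i in range(0, len(words), args.chunk_words):
            rows.append({"source": path.name,
                         "text": " ".join(words[i:i + args.chunk_words])})

    # pandas uses pyarrow as the parquet engine when it is installed
    pd.DataFrame(rows).to_parquet(args.output_file, index=False)

if __name__ == "__main__":
    main()
```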

## Step 4: 🤗 dataset upload

We upload the dataset to Hugging Face:
https://huggingface.co/datasets/molbal/dramallama-novels
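
For reference, one possible way to publish the parquet file is with the 🤗 `datasets` library; the exact upload method used for this repository is not documented here, so treat this as an illustration.

```python
# Illustrative upload sketch; requires a prior `huggingface-cli login`.
from datasets import load_dataset

dataset = load_dataset("parquet", data_files="./training-data/data.parquet")
dataset.push_to_hub("molbal/dramallama-novels")
```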