winglian committed
Commit 34af1b4 · Parent: 87d7825

update readme

Files changed (2)
  1. README.md +14 -22
  2. data/README.md +20 -4
README.md CHANGED
@@ -1,35 +1,27 @@
- # Axolotl

  #### You know you're going to axolotl questions

  ## Getting Started

- - Download some datasets.

- ```shell
- curl https://raw.githubusercontent.com/tloen/alpaca-lora/main/alpaca_data_gpt4.json -o data/raw/alpaca_data_gpt4.json
- curl https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json -L -o data/raw/vicuna_cleaned.json
- curl https://github.com/teknium1/GPTeacher/blob/main/Instruct/gpt4-instruct-similarity-0.6-dataset.json?raw=true -L -o data/raw/gpt4-instruct-similarity-0.6-dataset.json
- curl https://github.com/teknium1/GPTeacher/blob/main/Roleplay/roleplay-similarity_0.6-instruct-dataset.json?raw=true -L -o data/raw/roleplay-similarity_0.6-instruct-dataset.json
  ```

- - Convert the JSON data files to JSONL.
-
- ```shell
- python3 ./scripts/alpaca_json_to_jsonl.py --input data/alpaca_data_gpt4.json > data/alpaca_data_gpt4.jsonl
- python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/vicuna_cleaned.json > data/vicuna_cleaned.jsonl
- python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/roleplay-similarity_0.6-instruct-dataset.json > data/roleplay-similarity_0.6-instruct-dataset.jsonl
- python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/gpt4-instruct-similarity-0.6-dataset.json > data/gpt4-instruct-similarity-0.6-dataset.jsonl
- ```

- - Using JSONL makes it easier to subset the data if you want a smaller training set, i.e get 2000 random examples.

- ```shell
- shuf -n2000 data/vicuna_cleaned.jsonl > data/vicuna_cleaned.subset0.jsonl
- ```

- - Create a new or update the existing YAML config (config/pythia_1_2B_alpaca.yml)[config/pythia_1_2B_alpaca.yml]
- - Install python dependencies `pip3 install -e .[int4_triton]` or `pip3 install -e .[int4]`
  - If not using `int4` or `int4_triton`, run `pip install "peft @ git+https://github.com/huggingface/peft.git"`
  - Configure accelerate `accelerate config` or update `~/.cache/huggingface/accelerate/default_config.yaml`

@@ -52,4 +44,4 @@ use_cpu: false
  ```

  - Train! `accelerate launch scripts/finetune.py`, make sure to choose the correct YAML config file
- - Alternatively you can pass in the config file like: `accelerate launch scripts/finetune.py configs/llama_7B_alpaca.yml`
 
+ # Axolotl

  #### You know you're going to axolotl questions

  ## Getting Started

+ - Point the config you are using to a huggingface hub dataset (see [configs/llama_7B_4bit.yml](https://github.com/winglian/axolotl/blob/main/configs/llama_7B_4bit.yml#L6-L8))

+ ```yaml
+ datasets:
+   - path: vicgalle/alpaca-gpt4
+     type: alpaca
  ```

+ - Optionally Download some datasets, see [data/README.md](data/README.md)

+ - Create a new or update the existing YAML config [config/pythia_1_2B_alpaca.yml](config/pythia_1_2B_alpaca.yml)
+ - Install python dependencies with ONE of the following:
+   - `pip3 install -e .[int4]` (recommended)
+   - `pip3 install -e .[int4_triton]`
+   - `pip3 install -e .`
  - If not using `int4` or `int4_triton`, run `pip install "peft @ git+https://github.com/huggingface/peft.git"`
  - Configure accelerate `accelerate config` or update `~/.cache/huggingface/accelerate/default_config.yaml`

@@ -52,4 +44,4 @@ use_cpu: false
  ```

  - Train! `accelerate launch scripts/finetune.py`, make sure to choose the correct YAML config file
+ - Alternatively you can pass in the config file like: `accelerate launch scripts/finetune.py configs/llama_7B_alpaca.yml`
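For context on the `type: alpaca` setting introduced in the diff above: alpaca-style datasets are records with `instruction`, `input`, and `output` fields. The sketch below shows the commonly used alpaca prompt template as an illustration only; it is a simplified approximation, not axolotl's actual prompt-building code.

```python
# Illustrative sketch of the common alpaca prompt template.
# This approximates how an alpaca-format record is typically rendered
# into a training prompt; axolotl's real implementation may differ.

def alpaca_prompt(record: dict) -> str:
    """Render an {instruction, input, output} record into a prompt string."""
    if record.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{record['instruction']}\n\n"
            f"### Input:\n{record['input']}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{record['instruction']}\n\n"
        "### Response:\n"
    )

example = {
    "instruction": "Name three primary colors.",
    "input": "",
    "output": "Red, blue, and yellow.",
}
print(alpaca_prompt(example))
```

Records without an `input` field use the shorter template, which is why the two variants are distinguished above.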
data/README.md CHANGED
@@ -1,8 +1,24 @@

  ```shell
- curl https://raw.githubusercontent.com/tloen/alpaca-lora/main/alpaca_data_gpt4.json -o raw/alpaca_data_gpt4.json
- curl https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json -L -o raw/vicuna_cleaned.json
- curl https://github.com/teknium1/GPTeacher/blob/main/Instruct/gpt4-instruct-similarity-0.6-dataset.json?raw=true -L -o raw/gpt4-instruct-similarity-0.6-dataset.json
- curl https://github.com/teknium1/GPTeacher/blob/main/Roleplay/roleplay-similarity_0.6-instruct-dataset.json?raw=true -L -o raw/roleplay-similarity_0.6-instruct-dataset.json
  ```
 
+ - Download some datasets

+ ```shell
+ curl https://raw.githubusercontent.com/tloen/alpaca-lora/main/alpaca_data_gpt4.json -o data/raw/alpaca_data_gpt4.json
+ curl https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json -L -o data/raw/vicuna_cleaned.json
+ curl https://github.com/teknium1/GPTeacher/blob/main/Instruct/gpt4-instruct-similarity-0.6-dataset.json?raw=true -L -o data/raw/gpt4-instruct-similarity-0.6-dataset.json
+ curl https://github.com/teknium1/GPTeacher/blob/main/Roleplay/roleplay-similarity_0.6-instruct-dataset.json?raw=true -L -o data/raw/roleplay-similarity_0.6-instruct-dataset.json
+ ```

+ - Convert the JSON data files to JSONL.

+ ```shell
+ python3 ./scripts/alpaca_json_to_jsonl.py --input data/alpaca_data_gpt4.json > data/alpaca_data_gpt4.jsonl
+ python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/vicuna_cleaned.json > data/vicuna_cleaned.jsonl
+ python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/roleplay-similarity_0.6-instruct-dataset.json > data/roleplay-similarity_0.6-instruct-dataset.jsonl
+ python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/gpt4-instruct-similarity-0.6-dataset.json > data/gpt4-instruct-similarity-0.6-dataset.jsonl
+ ```

+ - Using JSONL makes it easier to subset the data if you want a smaller training set, i.e get 2000 random examples.

  ```shell
+ shuf -n2000 data/vicuna_cleaned.jsonl > data/vicuna_cleaned.subset0.jsonl
  ```
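The convert-then-subset steps in the diff above can be sketched in plain Python. This is an illustrative stand-in for what `scripts/alpaca_json_to_jsonl.py` plus `shuf -n` accomplish, not the script itself; the function names and sample records here are hypothetical.

```python
import json
import random

def json_to_jsonl(records: list) -> str:
    """Serialize a list of dicts as JSON Lines: one compact JSON object per line."""
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in records)

def subset_jsonl(jsonl_text: str, n: int, seed: int = 0) -> str:
    """Pick n random lines from JSONL text, analogous to `shuf -nN file.jsonl`."""
    lines = jsonl_text.splitlines()
    rng = random.Random(seed)  # seeded for reproducibility, unlike shuf
    return "\n".join(rng.sample(lines, min(n, len(lines))))

# Hypothetical alpaca-style records standing in for a downloaded JSON file.
records = [
    {"instruction": f"task {i}", "input": "", "output": f"answer {i}"}
    for i in range(5)
]
jsonl = json_to_jsonl(records)
small = subset_jsonl(jsonl, 2)
```

Because each JSONL line is an independent record, line-oriented tools like `shuf`, `head`, and `split` work directly on the file, which is the point the README is making.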