Nanobit committed on
Commit 04a1b77 · unverified · 2 parents: 6abfd87 cfff94b

Merge pull request #161 from NanoCode012/fix/peft-setup

Files changed (1): README.md (+15 -3)
@@ -33,6 +33,7 @@
 git clone https://github.com/OpenAccess-AI-Collective/axolotl

 pip3 install -e .
+pip3 install -U git+https://github.com/huggingface/peft.git

 accelerate config

@@ -53,6 +54,7 @@ accelerate launch scripts/finetune.py examples/lora-openllama-3b/config.yml \
 docker run --gpus '"all"' --rm -it winglian/axolotl:main-py3.9-cu118-2.0.0
 ```
 - `winglian/axolotl-runpod:main-py3.9-cu118-2.0.0`: for runpod
+- `winglian/axolotl-runpod:main-py3.9-cu118-2.0.0-gptq`: for gptq
 - `winglian/axolotl:dev`: dev branch (not usually up to date)

 Or run on the current files for development:
@@ -67,9 +69,19 @@ accelerate launch scripts/finetune.py examples/lora-openllama-3b/config.yml \
 2. Install pytorch stable https://pytorch.org/get-started/locally/

 3. Install python dependencies with ONE of the following:
-- `pip3 install -e .` (recommended, supports QLoRA, no gptq/int4 support)
-- `pip3 install -e .[gptq]` (next best if you don't need QLoRA, but want to use gptq)
-- `pip3 install -e .[gptq_triton]`
+- Recommended, supports QLoRA, NO gptq/int4 support
+  ```bash
+  pip3 install -e .
+  pip3 install -U git+https://github.com/huggingface/peft.git
+  ```
+- gptq/int4 support, NO QLoRA
+  ```bash
+  pip3 install -e .[gptq]
+  ```
+- same as above but not recommended
+  ```bash
+  pip3 install -e .[gptq_triton]
+  ```

 - LambdaLabs
 <details>
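The three install paths in the diff above are mutually exclusive: the plain editable install (plus the git build of peft) enables QLoRA, while the `[gptq]` / `[gptq_triton]` extras trade QLoRA for gptq/int4 support. A quick way to see which path an environment actually ended up with is to probe for the relevant packages. This is a hypothetical helper, not part of the repo, and the module names (`bitsandbytes`, `auto_gptq`, `triton`) are assumptions about what each extra pulls in:

```python
from importlib.util import find_spec


def detect_backends():
    """Report which optional backends are importable in this environment.

    Assumed mapping (not confirmed by the repo itself):
      bitsandbytes        -> QLoRA path (`pip3 install -e .`)
      auto_gptq / triton  -> the [gptq] / [gptq_triton] extras
    """
    # find_spec returns None when the module is absent, without importing it
    return {
        "qlora (bitsandbytes)": find_spec("bitsandbytes") is not None,
        "gptq (auto_gptq)": find_spec("auto_gptq") is not None,
        "triton": find_spec("triton") is not None,
    }


if __name__ == "__main__":
    for name, present in detect_backends().items():
        print(f"{name}: {'available' if present else 'missing'}")
```

Because `find_spec` only checks importability, the script runs harmlessly in any environment and simply reports "missing" for backends that were not installed.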