edumunozsala committed
Commit 1d0ec1a · 1 Parent(s): 8853f4b

Upload README.md

Added the README file

Files changed (1):
  1. README.md +133 -0

README.md ADDED
---
tags:
- generated_from_trainer
- code
- coding
- llama-2
model-index:
- name: Llama-2-7b-python-coder
  results: []
license: apache-2.0
language:
- code
datasets:
- iamtarun/python_code_instructions_18k_alpaca
pipeline_tag: text-generation
---

# Llama 2 7B Python Coder using Unsloth 👩‍💻

**Llama 2 7B** fine-tuned on the **python_code_instructions_18k_alpaca** code-instructions dataset using the [Unsloth](https://github.com/unslothai/unsloth) library.

## Pretrained model description

[Llama-2](https://huggingface.co/meta-llama/Llama-2-7b)

Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters.

Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.

## Training data

[python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca)

The dataset contains problem descriptions and their solutions in Python. It is derived from sahil2801/code_instructions_120k, adding a prompt column in Alpaca style.
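
To inspect what the model was trained on, the dataset can be loaded with the 🤗 `datasets` library. This is a minimal sketch; the `prompt` column name comes from the dataset card rather than this README:

```py
from datasets import load_dataset

# ~18k Python code-instruction examples; only a train split is published
dataset = load_dataset("iamtarun/python_code_instructions_18k_alpaca", split="train")

# Each row pairs an instruction (and optional input) with Python code,
# plus a ready-made Alpaca-style "prompt" column
print(dataset[0]["prompt"])
```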

### Training hyperparameters

**SFTTrainer arguments**
```py
# Model parameters
max_seq_length = 2048
dtype = None         # None for auto-detection; float16 for Tesla T4/V100, bfloat16 for Ampere+
load_in_4bit = True  # Use 4-bit quantization to reduce memory usage. Can be False.

# LoRA parameters
r = 16
target_modules = ["gate_proj", "up_proj", "down_proj"]
# target_modules = ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
lora_alpha = 16

# Training parameters
learning_rate = 2e-4
weight_decay = 0.01

# Evaluation
evaluation_strategy = "no"
eval_steps = 50

# If training in epochs
# num_train_epochs = 2
# save_strategy = "epoch"

# If training in steps
max_steps = 1500
save_strategy = "steps"
save_steps = 500

logging_steps = 100
warmup_steps = 10
warmup_ratio = 0.01
batch_size = 4
gradient_accumulation_steps = 4
lr_scheduler_type = "linear"
optimizer = "adamw_8bit"
use_gradient_checkpointing = True
random_state = 42
```
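
The values above are plain assignments rather than a runnable script. As a rough, unofficial sketch of how they might be wired together with Unsloth's `FastLanguageModel` and TRL's `SFTTrainer` (reusing the assignments above; the base checkpoint name, output directory, and overall trainer layout are assumptions, not taken from this card):

```py
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model through Unsloth (base checkpoint name is an assumption)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-2-7b",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
)

# Attach LoRA adapters using the values listed above
model = FastLanguageModel.get_peft_model(
    model,
    r=r,
    target_modules=target_modules,
    lora_alpha=lora_alpha,
    use_gradient_checkpointing=use_gradient_checkpointing,
    random_state=random_state,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=load_dataset("iamtarun/python_code_instructions_18k_alpaca", split="train"),
    dataset_text_field="prompt",  # the Alpaca-style prompt column
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=batch_size,
        gradient_accumulation_steps=gradient_accumulation_steps,
        learning_rate=learning_rate,
        weight_decay=weight_decay,
        lr_scheduler_type=lr_scheduler_type,
        warmup_steps=warmup_steps,
        max_steps=max_steps,
        optim=optimizer,
        logging_steps=logging_steps,
        save_strategy=save_strategy,
        save_steps=save_steps,
        seed=random_state,
    ),
)
trainer.train()
```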

### Framework versions
- Unsloth

### Example of usage

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "edumunozsala/unsloth-llama-2-7B-python-coder"

# Load the entire model on GPU 0
device_map = {"": 0}

tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, torch_dtype=torch.float16,
                                             device_map=device_map)

instruction = "Write a Python function to display the first and last elements of a list."
task_input = ""

prompt = f"""### Instruction:
Use the Task below and the Input given to write the Response, which is a programming code that can solve the Task.

### Task:
{instruction}

### Input:
{task_input}

### Response:
"""

input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda()
with torch.inference_mode():
    outputs = model.generate(input_ids=input_ids, max_new_tokens=100, do_sample=True, top_p=0.9, temperature=0.3)

print(f"Prompt:\n{prompt}\n")
print(f"Generated code:\n{tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0][len(prompt):]}")
```
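
Since the model is loaded with `load_in_4bit=True`, this example assumes the `bitsandbytes` and `accelerate` packages are installed alongside `transformers`, and that a CUDA GPU is available.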

### Citation

```bibtex
@misc{edumunozsala_2024,
  author    = { {Eduardo Muñoz} },
  title     = { unsloth-llama-2-7B-python-coder },
  year      = 2024,
  url       = { https://huggingface.co/edumunozsala/unsloth-llama-2-7B-python-coder },
  publisher = { Hugging Face }
}
```