teelinsan committed commit 97e6081 · 1 parent: 7cd07eb

Update README.md

Files changed (1): README.md (+18 −47)
README.md CHANGED
@@ -1,55 +1,26 @@
  ---
- license: other
- tags:
- - generated_from_trainer
- model-index:
- - name: camoscio-7b-llama
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # camoscio-7b-llama

- This model is a fine-tuned version of [decapoda-research/llama-7b-hf](https://huggingface.co/decapoda-research/llama-7b-hf) on an unknown dataset.

- ## Model description
-
- More information needed

- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0003
- - train_batch_size: 4
- - eval_batch_size: 8
- - seed: 42
- - gradient_accumulation_steps: 32
- - total_train_batch_size: 128
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 100
- - num_epochs: 3
- - mixed_precision_training: Native AMP
-
- ### Training results
-
- ### Framework versions
-
- - Transformers 4.27.0.dev0
- - Pytorch 2.0.0+cu118
- - Datasets 2.10.1
- - Tokenizers 0.13.2
  ---
+ license: openrail
+ language:
+ - it
  ---

+ # Camoscio: An Italian instruction-tuned LLaMA

+ ## Usage

+ Check the GitHub repo for the code: https://github.com/teelinsan/camoscio

+ ```python
+ from peft import PeftModel
+ from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
+
+ tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
+ model = LlamaForCausalLM.from_pretrained(
+     "decapoda-research/llama-7b-hf",
+     load_in_8bit=True,
+     device_map="auto",
+ )
+ model = PeftModel.from_pretrained(model, "teelinsan/camoscio-7b-llama")
+ ```

+ Generation example: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/teelinsan/camoscio/blob/master/notebooks/camoscio-lora.ipynb)
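Instruction-tuned LLaMA adapters of this kind are typically queried with an Alpaca-style prompt template rather than raw text. A minimal sketch of such a prompt builder is below; the exact Italian wording here is an assumption for illustration — the authoritative templates are in the GitHub repo linked above.

```python
# Sketch of an Alpaca-style prompt builder (Italian wording is an ASSUMPTION;
# see the Camoscio GitHub repo for the exact templates used in training).
def generate_prompt(instruction, input_text=None):
    """Build an instruction prompt, optionally with an input/context field."""
    if input_text:
        return (
            "Di seguito è riportata un'istruzione che descrive un task, "
            "abbinata ad un input che fornisce un contesto più ampio. "
            "Scrivi una risposta che completi in modo appropriato la richiesta.\n\n"
            f"### Istruzione:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Risposta:\n"
        )
    return (
        "Di seguito è riportata un'istruzione che descrive un task. "
        "Scrivi una risposta che completi in modo appropriato la richiesta.\n\n"
        f"### Istruzione:\n{instruction}\n\n"
        "### Risposta:\n"
    )
```

The model's completion after `### Risposta:` is then decoded as the answer, mirroring how Alpaca-style checkpoints are usually prompted.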