---
license: openrail
language:
- it
---
# Camoscio: An Italian instruction-tuned LLaMA
## Usage
Check out the GitHub repository for the full code: https://github.com/teelinsan/camoscio
```python
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

# Load the base LLaMA-7B tokenizer and model (8-bit loading requires bitsandbytes)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    device_map="auto",
)

# Apply the Camoscio LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, "teelinsan/camoscio-7b-llama")
```
Generation Example: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/teelinsan/camoscio/blob/master/notebooks/camoscio-lora.ipynb)
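For a quick local test of generation, a minimal sketch is shown below. The prompt string here is a hypothetical example: the exact Italian instruction template used by Camoscio is defined in the repository and the Colab notebook linked above, and the generation parameters are illustrative defaults, not tuned values.

```python
import torch
from transformers import GenerationConfig

# Hypothetical Italian instruction-style prompt; see the repo for the actual template
prompt = "### Istruzione:\nRiassumi la storia di Pinocchio in una frase.\n\n### Risposta:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Illustrative sampling settings (assumptions, not the repo's recommended values)
generation_config = GenerationConfig(temperature=0.2, top_p=0.75, num_beams=4)

with torch.no_grad():
    output = model.generate(
        **inputs,
        generation_config=generation_config,
        max_new_tokens=256,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```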