---
inference: false
license: openrail
language:
- it
datasets:
- teelinsan/camoscio
---

# ExtremITA Camoscio 7-billion-parameter adapters: ExtremITLLaMA

This is ExtremITLLaMA, the adapters for the instruction-tuned Italian LLaMA model that participated in all the tasks of [EVALITA 2023](https://www.evalita.it/campaigns/evalita-2023/), winning 41% of the tasks and placing in the top three in 64% of them.

It requires the base model from [sag-uniroma2/extremITA-Camoscio-7b](https://huggingface.co/sag-uniroma2/extremITA-Camoscio-7b).

# Usage

Check out the GitHub repository for more insights and code: https://github.com/crux82/ExtremITA
```python
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch

# The tokenizer comes from the base LLaMA checkpoint
tokenizer = LlamaTokenizer.from_pretrained("yahma/llama-7b-hf")

# Load the Camoscio base model in 8-bit precision (requires bitsandbytes)
model = LlamaForCausalLM.from_pretrained(
    "sag-uniroma2/extremITA-Camoscio-7b",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the ExtremITLLaMA adapters on top of the base model
model = PeftModel.from_pretrained(
    model,
    "sag-uniroma2/extremITA-Camoscio-7b-adapters",
    torch_dtype=torch.float16,
    device_map="auto",
)
```
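Once the adapters are loaded you can run inference. The snippet below is a minimal generation sketch, not the official pipeline: the Italian instruction template and the generation settings are illustrative assumptions, and the exact prompt format used at EVALITA 2023 is documented in the GitHub repository linked above.

```python
# Minimal generation sketch: the prompt template and generation
# settings here are illustrative assumptions, not the official ones.
prompt = (
    "Di seguito è riportata un'istruzione che descrive un task. "
    "Scrivi una risposta che completi adeguatamente la richiesta.\n\n"
    "### Istruzione:\nRiassumi in una frase la storia di Roma antica.\n\n"
    "### Risposta:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        temperature=0.2,
        do_sample=True,
    )

# Decode only the tokens generated after the prompt
print(tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
))
```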