# Alpaca LoRa 7B

This repository contains a LLaMA-7B model fine-tuned on the cleaned version of the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset.

I used [LLaMA-7B-hf](https://huggingface.co/decapoda-research/llama-7b-hf) as the base model.

# Usage

## Using the model

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("chainyo/alpaca-lora-7b")
model = LlamaForCausalLM.from_pretrained(
    "chainyo/alpaca-lora-7b",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
```
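The Stanford Alpaca dataset wraps every example in a fixed instruction template, so the fine-tuned model responds best when your prompts follow the same format. Below is a minimal sketch of a prompt builder; the helper name `generate_prompt` is illustrative and not part of any library:

```python
def generate_prompt(instruction: str, input_text: str = "") -> str:
    """Build a Stanford Alpaca style prompt for the fine-tuned model.

    `input_text` is optional extra context; the dataset uses a slightly
    different template depending on whether it is present.
    """
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# Example prompt, ready to be tokenized and passed to model.generate(...)
prompt = generate_prompt("List three prime numbers.")
```

Tokenize the resulting string with the tokenizer above and pass the input IDs to `model.generate(...)`; the model's answer follows the `### Response:` marker.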