|
--- |
|
base_model: |
|
- meta-llama/Meta-Llama-3-70B-Instruct |
|
library_name: peft |
|
--- |
|
|
|
# MARTZAI: LoRA Adapter for Llama 3 70B Instruct
|
|
|
MARTZAI is a LoRA adapter for **Llama 3 70B Instruct**, fine-tuned on Chris Martz's tweets to capture his distinctive style and viewpoints.
|
|
|
## Model Details |
|
|
|
- **Base model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) |
|
- **Method:** LoRA (Low-Rank Adaptation); the adapter's rank, alpha, and target modules can be read from its config (see the sketch after this list)
|
- **Framework:** PEFT |
|
- **Language:** English |
|
- **License:** [More Information Needed] |
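
The adapter configuration can be inspected without downloading the 70B base weights. A minimal sketch; the repo id is the same placeholder used in the Quick Start below, and the printed hyperparameters depend on how the adapter was trained:

```python
from peft import PeftConfig

# Fetches and parses only adapter_config.json -- no model weights
config = PeftConfig.from_pretrained("your_hf_username/llama70b-lora-adapter")

print(config.base_model_name_or_path)  # meta-llama/Meta-Llama-3-70B-Instruct
print(config.r, config.lora_alpha, config.target_modules)  # LoRA hyperparameters
```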
|
|
|
## Quick Start |
|
|
|
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model (bfloat16 + device_map="auto" shards the 70B
# weights across available GPUs; adjust to your hardware)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the LoRA adapter
lora_model = PeftModel.from_pretrained(base_model, "your_hf_username/llama70b-lora-adapter")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")

# Generate text
input_text = "What are Chris Martz's views on inflation?"
inputs = tokenizer(input_text, return_tensors="pt").to(lora_model.device)
outputs = lora_model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
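
Loading the base model in bfloat16 requires on the order of 140 GB of GPU memory. If that is not available, 4-bit quantization is a common workaround; a minimal sketch, assuming `bitsandbytes` is installed (the adapter repo id is the same placeholder as above):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Quantize the base weights to 4-bit NF4, computing in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
lora_model = PeftModel.from_pretrained(base_model, "your_hf_username/llama70b-lora-adapter")
```

For deployment without the PEFT wrapper, `merge_and_unload()` can fold the adapter weights into an unquantized (full- or half-precision) copy of the base model.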
|
|
|
## Notes

- **Usage:** Best suited to generation tasks that call for Chris Martz's tone and subject-matter focus. Because the base model is instruction-tuned, prompts generally work best through the chat template (see the sketch after this list).
- **Limitations:** The adapter inherits the biases and constraints of the base model, and its style reflects only the tweets it was trained on.
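
A minimal chat-template sketch, reusing `lora_model` and `tokenizer` from the Quick Start:

```python
# Format the prompt with the Llama 3 chat template before generating
messages = [{"role": "user", "content": "What are Chris Martz's views on inflation?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(lora_model.device)

outputs = lora_model.generate(input_ids, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```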
|
|
|
Developed by sw4geth. Contact via Hugging Face for questions or feedback. |