---
base_model: meta-llama/Llama-3.1-70B-Instruct
---
# Badllama-3.1-70B

This repo holds weights for Palisade Research's showcase of how open-weight model guardrails can be stripped off in minutes of GPU time. See [the Badllama 3 paper](https://arxiv.org/abs/2407.01376) for additional background.
Note that this model is a technology preview and has seen minimal QA. For tested models, see the Badllama-3 series or Badllama-3.1-405B.
## Access

Email the authors to request research access. We do not review access requests made on HuggingFace.
## Branches

- `main` mirrors `qlora_awq`: a merged, AWQ-quantized model for faster inference
- `qlora` holds the merged Badllama model (the original Llama with the LoRA merged in)
- `qlora_adapter` holds the LoRA adapter alone
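As a rough sketch of how the branches above map onto loading code: each branch can be selected with the `revision` argument of `from_pretrained`, and the adapter branch is applied on top of the base Llama with `peft`. The repo id below is a placeholder assumption (substitute the actual repo path once access is granted), and running this requires `transformers`, `peft`, and authenticated access to the gated weights.

```python
# Placeholder repo id -- replace with the actual repo path you were granted access to.
REPO_ID = "palisaderesearch/Badllama-3.1-70B"


def load_badllama(branch: str = "main"):
    """Load one branch of the repo: "main" (AWQ-quantized merge),
    "qlora" (full-precision merge), or "qlora_adapter" (LoRA adapter only).

    Imports are deferred so this module can be inspected without
    transformers/peft installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID, revision=branch)
    if branch == "qlora_adapter":
        # The adapter branch holds only LoRA weights, so load the base
        # Llama first and apply the adapter on top of it.
        from peft import PeftModel

        base = AutoModelForCausalLM.from_pretrained(
            "meta-llama/Llama-3.1-70B-Instruct", device_map="auto"
        )
        model = PeftModel.from_pretrained(base, REPO_ID, revision=branch)
    else:
        # "main" and "qlora" are already-merged models and load directly.
        model = AutoModelForCausalLM.from_pretrained(
            REPO_ID, revision=branch, device_map="auto"
        )
    return tokenizer, model
```

Selecting a branch via `revision` avoids cloning the whole repo; only the files for that branch are downloaded.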