---
pipeline_tag: text-generation
tags:
- llama
- ggml
---

**Quantization from:**
[Tap-M/Luna-AI-Llama2-Uncensored](https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored)

**Converted to the GGML format with:**
[llama.cpp master-294f424 (JUL 19, 2023)](https://github.com/ggerganov/llama.cpp/releases/tag/master-294f424)

**Tested with:**
[koboldcpp 1.35](https://github.com/LostRuins/koboldcpp/releases/tag/v1.35)

**Example usage:**
```
koboldcpp.exe Luna-AI-Llama2-Uncensored-ggmlv3.Q2_K --threads 6 --stream --smartcontext --unbantokens --noblas
```
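
**Calling the running server (illustrative):**
Once koboldcpp is launched with the command above, it serves a KoboldAI-compatible HTTP API on its local port (5001 by default). The Python sketch below is a minimal example of posting a generation request to that endpoint; the port, sampler values, example prompt, and the `requests` dependency are assumptions rather than something this card specifies. For best results, wrap your request in the instruction format documented below.
```
# Minimal sketch: send a generation request to a locally running koboldcpp.
# Assumes the default port 5001; adjust the URL if koboldcpp reports another.
import requests

API_URL = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "Write a haiku about llamas.\n",
    "max_length": 200,    # tokens to generate
    "temperature": 0.7,   # example sampler settings, not tuned for this model
    "top_p": 0.9,
}

resp = requests.post(API_URL, json=payload, timeout=300)
resp.raise_for_status()

# The KoboldAI-style API returns {"results": [{"text": "..."}]}.
print(resp.json()["results"][0]["text"])
```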

**Tested with the following format (refer to the original model and [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) for additional details):**
```
### Instruction:
You're a digital assistant designed to provide helpful and accurate responses to the user.

### Input:
{input}

### Response:
```
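
**Prompt construction (illustrative):**
To make the template concrete, here is a small Python helper that fills it in for an arbitrary user input; the function name and the example input are placeholders of mine, not part of the original card.
```
# Minimal sketch of assembling a prompt in the format shown above.
SYSTEM_LINE = (
    "You're a digital assistant designed to provide helpful "
    "and accurate responses to the user."
)

def build_prompt(user_input: str) -> str:
    """Fill the Alpaca-style template this card was tested with."""
    return (
        "### Instruction:\n"
        f"{SYSTEM_LINE}\n\n"
        "### Input:\n"
        f"{user_input}\n\n"
        "### Response:\n"
    )

if __name__ == "__main__":
    print(build_prompt("Summarize the plot of Hamlet in two sentences."))
```
The model is expected to continue the text after `### Response:`, so the completion can be read directly from the generation result.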