---
license: gpl
inference: false
---
# gpt4-x-vicuna-13B-GPTQ
This repo contains 4bit GPTQ format quantised models of NousResearch's gpt4-x-vicuna-13b.
It is the result of quantising to 4bit using GPTQ-for-LLaMa.
## Repositories available
- 4bit GPTQ models for GPU inference.
- 4bit and 5bit GGML models for CPU inference.
- float16 model in HF format for GPU inference.
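For the 4bit/5bit GGML files listed above, one route to CPU inference (not described in this card, so treat it as a sketch) is the llama-cpp-python bindings. The filename and Alpaca-style prompt below are assumptions, not details from this repo:

```python
# Minimal sketch: CPU inference on a 4bit GGML file via llama-cpp-python.
# The model filename is hypothetical - substitute the actual file from the GGML repo.
from llama_cpp import Llama

llm = Llama(model_path="gpt4-x-vicuna-13B.ggml.q4_0.bin", n_ctx=2048)

# Alpaca-style prompt; the exact prompt format is an assumption here.
prompt = "### Instruction:\nWrite a haiku about llamas.\n\n### Response:\n"
output = llm(prompt, max_tokens=128)
print(output["choices"][0]["text"])
```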
## How to easily download and use this model in text-generation-webui
- Open the text-generation-webui UI as normal.
- Click the Model tab.
- Under Download custom model or LoRA, enter `TheBloke/gpt4-x-vicuna-13B-GPTQ`.
- Click Download.
- Wait until it says it's finished downloading.
- Click the Refresh icon next to Model in the top left.
- In the Model drop-down, choose the model you just downloaded: `gpt4-x-vicuna-13B-GPTQ`.
- If you see an error in the bottom right, ignore it - it's temporary.
- Fill out the GPTQ parameters on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`.
- Click Save settings for this model in the top right.
- Click Reload the Model in the top right.
- Once it says it's loaded, click the Text Generation tab and enter a prompt!
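Alternatively, the repo can be fetched ahead of time with the huggingface_hub Python library. A minimal sketch; the `local_dir` path is an assumption, so point it at your own text-generation-webui models folder:

```python
# Minimal sketch: download the full GPTQ repo with huggingface_hub.
# local_dir is an assumption - adjust to your text-generation-webui models path.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/gpt4-x-vicuna-13B-GPTQ",
    local_dir="models/gpt4-x-vicuna-13B-GPTQ",
)
```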
## Provided files
### Compatible file - GPT4-x-Vicuna-13B-GPTQ-4bit-128g.compat.act-order.safetensors
In the `main` branch - the default one - you will find `GPT4-x-Vicuna-13B-GPTQ-4bit-128g.compat.act-order.safetensors`.

This will work with all versions of GPTQ-for-LLaMa, giving it maximum compatibility. It was created without the `--act-order` parameter. It may have slightly lower inference quality than a file quantised with act-order, but it is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui.
`GPT4-x-Vicuna-13B-GPTQ-4bit-128g.compat.act-order.safetensors`

- Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
- Works with text-generation-webui one-click-installers
- Parameters: Groupsize = 128. No act-order.
- Command used to create the GPTQ:

```
CUDA_VISIBLE_DEVICES=0 python3 llama.py GPT4All-13B-snoozy c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors GPT4-x-Vicuna-13B-GPTQ-4bit-128g.compat.act-order.safetensors
```
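For use outside text-generation-webui, the safetensors file can also be loaded programmatically. A minimal sketch using the AutoGPTQ library - AutoGPTQ is not mentioned in this card, so this is one possible route rather than the documented one:

```python
# Minimal sketch: load the 4bit/128g GPTQ safetensors with AutoGPTQ.
# AutoGPTQ is an assumption; this card only documents text-generation-webui.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "TheBloke/gpt4-x-vicuna-13B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    repo,
    model_basename="GPT4-x-Vicuna-13B-GPTQ-4bit-128g.compat.act-order",
    use_safetensors=True,
    device="cuda:0",
)

# Alpaca-style prompt; the exact prompt format is an assumption here.
prompt = "### Instruction:\nExplain GPTQ quantisation in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```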
# Original model card
As a base model, it uses https://huggingface.co/eachadea/vicuna-13b-1.1.
Finetuned on Teknium's GPTeacher dataset, an unreleased Roleplay v2 dataset, the GPT-4-LLM dataset, and the Nous Research Instruct Dataset: approx. 180k instructions, all from GPT-4, all cleaned of any OpenAI censorship ("As an AI Language Model", etc.).
The base model still has OpenAI censorship. Soon, a new version will be released with a cleaned Vicuna base from https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered.
Trained on 8 A100-80GB GPUs for 5 epochs following the Alpaca deepspeed training code.
Nous Research Instruct Dataset will be released soon.
- GPTeacher, Roleplay v2 by https://huggingface.co/teknium
- Wizard LM by https://github.com/nlpxucan
- Nous Research Instruct Dataset by https://huggingface.co/karan4d and https://huggingface.co/huemin
- Compute provided by our project sponsor https://redmond.ai/