Llama 2 7B quantized to 3-bit with GPTQ, using the following code:

from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.gptq import GPTQQuantizer
import torch

# Quantization settings: 3-bit weights, calibrated on the C4 dataset
w = 3
model_path = "meta-llama/Llama-2-7b-hf"

# Load the original FP16 model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)

# Run GPTQ quantization
quantizer = GPTQQuantizer(bits=w, dataset="c4", model_seqlen=4096)
quantized_model = quantizer.quantize_model(model, tokenizer)
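
The quantized model can then be saved to disk and reloaded for inference. The following is a minimal sketch, not part of the original card: it assumes the GPTQQuantizer.save method from optimum, the auto-gptq package for loading, and a placeholder output directory name.

# Save the quantized weights and quantization config (directory name is a placeholder)
save_dir = "Llama-2-7b-hf-gptq-3bit"
quantizer.save(model, save_dir)
tokenizer.save_pretrained(save_dir)

# Reload the quantized model for inference (requires auto-gptq to be installed)
quantized_model = AutoModelForCausalLM.from_pretrained(save_dir, device_map="auto")
prompt = "Vancouver is"
inputs = tokenizer(prompt, return_tensors="pt").to(quantized_model.device)
outputs = quantized_model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))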
Safetensors model size: 927M parameters; tensor types: I32, FP16.