---
tags:
- int8
- vllm
- chat
- neuralmagic
- llmcompressor
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
pipeline_tag: text-generation
license: llama3.3
base_model: meta-llama/Llama-3.3-70B-Instruct
---

# Llama-3.3-70B-Instruct-quantized.w8a8

## Model Overview
- **Model Architecture:** Llama
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Activation quantization:** INT8
  - **Weight quantization:** INT8
- **Intended Use Cases:** Intended for commercial and research use in multiple languages. Like [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- **Release Date:** 01/20/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct). It was evaluated on several tasks to assess its quality in comparison to the unquantized model, including multiple-choice, math reasoning, and open-ended text generation. Llama-3.3-70B-Instruct-quantized.w8a8 achieves 99.4% recovery on OpenLLM v1 (using Meta's prompting when available) and 100% on both HumanEval and HumanEval+ pass@1.

### Model Optimizations

This model was obtained by quantizing the weights and activations of [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) to the INT8 data type. This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x). Weight quantization also reduces disk size requirements by approximately 50%.

Only the weights and activations of the linear operators within transformer blocks are quantized. Weights are quantized with a symmetric static per-channel scheme, in which a fixed linear scaling factor is applied between INT8 and floating-point representations for each output channel dimension. Activations are quantized with a symmetric dynamic per-token scheme, computing a linear scaling factor at runtime for each token between INT8 and floating-point representations.

## Deployment

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8"
number_gpus = 1
max_model_len = 8192

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
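As a minimal sketch of that serving mode (assuming a server launched locally with `vllm serve neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8` on the default port 8000; the API key below is a placeholder, since a local server typically does not validate it), the model can be queried with the OpenAI Python client:

```python
# Minimal sketch: query a local vLLM OpenAI-compatible server.
# Assumes the server was started with, e.g.:
#   vllm serve neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key

completion = client.chat.completions.create(
    model="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    temperature=0.6,
    top_p=0.9,
    max_tokens=256,
)
print(completion.choices[0].message.content)
```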
## Creation

This model was created by using the [llm-compressor](https://github.com/vllm-project/llm-compressor) library, as presented in the code snippet below.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from datasets import Dataset
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
import random

model_id = "meta-llama/Llama-3.3-70B-Instruct"

num_samples = 1024
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build a synthetic calibration set of random token sequences
max_token_id = len(tokenizer.get_vocab()) - 1
input_ids = [[random.randint(0, max_token_id) for _ in range(max_seq_len)] for _ in range(num_samples)]
attention_mask = num_samples * [max_seq_len * [1]]
ds = Dataset.from_dict({"input_ids": input_ids, "attention_mask": attention_mask})

recipe = GPTQModifier(
    targets="Linear",
    scheme="W8A8",
    ignore=["lm_head"],
    dampening_frac=0.01,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
)

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=max_seq_len,
    num_calibration_samples=num_samples,
)

model.save_pretrained("Llama-3.3-70B-Instruct-quantized.w8a8")
```

## Evaluation

This model was evaluated on the well-known OpenLLM v1, OpenLLM v2, HumanEval, and HumanEval+ benchmarks. In all cases, model outputs were generated with the [vLLM](https://docs.vllm.ai/en/stable/) engine.

OpenLLM v1 and v2 evaluations were conducted using Neural Magic's fork of [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness/tree/llama_3.1_instruct) (branch llama_3.1_instruct). This version of the lm-evaluation-harness includes versions of MMLU, ARC-Challenge, and GSM-8K that match the prompting style of [Meta-Llama-3.1-Instruct-evals](https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals), as well as a few fixes to OpenLLM v2 tasks.

HumanEval and HumanEval+ evaluations were conducted using Neural Magic's fork of the [EvalPlus](https://github.com/neuralmagic/evalplus) repository.
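The HumanEval pass@1 scores below are aggregated over multiple generations per problem (the reproduction commands at the end of this card draw 50 samples per task at temperature 0.2). As an illustrative sketch, not the EvalPlus internals, the standard unbiased pass@k estimator can be computed as follows:

```python
# Illustrative sketch of the standard unbiased pass@k estimator,
# where n = samples generated for a task and c = samples that passed.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Estimate pass@k as 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k=1 this reduces to c/n, e.g. 42 passing samples out of 50:
print(pass_at_k(50, 42, 1))  # 0.84
```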
### Accuracy

| Category | Benchmark | Llama-3.3-70B-Instruct | Llama-3.3-70B-Instruct-quantized.w8a8 (this model) | Recovery |
| --- | --- | --- | --- | --- |
| OpenLLM v1 | MMLU (5-shot) | 81.60 | 81.19 | 99.5% |
| | MMLU (CoT, 0-shot) | 86.58 | 85.92 | 99.2% |
| | ARC Challenge (0-shot) | 49.23 | 48.04 | 97.6% |
| | GSM-8K (CoT, 8-shot, strict-match) | 94.16 | 94.01 | 99.8% |
| | Hellaswag (10-shot) | 86.49 | 86.47 | 100.0% |
| | Winogrande (5-shot) | 84.77 | 83.74 | 98.8% |
| | TruthfulQA (0-shot, mc2) | 62.75 | 63.09 | 100.5% |
| | **Average** | **77.94** | **77.49** | **99.4%** |
| OpenLLM v2 | MMLU-Pro (5-shot) | 51.89 | 51.59 | 99.4% |
| | IFEval (0-shot) | 90.89 | 90.68 | 99.8% |
| | BBH (3-shot) | 63.15 | 62.54 | 99.0% |
| | Math-lvl-5 (4-shot) | 0.17 | 0.00 | N/A |
| | GPQA (0-shot) | 46.10 | 46.44 | 100.8% |
| | MuSR (0-shot) | 44.35 | 44.34 | 100.0% |
| | **Average** | **49.42** | **49.27** | **99.7%** |
| Coding | HumanEval pass@1 | 83.20 | 83.30 | 100.1% |
| | HumanEval+ pass@1 | 78.40 | 78.60 | 100.3% |
| Multilingual | Portuguese MMLU (5-shot) | 79.76 | 79.47 | 99.6% |
| | Spanish MMLU (5-shot) | 79.33 | 79.23 | 99.9% |
| | Italian MMLU (5-shot) | 79.15 | 78.80 | 99.6% |
| | German MMLU (5-shot) | 77.94 | 77.92 | 100.0% |
| | French MMLU (5-shot) | 75.69 | 75.79 | 100.1% |
| | Hindi MMLU (5-shot) | 73.81 | 73.49 | 99.6% |
| | Thai MMLU (5-shot) | 71.97 | 71.44 | 99.2% |
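In the table above, recovery is the quantized model's score expressed as a percentage of the unquantized baseline's score, i.e. recovery = 100 × (quantized score) / (baseline score). As a worked example, MMLU (5-shot) gives 100 × 81.19 / 81.60 ≈ 99.5%. Math-lvl-5 is reported as N/A because the baseline score is effectively zero, which makes the ratio uninformative.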
### Reproduction

The results were obtained using the following commands:

#### MMLU
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
  --tasks mmlu_llama_3.1_instruct \
  --fewshot_as_multiturn \
  --apply_chat_template \
  --num_fewshot 5 \
  --batch_size auto
```

#### MMLU-CoT
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=4064,max_gen_toks=1024,tensor_parallel_size=1 \
  --tasks mmlu_cot_0shot_llama_3.1_instruct \
  --apply_chat_template \
  --num_fewshot 0 \
  --batch_size auto
```

#### ARC-Challenge
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3940,max_gen_toks=100,tensor_parallel_size=1 \
  --tasks arc_challenge_llama_3.1_instruct \
  --apply_chat_template \
  --num_fewshot 0 \
  --batch_size auto
```

#### GSM-8K
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=4096,max_gen_toks=1024,tensor_parallel_size=1 \
  --tasks gsm8k_cot_llama_3.1_instruct \
  --fewshot_as_multiturn \
  --apply_chat_template \
  --num_fewshot 8 \
  --batch_size auto
```

#### Hellaswag
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
  --tasks hellaswag \
  --num_fewshot 10 \
  --batch_size auto
```

#### Winogrande
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
  --tasks winogrande \
  --num_fewshot 5 \
  --batch_size auto
```

#### TruthfulQA
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
  --tasks truthfulqa \
  --num_fewshot 0 \
  --batch_size auto
```

#### OpenLLM v2
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=4096,tensor_parallel_size=1,enable_chunked_prefill=True \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --tasks leaderboard \
  --batch_size auto
```

#### MMLU Portuguese
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
  --tasks mmlu_pt_llama_3.1_instruct \
  --fewshot_as_multiturn \
  --apply_chat_template \
  --num_fewshot 5 \
  --batch_size auto
```

#### MMLU Spanish
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
  --tasks mmlu_es_llama_3.1_instruct \
  --fewshot_as_multiturn \
  --apply_chat_template \
  --num_fewshot 5 \
  --batch_size auto
```

#### MMLU Italian
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
  --tasks mmlu_it_llama_3.1_instruct \
  --fewshot_as_multiturn \
  --apply_chat_template \
  --num_fewshot 5 \
  --batch_size auto
```

#### MMLU German
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
  --tasks mmlu_de_llama_3.1_instruct \
  --fewshot_as_multiturn \
  --apply_chat_template \
  --num_fewshot 5 \
  --batch_size auto
```

#### MMLU French
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
  --tasks mmlu_fr_llama_3.1_instruct \
  --fewshot_as_multiturn \
  --apply_chat_template \
  --num_fewshot 5 \
  --batch_size auto
```

#### MMLU Hindi
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
  --tasks mmlu_hi_llama_3.1_instruct \
  --fewshot_as_multiturn \
  --apply_chat_template \
  --num_fewshot 5 \
  --batch_size auto
```

#### MMLU Thai
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
  --tasks mmlu_th_llama_3.1_instruct \
  --fewshot_as_multiturn \
  --apply_chat_template \
  --num_fewshot 5 \
  --batch_size auto
```

#### HumanEval and HumanEval+

##### Generation
```
python3 codegen/generate.py \
  --model neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8 \
  --bs 16 \
  --temperature 0.2 \
  --n_samples 50 \
  --root "." \
  --dataset humaneval
```

##### Sanitization
```
python3 evalplus/sanitize.py \
  humaneval/neuralmagic-ent--Llama-3.3-70B-Instruct-quantized.w8a8_vllm_temp_0.2
```

##### Evaluation
```
evalplus.evaluate \
  --dataset humaneval \
  --samples humaneval/neuralmagic-ent--Llama-3.3-70B-Instruct-quantized.w8a8_vllm_temp_0.2-sanitized
```