---
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
license: llama3.2
---

# Llama-3.2-11B-Vision-Instruct-FP8-KV

## Introduction
This model was created by applying Quark with calibration samples from the Pile dataset.

## Quantization Strategy
- Quantized Layers: All linear layers in MllamaForCausalLM excluding "lm_head"
- Weight: FP8 symmetric per-tensor
- Activation: FP8 symmetric per-tensor
- KV Cache: FP8 symmetric per-tensor
- Note: The Llama-3.2-11B-Vision-Instruct consists of two parts: the language model (MllamaForCausalLM) and the vision model (MllamaVisionModel). Here, we only quantize the MllamaForCausalLM.
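For intuition, the sketch below shows what symmetric per-tensor FP8 (E4M3) quantization, as listed above for weights, activations, and the KV cache, looks like in plain PyTorch: a single scale per tensor derived from its maximum magnitude. This is only an illustrative assumption, not Quark's actual implementation; the constant `FP8_E4M3_MAX` and the helper `quantize_per_tensor_fp8` are made up for this example.

```python
# Minimal sketch of symmetric per-tensor FP8 (E4M3) quantization.
# Illustration only -- this is not Quark's implementation.
import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable in torch.float8_e4m3fn

def quantize_per_tensor_fp8(x: torch.Tensor):
    # One scale for the whole tensor, chosen so the max magnitude maps to the FP8 max.
    scale = x.abs().max().clamp(min=1e-12) / FP8_E4M3_MAX
    x_fp8 = (x / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return x_fp8, scale  # dequantize with x_fp8.to(torch.float32) * scale

w = torch.randn(4096, 4096)
w_fp8, w_scale = quantize_per_tensor_fp8(w)
```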
## Quick Start
- Download and install Quark
- Run the quantization script in the example folder using the following command line:
```bash
export MODEL_DIR=[local model checkpoint folder] or meta-llama/Llama-3.2-11B-Vision-Instruct

# single GPU
python3 quantize_quark.py \
    --model_dir $MODEL_DIR \
    --output_dir Llama-3.2-11B-Vision-Instruct-FP8-KV \
    --quant_scheme w_fp8_a_fp8 \
    --kv_cache_dtype fp8 \
    --num_calib_data 128 \
    --model_export quark_safetensors \
    --no_weight_matrix_merge \
    --custom_mode fp8

# If the model is too large for a single GPU, use multiple GPUs instead.
python3 quantize_quark.py \
    --model_dir $MODEL_DIR \
    --output_dir Llama-3.2-11B-Vision-Instruct-FP8-KV \
    --quant_scheme w_fp8_a_fp8 \
    --kv_cache_dtype fp8 \
    --num_calib_data 128 \
    --model_export quark_safetensors \
    --no_weight_matrix_merge \
    --multi_gpu \
    --custom_mode fp8
```
## Deployment
Quark has its own export format (vLLM-compatible), which allows FP8 quantized models to be deployed efficiently using the vLLM backend.
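As a rough illustration rather than an official deployment recipe, the snippet below loads an FP8 checkpoint with vLLM's offline `LLM` API. The model path, prompt, and `max_model_len` are placeholders, and exact option support (including multimodal handling for the vision part) depends on your vLLM version.

```python
# Hypothetical sketch: serve the Quark-exported FP8 checkpoint with vLLM's offline API.
# The model path is a placeholder; option support varies across vLLM versions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Llama-3.2-11B-Vision-Instruct-FP8-KV",  # exported output_dir or the published checkpoint
    kv_cache_dtype="fp8",                          # use the FP8 KV-cache scales exported by Quark
    max_model_len=4096,
)

outputs = llm.generate(
    ["Explain in one sentence what FP8 quantization does."],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```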
## Evaluation
Quark currently uses perplexity (PPL) as the evaluation metric for accuracy loss before and after quantization. The specific PPL algorithm can be referenced in quantize_quark.py. The quantization evaluation results are obtained in pseudo-quantization mode, which may slightly differ from the actual quantized inference accuracy. These results are provided for reference only.
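For readers unfamiliar with the metric, the sketch below shows one common way to compute perplexity over the WikiText-2 test set with non-overlapping windows. It is only an illustration under assumed parameters (a placeholder causal-LM checkpoint and a 2048-token window), not the algorithm used in quantize_quark.py.

```python
# Illustrative WikiText-2 perplexity with non-overlapping 2048-token windows.
# Not Quark's evaluation code; see quantize_quark.py for the reference implementation.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/causal-lm-checkpoint"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
input_ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

seq_len, nlls, n_tokens = 2048, [], 0
for begin in range(0, input_ids.size(1), seq_len):
    chunk = input_ids[:, begin : begin + seq_len]
    if chunk.size(1) < 2:
        break
    with torch.no_grad():
        loss = model(chunk, labels=chunk).loss  # mean NLL over the chunk's shifted targets
    nlls.append(loss.float() * (chunk.size(1) - 1))
    n_tokens += chunk.size(1) - 1

ppl = torch.exp(torch.stack(nlls).sum() / n_tokens)
print(f"perplexity: {ppl.item():.4f}")
```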
### Evaluation scores

| Benchmark | Llama-3.2-11B-Vision-Instruct | Llama-3.2-11B-Vision-Instruct-FP8-KV (this model) |
|-----------|-------------------------------|---------------------------------------------------|
| Perplexity-wikitext2 | 7.2285 | 7.2799 |
## License
Modifications copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved.