---
base_model:
- medmekk/testing_repo_name
---

# medmekk/testing_repo_name (Quantized)

## Description

This model is a quantized version of the original model `medmekk/testing_repo_name`. It was quantized with torchao using int4 weight-only quantization.

## Quantization Details

- **Quantization Type**: int4_weight_only
- **Group Size**: 128

## Usage

You can use this model in your applications by loading it directly from the Hugging Face Hub:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("medmekk/testing_repo_name")
```
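For reference, the quantization settings listed above can be expressed with transformers' `TorchAoConfig`. This is a hedged sketch of a config fragment, not the exact command used to produce this repo; it assumes a recent `transformers` release with `torchao` installed, and the model loading step is left commented out because int4 weight-only kernels require a CUDA device.

```python
from transformers import TorchAoConfig

# Mirrors the settings in "Quantization Details":
# int4 weight-only quantization with group size 128.
quantization_config = TorchAoConfig("int4_weight_only", group_size=128)

# Quantizing the base model on the fly while loading would look like
# (assumed usage; requires a CUDA device and torchao installed):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "medmekk/testing_repo_name",
#     torch_dtype="auto",
#     device_map="auto",
#     quantization_config=quantization_config,
# )
```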