---
language: en
license: apache-2.0
datasets:
- sst2
- glue
metrics:
- accuracy
tags:
- text-classification
- int8
---
# Dynamically quantized DistilBERT base uncased finetuned SST-2
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)

## Model Details
**Model Description:** This model is a DistilBERT fine-tuned on SST-2 and dynamically quantized with optimum-intel using Intel® Neural Compressor.
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** For more details on the original model, we encourage users to check out this model card.
## How to Get Started With the Model
To load the quantized model, you can do as follows:

```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification

model = IncQuantizedModelForSequenceClassification.from_pretrained(
    "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic"
)
```