---
license: mit
tags:
- int8
- Intel® Neural Compressor
- neural-compressor
- PostTrainingStatic
datasets:
- mnli
metrics:
- accuracy
---
# INT8 RoBERTa large fine-tuned on MNLI
### Post-training static quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [roberta-large-mnli](https://huggingface.co/roberta-large-mnli).
The calibration dataloader is the train dataloader. Because the default calibration sampling size of 100 is not an exact multiple of the batch size 8, the actual sampling size rounds up to 104 (13 batches × 8 samples).
The linear modules **roberta.encoder.layer.16.output.dense**, **roberta.encoder.layer.17.output.dense**, and **roberta.encoder.layer.18.output.dense** fall back to fp32 to keep the relative accuracy loss under 1%.
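The card does not ship the exact quantization script, but the recipe above could be reproduced roughly as in the following sketch using the Intel® Neural Compressor 2.x API; `fp32_model`, `train_dataloader`, and `eval_func` are placeholders for the fine-tuned model, the train dataloader, and an MNLI accuracy function:
```python
from neural_compressor import PostTrainingQuantConfig, quantization

# Keep the three listed linear modules in fp32.
op_name_dict = {
    name: {"weight": {"dtype": ["fp32"]}, "activation": {"dtype": ["fp32"]}}
    for name in (
        "roberta.encoder.layer.16.output.dense",
        "roberta.encoder.layer.17.output.dense",
        "roberta.encoder.layer.18.output.dense",
    )
}

conf = PostTrainingQuantConfig(
    approach="static",
    calibration_sampling_size=[100],  # rounds up to 104 with batch size 8
    op_name_dict=op_name_dict,
)

# fp32_model, train_dataloader, and eval_func are assumed to be defined.
int8_model = quantization.fit(
    fp32_model,
    conf,
    calib_dataloader=train_dataloader,
    eval_func=eval_func,
)
int8_model.save("./int8_model")
```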
### Evaluation result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-acc)** |89.8624|90.5960|
| **Model size** |381M|1.4G|
### Load with Intel® Neural Compressor:
```python
from optimum.intel import INCModelForSequenceClassification

# INT8 model quantized from roberta-large-mnli with Intel® Neural Compressor
model_id = "Intel/roberta-large-mnli-int8-static"
int8_model = INCModelForSequenceClassification.from_pretrained(model_id)
```
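A minimal inference sketch on an MNLI-style premise/hypothesis pair, assuming the tokenizer of the original [roberta-large-mnli](https://huggingface.co/roberta-large-mnli) model; the example sentences are illustrative only:
```python
import torch
from transformers import AutoTokenizer
from optimum.intel import INCModelForSequenceClassification

model_id = "Intel/roberta-large-mnli-int8-static"
tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
int8_model = INCModelForSequenceClassification.from_pretrained(model_id)

# Encode a premise/hypothesis pair and classify the relation.
inputs = tokenizer(
    "A soccer game with multiple males playing.",
    "Some men are playing a sport.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = int8_model(**inputs).logits
print(int8_model.config.id2label[logits.argmax(dim=-1).item()])  # e.g. ENTAILMENT
```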