---
license: apache-2.0
base_model: pszemraj/MiniLMv2-L6-H384_R-fineweb-100k
tags:
- data processing
- data filter
- text quality
metrics:
- accuracy
datasets:
- pszemraj/OCR-quality-classification
language:
- en
---
# MiniLMv2-L6-H384_R-OCR-quality
This model is a fine-tuned version of [pszemraj/MiniLMv2-L6-H384_R-fineweb-100k](https://hf.co/pszemraj/MiniLMv2-L6-H384_R-fineweb-100k) on the `pszemraj/OCR-quality-classification` dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0162
- Accuracy: 0.996
- Num Input Tokens Seen: 61536256
## Intended uses & limitations
The model predicts whether a document's text is clean or noisy (e.g. degraded OCR output), so it can serve as a lightweight quality filter in data-processing pipelines.
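
A minimal inference sketch with the `transformers` pipeline is shown below. The hub repository id `pszemraj/MiniLMv2-L6-H384_R-OCR-quality` and the exact label strings are assumptions inferred from the model name and task, not confirmed by this card; check `model.config.id2label` after loading.

```python
# Minimal sketch: classify a document as clean or noisy text.
# Assumptions: the repo id and label names below are inferred, not confirmed
# by this card -- inspect the model config for the actual label mapping.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="pszemraj/MiniLMv2-L6-H384_R-OCR-quality",  # assumed repo id
)

text = "Th1s  d0cum3nt   c0ntains  many OCR artifa cts and n oise."
print(classifier(text, truncation=True))
# -> e.g. [{'label': 'noisy', 'score': 0.99}]  (label names may differ)
```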
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; an equivalent `TrainingArguments` sketch is shown after the list:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2.0
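
For reference, the hyperparameters above correspond roughly to the `transformers.TrainingArguments` below. This is a sketch reconstructed from the list, not the original training script; the output directory and any unlisted settings are placeholders.

```python
# Approximate TrainingArguments matching the hyperparameters listed above.
# Reconstructed from the card, not the original training script; output_dir
# and any settings not listed in the card are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./MiniLMv2-L6-H384_R-OCR-quality",  # placeholder path
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # effective train batch size: 8 * 8 = 64
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=2.0,
)
```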
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:|
| 0.0298 | 0.2660 | 250 | 0.0448 | 0.99 | 8192000 |
| 0.0141 | 0.5321 | 500 | 0.0330 | 0.99 | 16384000 |
| 0.02 | 0.7981 | 750 | 0.0298 | 0.99 | 24576000 |
| 0.0085 | 1.0641 | 1000 | 0.0222 | 0.994 | 32765952 |
| 0.0174 | 1.3301 | 1250 | 0.0207 | 0.994 | 40957952 |
| 0.0104 | 1.5962 | 1500 | 0.0202 | 0.996 | 49149952 |
| 0.0237 | 1.8622 | 1750 | 0.0185 | 0.996 | 57341952 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1