# ChristianMDahl/segFormer-b3-horizontal
This model is a fine-tuned version of [nvidia/mit-b3](https://huggingface.co/nvidia/mit-b3) on an unknown dataset. It achieves the following results on the evaluation set (a loading sketch follows the results):
- Train Loss: 0.1286
- Validation Loss: 0.1658
- Epoch: 19
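
The card does not yet state the intended task, so the snippet below is only a minimal loading sketch that assumes the checkpoint is a TensorFlow SegFormer semantic-segmentation model (consistent with the nvidia/mit-b3 backbone and the TensorFlow version listed under Framework versions). If the repository does not ship a preprocessor config, loading the image processor from nvidia/mit-b3 is a reasonable fallback.

```python
# Minimal loading sketch. Assumption: the checkpoint is a TF SegFormer
# semantic-segmentation model; the task is not stated in this card.
import numpy as np
import tensorflow as tf
from transformers import AutoImageProcessor, TFSegformerForSemanticSegmentation

repo_id = "ChristianMDahl/segFormer-b3-horizontal"
processor = AutoImageProcessor.from_pretrained(repo_id)  # or "nvidia/mit-b3" if no preprocessor config is present
model = TFSegformerForSemanticSegmentation.from_pretrained(repo_id)

# Dummy 512x512 RGB image; replace with a real image (PIL.Image or NumPy array).
image = np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8)
inputs = processor(images=image, return_tensors="tf")
outputs = model(**inputs, training=False)

# SegFormer logits come out at 1/4 resolution with shape
# (batch, num_labels, height/4, width/4); upsample and argmax for a class map.
logits = tf.transpose(outputs.logits, [0, 2, 3, 1])  # to channels-last
logits = tf.image.resize(logits, size=(512, 512))    # back to input size
segmentation = tf.argmax(logits, axis=-1)             # (batch, 512, 512)
```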
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the optimizer sketch after this list):
- optimizer: {'name': 'Adam', 'learning_rate': 6e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
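
The optimizer entry above corresponds to a standard Keras Adam configuration; a minimal sketch of that setup (TensorFlow 2.10, float32) is shown below.

```python
# Sketch of the optimizer configuration reported above (Keras Adam, TF 2.10, float32).
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=6e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
# The reported 'decay': 0.0 is the legacy Keras per-step learning-rate decay,
# which is already the default (no decay), so it is not set explicitly here.

# Hypothetical usage: compile a Keras model with this optimizer before fitting, e.g.
# model.compile(optimizer=optimizer)
```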
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2493     | 0.2166          | 0     |
| 0.2089     | 0.2028          | 1     |
| 0.1938     | 0.1947          | 2     |
| 0.1836     | 0.1909          | 3     |
| 0.1759     | 0.1842          | 4     |
| 0.1690     | 0.1823          | 5     |
| 0.1636     | 0.1778          | 6     |
| 0.1588     | 0.1753          | 7     |
| 0.1543     | 0.1787          | 8     |
| 0.1511     | 0.1732          | 9     |
| 0.1480     | 0.1729          | 10    |
| 0.1445     | 0.1703          | 11    |
| 0.1420     | 0.1689          | 12    |
| 0.1397     | 0.1681          | 13    |
| 0.1372     | 0.1669          | 14    |
| 0.1358     | 0.1700          | 15    |
| 0.1334     | 0.1698          | 16    |
| 0.1317     | 0.1662          | 17    |
| 0.1301     | 0.1683          | 18    |
| 0.1286     | 0.1658          | 19    |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.10.1
- Datasets 2.12.0
- Tokenizers 0.13.0.dev0