# Mistral-7B-Instruct-v0.2-AWQ-FaVe-rank8-10epochs
This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-AWQ on an unspecified dataset (the training data is not documented in this card). It achieves the following result on the evaluation set at the end of training:
- Loss: 0.4167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 10
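
The training script itself is not included in this card. As a hedged reconstruction, the hyperparameters above map onto a PEFT LoRA run (the "rank8" in the repo name suggests a LoRA rank of 8) driven by the Hugging Face `Trainer` roughly as sketched below; the `lora_alpha` value, target modules, and dataset plumbing are assumptions, not documented facts:

```python
# Hypothetical reconstruction of the training configuration; the actual
# script and dataset are not documented in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-AWQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# "rank8" in the repo name suggests r=8; lora_alpha is an assumption.
peft_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)

args = TrainingArguments(
    output_dir="Mistral-7B-Instruct-v0.2-AWQ-FaVe-rank8-10epochs",
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size: 1 * 4 = 4
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_steps=10,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
    evaluation_strategy="steps",    # validation loss is reported every 10 steps
    eval_steps=10,
    logging_steps=20,               # training loss is reported every 20 steps
)
# A Trainer would then be constructed with these args plus the
# (undocumented) train/eval datasets, and trainer.train() called.
```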
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.2685 | 10   | 2.3328          |
| 2.7413        | 0.5369 | 20   | 1.6028          |
| 2.7413        | 0.8054 | 30   | 1.1226          |
| 1.3675        | 1.0738 | 40   | 0.8086          |
| 1.3675        | 1.3423 | 50   | 0.6971          |
| 0.795         | 1.6107 | 60   | 0.6129          |
| 0.795         | 1.8792 | 70   | 0.5594          |
| 0.6124        | 2.1477 | 80   | 0.5327          |
| 0.6124        | 2.4161 | 90   | 0.5144          |
| 0.4835        | 2.6846 | 100  | 0.4779          |
| 0.4835        | 2.9530 | 110  | 0.4559          |
| 0.503         | 3.2215 | 120  | 0.4455          |
| 0.503         | 3.4899 | 130  | 0.4149          |
| 0.405         | 3.7584 | 140  | 0.4071          |
| 0.405         | 4.0268 | 150  | 0.4065          |
| 0.3911        | 4.2953 | 160  | 0.4061          |
| 0.3911        | 4.5638 | 170  | 0.3963          |
| 0.3443        | 4.8322 | 180  | 0.3838          |
| 0.3443        | 5.1007 | 190  | 0.3848          |
| 0.3159        | 5.3691 | 200  | 0.3880          |
| 0.3159        | 5.6376 | 210  | 0.3733          |
| 0.2756        | 5.9060 | 220  | 0.3988          |
| 0.2756        | 6.1745 | 230  | 0.3966          |
| 0.2368        | 6.4430 | 240  | 0.3997          |
| 0.2368        | 6.7114 | 250  | 0.3811          |
| 0.2615        | 6.9799 | 260  | 0.3870          |
| 0.2615        | 7.2483 | 270  | 0.3982          |
| 0.1984        | 7.5168 | 280  | 0.4125          |
| 0.1984        | 7.7852 | 290  | 0.3856          |
| 0.2269        | 8.0537 | 300  | 0.3809          |
| 0.2269        | 8.3221 | 310  | 0.4043          |
| 0.1986        | 8.5906 | 320  | 0.4132          |
| 0.1986        | 8.8591 | 330  | 0.4088          |
| 0.1792        | 9.1275 | 340  | 0.4098          |
| 0.1792        | 9.3960 | 350  | 0.4169          |
| 0.1845        | 9.6644 | 360  | 0.4168          |
| 0.1845        | 9.9329 | 370  | 0.4167          |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
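
### How to use

The card does not ship a usage snippet. The following is a minimal inference sketch, assuming this repository contains a PEFT LoRA adapter to be loaded on top of the AWQ-quantized base model (which requires the `autoawq` package); the prompt is illustrative only:

```python
# Minimal inference sketch; assumes this repo holds only a PEFT adapter
# and that the AWQ base model can be loaded locally.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-AWQ"
adapter_id = "Ferdi/Mistral-7B-Instruct-v0.2-AWQ-FaVe-rank8-10epochs"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Mistral-Instruct expects the [INST] ... [/INST] chat format;
# apply_chat_template produces it from a message list.
messages = [{"role": "user", "content": "Summarize what AWQ quantization does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```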
## Model tree for Ferdi/Mistral-7B-Instruct-v0.2-AWQ-FaVe-rank8-10epochs

- Base model: mistralai/Mistral-7B-Instruct-v0.2
- Quantized: TheBloke/Mistral-7B-Instruct-v0.2-AWQ