---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
license: llama3.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: results_1011
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# results_1011

This model is a PEFT fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on an unspecified dataset (the dataset name was not recorded by the Trainer).
It achieves the following results on the evaluation set:
- Loss: 1.9956
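
Since the checkpoint is a PEFT adapter rather than full model weights, it can be loaded roughly as in the sketch below. The repo id `your-username/results_1011` is a placeholder (the actual hub path is not recorded in this card), and the prompt is illustrative only.

```python
# Minimal loading sketch, assuming the adapter was pushed to the Hub.
# "your-username/results_1011" is a placeholder repo id; substitute the
# real hub path or a local directory containing the adapter weights.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "your-username/results_1011",  # placeholder; not recorded in this card
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```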

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 20
- mixed_precision_training: Native AMP
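
For concreteness, the sketch below reconstructs this configuration with trl's `SFTTrainer`. The dataset file and the LoRA settings (`r`, `lora_alpha`, dropout) are assumptions, since neither the dataset nor the adapter hyperparameters are recorded in this card; everything else mirrors the values listed above.

```python
# Hedged reproduction sketch of the training setup above, using trl + peft.
# The dataset file and all LoraConfig values are placeholders: the card
# records neither the dataset nor the adapter hyperparameters.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="train.jsonl")  # placeholder

peft_config = LoraConfig(  # assumed values, not from the card
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="results_1011",
    learning_rate=2e-4,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=6,
    gradient_accumulation_steps=8,  # 3 x 8 = 24 total train batch size
    num_train_epochs=20,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    seed=42,
    fp16=True,  # "Native AMP"; fp16 vs. bf16 not recorded, fp16 assumed
    eval_strategy="steps",
    eval_steps=100,  # matches the 100-step evaluation cadence in the table
    logging_steps=100,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-3B-Instruct",
    args=args,
    train_dataset=dataset["train"],
    peft_config=peft_config,
)
trainer.train()
```

The default AdamW betas (0.9, 0.999) and epsilon 1e-08 match the optimizer listed above, so they are not set explicitly.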

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6788        | 0.3901 | 100  | 2.2881          |
| 2.4361        | 0.7801 | 200  | 2.2154          |
| 2.3903        | 1.1702 | 300  | 2.1747          |
| 2.3166        | 1.5602 | 400  | 2.1358          |
| 2.2868        | 1.9503 | 500  | 2.1058          |
| 2.2048        | 2.3403 | 600  | 2.0800          |
| 2.1999        | 2.7304 | 700  | 2.0613          |
| 2.1711        | 3.1204 | 800  | 2.0471          |
| 2.1038        | 3.5105 | 900  | 2.0329          |
| 2.1115        | 3.9005 | 1000 | 2.0185          |
| 2.0859        | 4.2906 | 1100 | 2.0129          |
| 2.0455        | 4.6806 | 1200 | 2.0084          |
| 2.0338        | 5.0707 | 1300 | 2.0022          |
| 1.9991        | 5.4608 | 1400 | 2.0011          |
| 1.9948        | 5.8508 | 1500 | 1.9966          |
| 1.9480        | 6.2409 | 1600 | 1.9977          |
| 1.9773        | 6.6309 | 1700 | 1.9909          |
| 1.9228        | 7.0210 | 1800 | 1.9915          |
| 1.8997        | 7.4110 | 1900 | 1.9947          |
| 1.9212        | 7.8011 | 2000 | 1.9868          |
| 1.8786        | 8.1911 | 2100 | 2.0092          |
| 1.8762        | 8.5812 | 2200 | 2.0070          |
| 1.8724        | 8.9712 | 2300 | 2.0023          |
| 1.8604        | 9.3613 | 2400 | 1.9978          |
| 1.8436        | 9.7513 | 2500 | 1.9956          |


### Framework versions

- PEFT 0.12.0
- Transformers 4.45.0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.20.1
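
If loading fails or behaves unexpectedly, comparing the installed packages against these versions is a quick sanity check; the optional snippet below only prints a warning on mismatch.

```python
# Optional sanity check: warn when installed versions differ from the
# versions this adapter was trained with (listed above).
import datasets, peft, tokenizers, torch, transformers

expected = {
    "peft": "0.12.0",
    "transformers": "4.45.0",
    "torch": "2.4.0+cu121",
    "datasets": "2.21.0",
    "tokenizers": "0.20.1",
}
installed = {
    "peft": peft.__version__,
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    if installed[name] != want:
        print(f"warning: {name} {installed[name]} (trained with {want})")
```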