---
license: mit
base_model: csarron/mobilebert-uncased-squad-v2
tags:
- generated_from_keras_callback
model-index:
- name: badokorach/mobilebert-uncased-finetuned-agic-031223
  results: []
---


# badokorach/mobilebert-uncased-finetuned-agic-031223

This model is a fine-tuned version of [csarron/mobilebert-uncased-squad-v2](https://huggingface.co/csarron/mobilebert-uncased-squad-v2) on an unknown dataset.
It achieves the following results after the final training epoch (the validation loss of 0.0 reported for every epoch suggests no validation data was actually evaluated):
- Train Loss: 1.1210
- Validation Loss: 0.0
- Epoch: 14

## Model description

More information needed

## Intended uses & limitations

More information needed
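Since the card does not yet include a usage example, here is a minimal sketch, assuming the checkpoint is published under the repository name in this card's title and remains compatible with the standard `question-answering` pipeline inherited from its SQuAD v2 base model (the question and context below are placeholders, not from any evaluation data):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint into a QA pipeline; the model id is
# taken from this card's title and is assumed to be publicly available.
qa = pipeline(
    "question-answering",
    model="badokorach/mobilebert-uncased-finetuned-agic-031223",
)

# Illustrative inputs only: extractive QA returns a span from `context`.
result = qa(
    question="What architecture is this model based on?",
    context="The checkpoint is a fine-tuned version of MobileBERT, "
            "a compact BERT variant designed for mobile devices.",
)
print(result["answer"], result["score"])
```

The pipeline returns a dict with `answer`, `score`, `start`, and `end` keys, where `start`/`end` are character offsets into the context.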

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: AdamWeightDecay
  - learning_rate: PolynomialDecay (initial_learning_rate: 1e-05, decay_steps: 1380, end_learning_rate: 0.0, power: 1.0, cycle: False)
  - beta_1: 0.9, beta_2: 0.999, epsilon: 1e-08, amsgrad: False
  - weight_decay_rate: 0.02
- training_precision: mixed_float16
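The `PolynomialDecay` schedule above with `power: 1.0` is a plain linear decay from 1e-05 to 0 over 1380 steps. A small sketch in plain Python (no Keras dependency) shows the learning rate it produces at any step; the formula matches Keras's documented behaviour for `cycle=False`, with defaults filled in from the config above:

```python
def polynomial_decay(step, initial_lr=1e-05, decay_steps=1380,
                     end_lr=0.0, power=1.0):
    """Learning rate at a given optimizer step (cycle=False)."""
    step = min(step, decay_steps)  # clamp: lr stays at end_lr afterwards
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

print(polynomial_decay(0))     # 1e-05 at the start of training
print(polynomial_decay(690))   # 5e-06 at the halfway point
print(polynomial_decay(1380))  # 0.0 once decay_steps is reached
```

With `power=1.0` the exponent is a no-op, so the schedule interpolates linearly between the initial and end learning rates and then holds at the end value.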

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.5468     | 0.0             | 0     |
| 2.1302     | 0.0             | 1     |
| 1.9414     | 0.0             | 2     |
| 1.8090     | 0.0             | 3     |
| 1.7032     | 0.0             | 4     |
| 1.5721     | 0.0             | 5     |
| 1.5043     | 0.0             | 6     |
| 1.4082     | 0.0             | 7     |
| 1.3307     | 0.0             | 8     |
| 1.2782     | 0.0             | 9     |
| 1.2426     | 0.0             | 10    |
| 1.2036     | 0.0             | 11    |
| 1.1784     | 0.0             | 12    |
| 1.1151     | 0.0             | 13    |
| 1.1210     | 0.0             | 14    |


### Framework versions

- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0