godling committed on
Commit 0651379 · verified · 1 Parent(s): d2bfe99

godling/roberta-base-klue-ynat-classification

Files changed (5)
  1. README.md +8 -8
  2. config.json +1 -1
  3. model.safetensors +1 -1
  4. tokenizer_config.json +1 -1
  5. training_args.bin +2 -2
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+library_name: transformers
 base_model: klue/roberta-base
 tags:
 - generated_from_trainer
@@ -16,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4364
-- Accuracy: 0.857
+- Loss: 0.4595
+- Accuracy: 0.854
 
 ## Model description
 
@@ -40,7 +41,7 @@ The following hyperparameters were used during training:
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 1
 
@@ -48,12 +49,11 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.5276        | 1.0   | 1250 | 0.5157          | 0.855    |
+| 0.5317        | 1.0   | 1250 | 0.5199          | 0.843    |
 
 
 ### Framework versions
 
-- Transformers 4.42.4
-- Pytorch 2.3.1+cu121
-- Datasets 2.20.0
-- Tokenizers 0.19.1
+- Transformers 4.46.2
+- Pytorch 2.5.1+cu121
+- Tokenizers 0.20.3
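The hyperparameters in the README hunk above are enough to sketch the training setup. The 1250 steps at batch size 8 over one epoch imply roughly 10k training examples, so the author likely trained on a subset. A minimal sketch follows, not the author's actual script: the KLUE YNAT dataset, its "title"/"label" columns, the 7-label head, and the learning rate are all assumptions (the README itself only says "an unknown dataset" and this hunk does not show the learning rate).

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed dataset: the repo name points at KLUE YNAT (7-way news-topic classification).
dataset = load_dataset("klue", "ynat")
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("klue/roberta-base", num_labels=7)

def tokenize(batch):
    # YNAT carries the headline in a "title" column (assumption from the public dataset).
    return tokenizer(batch["title"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="roberta-base-klue-ynat-classification",
    per_device_train_batch_size=8,   # train_batch_size: 8
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,                         # seed: 42
    optim="adamw_torch",             # optimizer: adamw_torch; the listed betas/epsilon are its defaults
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    num_train_epochs=1,              # num_epochs: 1
    # learning_rate is not visible in this hunk, so the Trainer default is left in place.
)

trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["validation"])
trainer.train()
```

An accuracy `compute_metrics` callback is omitted for brevity; the README's reported accuracy would come from such a metric during evaluation.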
 
config.json CHANGED
@@ -41,7 +41,7 @@
   "problem_type": "single_label_classification",
   "tokenizer_class": "BertTokenizer",
   "torch_dtype": "float32",
-  "transformers_version": "4.42.4",
+  "transformers_version": "4.46.2",
   "type_vocab_size": 1,
   "use_cache": true,
   "vocab_size": 32000
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4f1ebf2b59451aedf08728c13cbdf89bcd603438f1d8f226ecad4c26e7e02148
+oid sha256:c3000c8645929608aabd7a7673c6deedbbf8dc704d86aa3f7432d239a263c22e
 size 442518124
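model.safetensors is stored as a Git LFS pointer, so the diff only shows the content hash changing while the file size stays identical: same architecture, new weights. A minimal usage sketch for the updated checkpoint, with the repo id taken from the commit header; the exact output labels depend on the model's `id2label` mapping, which this diff does not show:

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="godling/roberta-base-klue-ynat-classification")

# Example Korean news headline (illustrative input only).
print(classifier("삼성전자, 차세대 반도체 공정 기술 공개"))
```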
tokenizer_config.json CHANGED
@@ -42,7 +42,7 @@
     }
   },
   "bos_token": "[CLS]",
-  "clean_up_tokenization_spaces": true,
+  "clean_up_tokenization_spaces": false,
   "cls_token": "[CLS]",
   "do_basic_tokenize": true,
   "do_lower_case": false,
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a524af2f72b006cbd9741d9262e64c80723cd5021ee58f0f4d74f95ee8590eba
-size 5112
+oid sha256:1ab6dce89e5111ea9edc7f595823e4205d696ae9dbad17c5f3f2b175599eb772
+size 5240
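training_args.bin is the pickled `TrainingArguments` object the Trainer saves alongside the model; the small size change (5112 → 5240 bytes) is consistent with the newer Transformers version serializing a few more fields. A hedged sketch of inspecting it after download; note that loading an arbitrary pickle executes code, so only do this for repos you trust:

```python
import torch
from huggingface_hub import hf_hub_download

# Repo id taken from the commit header above.
path = hf_hub_download("godling/roberta-base-klue-ynat-classification", "training_args.bin")

# weights_only=False is needed because this file is a full pickled Python object,
# not a plain tensor checkpoint.
training_args = torch.load(path, weights_only=False)
print(training_args.optim, training_args.lr_scheduler_type, training_args.num_train_epochs)
```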