DunnBC22 committed on
Commit e14882c · 1 Parent(s): 1cf9fa1

Update README.md

Files changed (1)
  1. README.md +13 -9
README.md CHANGED
@@ -4,17 +4,20 @@ tags:
 - generated_from_trainer
 metrics:
 - accuracy
+- f1
+- recall
+- precision
 model-index:
 - name: mega-base-wikitext-News_About_Gold
   results: []
+language:
+- en
+pipeline_tag: text-classification
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # mega-base-wikitext-News_About_Gold
 
-This model is a fine-tuned version of [mnaylor/mega-base-wikitext](https://huggingface.co/mnaylor/mega-base-wikitext) on the None dataset.
+This model is a fine-tuned version of [mnaylor/mega-base-wikitext](https://huggingface.co/mnaylor/mega-base-wikitext).
 It achieves the following results on the evaluation set:
 - Loss: 1.0031
 - Accuracy: 0.5014
@@ -30,15 +33,17 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)/News%20About%20Gold%20-%20Sentiment%20Analysis%20-%20MEGA%20with%20W%26B.ipynb
+
+This project is part of a comparison of seven (7) transformers. Here is the README page for the comparison: https://github.com/DunnBC22/NLP_Projects/tree/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)
 
 ## Intended uses & limitations
 
-More information needed
+This model is intended to demonstrate my ability to solve a complex problem using technology.
 
 ## Training and evaluation data
 
-More information needed
+Dataset Source: https://www.kaggle.com/datasets/ankurzing/sentiment-analysis-in-commodity-market-gold
 
 ## Training procedure
 
@@ -63,10 +68,9 @@ The following hyperparameters were used during training:
 | 1.05 | 4.0 | 532 | 1.0112 | 0.4962 | 0.3917 | 0.4962 | 0.3206 | 0.4962 | 0.4962 | 0.3783 | 0.5846 | 0.4962 | 0.4596 |
 | 1.0309 | 5.0 | 665 | 1.0031 | 0.5014 | 0.4023 | 0.5014 | 0.3282 | 0.5014 | 0.5014 | 0.3835 | 0.5783 | 0.5014 | 0.4548 |
 
-
 ### Framework versions
 
 - Transformers 4.28.1
 - Pytorch 2.0.0
 - Datasets 2.11.0
-- Tokenizers 0.13.3
+- Tokenizers 0.13.3
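This commit adds f1, recall, and precision to the card's metrics alongside accuracy. For reference, these can be computed from label/prediction pairs as in the minimal pure-Python sketch below (macro averaging shown; this is an illustration, not the evaluation code used for this model, which reports macro, micro, and weighted variants):

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy and macro-averaged precision, recall, and F1."""
    labels = sorted(set(y_true) | set(y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls, f1s = [], [], []
    for label in labels:
        # Per-class counts: true positives, false positives, false negatives.
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        precisions.append(precision)
        recalls.append(recall)
        f1s.append(f1)
    # Macro averaging: unweighted mean over classes.
    n = len(labels)
    return {
        "accuracy": accuracy,
        "precision": sum(precisions) / n,
        "recall": sum(recalls) / n,
        "f1": sum(f1s) / n,
    }
```

Micro and weighted averages differ only in how the per-class scores are combined (pooled counts vs. support-weighted means), which is why the evaluation table above carries separate columns for each.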