distilbert-base-uncased_finetuned_on_emotions_data
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.1561
- Accuracy: 0.933
- F1: 0.9328
Model description
This model is designed to analyze text and classify it into different emotional categories, such as joy, sadness, anger, etc. It has been trained on a dataset specifically labeled with emotions, allowing it to identify the emotional tone of the input text. The model works by processing the text and predicting which emotion best fits the given context.
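As a quick usage sketch (not part of the original card), the model can be loaded with the `transformers` pipeline API; the repository id below is assumed to be the one this card belongs to, and the returned label names depend on the model's config:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub (repo id assumed from this card)
classifier = pipeline(
    "text-classification",
    model="Shubhu07/distilbert-base-uncased_finetuned_on_emotions_data",
)

# Classify the emotional tone of a short text
print(classifier("I can't believe how wonderful today turned out!"))
# e.g. [{'label': 'joy', 'score': 0.99}]  (label names depend on the config)
```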
Intended uses & limitations
More information needed
Limitations
- The model may still confuse "fear" and "anger", since both emotions can be expressed in similar ways, especially in situations involving frustration, stress, or danger. The language used to express them also overlaps: words like "nervous," "frustrated," or "threatened" can be read as either fear or anger depending on context, which makes the two hard to distinguish.
- Similarly, it may confuse "love" and "joy".
Training and evaluation data
I've used the emotion dataset available on Hugging Face. Training data: emotion['train']; evaluation data: emotion['validation'].
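For reference, a minimal sketch of loading those splits, assuming the dataset is the `emotion` dataset hosted on the Hugging Face Hub (dair-ai/emotion), which ships with `train`, `validation`, and `test` splits:

```python
from datasets import load_dataset

# Load the emotion dataset from the Hub (assumed to be dair-ai/emotion)
emotions = load_dataset("emotion")

train_ds = emotions["train"]       # used for fine-tuning
eval_ds = emotions["validation"]   # used for evaluation

# The six emotion labels carried by the dataset
print(train_ds.features["label"].names)
# ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
```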
Confusion matrix:
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
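A minimal sketch of how these hyperparameters could be expressed as `TrainingArguments`; the `output_dir` and the per-epoch evaluation setting are illustrative assumptions, not stated above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased_finetuned_on_emotions_data",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=4,
    eval_strategy="epoch",  # assumption: evaluate once per epoch, as in the results table
)
```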
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|---|---|---|---|---|---|
| 0.7824 | 1.0 | 250 | 0.2717 | 0.9145 | 0.9149 |
| 0.2093 | 2.0 | 500 | 0.1788 | 0.93 | 0.9306 |
| 0.1379 | 3.0 | 750 | 0.1594 | 0.9345 | 0.9349 |
| 0.1106 | 4.0 | 1000 | 0.1561 | 0.933 | 0.9328 |
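The Accuracy and F1 columns could be produced by a metrics function along the lines of the sketch below; the weighted F1 averaging is an assumption, since the card does not state which averaging was used:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Compute accuracy and F1 from the Trainer's (logits, labels) predictions
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),  # averaging is assumed
    }
```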
Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0