## Model Description

Fine-tuned xlm-roberta-base for sentiment analysis in English and Bahasa Indonesia.
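With the transformers library, inference would typically go through a `text-classification` pipeline; the repository id and label names below are placeholders, not taken from this card. The helper shows the post-processing step explicitly: converting the model's raw logits into a (label, probability) pair via softmax.

```python
import math

# Typical usage with the transformers library (repo id is a placeholder —
# substitute this model's actual Hugging Face id):
#
#   from transformers import pipeline
#   clf = pipeline("text-classification", model="<repo-id-of-this-model>")
#   clf("Filmnya bagus sekali!")

def logits_to_prediction(logits, id2label):
    """Softmax over raw logits, then return the top label and its probability."""
    # Subtract the max for numerical stability before exponentiating.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return id2label[best], probs[best]
```

The label mapping (`id2label`) ships with the model config, so a pipeline user never handles it directly; the helper is only to make the post-processing visible.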
## Training results

Trained on a TPU VM v4-8 for ~3 hours.
| epoch | step  | train_accuracy | train_loss  | val_accuracy | val_loss    |
|-------|-------|----------------|-------------|--------------|-------------|
| 0     | 5391  | 0.955597997    | 0.118527733 | 0.963498533  | 0.098501749 |
| 1     | 10783 | 0.965486944    | 0.092906699 | 0.964689374  | 0.094814248 |
| 2     | 16175 | 0.968293846    | 0.085916176 | 0.965770006  | 0.093040377 |
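As a consistency check, the cumulative step counts in the table imply a fixed number of optimizer steps per epoch (a small sketch; it assumes steps are 0-indexed, and since the batch size is not stated here, the dataset size cannot be recovered from this alone):

```python
# Cumulative global step at the end of each epoch, from the table above.
cumulative_steps = [5391, 10783, 16175]

# Steps taken within each epoch. With 0-indexed steps, epoch 0 ending at
# step 5391 means 5392 steps were taken in that epoch.
per_epoch = [b - a for a, b in zip([-1] + cumulative_steps, cumulative_steps)]
print(per_epoch)  # [5392, 5392, 5392] — the same count every epoch
```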
## Training procedure

For replication, see the GitHub page.
## Special Thanks

- Google's TPU Research Cloud (TRC) for providing the Cloud TPU VM.
- carlesoctav for writing the training script for the TPU VM.
- thonyyy for gathering the sentiment dataset.