XLM-R-BERTić-Tweet-Base

XLM-R-BERTić-Tweet-Base is a version of the XLM-R-BERTić model that was additionally pretrained for the social media domain. It was further pretrained on 37,200 Serbian-language tweets related to COVID-19 vaccination (approximately 1.3 million tokens), adapting it to the informal writing style and linguistic features typical of social media platforms.
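
Since the model was pretrained with a masked language modeling objective, it can be loaded directly for masked-token prediction. The snippet below is a minimal sketch using the Hugging Face transformers library; the repository ID is an assumption based on the model name, so check the Hub page for the exact path.

```python
# Minimal sketch: masked-token prediction with the base model.
# The repository ID below is an assumption; verify it on the Hub.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_id = "XLM-R-BERTic-Tweet-Base"  # assumed repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
# Example Serbian tweet-style sentence; XLM-R models use the <mask> token.
print(fill("Danas sam primio <mask> dozu vakcine."))
```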

Its fine-tuned version for the five-class sentiment analysis task is available as XLM-R-BERTić-Tweet.
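
For sentiment analysis, the fine-tuned model can be used through a text-classification pipeline. This is a hedged sketch: the repository ID and the returned label names are assumptions, so consult the XLM-R-BERTić-Tweet model card for the actual identifier and label mapping.

```python
# Sketch of five-class sentiment classification with the fine-tuned model.
# The model ID is assumed; the actual labels depend on that model's config.
from transformers import pipeline

classifier = pipeline("text-classification", model="XLM-R-BERTic-Tweet")  # assumed ID
print(classifier("Vakcina je bila potpuno bezbolna, preporučujem svima."))
```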

Model size: 560M parameters · Tensor type: F32 (Safetensors)