Core implementation of Jina XLM-RoBERTa

This implementation is adapted from XLM-RoBERTa. In contrast to the original implementation, this model uses rotary position embeddings (RoPE) and supports FlashAttention 2.
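Because the modeling code lives in the model repository rather than in the transformers library itself, the model must be loaded with trust_remote_code=True. A minimal sketch, assuming the weights are published under the hypothetical repo id jinaai/xlm-roberta-flash-implementation and that the custom code enables FlashAttention 2 automatically when it is installed:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical repo id; substitute the actual model repository.
repo_id = "jinaai/xlm-roberta-flash-implementation"

# trust_remote_code=True is required because the architecture is defined
# by custom code shipped inside the repository, not by transformers itself.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    repo_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # FlashAttention 2 expects fp16/bf16 inputs
)

inputs = tokenizer("Jina XLM-RoBERTa with rotary embeddings", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```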

Models that use this implementation

Converting weights

Weights from an original XLM-RoBERTa model can be converted using the convert_roberta_weights_to_flash.py script in the model repository.
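The authoritative conversion logic lives in that script; as a purely conceptual sketch, such a conversion typically copies the transformer weights while dropping the learned absolute position embeddings that RoPE makes unnecessary (the key filtering and output path below are illustrative assumptions, not the script's actual behavior):

```python
import torch
from transformers import AutoModel

# Conceptual sketch only; the real mapping is defined by
# convert_roberta_weights_to_flash.py in the model repository.
source = AutoModel.from_pretrained("xlm-roberta-base")
state_dict = source.state_dict()

# Rotary embeddings are computed on the fly, so the learned absolute
# position embeddings of the original checkpoint are no longer needed.
converted = {k: v for k, v in state_dict.items() if "position_embeddings" not in k}

torch.save(converted, "xlm_roberta_flash.bin")  # hypothetical output path
```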
