---
tags:
- generated_from_keras_callback
model-index:
- name: xlm-roberta-longformer-large-16384
  results: []
license: mit
language: 
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---

# xlm-roberta-longformer-large-16384

xlm-roberta-longformer is a multilingual [Longformer](https://arxiv.org/abs/2004.05150) initialized with [XLM-RoBERTa](https://huggingface.co/xlm-roberta-large)'s weights, without any further pretraining, and supports sequences of up to 16384 tokens. It is intended to be fine-tuned on a downstream task.

| Model | attention_window | hidden_size | num_hidden_layers | model_max_length |
| --- | --- | --- | --- | --- |
| [base](https://huggingface.co/hyperonym/xlm-roberta-longformer-base-16384) | 256 | 768 | 12 | 16384 |
| [large](https://huggingface.co/hyperonym/xlm-roberta-longformer-large-16384) | 512 | 1024 | 24 | 16384 |
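
For orientation, loading the large checkpoint with the 🤗 Transformers `Auto*` classes might look like the following minimal sketch (the placeholder input string is illustrative, and the TensorFlow classes are an assumption based on the framework versions listed below):

```python
from transformers import AutoTokenizer, TFAutoModel

# Checkpoint name from the table above; TensorFlow weights are assumed,
# matching the framework versions listed in this card.
checkpoint = "hyperonym/xlm-roberta-longformer-large-16384"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModel.from_pretrained(checkpoint)

# Inputs of up to model_max_length = 16384 tokens can be encoded directly.
inputs = tokenizer("Hello, world!", return_tensors="tf")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, 1024)
```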

### Framework versions

- Transformers 4.26.0
- TensorFlow 2.11.0
- Tokenizers 0.13.2