---
library_name: transformers
license: apache-2.0
language:
- multilingual
- af
- am
- ar
- as
- azb
- be
- bg
- bm
- bn
- bo
- bs
- ca
- ceb
- cs
- cy
- da
- de
- du
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- ga
- gd
- gl
- ha
- hi
- hr
- ht
- hu
- id
- ig
- is
- it
- iw
- ja
- jv
- ka
- ki
- kk
- km
- ko
- la
- lb
- ln
- lo
- lt
- lv
- mi
- mr
- ms
- mt
- my
- 'no'
- oc
- pa
- pl
- pt
- qu
- ro
- ru
- sa
- sc
- sd
- sg
- sk
- sl
- sm
- so
- sq
- sr
- ss
- sv
- sw
- ta
- te
- th
- ti
- tl
- tn
- tpi
- tr
- ts
- tw
- uk
- ur
- uz
- vi
- war
- wo
- xh
- yo
- zh
- zu
base_model:
- Qwen/Qwen2.5-7B-Instruct
- timm/ViT-SO400M-14-SigLIP-384
pipeline_tag: image-text-to-text
---

# Centurio Qwen

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Model type:** Centurio is an open-source multilingual large vision-language model.
- **Training Data:** COMING SOON 
- **Languages:** The model was trained with the following 100 languages: `af, am, ar, ar-eg, as, azb, be, bg, bm, bn, bo, bs, ca, ceb, cs, cy, da, de, du, el, en, eo, es, et, eu, fa, fi, fr, ga, gd, gl, ha, hi, hr, ht, hu, id, ig, is, it, iw, ja, jv, ka, ki, kk, km, ko, la, lb, ln, lo, lt, lv, mi, mr, ms, mt, my, no, oc, pa, pl, pt, qu, ro, ru, sa, sc, sd, sg, sk, sl, sm, so, sq, sr, ss, sv, sw, ta, te, th, ti, tl, tn, tpi, tr, ts, tw, uk, ur, uz, vi, war, wo, xh, yo, zh, zu`
- **License:** This work is released under the Apache 2.0 license. 

### Model Sources 

<!-- Provide the basic links for the model. -->

- **Repository:** [gregor-ge.github.io/Centurio](https://gregor-ge.github.io/Centurio)
- **Paper:** [arXiv](https://arxiv.org/abs/2501.05122)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

The model can be used directly through the `transformers` library with our custom code. 

```python
from transformers import AutoModelForCausalLM, AutoProcessor
import timm
from PIL import Image    
import requests

url = "https://upload.wikimedia.org/wikipedia/commons/b/bd/Golden_Retriever_Dukedestiny01_drvd.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model_name = "WueNLP/centurio_qwen"

processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)

# Images in the prompt are indicated with '<image_placeholder>'.
prompt = "<image_placeholder>\nBriefly describe the image in German."

messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # This is the system prompt used during our training.
    {"role": "user", "content": prompt}
]

text = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True
)

model_inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=128
)

generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

```
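The list comprehension near the end strips the prompt tokens from each generated sequence, since `generate` returns the prompt followed by the continuation. A self-contained illustration of that slicing with dummy token lists:

```python
# generate() output starts with the prompt tokens, so slicing off
# len(input_ids) leaves only the model's newly generated tokens.
input_ids = [[101, 42, 7]]               # dummy prompt token ids
generated_ids = [[101, 42, 7, 55, 99]]   # dummy generate() output
continuations = [
    out[len(inp):] for inp, out in zip(input_ids, generated_ids)
]
print(continuations)  # [[55, 99]]
```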

#### Multiple Images
We natively support multi-image inputs. You only have to 1) include more `<image_placeholder>` while 2) passing all images of the *entire batch* as a flat list:

```python
[...]
# Variables reused from above.

processor.tokenizer.padding_side = "left" # default is 'right' but has to be 'left' for batched generation to work correctly!

image_multi_1, image_multi_2 = [...] # prepare additional images

prompt_multi = "What is the difference between the following images?\n<image_placeholder><image_placeholder>\nAnswer in German."

messages_multi = [
    {"role": "system", "content": "You are a helpful assistant."}, 
    {"role": "user", "content": prompt_multi}
]

text_multi = processor.apply_chat_template(
    messages_multi,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = processor(text=[text, text_multi], images=[image, image_multi_1, image_multi_2], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=128
)

[...]

```
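Because the processor matches images to placeholders in order across the whole batch, it is easy to miscount when samples have different numbers of images. A minimal helper (hypothetical, not part of the model's code) that flattens per-sample image lists into the expected flat list while checking placeholder counts:

```python
def flatten_batch_images(texts, images_per_sample, placeholder="<image_placeholder>"):
    """Flatten per-sample image lists into the single flat list the
    processor expects, verifying each text has one placeholder per image."""
    flat = []
    for text, images in zip(texts, images_per_sample):
        n_expected = text.count(placeholder)
        if n_expected != len(images):
            raise ValueError(f"text expects {n_expected} images, got {len(images)}")
        flat.extend(images)
    return flat

# One single-image sample and one two-image sample:
texts = ["<image_placeholder>\nDescribe.", "<image_placeholder><image_placeholder>\nCompare."]
flat = flatten_batch_images(texts, [["img_a"], ["img_b", "img_c"]])
print(flat)  # ['img_a', 'img_b', 'img_c']
```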

## Bias, Risks, and Limitations

- General biases, risks, and limitations of large vision-language models, such as hallucinations or biases inherited from the training data, apply.
- This is a research project and *not* recommended for production use.
- Multilingual: Performance and generation quality can differ widely between languages.
- OCR: The model struggles both with small text and with writing in non-Latin scripts.


## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```
@article{centurio2025,
  author       = {Gregor Geigle and
                  Florian Schneider and
                  Carolin Holtermann and
                  Chris Biemann and
                  Radu Timofte and
                  Anne Lauscher and
                  Goran Glava\v{s}},
  title        = {Centurio: On Drivers of Multilingual Ability of Large Vision-Language Models},
  journal      = {arXiv},
  volume       = {abs/2501.05122},
  year         = {2025},
  url          = {https://arxiv.org/abs/2501.05122},
  eprinttype   = {arXiv},
  eprint       = {2501.05122},
}
```