bullerwins committed • Commit b5a0399
Parent(s): 48e09e6
Upload folder using huggingface_hub

Files changed:
- LICENSE +51 -0
- README.md +133 -0
- chat_template.json +3 -0
- config.json +63 -0
- generation_config.json +14 -0
- measurement.json +0 -0
- merges.txt +0 -0
- model.safetensors.index.json +0 -0
- output-00001-of-00007.safetensors +3 -0
- output-00002-of-00007.safetensors +3 -0
- output-00003-of-00007.safetensors +3 -0
- output-00004-of-00007.safetensors +3 -0
- output-00005-of-00007.safetensors +3 -0
- output-00006-of-00007.safetensors +3 -0
- output-00007-of-00007.safetensors +3 -0
- preprocessor_config.json +19 -0
- tokenizer.json +0 -0
- tokenizer_config.json +207 -0
- vocab.json +0 -0
LICENSE
ADDED
@@ -0,0 +1,51 @@
Qwen LICENSE AGREEMENT

Qwen LICENSE AGREEMENT Release Date: September 19, 2024

By clicking to agree or by using or distributing any portion or element of the Qwen Materials, you will be deemed to have recognized and accepted the content of this Agreement, which is effective immediately.

1. Definitions
a. This Qwen LICENSE AGREEMENT (this "Agreement") shall mean the terms and conditions for use, reproduction, distribution and modification of the Materials as defined by this Agreement.
b. "We" (or "Us") shall mean Alibaba Cloud.
c. "You" (or "Your") shall mean a natural person or legal entity exercising the rights granted by this Agreement and/or using the Materials for any purpose and in any field of use.
d. "Third Parties" shall mean individuals or legal entities that are not under common control with us or you.
e. "Qwen" shall mean the large language models, and software and algorithms, consisting of trained model weights, parameters (including optimizer states), machine-learning model code, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by us.
f. "Materials" shall mean, collectively, Alibaba Cloud's proprietary Qwen and Documentation (and any portion thereof) made available under this Agreement.
g. "Source" form shall mean the preferred form for making modifications, including but not limited to model source code, documentation source, and configuration files.
h. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

2. Grant of Rights
You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Alibaba Cloud's intellectual property or other rights owned by us embodied in the Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Materials.

3. Redistribution
You may distribute copies or make the Materials, or derivative works thereof, available as part of a product or service that contains any of them, with or without modifications, and in Source or Object form, provided that you meet the following conditions:
a. You shall give any other recipients of the Materials or derivative works a copy of this Agreement;
b. You shall cause any modified files to carry prominent notices stating that you changed the files;
c. You shall retain in all copies of the Materials that you distribute the following attribution notices within a "Notice" text file distributed as a part of such copies: "Qwen is licensed under the Qwen LICENSE AGREEMENT, Copyright (c) Alibaba Cloud. All Rights Reserved."; and
d. You may add your own copyright statement to your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of your modifications, or for any such derivative works as a whole, provided your use, reproduction, and distribution of the work otherwise complies with the terms and conditions of this Agreement.

4. Restrictions
If you are commercially using the Materials, and your product or service has more than 100 million monthly active users, you shall request a license from us. You cannot exercise your rights under this Agreement without our express authorization.

5. Rules of use
a. The Materials may be subject to export controls or restrictions in China, the United States or other countries or regions. You shall comply with applicable laws and regulations in your use of the Materials.
b. If you use the Materials or any outputs or results therefrom to create, train, fine-tune, or improve an AI model that is distributed or made available, you shall prominently display "Built with Qwen" or "Improved using Qwen" in the related product documentation.

6. Intellectual Property
a. We retain ownership of all intellectual property rights in and to the Materials and derivatives made by or for us. Conditioned upon compliance with the terms and conditions of this Agreement, with respect to any derivative works and modifications of the Materials that are made by you, you are and will be the owner of such derivative works and modifications.
b. No trademark license is granted to use the trade names, trademarks, service marks, or product names of us, except as required to fulfill notice requirements under this Agreement or as required for reasonable and customary use in describing and redistributing the Materials.
c. If you commence a lawsuit or other proceedings (including a cross-claim or counterclaim in a lawsuit) against us or any entity alleging that the Materials or any output therefrom, or any part of the foregoing, infringe any intellectual property or other right owned or licensable by you, then all licenses granted to you under this Agreement shall terminate as of the date such lawsuit or other proceeding is commenced or brought.

7. Disclaimer of Warranty and Limitation of Liability
a. We are not obligated to support, update, provide training for, or develop any further version of the Qwen Materials or to grant any license thereto.
b. THE MATERIALS ARE PROVIDED "AS IS" WITHOUT ANY EXPRESS OR IMPLIED WARRANTY OF ANY KIND INCLUDING WARRANTIES OF MERCHANTABILITY, NONINFRINGEMENT, OR FITNESS FOR A PARTICULAR PURPOSE. WE MAKE NO WARRANTY AND ASSUME NO RESPONSIBILITY FOR THE SAFETY OR STABILITY OF THE MATERIALS AND ANY OUTPUT THEREFROM.
c. IN NO EVENT SHALL WE BE LIABLE TO YOU FOR ANY DAMAGES, INCLUDING, BUT NOT LIMITED TO ANY DIRECT, OR INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING FROM YOUR USE OR INABILITY TO USE THE MATERIALS OR ANY OUTPUT OF IT, NO MATTER HOW IT'S CAUSED.
d. You will defend, indemnify and hold harmless us from and against any claim by any third party arising out of or related to your use or distribution of the Materials.

8. Survival and Termination.
a. The term of this Agreement shall commence upon your acceptance of this Agreement or access to the Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein.
b. We may terminate this Agreement if you breach any of the terms or conditions of this Agreement. Upon termination of this Agreement, you must delete and cease use of the Materials. Sections 7 and 9 shall survive the termination of this Agreement.

9. Governing Law and Jurisdiction.
a. This Agreement and any dispute arising out of or relating to it will be governed by the laws of China, without regard to conflict of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement.
b. The People's Courts in Hangzhou City shall have exclusive jurisdiction over any dispute arising out of this Agreement.
README.md
ADDED
@@ -0,0 +1,133 @@
---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/QVQ-72B-Preview/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
base_model: Qwen/Qwen2-VL-72B
tags:
- chat
library_name: transformers
---

# QVQ-72B-Preview

## Introduction

**QVQ-72B-Preview** is an experimental research model developed by the Qwen team, focusing on enhancing visual reasoning capabilities.

## Performance

| Benchmark         | **QVQ-72B-Preview** | o1-2024-12-17 | gpt-4o-2024-05-13 | Claude 3.5 Sonnet-20241022 | Qwen2-VL-72B |
|-------------------|---------------------|---------------|-------------------|----------------------------|--------------|
| MMMU (val)        | 70.3                | 77.3          | 69.1              | 70.4                       | 64.5         |
| MathVista (mini)  | 71.4                | 71.0          | 63.8              | 65.3                       | 70.5         |
| MathVision (full) | 35.9                | –             | 30.4              | 35.6                       | 25.9         |
| OlympiadBench     | 20.4                | –             | 25.9              | –                          | 11.2         |

**QVQ-72B-Preview** performs strongly across these benchmarks. It scored 70.3% on the Massive Multi-discipline Multimodal Understanding (MMMU) benchmark, showcasing its ability in multidisciplinary understanding and reasoning. The significant gains on MathVision highlight its progress in mathematical reasoning, and its OlympiadBench result demonstrates an enhanced ability to tackle challenging problems.

***But It's Not All Perfect: Acknowledging the Limitations***

While **QVQ-72B-Preview** exhibits promising performance that surpasses expectations, it's important to acknowledge several limitations:

1. **Language Mixing and Code-Switching:** The model might occasionally mix different languages or unexpectedly switch between them, potentially affecting the clarity of its responses.
2. **Recursive Reasoning Loops:** There's a risk of the model getting caught in recursive reasoning loops, leading to lengthy responses that may not even arrive at a final answer.
3. **Safety and Ethical Considerations:** Robust safety measures are needed to ensure reliable and safe performance. Users should exercise caution when deploying this model.
4. **Performance and Benchmark Limitations:** Despite the improvements in visual reasoning, QVQ doesn't entirely replace the capabilities of Qwen2-VL-72B. During multi-step visual reasoning, the model might gradually lose focus on the image content, leading to hallucinations. Moreover, QVQ doesn't show significant improvement over Qwen2-VL-72B in basic recognition tasks like identifying people, animals, or plants.

Note: Currently, the model only supports single-round dialogues and image outputs. It does not support video inputs.
## Quickstart

We offer a toolkit to help you handle various types of visual input more conveniently, including base64, URLs, and interleaved images and videos. You can install it with the following command:

```bash
pip install qwen-vl-utils
```

Below is a code snippet showing how to use the chat model with `transformers` and `qwen_vl_utils`:

```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info

# Default: load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/QVQ-72B-Preview", torch_dtype="auto", device_map="auto"
)

# Default processor
processor = AutoProcessor.from_pretrained("Qwen/QVQ-72B-Preview")

# The default range for the number of visual tokens per image in the model is
# 4-16384. You can set min_pixels and max_pixels according to your needs, such
# as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/QVQ-72B-Preview", min_pixels=min_pixels, max_pixels=max_pixels)

messages = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."}
        ],
    },
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ/demo.png",
            },
            {"type": "text", "text": "What value should be filled in the blank space?"},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=8192)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
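As noted above, `qwen_vl_utils` accepts visual inputs other than HTTP URLs. A minimal sketch of the alternative `"image"` value formats, reusing the `messages` list from the snippet above (the local path is illustrative, not a file shipped in this repo):

```python
# Local file path (note the file:// scheme) -- path is illustrative:
messages[1]["content"][0]["image"] = "file:///path/to/demo.png"
# Or a base64-encoded image as a data URI:
# messages[1]["content"][0]["image"] = "data:image;base64,<base64-encoded bytes>"
```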
## Citation

If you find our work helpful, feel free to give us a cite.

```
@misc{qvq-72b-preview,
    title = {QVQ: To See the World with Wisdom},
    url = {https://qwenlm.github.io/blog/qvq-72b-preview/},
    author = {Qwen Team},
    month = {December},
    year = {2024}
}

@article{Qwen2VL,
    title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
    author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
    journal={arXiv preprint arXiv:2409.12191},
    year={2024}
}
```
chat_template.json
ADDED
@@ -0,0 +1,3 @@
{
    "chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|vision_start|><|image_pad|><|vision_end|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|vision_start|><|video_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
}
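For reference, a minimal sketch of how this template renders via `transformers` (the message content is illustrative; the expected output in the comments follows directly from the template string above):

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("Qwen/QVQ-72B-Preview")
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe the image."},
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Because the first message is not a system message, the template injects the
# default system prompt, then wraps the image in vision markers:
# <|im_start|>system
# You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step.<|im_end|>
# <|im_start|>user
# <|vision_start|><|image_pad|><|vision_end|>Describe the image.<|im_end|>
# <|im_start|>assistant
```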
config.json
ADDED
@@ -0,0 +1,63 @@
{
    "architectures": [
        "Qwen2VLForConditionalGeneration"
    ],
    "attention_dropout": 0.0,
    "bos_token_id": 151643,
    "eos_token_id": 151645,
    "vision_start_token_id": 151652,
    "vision_end_token_id": 151653,
    "vision_token_id": 151654,
    "image_token_id": 151655,
    "video_token_id": 151656,
    "hidden_act": "silu",
    "hidden_size": 8192,
    "initializer_range": 0.02,
    "intermediate_size": 29568,
    "max_position_embeddings": 128000,
    "max_window_layers": 80,
    "model_type": "qwen2_vl",
    "num_attention_heads": 64,
    "num_hidden_layers": 80,
    "num_key_value_heads": 8,
    "rms_norm_eps": 1e-06,
    "rope_theta": 1000000.0,
    "sliding_window": 32768,
    "tie_word_embeddings": false,
    "torch_dtype": "bfloat16",
    "transformers_version": "4.41.2",
    "use_cache": true,
    "use_sliding_window": false,
    "vision_config": {
        "depth": 32,
        "embed_dim": 1280,
        "mlp_ratio": 4,
        "num_heads": 16,
        "in_chans": 3,
        "hidden_size": 8192,
        "patch_size": 14,
        "spatial_merge_size": 2,
        "spatial_patch_size": 14,
        "temporal_patch_size": 2
    },
    "rope_scaling": {
        "type": "mrope",
        "mrope_section": [
            16,
            24,
            24
        ]
    },
    "vocab_size": 152064,
    "quantization_config": {
        "quant_method": "exl2",
        "version": "0.2.6",
        "bits": 6.0,
        "head_bits": 6,
        "calibration": {
            "rows": 145,
            "length": 2048,
            "dataset": "(default)"
        }
    }
}
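The `quantization_config` block marks this repo as an ExLlamaV2 (EXL2) quant at 6.0 bits per weight with a 6-bit head, so the `output-*.safetensors` shards are meant for an ExLlamaV2-based loader rather than the plain `transformers` path shown in the README. A minimal text-only loading sketch, assuming `exllamav2 >= 0.2.6` and a hypothetical local download path; image input would additionally require the library's vision support:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("/models/QVQ-72B-Preview-exl2")  # hypothetical local path
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate cache while auto-splitting across GPUs
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="What is visual reasoning?", max_new_tokens=256))
```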
generation_config.json
ADDED
@@ -0,0 +1,14 @@
{
    "bos_token_id": 151643,
    "pad_token_id": 151643,
    "do_sample": true,
    "eos_token_id": [
        151645,
        151643
    ],
    "repetition_penalty": 1.00,
    "temperature": 0.01,
    "top_p": 0.001,
    "top_k": 1,
    "transformers_version": "4.37.0"
}
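Note that `"top_k": 1` makes these defaults effectively greedy despite `"do_sample": true`. The defaults can be overridden per call; a sketch reusing `model` and `inputs` from the README Quickstart, with illustrative values rather than settings recommended by this repo:

```python
# Override the repo's near-greedy generation defaults at call time:
generated_ids = model.generate(
    **inputs,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.7,  # illustrative value, not a recommendation from this repo
    top_p=0.9,
    top_k=50,
)
```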
measurement.json
ADDED
The diff for this file is too large to render.
See raw diff
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
model.safetensors.index.json
ADDED
The diff for this file is too large to render.
See raw diff
output-00001-of-00007.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ceaa26aacf44858299cc345279d18dd00be27101ed28c8db113394ce5c922ef
size 8525842382
output-00002-of-00007.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c0d2c1cfa9524dd832bd950664797b5f4423c7820995fa36109a2d7e1056a9f3
size 8437303946
output-00003-of-00007.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e1272b75ba22f61553d391c0b79128a44e0f92dc870130120854f51db5a6c475
size 8421983082
output-00004-of-00007.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1382dc0261d455e3957f817cb8b3c0fbc45634e145951cb2c5650a52e19f1993
size 8421830576
output-00005-of-00007.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e7237f8c88579a78c33a5a303a6d7d2702ecae18d91de8296fcdb34221f9bb46
size 8493466748
output-00006-of-00007.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fdad07737c32f4f56ba501b6c627f1b8bf994515cfce308119d042e9153ca522
size 8461493774
output-00007-of-00007.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9b23b65b48f59c692159a38ab995cea7524af5d6159b448f143d5af315c0df38
size 6754901146
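As a rough sanity check on the quantization level, the seven LFS shards above total about 57.5 GB, which is in line with a roughly 72B-parameter model at 6.0 bits per weight (72e9 × 6 / 8 ≈ 54 GB) plus the 6-bit output head and embedding tables:

```python
# Shard sizes in bytes, taken from the LFS pointers above:
sizes = [8525842382, 8437303946, 8421983082, 8421830576,
         8493466748, 8461493774, 6754901146]
total = sum(sizes)
print(total, f"~{total / 1e9:.1f} GB")  # 57516821654 ~57.5 GB
```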
preprocessor_config.json
ADDED
@@ -0,0 +1,19 @@
{
    "min_pixels": 3136,
    "max_pixels": 12845056,
    "patch_size": 14,
    "temporal_patch_size": 2,
    "merge_size": 2,
    "image_mean": [
        0.48145466,
        0.4578275,
        0.40821073
    ],
    "image_std": [
        0.26862954,
        0.26130258,
        0.27577711
    ],
    "image_processor_type": "Qwen2VLImageProcessor",
    "processor_class": "Qwen2VLProcessor"
}
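These defaults encode the 4–16384 visual-token range quoted in the README: with `patch_size` 14 and `merge_size` 2, one visual token covers a 28×28 pixel area. A quick arithmetic check:

```python
# patch_size * merge_size = 28, so each visual token covers 28*28 = 784 pixels.
pixels_per_token = (14 * 2) ** 2
assert 3136 // pixels_per_token == 4          # min_pixels -> 4 visual tokens
assert 12845056 // pixels_per_token == 16384  # max_pixels -> 16384 visual tokens
```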
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1,207 @@
{
    "add_prefix_space": false,
    "added_tokens_decoder": {
        "151643": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
        "151644": {"content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
        "151645": {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
        "151646": {"content": "<|object_ref_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
        "151647": {"content": "<|object_ref_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
        "151648": {"content": "<|box_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
        "151649": {"content": "<|box_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
        "151650": {"content": "<|quad_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
        "151651": {"content": "<|quad_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
        "151652": {"content": "<|vision_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
        "151653": {"content": "<|vision_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
        "151654": {"content": "<|vision_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
        "151655": {"content": "<|image_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
        "151656": {"content": "<|video_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
        "151657": {"content": "<tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
        "151658": {"content": "</tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
        "151659": {"content": "<|fim_prefix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
        "151660": {"content": "<|fim_middle|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
        "151661": {"content": "<|fim_suffix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
        "151662": {"content": "<|fim_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
        "151663": {"content": "<|repo_name|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
        "151664": {"content": "<|file_sep|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false}
    },
    "additional_special_tokens": ["<|im_start|>", "<|im_end|>", "<|object_ref_start|>", "<|object_ref_end|>", "<|box_start|>", "<|box_end|>", "<|quad_start|>", "<|quad_end|>", "<|vision_start|>", "<|vision_end|>", "<|vision_pad|>", "<|image_pad|>", "<|video_pad|>"],
    "bos_token": null,
    "chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|vision_start|><|image_pad|><|vision_end|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|vision_start|><|video_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}",
    "clean_up_tokenization_spaces": false,
    "eos_token": "<|im_end|>",
    "errors": "replace",
    "model_max_length": 131072,
    "pad_token": "<|endoftext|>",
    "split_special_tokens": false,
    "tokenizer_class": "Qwen2Tokenizer",
    "unk_token": null,
    "add_bos_token": false
}
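A small sketch (assuming `transformers` is installed) confirming that the special-token ids defined here line up with the ids referenced in `config.json` and `generation_config.json` above:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/QVQ-72B-Preview")
assert tok.convert_tokens_to_ids("<|endoftext|>") == 151643  # bos/pad token id
assert tok.convert_tokens_to_ids("<|im_end|>") == 151645     # eos token id
assert tok.convert_tokens_to_ids("<|image_pad|>") == 151655  # image_token_id
```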
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff