andthattoo committed
Commit 267652f · verified · 1 Parent(s): 5b897c7

Update README.md

Files changed (1):
  1. README.md +176 -81

README.md CHANGED
@@ -1,74 +1,141 @@
  ---
- license: apache-2.0
  license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
  language:
  - en
  base_model:
- - Qwen/Qwen2.5-Coder-7B
  pipeline_tag: text-generation
  library_name: transformers
  tags:
  - code
- - codeqwen
  - chat
  - qwen
  - qwen-coder
  ---

-
- # Qwen2.5-Coder-7B-Instruct

  ## Introduction

- Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
-
- - Significant improvements in **code generation**, **code reasoning** and **code fixing**. Building on the strong Qwen2.5, we scaled the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B is currently the state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o.
- - A more comprehensive foundation for real-world applications such as **Code Agents**, enhancing coding capabilities while maintaining strengths in mathematics and general competencies.
- - **Long-context Support** up to 128K tokens.
-
- **This repo contains the instruction-tuned 7B Qwen2.5-Coder model**, which has the following features:
- - Type: Causal Language Models
- - Training Stage: Pretraining & Post-training
- - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- - Number of Parameters: 7.61B
- - Number of Parameters (Non-Embedding): 6.53B
- - Number of Layers: 28
- - Number of Attention Heads (GQA): 28 for Q and 4 for KV
- - Context Length: Full 131,072 tokens
- - Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
-
- For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), and [arXiv](https://arxiv.org/abs/2409.12186).
-
- ## Requirements

- The code for Qwen2.5-Coder is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

- With `transformers<4.37.0`, you will encounter the following error:
- ```
- KeyError: 'qwen2'
- ```

  ## Quickstart

- Below is a code snippet showing how to load the tokenizer and model, and how to generate content using `apply_chat_template`.
-
- ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

- model_name = "Qwen/Qwen2.5-Coder-7B-Instruct"
-
  model = AutoModelForCausalLM.from_pretrained(
-     model_name,
-     torch_dtype="auto",
-     device_map="auto"
  )
  tokenizer = AutoTokenizer.from_pretrained(model_name)

- prompt = "write a quick sort algorithm."
  messages = [
-     {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
-     {"role": "user", "content": prompt}
  ]
  text = tokenizer.apply_chat_template(
      messages,
      tokenize=False,
@@ -78,58 +145,86 @@ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

  generated_ids = model.generate(
      **model_inputs,
-     max_new_tokens=512
  )
  generated_ids = [
      output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
  ]

  response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
- ```

- ### Processing Long Texts
-
- The current `config.json` is set for a context length of up to 32,768 tokens.
- To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
-
- For supported frameworks, you could add the following to `config.json` to enable YaRN:
- ```json
- {
-   ...,
-   "rope_scaling": {
-     "factor": 4.0,
-     "original_max_position_embeddings": 32768,
-     "type": "yarn"
-   }
- }
- ```

- For deployment, we recommend using vLLM.
- Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
- Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
- We advise adding the `rope_scaling` configuration only when processing long contexts is required.

- ## Evaluation & Performance

- Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).

- For requirements on GPU memory and the respective throughput, see the results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).

- ## Citation

- If you find our work helpful, feel free to cite us.

  ```
- @article{hui2024qwen2,
- title={Qwen2.5-Coder Technical Report},
- author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
- journal={arXiv preprint arXiv:2409.12186},
- year={2024}
- }
- @article{qwen2,
- title={Qwen2 Technical Report},
- author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
- journal={arXiv preprint arXiv:2407.10671},
- year={2024}
- }
- ```
  ---
+ license: other
+ license_name: qwen-research
  license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
  language:
  - en
  base_model:
+ - Qwen/Qwen2.5-Coder-7B-Instruct
  pipeline_tag: text-generation
  library_name: transformers
  tags:
  - code
  - chat
  - qwen
  - qwen-coder
+ - agent
  ---

+ # Dria-Agent-α-7B

  ## Introduction

+ ***Dria-Agent-α*** is a series of large language models trained on top of the [Qwen2.5-Coder](https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f) series, specifically the [Qwen/Qwen2.5-Coder-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) and [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) models, built for agentic applications. These models are the first instalment of our agent-focused LLMs (hence the **α** in the name), which we hope to improve with better and more elaborate techniques in subsequent releases.

+ Dria-Agent-α employs ***Pythonic function calling***: the LLM emits blocks of Python code to interact with the provided tools and to output its actions. This approach was inspired by much previous work, including but not limited to [DynaSaur](https://arxiv.org/pdf/2411.01747), [RLEF](https://arxiv.org/pdf/2410.02089), [ADAS](https://arxiv.org/pdf/2408.08435) and [CAMEL](https://arxiv.org/pdf/2303.17760). This way of function calling has a few advantages over traditional JSON-based function calling methods:
+
+ 1. **One-shot Parallel Multiple Function Calls:** The model can utilise many synchronous function calls in a single chat turn to arrive at a solution, something that would take other function-calling models multiple turns of conversation.
+ 2. **Free-form Reasoning and Actions:** The model provides its reasoning traces freely in natural language and its actions in between \`\`\`python \`\`\` blocks, as it already tends to do without special prompting or tuning. This tries to mitigate the possible performance loss caused by imposing specific formats on LLM outputs, as discussed in [Let Me Speak Freely?](https://arxiv.org/pdf/2408.02442)
+ 3. **On-the-fly Complex Solution Generation:** The solution provided by the model is essentially a Python program, with the exclusion of some "risky" builtins like `exec`, `eval` and `compile` (see the full list in **Quickstart** below). This enables the model to implement custom, complex logic with conditionals and synchronous pipelines (using the output of one function in the next function's arguments), which, as far as we know, would not be possible with current JSON-based function calling methods. A short sketch contrasting the two styles follows this list.
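+
+ For illustration, here is a minimal sketch of the difference between the two styles; the task, tool names and stub implementations below are hypothetical and exist only so the example runs:
+
+ ```python
+ # Hypothetical stub tools, defined only so the sketch is runnable end to end.
+ def check_availability(day: str, start: str, end: str) -> bool:
+     return True
+
+ def make_appointment(day: str, start: str, end: str, title: str) -> dict:
+     return {"day": day, "start_time": start, "end_time": end, "appointment_made": True}
+
+ # JSON-based function calling: one structured call per assistant turn; the second call
+ # can only be issued after the first result is fed back to the model in a later turn.
+ turn_1 = {"name": "check_availability", "arguments": {"day": "2025-01-10", "start": "10:00", "end": "12:00"}}
+ turn_2 = {"name": "make_appointment", "arguments": {"day": "2025-01-10", "start": "10:00", "end": "12:00", "title": "Sync"}}
+
+ # Pythonic function calling: a single code block that branches and chains both calls in one turn.
+ if check_availability("2025-01-10", "10:00", "12:00"):
+     booking = make_appointment("2025-01-10", "10:00", "12:00", "Sync")
+ ```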
 
  ## Quickstart

+ ````python
+ import json
+ from typing import Any, Dict, List
  from transformers import AutoModelForCausalLM, AutoTokenizer

+ model_name = "driaforall/Dria-Agent-a-7B"
  model = AutoModelForCausalLM.from_pretrained(
+     model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
  )
  tokenizer = AutoTokenizer.from_pretrained(model_name)

+ # Please use our provided prompt for best performance
+ SYSTEM_PROMPT = """
+ You are an expert AI assistant that specializes in providing Python code to solve the task/problem at hand provided by the user.
+
+ You can use Python code freely, including the following available functions:
+
+ <|functions_schema|>
+ {{functions_schema}}
+ <|end_functions_schema|>
+
+ The following dangerous builtins are restricted for security:
+ - exec
+ - eval
+ - execfile
+ - compile
+ - importlib
+ - input
+ - exit
+
+ Think step by step and provide your reasoning, outside of the function calls.
+ You can write Python code and use the available functions. Provide all your python code in a SINGLE markdown code block like the following:
+
+ ```python
+ result = example_function(arg1, "string")
+ result2 = example_function2(result, arg2)
+ ```
+
+ DO NOT use print() statements AT ALL. Avoid mutating variables whenever possible.
+ """.strip()
+
+
+ get_sample_data = """
+ def check_availability(day: str, start_time: str, end_time: str) -> bool:
+     \"\"\"
+     Check if a time slot is available on a given day.
+
+     Args:
+     - day: The day to check in YYYY-MM-DD format
+     - start_time: Start time in HH:MM format
+     - end_time: End time in HH:MM format
+
+     Returns:
+     - True if slot is available, False otherwise
+     \"\"\"
+     pass
+
+ def make_appointment(day: str, start_time: str, end_time: str, title: str) -> dict:
+     \"\"\"
+     Make an appointment for a given time slot.
+
+     Args:
+     - day: The day to make the appointment on, in YYYY-MM-DD format
+     - start_time: Start time in HH:MM format
+     - end_time: End time in HH:MM format
+     - title: The title of the appointment
+
+     Returns:
+     - A dictionary with the appointment details and whether it was made or not.
+       dict keys:
+       - day (str): The day the appointment is on, in YYYY-MM-DD format
+       - start_time (str): Start time in HH:MM format
+       - end_time (str): End time in HH:MM format
+       - appointment_made (bool): Whether the appointment was successfully made or not.
+     \"\"\"
+     pass
+
+ def add_to_reminders(reminder_text: str) -> bool:
+     \"\"\"
+     Add a text to reminders.
+
+     Args:
+     - reminder_text: The text to add to reminders
+
+     Returns:
+     - Whether the reminder was successfully created or not.
+     \"\"\"
+     pass
+ """
+
+ # Helper function to create the system prompt for our model
+ def format_prompt(tools: str) -> str:
+     return SYSTEM_PROMPT.replace("{{functions_schema}}", tools)
+
+ system_prompt = format_prompt(get_sample_data)
+
+ USER_QUERY = """
+ Can you check if I have tomorrow 10:00-12:00 available and make an appointment for a meeting
+ with my thesis supervisor if so? If you made the appointment, please add it to my reminders.
+ """
+
  messages = [
+     {"role": "system", "content": system_prompt},
+     {"role": "user", "content": USER_QUERY},
  ]
+
  text = tokenizer.apply_chat_template(
      messages,
      tokenize=False,

  generated_ids = model.generate(
      **model_inputs,
+     max_new_tokens=2048
  )
  generated_ids = [
      output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
  ]

  response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(response)
+ ````

+ The output should be something like:

+ ````
+ To accomplish this task, we need to follow these steps:
+
+ 1. Check if the time slot from 10:00 to 12:00 on tomorrow is available using the `check_availability` function.
+ 2. If the slot is available, proceed to make an appointment using the `make_appointment` function.
+ 3. If the appointment is successfully made, add it to the reminders using the `add_to_reminders` function.
+
+ Let's break down the code into these steps:
+
+ ```python
+ from datetime import datetime, timedelta
+
+ # Get today's date and calculate tomorrow's date
+ today = datetime.now()
+ tomorrow = today + timedelta(days=1)
+ tomorrow_str = tomorrow.strftime('%Y-%m-%d')
+
+ # Define the time slots
+ start_time = '10:00'
+ end_time = '12:00'
+
+ # Step 1: Check availability
+ is_available = check_availability(tomorrow_str, start_time, end_time)
+
+ # Step 2: Make appointment if available
+ if is_available:
+     appointment_result = make_appointment(tomorrow_str, start_time, end_time, "Meeting with Thesis Supervisor")
+
+     # Step 3: Add to reminders if appointment is made
+     if appointment_result['appointment_made']:
+         reminder_text = f"Appointment made for {appointment_result['day']} from {appointment_result['start_time']} to {appointment_result['end_time']}."
+         add_to_reminders(reminder_text)
  ```
+
+ This code will first determine if the specified time slot is available tomorrow. If it is, it will attempt to make the appointment and then add it to the reminders if successful.
+ ````
+
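+ Since the reply is ordinary Python, a thin executor is enough to act on it. Below is a minimal, illustrative sketch (not a hardened sandbox): it extracts the fenced Python block from `response`, exposes placeholder implementations of the three example tools, and runs the code with the system prompt's restricted builtins removed. The placeholder tool bodies are assumptions for the example only.
+
+ ````python
+ import builtins
+ import re
+
+ # Placeholder tool implementations; a real application would call a calendar/reminders API.
+ def check_availability(day: str, start_time: str, end_time: str) -> bool:
+     return True
+
+ def make_appointment(day: str, start_time: str, end_time: str, title: str = "") -> dict:
+     return {"day": day, "start_time": start_time, "end_time": end_time, "appointment_made": True}
+
+ def add_to_reminders(reminder_text: str) -> bool:
+     return True
+
+ # Names from the system prompt's restricted list that exist as Python 3 builtins.
+ RESTRICTED = {"exec", "eval", "compile", "input", "exit"}
+
+ def run_pythonic_call(model_output: str) -> dict:
+     """Execute the first ```python block in the model output and return the variables it created."""
+     match = re.search(r"```python\n(.*?)```", model_output, re.DOTALL)
+     if match is None:
+         return {}
+     safe_builtins = {name: getattr(builtins, name) for name in dir(builtins) if name not in RESTRICTED}
+     namespace = {
+         "__builtins__": safe_builtins,
+         "check_availability": check_availability,
+         "make_appointment": make_appointment,
+         "add_to_reminders": add_to_reminders,
+     }
+     exec(match.group(1), namespace)
+     return {k: v for k, v in namespace.items() if not k.startswith("__") and not callable(v)}
+
+ # e.g. {'tomorrow_str': ..., 'is_available': True, 'appointment_result': {...}, 'reminder_text': ...}
+ print(run_pythonic_call(response))
+ ````
+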
+ ## Evaluation & Performance
+
+ We evaluate the model on the following benchmarks:
+
+ 1. Berkeley Function Calling Leaderboard (BFCL)
+ 2. MMLU-Pro
+ 3. **Dria-Pythonic-Agent-Benchmark (DPAB):** The benchmark we curated through a pipeline of synthetic data generation, model-based validation, filtering and manual selection to evaluate LLMs on their Pythonic function calling ability, spanning multiple scenarios and tasks. More detailed information about the benchmark and its GitHub repo will be released soon.
+
+ Below are the BFCL evaluation results for ***Qwen2.5-Coder-3B-Instruct***, ***Dria-Agent-α-3B***, ***Dria-Agent-α-7B*** and ***gpt-4o-2024-11-20***:
+
+ | Metric | Qwen/Qwen2.5-3B-Instruct | Dria-Agent-a-3B | Dria-Agent-a-7B | gpt-4o-2024-11-20 (Prompt) |
+ |---------------------------------------|-----------|-----------|-----------|-----------|
+ | **Non-Live Simple AST** | 75.50% | 75.08% | 77.83% | 79.42% |
+ | **Non-Live Multiple AST** | 90.00% | 93.00% | 94.50% | 95.50% |
+ | **Non-Live Parallel AST** | 80.00% | 85.00% | 87.00% | 94.00% |
+ | **Non-Live Parallel Multiple AST** | 78.50% | 79.00% | 88.00% | 83.50% |
+ | **Non-Live Simple Exec** | 82.07% | 87.57% | 80.00% | 100.00% |
+ | **Non-Live Multiple Exec** | 86.00% | 85.14% | 84.00% | 94.00% |
+ | **Non-Live Parallel Exec** | 82.00% | 90.00% | 70.00% | 86.00% |
+ | **Non-Live Parallel Multiple Exec** | 80.00% | 88.00% | 65.00% | 77.50% |
+ | **Live Simple AST** | 68.22% | 70.16% | 82.95% | 83.72% |
+ | **Live Multiple AST** | 66.00% | 67.14% | 78.25% | 79.77% |
+ | **Live Parallel AST** | 62.50% | 50.00% | 81.25% | 87.50% |
+ | **Live Parallel Multiple AST** | 66.67% | 70.83% | 70.83% | 70.83% |
+ | **Relevance Detection** | 88.89% | 100.00% | 100.00% | 83.33% |
+
+ And here are the MMLU-Pro and DPAB results:
+
+ | Benchmark Name | Qwen2.5-Coder-7B-Instruct | Dria-Agent-α-7B |
+ |----------------|---------------------------|-----------------|
+ | MMLU-Pro | 45.6 ([Self Reported](https://arxiv.org/pdf/2409.12186)) | TBD |
+ | DPAB (Pythonic, Strict) | 24 | 51 |
+
+ **Note:** The model tends to use Pythonic function calling for many of the test cases in STEM-related fields (math, physics, chemistry, etc.) in the MMLU-Pro benchmark, which isn't captured by the evaluation framework and scripts provided in the benchmark's [GitHub repository](https://github.com/TIGER-AI-Lab/MMLU-Pro/tree/main). We haven't modified the evaluation script, and leave this for future iterations of this model. However, based on qualitative analysis of the model's responses, we suspect the score would increase rather than suffer a ~6% decrease.