Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


xLAM-8x22b-r - GGUF
- Model creator: https://huggingface.co/Salesforce/
- Original model: https://huggingface.co/Salesforce/xLAM-8x22b-r/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [xLAM-8x22b-r.Q2_K.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q2_K | 48.54GB |
| [xLAM-8x22b-r.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q3_K_S | 57.29GB |
| [xLAM-8x22b-r.Q3_K.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q3_K | 63.14GB |
| [xLAM-8x22b-r.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q3_K_M | 63.14GB |
| [xLAM-8x22b-r.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q3_K_L | 67.61GB |
| [xLAM-8x22b-r.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | IQ4_XS | 71.12GB |
| [xLAM-8x22b-r.Q4_0.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q4_0 | 74.06GB |
| [xLAM-8x22b-r.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | IQ4_NL | 74.96GB |
| [xLAM-8x22b-r.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q4_K_S | 74.96GB |
| [xLAM-8x22b-r.Q4_K.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q4_K | 79.72GB |
| [xLAM-8x22b-r.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q4_K_M | 54.3GB |
| [xLAM-8x22b-r.Q4_1.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q4_1 | 82.19GB |
| [xLAM-8x22b-r.Q5_0.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q5_0 | 90.33GB |
| [xLAM-8x22b-r.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q5_K_S | 90.33GB |
| [xLAM-8x22b-r.Q5_K.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q5_K | 93.11GB |
| [xLAM-8x22b-r.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q5_K_M | 93.11GB |
| [xLAM-8x22b-r.Q5_1.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q5_1 | 98.46GB |
| [xLAM-8x22b-r.Q6_K.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q6_K | 107.61GB |
| [xLAM-8x22b-r.Q8_0.gguf](https://huggingface.co/RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf/tree/main/) | Q8_0 | 139.17GB |
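
To try one of these quants locally, the following is a minimal, hedged sketch using the `huggingface_hub` Python API. The exact file names inside the repository may differ from the table (very large quants are sometimes split into parts), so check the repository file listing first.

```python
# A minimal sketch (not part of the original upload notes): download one of
# the quants listed above with the huggingface_hub Python API.
# The filename is an assumption taken from the table; verify it against the
# actual repository file listing before running.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf",
    filename="xLAM-8x22b-r.Q4_K_M.gguf",  # assumed name from the table above
)
print(gguf_path)  # pass this path to a GGUF runtime such as llama.cpp
```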


Original model description:
---
extra_gated_heading: Acknowledge to follow corresponding license to access the repository
extra_gated_button_content: Agree and access repository
extra_gated_fields:
  First Name: text
  Last Name: text
  Country: country
  Affiliation: text
license: cc-by-nc-4.0
datasets:
- Salesforce/xlam-function-calling-60k
language:
- en
pipeline_tag: text-generation
tags:
- function-calling
- LLM Agent
- tool-use
- mistral
- pytorch
library_name: transformers
---

<p align="center">
<img width="500px" alt="xLAM" src="https://huggingface.co/datasets/jianguozhang/logos/resolve/main/xlam-no-background.png">
</p>
<p align="center">
<a href="https://www.salesforceairesearch.com/projects/xlam-large-action-models">[Homepage]</a> |
<a href="https://arxiv.org/abs/2409.03215">[Paper]</a> |
<a href="https://github.com/SalesforceAIResearch/xLAM">[Github]</a> |
<a href="https://discord.gg/tysWwgZyQ2">[Discord]</a> |
<a href="https://blog.salesforceairesearch.com/large-action-model-ai-agent/">[Blog]</a> |
<a href="https://huggingface.co/spaces/Tonic/Salesforce-Xlam-7b-r">[Community Demo]</a>
</p>
<hr>


Welcome to the xLAM model family! [Large Action Models (LAMs)](https://blog.salesforceairesearch.com/large-action-models/) are advanced large language models designed to enhance decision-making and translate user intentions into executable actions that interact with the world. LAMs autonomously plan and execute tasks to achieve specific goals, serving as the brains of AI agents. They have the potential to automate workflow processes across various domains, making them invaluable for a wide range of applications.

**The model release is exclusively for research purposes. A new and enhanced version of xLAM will soon be available exclusively to customers on our Platform.**

## Table of Contents
- [Model Series](#model-series)
- [Repository Overview](#repository-overview)
- [Benchmark Results](#benchmark-results)
- [Usage](#usage)
  - [Basic Usage with Huggingface](#basic-usage-with-huggingface)
- [License](#license)
- [Citation](#citation)

## Model Series

We provide a series of xLAMs in different sizes to cater to various applications, including those optimized for function-calling and general agent applications:

| Model | # Total Params | Context Length | Download Model | Download GGUF files |
|------------------------|----------------|----------------|----------------|----------|
| xLAM-1b-fc-r | 1.35B | 16k | [🤗 Link](https://huggingface.co/Salesforce/xLAM-1b-fc-r) | [🤗 Link](https://huggingface.co/Salesforce/xLAM-1b-fc-r-gguf) |
| xLAM-7b-fc-r | 6.91B | 4k | [🤗 Link](https://huggingface.co/Salesforce/xLAM-7b-fc-r) | [🤗 Link](https://huggingface.co/Salesforce/xLAM-7b-fc-r-gguf) |
| xLAM-7b-r | 7.24B | 32k | [🤗 Link](https://huggingface.co/Salesforce/xLAM-7b-r) | -- |
| xLAM-8x7b-r | 46.7B | 32k | [🤗 Link](https://huggingface.co/Salesforce/xLAM-8x7b-r) | -- |
| xLAM-8x22b-r | 141B | 64k | [🤗 Link](https://huggingface.co/Salesforce/xLAM-8x22b-r) | -- |



For our Function-calling series (more details are available [here](https://huggingface.co/Salesforce/xLAM-7b-fc-r)), we also provide quantized [GGUF](https://huggingface.co/docs/hub/en/gguf) files for efficient deployment and execution. GGUF is a file format designed to efficiently store and load large language models, making it ideal for running AI models on local devices with limited resources, enabling offline functionality and enhanced privacy.
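
As an illustration of that local-deployment workflow, here is a minimal sketch using the community `llama-cpp-python` bindings. The runtime choice and the `filename` pattern are our assumptions rather than part of the original card; the `fc` model cards may document a different setup.

```python
# A hedged sketch: loading an fc-series GGUF file with llama-cpp-python.
# The runtime choice and filename pattern are assumptions, not taken from
# the original card; pick an actual file from the GGUF repository.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Salesforce/xLAM-7b-fc-r-gguf",
    filename="*Q4_K_M.gguf",  # glob pattern; adjust to a real file in the repo
    n_ctx=4096,
)
result = llm("What can function-calling models do?", max_tokens=64)
print(result["choices"][0]["text"])
```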

For more details, check our [GitHub](https://github.com/SalesforceAIResearch/xLAM) and [paper](https://arxiv.org/abs/2409.03215).


## Repository Overview

This repository covers the general tool-use series. For more specialized function-calling models, please take a look at our `fc` series [here](https://huggingface.co/Salesforce/xLAM-7b-fc-r).

The instructions below will guide you through the setup, usage, and integration of our model series with HuggingFace.

### Framework Versions

- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1

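As a quick sanity check (our addition, not part of the original card), you can compare your local environment against the versions listed above; minor version drift may still work.

```python
# A small, hedged sanity check: compare installed versions against the
# framework versions listed above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.41.0",
    "torch": "2.3.0",  # the card lists 2.3.0+cu121; the build tag is dropped below
    "datasets": "2.19.1",
    "tokenizers": "0.19.1",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__.split("+")[0],  # strip build tags like +cu121
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    status = "OK" if installed[name] == want else f"found {installed[name]}"
    print(f"{name} {want}: {status}")
```
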
## Usage

### Basic Usage with Huggingface

To use the model from Huggingface, please first install the `transformers` library:
```bash
pip install "transformers>=4.41.0"
```

Please note that our model works best with our provided prompt format, which allows us to extract JSON output similar to the [function-calling mode of ChatGPT](https://platform.openai.com/docs/guides/function-calling).

The following examples illustrate how to use our model for 1) a single-turn use case and 2) a multi-turn use case.

#### 1. Single-turn use case

````python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.random.manual_seed(0)

model_name = "Salesforce/xLAM-7b-r"
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Please use our provided instruction prompt for best performance
task_instruction = """
Based on the previous context and API request history, generate an API request or a response as an AI assistant.""".strip()

format_instruction = """
The output should be of the JSON format, which specifies a list of generated function calls. The example format is as follows, please make sure the parameter type is correct. If no function call is needed, please make tool_calls an empty list "[]".
```
{"thought": "the thought process, or an empty string", "tool_calls": [{"name": "api_name1", "arguments": {"argument1": "value1", "argument2": "value2"}}]}
```
""".strip()

# Define the input query and available tools
query = "What's the weather like in New York in fahrenheit?"

get_weather_api = {
    "name": "get_weather",
    "description": "Get the current weather for a location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, New York"
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "The unit of temperature to return"
            }
        },
        "required": ["location"]
    }
}

search_api = {
    "name": "search",
    "description": "Search for information on the internet",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "The search query, e.g. 'latest news on AI'"
            }
        },
        "required": ["query"]
    }
}

openai_format_tools = [get_weather_api, search_api]

# Helper function to convert openai format tools to our more concise xLAM format
def convert_to_xlam_tool(tools):
    """Convert an OpenAI-format tool (or a list of them) to the xLAM format."""
    if isinstance(tools, dict):
        return {
            "name": tools["name"],
            "description": tools["description"],
            "parameters": {k: v for k, v in tools["parameters"].get("properties", {}).items()}
        }
    elif isinstance(tools, list):
        return [convert_to_xlam_tool(tool) for tool in tools]
    else:
        return tools

def build_conversation_history_prompt(conversation_history: list):
    parsed_history = []
    for step_data in conversation_history:
        parsed_history.append({
            "step_id": step_data["step_id"],
            "thought": step_data["thought"],
            "tool_calls": step_data["tool_calls"],
            "next_observation": step_data["next_observation"],
            "user_input": step_data["user_input"]
        })

    history_string = json.dumps(parsed_history)
    return f"\n[BEGIN OF HISTORY STEPS]\n{history_string}\n[END OF HISTORY STEPS]\n"


# Helper function to build the input prompt for our model
def build_prompt(task_instruction: str, format_instruction: str, tools: list, query: str, conversation_history: list):
    prompt = f"[BEGIN OF TASK INSTRUCTION]\n{task_instruction}\n[END OF TASK INSTRUCTION]\n\n"
    prompt += f"[BEGIN OF AVAILABLE TOOLS]\n{json.dumps(tools)}\n[END OF AVAILABLE TOOLS]\n\n"
    prompt += f"[BEGIN OF FORMAT INSTRUCTION]\n{format_instruction}\n[END OF FORMAT INSTRUCTION]\n\n"
    prompt += f"[BEGIN OF QUERY]\n{query}\n[END OF QUERY]\n\n"

    if len(conversation_history) > 0:
        prompt += build_conversation_history_prompt(conversation_history)
    return prompt

# Build the input and start the inference
xlam_format_tools = convert_to_xlam_tool(openai_format_tools)

conversation_history = []
content = build_prompt(task_instruction, format_instruction, xlam_format_tools, query, conversation_history)

messages = [
    {"role": "user", "content": content}
]

inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# tokenizer.eos_token_id is the id of the <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
agent_action = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
````

Then you should be able to see the following output string in JSON format:

```shell
{"thought": "I need to get the current weather for New York in fahrenheit.", "tool_calls": [{"name": "get_weather", "arguments": {"location": "New York", "unit": "fahrenheit"}}]}
```

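Since `agent_action` follows the JSON schema above, you can dispatch the parsed tool calls to real implementations. The snippet below is a hedged sketch of our own: the two tool bodies are hypothetical placeholders, not part of the original card.

```python
# A hedged sketch: execute the parsed tool calls locally. The two tool
# implementations below are hypothetical placeholders for real backends.
import json

def run_get_weather(location: str, unit: str = "celsius") -> str:
    return f"(placeholder) weather for {location} in {unit}"

def run_search(query: str) -> str:
    return f"(placeholder) search results for {query!r}"

TOOL_REGISTRY = {"get_weather": run_get_weather, "search": run_search}

action = json.loads(agent_action)  # agent_action comes from the generation above
for call in action.get("tool_calls", []):
    tool_fn = TOOL_REGISTRY.get(call["name"])
    if tool_fn is None:
        print(f"Unknown tool requested: {call['name']}")
        continue
    print(tool_fn(**call["arguments"]))
```
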
#### 2. Multi-turn use case

We also support multi-turn interaction with our model series. Here is an example of the next round of interaction, continuing from the example above:

````python
def parse_agent_action(agent_action: str):
    """
    Given an agent's action, parse it to add to conversation history
    """
    try:
        parsed_agent_action_json = json.loads(agent_action)
    except json.JSONDecodeError:
        return "", []

    thought = parsed_agent_action_json.get("thought", "")
    tool_calls = parsed_agent_action_json.get("tool_calls", [])

    return thought, tool_calls

def update_conversation_history(conversation_history: list, agent_action: str, environment_response: str, user_input: str):
    """
    Update the conversation history list based on the new agent_action, environment_response, and/or user_input
    """
    thought, tool_calls = parse_agent_action(agent_action)
    new_step_data = {
        "step_id": len(conversation_history) + 1,
        "thought": thought,
        "tool_calls": tool_calls,
        "next_observation": environment_response,
        "user_input": user_input,
    }

    conversation_history.append(new_step_data)

def get_environment_response(agent_action: str):
    """
    Get the environment response for the agent_action
    """
    # TODO: add custom implementation here
    error_message, response_message = "", ""
    return {"error": error_message, "response": response_message}

# ------------- before here are the steps to get agent_action from the example above ----------

# 1. Get the next state after the agent's response.
# The next 2 lines are examples of getting the environment response and user_input.
# Depending on the particular usage, we may have either one or both of these.
environment_response = get_environment_response(agent_action)
user_input = "Now, search on the Internet for cute puppies"

# 2. After we get environment_response and/or user_input, add them to our conversation history
update_conversation_history(conversation_history, agent_action, environment_response, user_input)

# 3. We can now build the prompt
content = build_prompt(task_instruction, format_instruction, xlam_format_tools, query, conversation_history)

# 4. Retrieve the inputs for the LLM
messages = [
    {"role": "user", "content": content}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# 5. Generate the outputs & decode
# tokenizer.eos_token_id is the id of the <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
agent_action = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
````

This would be the corresponding output:
```shell
{"thought": "", "tool_calls": [{"name": "search", "arguments": {"query": "cute puppies"}}]}
```

We highly recommend using our provided prompt format and helper functions to get the best function-calling performance from our model.

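Putting the helpers together, a complete interaction loop might look like the following sketch; the stopping rules (`max_steps`, and breaking when no tool is called) are our assumptions rather than part of the original card.

```python
# A hedged sketch of an end-to-end agent loop built from the helpers above.
# The stopping conditions are assumptions; adapt them to your application.
max_steps = 5
conversation_history = []
user_input = query  # initial user query from the single-turn example

for _ in range(max_steps):
    content = build_prompt(task_instruction, format_instruction,
                           xlam_format_tools, query, conversation_history)
    messages = [{"role": "user", "content": content}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=512, do_sample=False,
                             eos_token_id=tokenizer.eos_token_id)
    agent_action = tokenizer.decode(outputs[0][len(inputs[0]):],
                                    skip_special_tokens=True)

    _, tool_calls = parse_agent_action(agent_action)
    environment_response = get_environment_response(agent_action)
    update_conversation_history(conversation_history, agent_action,
                                environment_response, user_input)
    if not tool_calls:  # no tool call: treat this turn as a final answer
        break
    user_input = ""     # later steps are driven by observations, not new input
```
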
#### Example multi-turn prompt and output

Prompt:
````json
[BEGIN OF TASK INSTRUCTION]
Based on the previous context and API request history, generate an API request or a response as an AI assistant.
[END OF TASK INSTRUCTION]

[BEGIN OF AVAILABLE TOOLS]
[
    {
        "name": "get_fire_info",
        "description": "Query the latest wildfire information",
        "parameters": {
            "location": {
                "type": "string",
                "description": "Location of the wildfire, for example: 'California'",
                "required": true,
                "format": "free"
            },
            "radius": {
                "type": "number",
                "description": "The radius (in miles) around the location where the wildfire is occurring, for example: 10",
                "required": false,
                "format": "free"
            }
        }
    },
    {
        "name": "get_hurricane_info",
        "description": "Query the latest hurricane information",
        "parameters": {
            "name": {
                "type": "string",
                "description": "Name of the hurricane, for example: 'Irma'",
                "required": true,
                "format": "free"
            }
        }
    },
    {
        "name": "get_earthquake_info",
        "description": "Query the latest earthquake information",
        "parameters": {
            "magnitude": {
                "type": "number",
                "description": "The minimum magnitude of the earthquake that needs to be queried.",
                "required": false,
                "format": "free"
            },
            "location": {
                "type": "string",
                "description": "Location of the earthquake, for example: 'California'",
                "required": false,
                "format": "free"
            }
        }
    }
]
[END OF AVAILABLE TOOLS]

[BEGIN OF FORMAT INSTRUCTION]
Your output should be in the JSON format, which specifies a list of function calls. The example format is as follows. Please make sure the parameter type is correct. If no function call is needed, please make tool_calls an empty list '[]'.
```{"thought": "the thought process, or an empty string", "tool_calls": [{"name": "api_name1", "arguments": {"argument1": "value1", "argument2": "value2"}}]}```
[END OF FORMAT INSTRUCTION]

[BEGIN OF QUERY]
User: Can you give me the latest information on the wildfires occurring in California?
[END OF QUERY]

[BEGIN OF HISTORY STEPS]
[
    {
        "thought": "Sure, what is the radius (in miles) around the location of the wildfire?",
        "tool_calls": [],
        "step_id": 1,
        "next_observation": "",
        "user_input": "User: Let me think... 50 miles."
    },
    {
        "thought": "",
        "tool_calls": [
            {
                "name": "get_fire_info",
                "arguments": {
                    "location": "California",
                    "radius": 50
                }
            }
        ],
        "step_id": 2,
        "next_observation": [
            {
                "location": "Los Angeles",
                "acres_burned": 1500,
                "status": "contained"
            },
            {
                "location": "San Diego",
                "acres_burned": 12000,
                "status": "active"
            }
        ]
    },
    {
        "thought": "Based on the latest information, there are wildfires in Los Angeles and San Diego. The wildfire in Los Angeles has burned 1,500 acres and is contained, while the wildfire in San Diego has burned 12,000 acres and is still active.",
        "tool_calls": [],
        "step_id": 3,
        "next_observation": "",
        "user_input": "User: Can you tell me about the latest earthquake?"
    }
]
[END OF HISTORY STEPS]
````

Output:
````json
{"thought": "", "tool_calls": [{"name": "get_earthquake_info", "arguments": {"location": "California"}}]}
````

## Benchmark Results
Note: **Bold** and <u>Underline</u> results denote the best result and the second-best result for Success Rate, respectively.

### Berkeley Function-Calling Leaderboard (BFCL)
![xlam-bfcl](media/xlam-bfcl.png)
*Table 1: Performance comparison on the BFCL-v2 leaderboard (cutoff date 09/03/2024). The rank is based on the overall accuracy, which is a weighted average of different evaluation categories. "FC" stands for function-calling mode, in contrast to using a customized "prompt" to extract the function calls.*

### Webshop and ToolQuery
![xlam-webshop_toolquery](media/xlam-webshop_toolquery.png)
*Table 2: Testing results on Webshop and ToolQuery. Bold and Underline results denote the best result and the second-best result for Success Rate, respectively.*

### Unified ToolQuery
![xlam-unified_toolquery](media/xlam-unified_toolquery.png)
*Table 3: Testing results on ToolQuery-Unified. Bold and Underline results denote the best result and the second-best result for Success Rate, respectively. Values in brackets indicate corresponding performance on ToolQuery.*

### ToolBench
![xlam-toolbench](media/xlam-toolbench.png)
*Table 4: Pass Rate on ToolBench across three distinct scenarios. Bold and Underline results denote the best result and the second-best result for each setting, respectively. The results for xLAM-8x22b-r are unavailable because the ToolBench server was down between 07/28/2024 and our evaluation cutoff date of 09/03/2024.*

## License
The model is distributed under the CC-BY-NC-4.0 license.

## Citation

If you find this repo helpful, please consider citing our papers:

```bibtex
@article{zhang2024xlam,
  title={xLAM: A Family of Large Action Models to Empower AI Agent Systems},
  author={Zhang, Jianguo and Lan, Tian and Zhu, Ming and Liu, Zuxin and Hoang, Thai and Kokane, Shirley and Yao, Weiran and Tan, Juntao and Prabhakar, Akshara and Chen, Haolin and others},
  journal={arXiv preprint arXiv:2409.03215},
  year={2024}
}
```

```bibtex
@article{liu2024apigen,
  title={APIGen: Automated Pipeline for Generating Verifiable and Diverse Function-Calling Datasets},
  author={Liu, Zuxin and Hoang, Thai and Zhang, Jianguo and Zhu, Ming and Lan, Tian and Kokane, Shirley and Tan, Juntao and Yao, Weiran and Liu, Zhiwei and Feng, Yihao and others},
  journal={arXiv preprint arXiv:2406.18518},
  year={2024}
}
```

```bibtex
@article{zhang2024agentohana,
  title={AgentOhana: Design Unified Data and Training Pipeline for Effective Agent Learning},
  author={Zhang, Jianguo and Lan, Tian and Murthy, Rithesh and Liu, Zhiwei and Yao, Weiran and Tan, Juntao and Hoang, Thai and Yang, Liangwei and Feng, Yihao and Liu, Zuxin and others},
  journal={arXiv preprint arXiv:2402.15506},
  year={2024}
}
```