Building good agents
There's a world of difference between building an agent that works and one that doesn't. How can you build agents that fall into the former category? In this guide, we'll see best practices for building agents.
If you're new to building agents, make sure to first read the intro to agents and the guided tour of smolagents.
The best agentic systems are the simplest: simplify the workflow as much as you can
Giving an LLM some agency in your workflow introduces some risk of errors.
Well-programmed agentic systems have good error logging and retry mechanisms anyway, so the LLM engine has a chance to self-correct its mistakes. But to reduce the risk of LLM error as much as possible, you should simplify your workflow!
Let's take again the example from [intro_agents]: a bot that answers user queries for a surf trip company. Instead of letting the agent make two different calls to a "travel distance API" and a "weather API" each time it is asked about a new surf spot, you could just make one unified tool, "return_spot_information", a function that calls both APIs at once and returns their concatenated outputs to the user.
This will reduce costs, latency, and error risk!
The main guideline is: Reduce the number of LLM calls as much as you can.
This leads to a few takeaways:
- Whenever possible, group two tools into one, like in our example of the two APIs.
- Whenever possible, logic should be based on deterministic functions rather than agentic decisions.
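Following the surf-spot example, the two API calls can be merged into one tool. The sketch below is illustrative: the two helper functions are stand-ins for the real "travel distance" and "weather" APIs, and in smolagents you would expose the combined function with the `@tool` decorator.

```python
# Illustrative stand-ins for the two external APIs from the example.
def get_travel_distance(spot: str) -> str:
    # Stand-in for the "travel distance API"
    return f"Travel distance to {spot}: 120 km"

def get_weather(spot: str) -> str:
    # Stand-in for the "weather API"
    return f"Weather at {spot}: sunny, 1.5 m waves"

def return_spot_information(spot: str) -> str:
    """Returns travel distance and weather for a surf spot, in one call.

    Args:
        spot: the name of the surf spot.
    """
    # One LLM-visible tool call; both APIs are hit deterministically inside it.
    return f"{get_travel_distance(spot)}\n{get_weather(spot)}"
```

Wrapping this function with smolagents' `@tool` decorator exposes it as a single tool, so the LLM issues one call instead of two.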
Improve the information flow to the LLM engine
Remember that your LLM engine is like an *intelligent* robot, trapped in a room, whose only communication with the outside world is notes passed under a door.
It won't know of anything that happened if you don't explicitly put that into its prompt.
So first start with making your task very clear! Since an agent is powered by an LLM, minor variations in your task formulation might yield completely different results.
Then, improve the information flow towards your agent in tool use.
Particular guidelines to follow:
- Each tool should log (by simply using `print` statements inside the tool's `forward` method) everything that could be useful for the LLM engine.
- In particular, logging details on tool execution errors helps a lot!
For instance, here's a tool that gets weather data for a given location and date-time.
First, here's a poor version:
from datetime import datetime

from smolagents import tool

def get_weather_report_at_coordinates(coordinates, date_time):
    # Dummy function, returns a list of [temperature in °C, risk of rain on a scale 0-1, wave height in m]
    return [28.0, 0.35, 0.85]

def convert_location_to_coordinates(location):
    # Returns dummy coordinates
    return [3.3, -42.0]

@tool
def get_weather_api(location: str, date_time: str) -> str:
    """
    Returns the weather report.

    Args:
        location: the name of the place that you want the weather for.
        date_time: the date and time for which you want the report.
    """
    lon, lat = convert_location_to_coordinates(location)
    date_time = datetime.strptime(date_time, "%m/%d/%y %H:%M:%S")
    return str(get_weather_report_at_coordinates((lon, lat), date_time))
Why is it bad?
- there's no precision on the format that should be used for `date_time`
- there's no detail on how `location` should be specified
- there's no logging mechanism tied to explicit failure cases, like the location not being in a proper format or `date_time` not being properly formatted
- the output format is hard to understand
If the tool call fails, the error trace logged in memory can help the LLM reverse-engineer the tool to fix the errors. But why leave it with so much heavy lifting to do?
A better way to build this tool would have been the following:
@tool
def get_weather_api(location: str, date_time: str) -> str:
    """
    Returns the weather report.

    Args:
        location: the name of the place that you want the weather for. Should be a place name, followed by possibly a city name, then a country, like "Anchor Point, Taghazout, Morocco".
        date_time: the date and time for which you want the report, formatted as '%m/%d/%y %H:%M:%S'.
    """
    lon, lat = convert_location_to_coordinates(location)
    try:
        date_time = datetime.strptime(date_time, "%m/%d/%y %H:%M:%S")
    except Exception as e:
        raise ValueError("Conversion of `date_time` to datetime format failed, make sure to provide a string in format '%m/%d/%y %H:%M:%S'. Full trace: " + str(e))
    temperature_celsius, risk_of_rain, wave_height = get_weather_report_at_coordinates((lon, lat), date_time)
    return f"Weather report for {location}, {date_time}: Temperature will be {temperature_celsius}°C, risk of rain is {risk_of_rain*100:.0f}%, wave height is {wave_height}m."
In general, to ease the load on your LLM, a good question to ask yourself is: "How easy would it be for me, if I were dumb and using this tool for the first time ever, to program with this tool and correct my own errors?"
Give more arguments to the agent
To pass additional objects to your agent beyond the simple string describing the task, you can use the `additional_args` argument to pass any type of object:
from smolagents import CodeAgent, HfApiModel

model_id = "meta-llama/Llama-3.3-70B-Instruct"

agent = CodeAgent(tools=[], model=HfApiModel(model_id=model_id), add_base_tools=True)

agent.run(
    "Why does Mike not know many people in New York?",
    additional_args={"mp3_sound_file_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/recording.mp3"}
)
For instance, you can use this `additional_args` argument to pass images or strings that you want your agent to leverage.
How to debug your agent
1. Use a stronger LLM
In agentic workflows, some errors are actual errors, while others are the fault of your LLM engine not reasoning properly.
For instance, consider this trace for a `CodeAgent` that I asked to make a car picture:
==================================================================================================== New task ====================================================================================================
Make me a cool car picture
──────────────────────────────────────────────────── New step ────────────────────────────────────────────────────
Agent is executing the code below: ────────────────────────────────────────────────────────────────────────────────
image_generator(prompt="A cool, futuristic sports car with LED headlights, aerodynamic design, and vibrant color, high-res, photorealistic")
────────────────────────────────────────────────────────────────────────────────────────────────────
Last output from code snippet: ─────────────────────────────────────────────────────────────────────
/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png
Step 1:
- Time taken: 16.35 seconds
- Input tokens: 1,383
- Output tokens: 77
──────────────────────────────────────────────────── New step ────────────────────────────────────────────────────
Agent is executing the code below: ────────────────────────────────────────────────────────────────────────────────
final_answer("/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png")
────────────────────────────────────────────────────────────────────────────────────────────────────
Print outputs:
Last output from code snippet: ─────────────────────────────────────────────────────────────────────
/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png
Final answer:
/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png
The user sees a path being returned to them instead of an image. It could look like a bug in the system, but actually the agentic system didn't cause the error: the LLM engine simply made the mistake of not saving the image output into a variable. Thus it cannot access the image again except via the path that was logged while saving the image, so it returns the path instead of the image.
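For illustration, here is the pattern the LLM engine should have produced: keep the tool's output in a variable, then pass the object itself (not its logged file path) to `final_answer`. The two stand-in functions below are hypothetical; in a real run, `image_generator` and `final_answer` are tools provided inside the agent's code sandbox.

```python
# Hypothetical stand-ins for the tools available inside the agent's sandbox.
def image_generator(prompt: str):
    # Pretend image object instead of a real generated picture.
    return {"kind": "image", "prompt": prompt}

def final_answer(answer):
    return answer

# Save the tool output into a variable...
image = image_generator(prompt="A cool, futuristic sports car")
# ...so the image object itself can be returned, not the path where it was logged.
result = final_answer(image)
```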
The first step to debugging your agent is thus "Use a more powerful LLM". Alternatives like `Qwen2.5-72B-Instruct` wouldn't have made that mistake.
2. Provide more guidance / more information
Then you can also use less powerful models but guide them better.
Put yourself in the shoes of your model: if you were the model solving the task, would you struggle with the information available to you (from the system prompt + task formulation + tool descriptions)?
Would you need some added clarifications?
To provide extra information, we do not recommend changing the system prompt right away: the default system prompt has many adjustments that you do not want to mess up unless you understand the prompt very well. Better ways to guide your LLM engine are:
- If it's about the task to solve: add all these details to the task. The task could be hundreds of pages long.
- If it's about how to use tools: add these details to the `description` attribute of your tools.
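For the tool case, guidance can be appended to a tool's `description` attribute after the tool is created. The stand-in class below is illustrative (a minimal sketch assuming, as in smolagents, that tools expose a mutable `description` string the LLM sees):

```python
class WeatherTool:
    # Minimal hypothetical stand-in for a smolagents Tool object.
    name = "get_weather_api"
    description = "Returns the weather report for a location and date-time."

tool = WeatherTool()

# Append usage guidance; the LLM reads this in its tool descriptions.
tool.description += (
    " The date_time argument must be formatted as '%m/%d/%y %H:%M:%S'."
)
```

This keeps the default system prompt intact while still giving the model the format details it needs.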
If after trying the above you still want to change the system prompt, your new system prompt passed to `system_prompt` upon agent initialization needs to contain the following placeholders, which will be used to insert certain automatically generated descriptions when running the agent:
- `"{{tool_descriptions}}"` to insert tool descriptions.
- `"{{managed_agents_description}}"` to insert the description for managed agents, if there are any.
- For `CodeAgent` only: `"{{authorized_imports}}"` to insert the list of authorized imports.
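As an illustration, a custom system prompt keeping all three placeholders might look like the sketch below. The surrounding wording is an example of my own, not the default smolagents prompt:

```python
# Example only: the prose here is illustrative, NOT the default system prompt.
custom_system_prompt = """You are an expert assistant who solves tasks using code.
You can use the following tools:
{{tool_descriptions}}

{{managed_agents_description}}

You are only allowed to import these modules: {{authorized_imports}}
"""
```

The agent fills each `{{...}}` placeholder with the automatically generated descriptions at runtime, so all three must survive verbatim in the string you pass to `system_prompt`.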
3. Extra planning
We provide a model for a supplementary planning step that an agent can run regularly in between normal action steps. In this step there is no tool call; the LLM is simply asked to update a list of facts it knows and to reflect on what steps it should take next based on those facts.
from smolagents import CodeAgent, HfApiModel, DuckDuckGoSearchTool
from dotenv import load_dotenv

load_dotenv()

search_tool = DuckDuckGoSearchTool()

agent = CodeAgent(
    tools=[search_tool],
    model=HfApiModel("Qwen/Qwen2.5-72B-Instruct"),
    planning_interval=3  # This is where you activate planning!
)

# Run it!
result = agent.run(
    "How long would a cheetah at full speed take to run the length of Pont Alexandre III?",
)