10 - Building an Agent
What Is an Agent
In the previous lesson, you asked the model a question, it called one tool, you sent the result back, and it answered. One round trip, done.
An agent does multiple round trips. You ask a question, and the model decides on its own how many tools to call and in what order. It might list a directory, then read a file, then search for a pattern, then finally answer. You didn't plan those steps. The model did.
Function calling (lesson 09):

You ask → Model calls 1 tool → You execute
       → Model answers
(you control the flow)

Agent:

You ask → Model calls tool A → You execute
       → Model calls tool B → You execute
       → Model calls tool C → You execute
       → Model answers
(the model controls the flow)

The difference: with function calling, your code decides what happens. With an agent, the model decides. Your code just executes whatever the model asks for, in a loop, until the model says "I'm done."
The Agent Loop
The entire agent pattern is one loop:
- Send messages to the model (with tools)
- If the model returns tool calls, execute them and add results to the conversation
- If the model returns text with no tool calls, it's done
That's it. Here's the loop:
func agentLoop(messages []Message, tools []ToolDef) (string, error) {
	for i := 0; i < 10; i++ {
		body, _ := json.Marshal(map[string]any{
			"model":    "llama3.2",
			"messages": messages,
			"stream":   false,
			"tools":    tools,
		})
		resp, err := http.Post("http://localhost:11434/api/chat",
			"application/json", bytes.NewReader(body))
		if err != nil {
			return "", err
		}
		data, _ := io.ReadAll(resp.Body)
		resp.Body.Close()

		var result ChatResponse
		json.Unmarshal(data, &result)

		// No tool calls means the agent is done
		if len(result.Message.ToolCalls) == 0 {
			return result.Message.Content, nil
		}

		// Add the assistant's response to history
		messages = append(messages, Message{
			Role: "assistant", Content: result.Message.Content,
		})

		// Execute each tool and add results
		for _, tc := range result.Message.ToolCalls {
			toolResult := executeToolCall(tc)
			messages = append(messages, Message{
				Role: "tool", Content: toolResult,
			})
		}
		// Loop back: model sees the results and picks the next step
	}
	return "max iterations reached", nil
}

The for i := 0; i < 10 is a safety limit. Without it, a confused model could loop forever. Ten iterations is enough for most tasks.
Each iteration, the conversation grows. The model sees everything: the original question, every tool call it made, every result it got back. It uses all of that to decide what to do next.
Giving the Agent Tools
An agent needs tools to be useful. Let's give it two simple ones: read a file and list a directory. These are enough to explore a codebase.
tools := []ToolDef{
	makeToolDef("read_file",
		"Read the contents of a file",
		`{"type":"object","properties":{"path":{"type":"string"}},"required":["path"]}`),
	makeToolDef("list_directory",
		"List files and folders in a directory",
		`{"type":"object","properties":{"path":{"type":"string"}},"required":["path"]}`),
}

The makeToolDef helper keeps tool definitions short:
func makeToolDef(name, desc, params string) ToolDef {
	td := ToolDef{Type: "function"}
	td.Function.Name = name
	td.Function.Description = desc
	td.Function.Parameters = json.RawMessage(params)
	return td
}

Implementing the Tools
Each tool is a real Go function. When the model asks to read a file, you actually read it. When it asks to list a directory, you actually list it.
func executeToolCall(tc ToolCall) string {
	var args map[string]string
	json.Unmarshal(tc.Function.Arguments, &args)

	switch tc.Function.Name {
	case "read_file":
		content, err := os.ReadFile(args["path"])
		if err != nil {
			return fmt.Sprintf("error: %s", err)
		}
		return string(content)
	case "list_directory":
		entries, err := os.ReadDir(args["path"])
		if err != nil {
			return fmt.Sprintf("error: %s", err)
		}
		var names []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() {
				name += "/"
			}
			names = append(names, name)
		}
		return strings.Join(names, "\n")
	}
	return "unknown tool"
}

Notice the error handling. If a file doesn't exist, the tool returns the error as a string. The model sees "error: no such file" and tries a different approach. It doesn't crash.
Running the Agent
Give the agent a system prompt that tells it to use tools, and ask a question.
messages := []Message{
	{Role: "system", Content: `You are a code assistant.
Use your tools to answer questions about code.
Always start by calling list_directory with path ".".
Use real file paths. Never guess or make up paths.`},
	{Role: "user", Content: "What does the main function do?"},
}

answer, _ := agentLoop(messages, tools)
fmt.Println(answer)

Tool calling quality varies by model. llama3.2 (3B) works for simple cases but sometimes passes wrong arguments. Larger models like llama3.1:8b are more reliable with tools.
The agent figures out the steps on its own:
1. Model calls list_directory(".") → sees project files
2. Model calls read_file("main.go") → reads the code
3. Model responds: "The main function initializes
   a config and starts an HTTP server on port 8080."

You asked one question. The model made two tool calls, read the results, and synthesized an answer. You didn't tell it to list the directory first. It decided that.
The ReAct Pattern
This pattern has a name: ReAct (Reason + Act). The model alternates between thinking about what to do and doing it.
Thought: I need to find the main function.
        Let me list the project files.
Action: list_directory(".")
Result: main.go, config.go, handler.go

Thought: main.go probably has it. Let me read it.
Action: read_file("main.go")
Result: package main...

Thought: I can see the main function now.
Answer: "The main function initializes a config..."

Some models output this reasoning explicitly. Others do it internally. The loop structure is the same either way.
What Can Go Wrong
Infinite loops. The model keeps calling tools without making progress. The iteration limit (i < 10) prevents this.
Context overflow. Each iteration adds messages. A file read might add thousands of tokens. After several iterations, the conversation can exceed the context window. For production agents, truncate large tool results.
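One simple mitigation is to clip each tool result before appending it to the conversation. A minimal sketch, where the 4,000-character cap is an arbitrary example value (not from the lesson) that you would tune for your model's context window:

```go
package main

import "fmt"

// truncate clips a tool result so one large file read cannot
// flood the context window. Cap value is an example; tune it.
func truncate(s string, max int) string {
	if len(s) <= max {
		return s
	}
	return s[:max] + fmt.Sprintf("\n...[truncated %d bytes]", len(s)-max)
}

func main() {
	big := string(make([]byte, 10000))
	fmt.Println(len(truncate(big, 4000))) // far smaller than the original
}
```

The trailing "...[truncated N bytes]" marker matters: it tells the model the result was cut, so it can ask for a different file or a narrower search instead of trusting a silently incomplete one.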
Wrong tool choice. The model picks the wrong tool or passes bad arguments. Good tool descriptions help. So does returning clear error messages that the model can learn from.
Tool errors. A file doesn't exist, a command fails, an API times out. Don't crash. Return the error as a normal tool result. The model reads it and tries a different approach.
Tool result: "error: open config.yaml: no such file"
Model thinks: "That file doesn't exist.
              Let me try config.json instead."
→ calls read_file("config.json")

This works because the agent loop doesn't distinguish between success and failure. It just sends the result back. The model decides what to do with it.
What Makes a Good Agent
Focused purpose. "Answer questions about this codebase" is better than "do anything." Narrow scope means better tool selection.
Good tool descriptions. The model picks tools based on descriptions. "Read the contents of a file given its path" is better than "read file."
Clear system prompt. Tell the agent what to do, how to approach problems, and when to stop.
Error handling. Tools fail. Return errors as tool results and the model adapts (see "What Can Go Wrong" above).
Key Takeaways
- An agent is function calling in a loop. The model decides what to do next
- The loop: send messages, check for tool calls, execute, add results, repeat
- Always set a max iteration count to prevent infinite loops
- The ReAct pattern: reason about what to do, act, observe the result, repeat
- Good agents have focused purpose, clear tool descriptions, and error handling
- The model controls the flow. Your code just executes and reports back
- Coding assistants, research agents, and AI tools all use this same pattern