09 - Function Calling

The Model Decides, You Execute

Without function calling, an LLM can only generate text. Ask it for the weather and it says "I don't have access to real-time data." It can't check an API, query a database, or read a file.

Function calling changes that. You tell the model what functions are available. When the user asks something that needs a function, the model returns "call this function with these arguments." Your code runs the function and sends the result back. The model uses the result to write its response.

User: "What's the weather in Tokyo?"

Without tools:
  Model: "I can't access weather data."

With tools:
  Model: → call get_weather(city: "Tokyo")
  You:   → run the function, get {temp: 18}
  Model: "It's 18°C in Tokyo."

The model didn't fetch the weather. It told your code to fetch it, got the result, and turned it into a sentence.

The Flow

Every function call follows the same five steps:

  1. Define tools: You describe each function the model can use (name, description, parameter schema)
  2. Send request: You send the user's message along with the tool definitions to the model
  3. Model responds: The model either answers with text (no tool needed) or returns a tool call with a function name and arguments
  4. Execute: You run the function yourself and get the result
  5. Send result back: You add the result to the conversation and call the model again. It uses the result to write a final answer

The rest of this lesson builds each step in Go.

Step 1: Define a Tool

A tool needs three things: a name, a description, and a JSON schema for its parameters.

type ToolDef struct {
	Type     string `json:"type"`
	Function struct {
		Name        string          `json:"name"`
		Description string          `json:"description"`
		Parameters  json.RawMessage `json:"parameters"`
	} `json:"function"`
}

Parameters is json.RawMessage, which is just []byte under a different name. It tells Go's JSON encoder "don't parse this, send it as-is." JSON Schema is a nested format that doesn't map cleanly to a Go struct, so keeping it as raw bytes is simpler.
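A quick way to see the pass-through behavior. This is an illustrative sketch (the `payload` type and `encode` function are not part of the lesson's API): the same JSON text stored as a RawMessage is embedded verbatim, while stored as a plain string it gets escaped.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type payload struct {
	Schema json.RawMessage `json:"schema"`
	Label  string          `json:"label"`
}

// encode marshals a payload holding the same JSON text twice:
// once as raw bytes (embedded as-is) and once as a string (escaped).
func encode() string {
	p := payload{
		Schema: json.RawMessage(`{"type":"object"}`),
		Label:  `{"type":"object"}`,
	}
	out, _ := json.Marshal(p)
	return string(out)
}

func main() {
	fmt.Println(encode())
	// {"schema":{"type":"object"},"label":"{\"type\":\"object\"}"}
}
```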

Here's a weather tool:

weatherTool := ToolDef{Type: "function"}
weatherTool.Function.Name = "get_weather"
weatherTool.Function.Description = "Get current weather for a city"
weatherTool.Function.Parameters = json.RawMessage(`{
	"type": "object",
	"properties": {
		"city": {"type": "string", "description": "City name"}
	},
	"required": ["city"]
}`)

The description matters more than you'd think. The model reads it to decide which tool fits the user's request. "Get current weather for a city" is clear. "Weather stuff" is not.

Step 2: Send the Request

Send the user's message with the tool definitions attached. The response has a tool_calls field when the model wants to call a function.

Two new structs for the response:

type ToolCall struct {
	Function struct {
		Name      string          `json:"name"`
		Arguments json.RawMessage `json:"arguments"`
	} `json:"function"`
}

type ChatResponse struct {
	Message struct {
		Role      string     `json:"role"`
		Content   string     `json:"content"`
		ToolCalls []ToolCall `json:"tool_calls,omitempty"`
	} `json:"message"`
}

Now send the request:

body, _ := json.Marshal(map[string]any{
	"model":    "llama3.2",
	"messages": []Message{
		{Role: "user", Content: "What's the weather in Tokyo?"},
	},
	"stream": false,
	"tools":  []ToolDef{weatherTool},
})

resp, _ := http.Post("http://localhost:11434/api/chat",
	"application/json", bytes.NewReader(body))
defer resp.Body.Close()
data, _ := io.ReadAll(resp.Body)

var result ChatResponse
json.Unmarshal(data, &result)

Check what came back. If ToolCalls is empty, the model answered with text. If it has entries, the model wants you to run a function.

if len(result.Message.ToolCalls) > 0 {
	tc := result.Message.ToolCalls[0]
	fmt.Printf("Tool call: %s(%s)\n",
		tc.Function.Name, tc.Function.Arguments)
	// Tool call: get_weather({"city":"Tokyo"})
} else {
	fmt.Println("Text response:", result.Message.Content)
}

Step 3: Execute the Function

The model asked for get_weather({"city": "Tokyo"}). Now you run the actual function. Parse the arguments from JSON, do the work, return the result as a JSON string.

This example returns hardcoded data. In a real app, you'd call a weather API.

func executeToolCall(tc ToolCall) string {
	switch tc.Function.Name {
	case "get_weather":
		var args struct {
			City string `json:"city"`
		}
		json.Unmarshal(tc.Function.Arguments, &args)
		// Hardcoded. In production: call a real API.
		return fmt.Sprintf(
			`{"temp": 18, "condition": "cloudy", "city": %q}`,
			args.City)
	default:
		return fmt.Sprintf(`{"error": "unknown tool: %s"}`,
			tc.Function.Name)
	}
}

The default case handles unknown tools gracefully. The model might hallucinate a tool name that doesn't exist.

Step 4: Send the Result Back

Add two messages to the conversation: the assistant's tool call (so the model knows what it asked for) and the tool result. Then call the model again.

messages := []Message{
	{Role: "user", Content: "What's the weather in Tokyo?"},
	// Placeholder assistant turn: our Message type has no tool_calls
	// field, so the call itself isn't echoed back here.
	{Role: "assistant", Content: ""},
	{Role: "tool", Content: executeToolCall(result.Message.ToolCalls[0])},
}

The role: "tool" message is how the model receives function results. It reads the JSON and uses it to write a human-friendly answer.

Call the model again with the updated conversation:

body2, _ := json.Marshal(map[string]any{
	"model":    "llama3.2",
	"messages": messages,
	"stream":   false,
})
resp2, _ := http.Post("http://localhost:11434/api/chat",
	"application/json", bytes.NewReader(body2))
defer resp2.Body.Close()
data2, _ := io.ReadAll(resp2.Body)

var final ChatResponse
json.Unmarshal(data2, &final)
fmt.Println(final.Message.Content)
// "It's currently 18°C and cloudy in Tokyo."

The model turned {"temp": 18, "condition": "cloudy"} into a sentence. Your code got the data, the model made it readable.
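In practice you wrap steps 2 through 4 in a loop, so the model can chain several tool calls before producing a final answer. A minimal sketch of that control flow, with the HTTP request abstracted behind a callModel function and execution behind an execute function (both stand-ins for the code above, so the loop itself stands out):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type ToolCall struct {
	Function struct {
		Name      string          `json:"name"`
		Arguments json.RawMessage `json:"arguments"`
	} `json:"function"`
}

type ChatResponse struct {
	Message struct {
		Role      string     `json:"role"`
		Content   string     `json:"content"`
		ToolCalls []ToolCall `json:"tool_calls,omitempty"`
	} `json:"message"`
}

// runAgent loops until the model answers with plain text.
// callModel stands in for the POST to /api/chat; execute stands
// in for executeToolCall. maxTurns caps runaway tool loops.
func runAgent(
	messages []Message,
	callModel func([]Message) ChatResponse,
	execute func(ToolCall) string,
	maxTurns int,
) string {
	for i := 0; i < maxTurns; i++ {
		resp := callModel(messages)
		if len(resp.Message.ToolCalls) == 0 {
			return resp.Message.Content // plain text: done
		}
		// Record the assistant turn, then one tool result per call.
		messages = append(messages,
			Message{Role: "assistant", Content: resp.Message.Content})
		for _, tc := range resp.Message.ToolCalls {
			messages = append(messages,
				Message{Role: "tool", Content: execute(tc)})
		}
	}
	return "" // gave up: the model kept requesting tools
}

func main() {
	// Stubbed model: first turn requests a tool, second turn answers.
	turn := 0
	callModel := func(msgs []Message) ChatResponse {
		var r ChatResponse
		if turn == 0 {
			var tc ToolCall
			tc.Function.Name = "get_weather"
			tc.Function.Arguments = json.RawMessage(`{"city":"Tokyo"}`)
			r.Message.ToolCalls = []ToolCall{tc}
		} else {
			r.Message.Content = "It's 18°C in Tokyo."
		}
		turn++
		return r
	}
	execute := func(tc ToolCall) string { return `{"temp": 18}` }
	fmt.Println(runAgent(
		[]Message{{Role: "user", Content: "Weather in Tokyo?"}},
		callModel, execute, 5))
}
```

The maxTurns cap matters: without it, a model that keeps requesting tools would loop forever.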

Multiple Tools

Real applications have several tools. Define each one and pass them all. The model picks the right one based on the question.

tools := []ToolDef{weatherTool, searchTool, calcTool}

// "What's the weather?" → get_weather
// "How do I reset my password?" → search_docs
// "What's 15% of 340?" → calculate
// "Hello!" → no tool, just text

The model can also decide no tool is needed and respond with plain text.

Security: You Are the Gatekeeper

The model suggests tool calls. You decide whether to run them. Two rules:

Validate inputs. The model might send unexpected arguments. Check them before executing.

if len(args.City) > 100 {
	return `{"error": "city name too long"}`
}
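Beyond range checks, Go's json.Decoder can reject arguments the schema doesn't declare, which catches a model inventing extra fields. A sketch (decodeArgs is an illustrative helper, not part of the lesson's code):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// decodeArgs parses get_weather arguments strictly: unknown fields
// and out-of-range values are rejected before anything executes.
func decodeArgs(raw []byte) (string, error) {
	var args struct {
		City string `json:"city"`
	}
	dec := json.NewDecoder(bytes.NewReader(raw))
	dec.DisallowUnknownFields() // reject fields the schema doesn't declare
	if err := dec.Decode(&args); err != nil {
		return "", err
	}
	if args.City == "" || len(args.City) > 100 {
		return "", fmt.Errorf("invalid city")
	}
	return args.City, nil
}

func main() {
	if city, err := decodeArgs([]byte(`{"city":"Tokyo"}`)); err == nil {
		fmt.Println(city) // Tokyo
	}
	if _, err := decodeArgs([]byte(`{"city":"Tokyo","mode":"admin"}`)); err != nil {
		fmt.Println("rejected unexpected field")
	}
}
```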

Control what you return. The model sees everything in the tool result. Only return what it needs.

// Good: only the data the model needs
return fmt.Sprintf(`{"temp": %d, "condition": %q}`,
	data.Temp, data.Condition)

// Bad: leaking internal data
return fmt.Sprintf(`{"temp": %d, "api_key": %q, "internal_id": %q}`,
	data.Temp, data.APIKey, data.InternalID)

A Note on MCP

In this lesson, you defined the tool, wrote the execute function, and wired it all up yourself. MCP (Model Context Protocol) lets someone else do that. A separate server advertises its tools ("I have get_metrics, list_incidents") and your agent discovers them at runtime.

The flow is identical: model picks a tool, you execute it, send the result back. The difference is who owns the tool. With function calling, you write everything. With MCP, tool authors (Datadog, GitHub, Slack) publish their tools as MCP servers, and any compatible agent can use them without knowing the internals.

Key Takeaways

  • Function calling lets the model invoke your code instead of just generating text
  • Five steps: define tools, send request, model picks a tool, you execute, send result back
  • The model never executes anything. It returns JSON with a function name and arguments
  • Tool descriptions matter. The model reads them to choose the right tool
  • Always validate arguments and control what data you return in tool results
  • The model can call tools, skip tools, or call multiple tools in sequence

© 2026 ByteLearn.dev. Free courses for developers.