06 - Structured Output


Why Structured Output

The Ollama API always returns JSON, but that's just the wrapper. The model's actual answer lives inside message.content as a plain string. By default, that string is free-form text:

{
  "message": {
    "role": "assistant",
    "content": "The sentiment is positive with a confidence of about 85%."
  }
}

The API response is structured. The model's answer is not. You can't json.Unmarshal a sentence. Structured output means getting the model to put JSON inside that content field:

{
  "message": {
    "role": "assistant",
    "content": "{\"sentiment\": \"positive\", \"confidence\": 0.85}"
  }
}

Now you can parse content into a Go struct and use it in your code.

Asking for JSON in the Prompt

The simplest approach: tell the model to respond in JSON.

messages := []Message{
	{Role: "system", Content: `Analyze the sentiment of the user's message.
Respond with a JSON object: {"sentiment": "positive|negative|neutral", "confidence": 0.0-1.0}
Return only the JSON. No explanation.`},
	{Role: "user", Content: "I love this product, it works perfectly!"},
}

reply, _ := chat(messages)
fmt.Println(reply)
// {"sentiment": "positive", "confidence": 0.92}

This works most of the time. But "most of the time" isn't good enough for production. The model might add a preamble ("Here's the JSON:"), wrap it in markdown code fences, or return slightly different keys.

Ollama's JSON Mode

Ollama has a format parameter that forces the model to output valid JSON. No preamble, no code fences, just JSON.

type ChatRequest struct {
	Model    string    `json:"model"`
	Messages []Message `json:"messages"`
	Stream   bool      `json:"stream"`
	Format   string    `json:"format,omitempty"`
}

req := ChatRequest{
	Model: "llama3.2",
	Messages: []Message{
		{Role: "system", Content: `Extract the name and age from the user's message.
Return JSON: {"name": "string", "age": number}`},
		{Role: "user", Content: "My name is Alex and I'm 28 years old."},
	},
	Stream: false,
	Format: "json",
}
// Guaranteed valid JSON output
// {"name": "Alex", "age": 28}

You still need to describe the schema in the system prompt. The format parameter ensures valid JSON syntax, but the model decides the structure based on your instructions.

Parsing into Go Structs

The model returns a JSON string. To use it in your code, define a Go struct that matches the JSON shape, then unmarshal into it.

Say you want to analyze sentiment. First, define the struct with the fields you expect back:

type SentimentResult struct {
	Sentiment  string  `json:"sentiment"`
	Confidence float64 `json:"confidence"`
}

Then ask the model for JSON matching that shape, and parse the response:

messages := []Message{
	{Role: "system", Content: `Analyze the sentiment of the user's message.
Return JSON: {"sentiment": "positive|negative|neutral", "confidence": 0.0-1.0}
Return only the JSON. No explanation.`},
	{Role: "user", Content: "I love this product, it works perfectly!"},
}

reply, _ := chat(messages) // returns: {"sentiment": "positive", "confidence": 0.92}

var result SentimentResult
if err := json.Unmarshal([]byte(reply), &result); err != nil {
	fmt.Println("Failed to parse:", err)
	return
}

fmt.Printf("Sentiment: %s (%.0f%% confident)\n", result.Sentiment, result.Confidence*100)
// Sentiment: positive (92% confident)

The struct and the prompt must agree on the field names. You define the struct first, then write the prompt to match it.

Extracting Structured Data

Sentiment analysis has a fixed set of outputs (positive, negative, neutral). Extraction is harder. The model has to find specific pieces of information buried in free-form text and map them to the right fields.

type ContactInfo struct {
	Name    string `json:"name"`
	Email   string `json:"email"`
	Company string `json:"company"`
	Role    string `json:"role"`
}

messages := []Message{
	{Role: "system", Content: `Extract contact information from the text.
Return JSON: {"name": "string", "email": "string", "company": "string", "role": "string"}
Use empty string for missing fields.`},
	{Role: "user", Content: "Hi, I'm Sarah Chen, CTO at Dataflow Labs. Reach me at [email protected]"},
}

reply, _ := chat(messages)

var contact ContactInfo
json.Unmarshal([]byte(reply), &contact)
fmt.Printf("%s (%s) at %s - %s\n", contact.Name, contact.Role, contact.Company, contact.Email)
// Sarah Chen (CTO) at Dataflow Labs - [email protected]

The "use empty string for missing fields" instruction matters. Without it, the model might omit keys or write "N/A", and instead of a clean parse you get silently empty fields or garbage data.

Returning Arrays

So far every example returns a single JSON object. But sometimes you need a list: multiple issues in a code review, multiple entities extracted from a document, multiple suggestions.

The trick is telling the model to return the array inside a JSON object. Models with format: "json" tend to return objects, not bare arrays. Wrapping the array in a key like {"issues": [...]} is more reliable.

Always specify what to return when there are no results, or the model might return null, "none", or an explanation instead of an empty array.

type Issue struct {
	Line        int    `json:"line"`
	Severity    string `json:"severity"`
	Description string `json:"description"`
}

messages := []Message{
	{Role: "system", Content: `Review the Go code for bugs and issues.
Return JSON: {"issues": [{"line": number, "severity": "high|medium|low", "description": "string"}]}
If no issues, return {"issues": []}. Return only the JSON.`},
	{Role: "user", Content: `func divide(a, b int) int {
    return a / b
}

func main() {
    result := divide(10, 0)
    fmt.Println(result)
}`},
}

reply, _ := chat(messages)

var wrapper struct {
	Issues []Issue `json:"issues"`
}
json.Unmarshal([]byte(reply), &wrapper)

if len(wrapper.Issues) == 0 {
	fmt.Println("No issues found.")
} else {
	for _, issue := range wrapper.Issues {
		fmt.Printf("Line %d [%s]: %s\n", issue.Line, issue.Severity, issue.Description)
	}
}
// Line 2 [high]: Division by zero not handled. b could be 0.

Handling Failures

Even with format: "json", things go wrong. The model might:

  • Add text before the JSON: "Here's the result: {"sentiment": "positive"}"
  • Wrap it in markdown code fences: ```json\n{...}\n```
  • Return a different structure than you asked for
  • Return incomplete JSON if the response was cut off by a token limit

Don't assume the response will parse. Always check the error.

var result SentimentResult
if err := json.Unmarshal([]byte(reply), &result); err != nil {
	fmt.Println("Parse failed:", err)
	fmt.Println("Raw response:", reply)
	// Option 1: retry the request
	// Option 2: fall back to a default value
	// Option 3: return an error to the caller
}

A common problem is the model wrapping JSON in markdown code fences. This helper strips them before parsing:

func cleanJSON(raw string) string {
	trimmed := strings.TrimSpace(raw)
	if strings.HasPrefix(trimmed, "```") {
		lines := strings.Split(trimmed, "\n")
		if len(lines) > 2 {
			// Drop the opening fence (often "```json") and the closing fence.
			return strings.TrimSpace(strings.Join(lines[1:len(lines)-1], "\n"))
		}
	}
	return trimmed
}

Use it before parsing:

cleaned := cleanJSON(reply)
var result SentimentResult
if err := json.Unmarshal([]byte(cleaned), &result); err != nil {
	fmt.Println("Still failed after cleanup:", err)
}

The best defense is a good prompt. Be explicit: "Return only the JSON. No explanation. No code fences." That prevents most failures. The cleanup code is your safety net for the rest.

OpenAI's Structured Output

Ollama's format: "json" guarantees valid JSON syntax, but the model still decides the structure. If you ask for {"sentiment": ..., "confidence": ...} and the model returns {"result": "positive"}, that's valid JSON but the wrong shape.

OpenAI solves this with a response_format parameter where you pass the exact JSON schema. The model is forced to match it. Wrong keys, missing fields, wrong types become impossible.

// Ollama: you describe the schema in the prompt
// The model usually follows it, but can deviate
"format": "json"

// OpenAI: you pass the schema as a parameter
// The model is constrained to match it exactly
"response_format": {
  "type": "json_schema",
  "json_schema": {
    "name": "sentiment",
    "schema": {
      "type": "object",
      "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number"}
      },
      "required": ["sentiment", "confidence"]
    }
  }
}

With Ollama, your prompt is the schema and the cleanup code is your safety net. With OpenAI, the API enforces the schema for you. Both approaches work. The concepts are the same.

Key Takeaways

  • Structured output turns LLM text into parseable data (JSON)
  • Describe the exact JSON schema in your system prompt
  • Use Ollama's format: "json" to guarantee valid JSON syntax
  • Define Go structs that match your expected output, then json.Unmarshal
  • Models can return objects, arrays, or nested structures
  • Always handle parse failures. Retry or fall back when JSON is malformed
  • OpenAI offers stricter schema enforcement, but prompt-based schemas work well with Ollama

