AI Engineering Concepts

Free AI engineering course: LLMs, tokens, prompting, streaming, RAG, agents, and evals explained with Go and Ollama

Welcome to AI Engineering Concepts

LLMs are everywhere. Most people use them through ChatGPT and call it a day. This course teaches you what's actually happening, how to build with these models, and when not to use them at all.

We cover the full landscape at overview depth: how LLMs work, how to talk to them from code, prompt engineering, streaming, structured output, embeddings, RAG, function calling, agents, evals, and production concerns. Each lesson teaches one concept with working code you can run locally.

What You'll Learn

  • LLM Foundations: How models generate text, what tokens are, why context windows matter
  • API Integration: Calling LLMs from Go, handling responses, streaming tokens in real time
  • Prompt Engineering: System prompts, few-shot examples, chain-of-thought, getting reliable output
  • Structured Output: Making models return JSON you can parse and use in your code
  • Embeddings & Vector Search: Turning text into numbers, finding similar content
  • RAG: Retrieval-augmented generation, giving models knowledge they weren't trained on
  • Function Calling: Letting models invoke your code, tool definitions, execution flow
  • Agents: The loop that turns an LLM into something that plans and acts
  • Evals: Testing AI output, catching regressions, knowing if your system actually works
  • Production Patterns: Cost management, latency, guardrails, when to skip the LLM entirely

Why Go?

This course uses Go, the simplest typed language you can pick. It's minimal by design, more readable than Python, with no magic behind the scenes.

No SDKs hiding the mechanics. No dependencies to manage. Just net/http and encoding/json. You see every request, every response, every streaming token.

When you need concurrency (streaming, parallel tool calls, agent loops), Go handles it with goroutines and channels instead of callbacks and promises.

When you parse an LLM response, you unmarshal into a typed struct. The compiler catches wrong field names before you run the code. In Python, you'd access response["choices"][0]["message"]["content"] and hope the key exists.

The patterns you learn here transfer to any language and any provider.

Why This Course?

  • Concepts first: Every AI topic in one place, at a depth that builds real understanding
  • Go + Ollama: All examples run locally with Ollama. No API keys, no cloud bills, no waiting
  • No frameworks: Raw HTTP calls and standard library. You see exactly what happens
  • Overview depth: Deep enough to build with, shallow enough to finish in a weekend

Prerequisites

Basic Go knowledge (functions, structs, HTTP calls). If you've done Go Essentials, you're ready. No AI or machine learning background needed.

Lessons + Examples

Each lesson explains one concept with short code snippets. Want to run the code? Check the Examples tab. Every key lesson has a complete, working program you can copy and run with go run.

Course Structure

12 lessons. The first half covers foundations: how LLMs work, calling them from code, prompting, streaming. The second half covers systems: RAG, agents, evals, production. Each lesson builds on the last, but they also work as standalone references.

Choose your first lesson from the sidebar.

© 2026 ByteLearn.dev. Free courses for developers.