14 - Concurrency Cheat Sheet


A quick reference for every pattern in this course. Bookmark this page.

Concepts & Patterns at a Glance

Core Patterns

| Pattern | What It Does | When to Use |
| --- | --- | --- |
| Pipeline | Chain stages: A → B → C, each transforms data | Multi-step data processing (parse → filter → format) |
| Generator | Function that returns a channel, produces values in background | First stage of a pipeline, decoupling producer from consumer |
| Fan-Out / Fan-In | Parallelize one stage, merge results | CPU- or I/O-bound stage that benefits from multiple workers |
| Worker Pool | N identical workers sharing one input/output channel | Job queues, background processing, controlled parallelism |
| Producer-Consumer | Producers write to shared channel, consumers read | Message queue pattern, decoupling work generation from processing |
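The first two rows compose naturally: a generator feeds a pipeline stage. A minimal sketch (function names `gen` and `square` are illustrative, not from the course):

```go
package main

import "fmt"

// gen is a generator: it returns a channel and produces values in the background.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// square is a pipeline stage: it transforms each value flowing through.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}

func main() {
	// Stages chain by passing channels: gen → square → consumer.
	for v := range square(gen(1, 2, 3)) {
		fmt.Println(v) // 1, 4, 9
	}
}
```

Each stage owns and closes its output channel, so closing propagates downstream and every goroutine exits when the input is exhausted.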

Concurrency Control

| Technique | What It Does | When to Use |
| --- | --- | --- |
| Semaphore | Spawn goroutines freely, gate how many run at once | Bounding parallel I/O (HTTP calls, file handles, DB connections) |
| Rate Limiting | Control operations per unit of time | External API limits, throttling requests |
| Token Bucket | Buffered channel as burst-capable rate limiter | Allow an initial burst, then a steady rate |

Lifecycle Management

| Technique | What It Does | When to Use |
| --- | --- | --- |
| Quit Signal | Dedicated channel to tell goroutines to stop | Pre-context shutdown (legacy code) |
| Context Cancellation | context.WithCancel / WithTimeout for goroutine lifecycle | Modern Go — timeouts, deadlines, propagated cancellation |

Channel Patterns

| Pattern | What It Does | When to Use |
| --- | --- | --- |
| Restore Sequence | Embed wait channel in messages to enforce ordering | Turn-taking across fan-in |
| Daisy Chain | Chain N goroutines, each passes value to next | Incremental processing, demonstrating goroutine cheapness |
| Ping-Pong | Two goroutines pass value back and forth | Turn-based coordination |
| Ring Buffer | Buffered channel that drops oldest when full | Metrics, event streams, "latest N" scenarios |
| Subscription | Fetch + buffer + cancel in one select loop | Polling APIs, event feeds, RSS readers |
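The ring buffer row hinges on one trick: a non-blocking send that, on a full buffer, drops the oldest element first. A single-producer sketch (`ringPush` is an illustrative name; with multiple producers the drain/send pair would race):

```go
package main

import "fmt"

// ringPush sends v on buf, dropping the oldest element if the buffer is full.
func ringPush(buf chan int, v int) {
	select {
	case buf <- v: // room available: plain send
	default:
		<-buf    // full: drop the oldest…
		buf <- v // …then store the newest
	}
}

func main() {
	buf := make(chan int, 3)
	for i := 1; i <= 5; i++ {
		ringPush(buf, i)
	}
	close(buf)
	for v := range buf {
		fmt.Println(v) // keeps the latest 3: 3, 4, 5
	}
}
```

This is why "latest N" scenarios like metrics tolerate the pattern: losing stale samples is acceptable, blocking the producer is not.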

Classic Problems

| Problem | What It Teaches |
| --- | --- |
| Dining Philosophers | Deadlock, lock ordering, resource contention |
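A compact sketch of the lock-ordering fix: if every philosopher grabs the lower-numbered fork first, the circular wait that causes deadlock can't form. (Function and variable names here are illustrative.)

```go
package main

import (
	"fmt"
	"sync"
)

// dine runs philosophers who each need two forks (mutexes) per meal.
// Deadlock is avoided by lock ordering: always lock the lower index first.
func dine(philosophers, meals int) int {
	forks := make([]sync.Mutex, philosophers)
	eaten := make([]int, philosophers)
	var wg sync.WaitGroup
	for p := 0; p < philosophers; p++ {
		wg.Add(1)
		go func(p int) {
			defer wg.Done()
			left, right := p, (p+1)%philosophers
			if left > right {
				left, right = right, left // enforce global lock order
			}
			for m := 0; m < meals; m++ {
				forks[left].Lock()
				forks[right].Lock()
				eaten[p]++
				forks[right].Unlock()
				forks[left].Unlock()
			}
		}(p)
	}
	wg.Wait()
	total := 0
	for _, e := range eaten {
		total += e
	}
	return total
}

func main() {
	fmt.Println(dine(5, 100)) // 500 — every meal completes, no deadlock
}
```

Without the index swap, five philosophers each holding their left fork while waiting for the right one would deadlock; the global order breaks the cycle.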

Choosing the Right Pattern

Do you have multiple processing steps?
  YES → Pipeline
    └─ Is one step the bottleneck?
         YES → Fan-Out/Fan-In that stage
         NO  → Keep it sequential

Do you have a batch of identical jobs?
  YES → How many goroutines?
    ├─ Fixed, reusable     → Worker Pool
    ├─ One per job, bounded → Semaphore
    └─ One per job, throttled → Rate Limiter

Do you need to limit throughput (not just concurrency)?
  YES → Rate Limiter (steady) or Token Bucket (burst + steady)

Do you need to stop goroutines?
  YES → context.WithCancel / WithTimeout
    └─ Need to wait until they've finished? → WaitGroup or errgroup

Do you need ordered results from parallel work?
  YES → Index-wrap results + sort (batch) or reorder buffer (streaming)
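When the tree lands on "Worker Pool", the shape is always the same: N long-lived workers ranging over one jobs channel, results merged on another. A sketch with placeholder work (`workerPool` is an illustrative name):

```go
package main

import (
	"fmt"
	"sync"
)

// workerPool starts n long-lived workers sharing one jobs and one results channel.
func workerPool(n int, jobs <-chan int, results chan<- int) {
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs { // workers exit when jobs is closed and drained
				results <- j * 2 // placeholder work
			}
		}()
	}
	go func() {
		wg.Wait()
		close(results) // close results only after every worker is done
	}()
}

func main() {
	jobs := make(chan int)
	results := make(chan int)
	workerPool(3, jobs, results)
	go func() {
		for i := 1; i <= 5; i++ {
			jobs <- i
		}
		close(jobs) // closing jobs is the shutdown signal
	}()
	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println("sum:", sum) // 30
}
```

Note the closing discipline: the producer closes `jobs`, and only the goroutine that knows all workers have finished closes `results`.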

Channels: Unbuffered vs Buffered

| Use | Channel Type | Why |
| --- | --- | --- |
| Pipeline stages | Unbuffered | Backpressure flows upstream naturally |
| Fan-in / fan-out | Unbuffered | Synchronization is the point |
| Quit signal | Unbuffered | You want confirmation |
| Semaphore | Buffered (n) | Buffer size IS the concurrency limit |
| Rate limiter / token bucket | Buffered | Tokens accumulate up to a cap |
| Ring buffer | Buffered | Needs buffer to hold items before dropping |

Default to unbuffered. Add a buffer only for bounding, throttling, or decoupling.

Concurrency Control Comparison

| | Worker Pool | Fan-Out/Fan-In | Semaphore |
| --- | --- | --- | --- |
| Goroutines | Few, long-lived | Few, long-lived | Many, short-lived |
| Output | Shared channel | Per-worker channels, merged | Each handles its own |
| Composability | Standalone | Chains into pipelines | Standalone |
| Best for | Job processing | Pipeline parallelism | Bounding parallel I/O |

sync Primitives Quick Reference

| Primitive | Purpose | Use When |
| --- | --- | --- |
| sync.Mutex | Exclusive lock | Multiple fields updated together |
| sync.RWMutex | Read-many, write-exclusive | Read-heavy shared state |
| sync.WaitGroup | Wait for goroutines to finish | Coordinating goroutine completion |
| sync.Once | Run something exactly once | Lazy initialization |
| sync.Pool | Reuse temporary objects | Reducing GC pressure on hot paths |
| sync.Map | Concurrent map (lock-free reads) | Stable keys, read-heavy, profiling says so |
| sync.Cond | Condition variable | Waiting on shared condition (prefer channels) |
| sync/atomic | Lock-free single values | Counters, flags, simple state |

Error Handling in Concurrent Code

| Approach | Use When |
| --- | --- |
| Error channel | Simple fan-out, collect errors manually |
| errgroup | Fan-out with automatic WaitGroup + first error |
| errgroup.WithContext | Same + cancel remaining goroutines on first error |
| Panic recovery (safeGo) | Long-running services where one goroutine shouldn't crash everything |

Common Pitfalls

| Pitfall | Fix |
| --- | --- |
| Goroutine leak (blocked on channel) | Always use context for cancellation |
| Data race on shared state | Mutex, atomic, or communicate via channels |
| Deadlock from nested locks | Always acquire locks in the same order |
| Sending on closed channel (panic) | Only the sender closes; use sync.Once if multiple senders |
| Forgetting wg.Add before go | Always Add before launching the goroutine |
| time.Sleep for synchronization | Use channels, WaitGroup, or context instead |
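Two of these fixes in one sketch: `wg.Add` happens before each `go` statement, and the shared counter uses atomics instead of racing. (`count` is an illustrative name.)

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// count increments a shared counter from many goroutines without a data race.
func count(goroutines, perGoroutine int) int64 {
	var n int64
	var wg sync.WaitGroup
	for i := 0; i < goroutines; i++ {
		wg.Add(1) // Add BEFORE launching, or Wait may return early
		go func() {
			defer wg.Done()
			for j := 0; j < perGoroutine; j++ {
				atomic.AddInt64(&n, 1) // atomic: a plain n++ here would race
			}
		}()
	}
	wg.Wait()
	return n
}

func main() {
	fmt.Println(count(10, 1000)) // 10000 — no lost increments
}
```

With a plain `n++`, `go run -race` flags this immediately and the total comes up short; the atomic version is both race-free and exact.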

Key Takeaways

  • Start simple. A for loop is often enough. Reach for concurrency patterns when the problem genuinely calls for it — not because you can.
  • Channels for communication, mutexes for state. Don't force channels where a mutex is clearer. Don't share memory where a channel is simpler.
  • Default to unbuffered channels. They synchronize for free. Add a buffer only when you have a specific reason.
  • Always manage goroutine lifetime. Every goroutine you start must have a way to stop — context, done channel, or closed input channel. Leaked goroutines are silent memory leaks.
  • Pick the simplest pattern that fits. Worker pool before fan-out/fan-in. Semaphore before worker pool. Plain loop before any of them.
  • Concurrency is about structure, not speed. A well-structured concurrent program is easier to reason about, test, and extend — even if it's not faster.
  • Profile before optimizing. Don't reach for sync.Map, sync.Pool, or atomic operations until profiling tells you to. Correctness first, performance second.
  • The race detector is your friend. Run go test -race and go run -race during development. It catches bugs you won't find by reading code.


© 2026 ByteLearn.dev. Free courses for developers.