11 - Classic Problems
Computer science has a handful of concurrency problems that keep showing up in different forms. You won't implement the Dining Philosophers at work — but you'll implement its solution under a different name. Understanding these gives you patterns you'll recognize everywhere. We'll build two: the Dining Philosophers and Producer-Consumer.
The Dining Philosophers
Here's the setup. Five philosophers sit at a round table. Each has a plate of spaghetti. Between each pair of plates is a fork. A philosopher needs both the left and right fork to eat. They alternate between thinking and eating. Sounds simple, right?
The challenge: if every philosopher picks up their left fork at the same time, nobody can pick up their right fork. Deadlock. Everyone starves.
Naive Implementation (Deadlocks)
```go
type Philosopher struct {
	name      string
	leftFork  *sync.Mutex
	rightFork *sync.Mutex
}

func (p Philosopher) dine(wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < 3; i++ { // each philosopher eats 3 times
		fmt.Printf("%s is thinking\n", p.name)
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond) // simulate thinking
		p.leftFork.Lock()
		fmt.Printf("%s picked up left fork\n", p.name)
		p.rightFork.Lock()
		fmt.Printf("%s picked up right fork — eating\n", p.name)
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond) // simulate eating
		p.rightFork.Unlock()
		p.leftFork.Unlock()
	}
}
```

This can deadlock. All five grab their left fork simultaneously, then wait for the right fork forever.
Dijkstra's Solution: Ordered Locking
Always pick up the lower-numbered fork first. This breaks the circular wait.
```go
type Philosopher struct {
	name      string
	leftFork  int
	rightFork int
}

func (p Philosopher) dine(forks map[int]*sync.Mutex, wg *sync.WaitGroup) {
	defer wg.Done()
	// Always lock the lower-numbered fork first
	first, second := p.leftFork, p.rightFork
	if first > second {
		first, second = second, first
	}
	for i := 0; i < 3; i++ {
		fmt.Printf("%s is thinking\n", p.name)
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond) // simulate thinking
		forks[first].Lock()
		forks[second].Lock()
		fmt.Printf("%s is eating\n", p.name)
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond) // simulate eating
		forks[second].Unlock()
		forks[first].Unlock()
	}
	fmt.Printf("%s is done\n", p.name)
}

func main() {
	forks := make(map[int]*sync.Mutex)
	for i := 0; i < 5; i++ {
		forks[i] = &sync.Mutex{}
	}

	// 5 philosophers sit around a round table, forks numbered 0–4
	// each philosopher has a left and right fork
	philosophers := []Philosopher{
		{"Plato", 4, 0},
		{"Socrates", 0, 1},
		{"Aristotle", 1, 2},
		{"Pascal", 2, 3},
		{"Locke", 3, 4},
	}

	var wg sync.WaitGroup
	for _, p := range philosophers {
		wg.Add(1)
		go p.dine(forks, &wg)
	}
	wg.Wait()
	fmt.Println("everyone finished")
}
```

Plato has left fork 4 and right fork 0. Since 0 < 4, Plato picks up fork 0 first. This breaks the cycle — Plato and Socrates compete for fork 0 instead of creating a circular dependency.
Channel-Based Solution
Use a channel as a semaphore to limit how many philosophers can try to eat at the same time. The forks still use mutexes for exclusive access, but the semaphore prevents deadlock by ensuring at most N-1 philosophers reach for forks simultaneously.
```go
func (p Philosopher) dineWithSemaphore(forks map[int]*sync.Mutex, sem chan struct{}, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < 3; i++ {
		fmt.Printf("%s is thinking\n", p.name)
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond) // simulate thinking
		sem <- struct{}{} // only N-1 philosophers can try to eat at once
		forks[p.leftFork].Lock()
		forks[p.rightFork].Lock()
		fmt.Printf("%s is eating\n", p.name)
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond) // simulate eating
		forks[p.rightFork].Unlock()
		forks[p.leftFork].Unlock()
		<-sem
	}
	fmt.Printf("%s is done\n", p.name)
}

func main() {
	// ... same setup ...
	sem := make(chan struct{}, 4) // allow at most 4 of 5 to try eating
	for _, p := range philosophers {
		wg.Add(1)
		go p.dineWithSemaphore(forks, sem, &wg)
	}
	// ...
}
```

If at most 4 philosophers reach for forks, a circular wait around the whole table is impossible: with 4 philosophers and 5 forks, at least one philosopher always gets both forks and eats. No deadlock possible.
Which Approach Is Better?
| | Ordered Locking (Dijkstra) | Semaphore (N-1) |
|---|---|---|
| Parallelism | Maximum — all can try | Reduced — one always waits |
| Extra mechanism | None, just a rule | Channel semaphore on top of mutexes |
| Complexity | Must define a lock ordering | Simpler — just cap concurrency |
Dijkstra's solution is more elegant for this problem — no extra mechanism, maximum concurrency. But in real Go code, the semaphore approach is more common because real problems rarely have a clean lock ordering. When you can't easily order your locks, limiting concurrency is the pragmatic choice.
Producer-Consumer
One or more producers generate work. One or more consumers process it. A channel sits between them as a buffer. If you've ever used a message queue like Kafka or RabbitMQ, this is the same idea.
Single Producer, Single Consumer
```go
func producer(ch chan<- int) {
	for i := 0; i < 10; i++ {
		fmt.Println("producing:", i)
		ch <- i
	}
	close(ch)
}

func consumer(ch <-chan int, done chan<- bool) {
	for val := range ch {
		fmt.Println("consuming:", val)
		time.Sleep(100 * time.Millisecond) // simulate processing time
	}
	done <- true
}

func main() {
	ch := make(chan int, 5)
	done := make(chan bool)
	go producer(ch)
	go consumer(ch, done)
	<-done
}
```

The buffered channel decouples producer speed from consumer speed. The producer can get ahead by up to 5 items.
Multiple Producers, Multiple Consumers
```go
func producer(id int, ch chan<- string, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < 5; i++ {
		msg := fmt.Sprintf("producer-%d: item-%d", id, i)
		ch <- msg
	}
}

func consumer(id int, ch <-chan string, wg *sync.WaitGroup) {
	defer wg.Done()
	for msg := range ch {
		fmt.Printf("consumer-%d got %s\n", id, msg)
		time.Sleep(50 * time.Millisecond) // simulate processing time
	}
}

func main() {
	ch := make(chan string, 10)

	var producers sync.WaitGroup
	for i := 0; i < 3; i++ {
		producers.Add(1)
		go producer(i, ch, &producers)
	}

	var consumers sync.WaitGroup
	for i := 0; i < 2; i++ {
		consumers.Add(1)
		go consumer(i, ch, &consumers)
	}

	// Close channel when all producers are done
	go func() {
		producers.Wait()
		close(ch)
	}()
	consumers.Wait()
}
```

Key detail — and this trips people up: only close the channel after ALL producers finish. Use separate WaitGroups for producers and consumers. If you close too early, producers panic.
Producer-Consumer with Context
Add cancellation for graceful shutdown.
```go
func producer(ctx context.Context, id int, ch chan<- string, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; ; i++ {
		msg := fmt.Sprintf("p%d-item%d", id, i)
		select {
		case ch <- msg:
		case <-ctx.Done():
			fmt.Printf("producer %d stopping\n", id)
			return
		}
		time.Sleep(100 * time.Millisecond) // simulate production delay
	}
}

func consumer(ctx context.Context, id int, ch <-chan string, wg *sync.WaitGroup) {
	defer wg.Done()
	for {
		select {
		case msg, ok := <-ch:
			if !ok {
				return
			}
			fmt.Printf("consumer %d: %s\n", id, msg)
		case <-ctx.Done():
			fmt.Printf("consumer %d stopping\n", id)
			return
		}
	}
}
```

This version runs indefinitely until the context is cancelled. Useful for streaming systems, message queues, and event processing.
Patterns in the Real World
These classic problems map directly to real systems:
| Classic problem | Real-world equivalent |
|---|---|
| Dining Philosophers | Database connection pools, resource contention |
| Producer-Consumer | Message queues (Kafka, RabbitMQ), job processing |
| Ordered locking | Transaction systems, distributed locks |
| Semaphore limiting | API rate limiting, connection pooling |
You won't implement the Dining Philosophers at work. But you'll implement its solution — ordered locking, semaphores, resource limiting — constantly.
Key Takeaways
- Dining Philosophers demonstrates deadlock from circular resource waiting
- Dijkstra's fix: always acquire locks in a consistent order (lower number first)
- Semaphore fix: limit concurrent access so deadlock is impossible
- Producer-Consumer decouples work generation from processing via a buffered channel
- Close the channel only after ALL producers finish — use a separate WaitGroup
- Add context for cancellation in long-running producer-consumer systems
- Classic problems teach patterns you'll use in real systems under different names