07 - Semaphore & Bounded Concurrency
Sometimes you don't need a full worker pool. You just want to say: "run as many goroutines as you want, but no more than N at a time." That's a semaphore.
Think of it like a bouncer at a bar — only N people can be inside at once. Others wait at the door until someone leaves.
The simplest way to think about it:
Worker Pool — you hire 3 employees. They sit at their desks waiting. Work arrives in a pile. They grab one task at a time from the pile. When the pile is empty, they wait for more. The employees are always there.
Semaphore — you have 100 tasks and 100 volunteers ready to go. But the room only fits 3 people. So 3 go in, do their thing, come out. Next 3 go in. The volunteers are temporary — they show up, do one task, and leave.
Why Semaphores
A worker pool creates a fixed number of goroutines upfront. They sit there waiting for jobs through a channel. A semaphore is the opposite — goroutines spawn freely, but only N can run at the same time. The rest block until a slot opens.
| | Worker Pool | Semaphore |
|---|---|---|
| Goroutines | Few, long-lived, reused | Many, short-lived, one per task |
| Job distribution | Shared channel | Each goroutine does its own thing |
| Setup | Channels, WaitGroups, worker loop | A buffered channel or semaphore.Weighted |
| Overhead | Reuses goroutines | Creates/destroys goroutines |
| Best for | Stream of jobs, long-running processing | Batch of independent tasks, limiting parallel I/O |
Use semaphores when:
- You have a batch of tasks and want to limit how many run at once
- You're making parallel HTTP calls and don't want to open 1,000 connections
- You need to protect a resource with limited capacity (file handles, DB connections)
- A full worker pool feels like too much ceremony for the job
Channel-Based Semaphore
The simplest semaphore in Go is a buffered channel.
```go
func main() {
	sem := make(chan struct{}, 3) // max 3 concurrent
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		sem <- struct{}{} // acquire — blocks if 3 are already running
		go func(id int) {
			defer wg.Done()
			defer func() { <-sem }() // release
			fmt.Printf("task %d running\n", id)
			time.Sleep(time.Second) // simulate work
		}(i)
	}
	wg.Wait()
}
```

The buffer size is the concurrency limit. `sem <- struct{}{}` acquires a slot; `<-sem` releases it. When the buffer is full, the next acquire blocks until a slot opens.
Semaphore with Context
Add cancellation so you don't block forever waiting for a slot.
```go
func acquire(ctx context.Context, sem chan struct{}) error {
	select {
	case sem <- struct{}{}:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	sem := make(chan struct{}, 3)
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		if err := acquire(ctx, sem); err != nil {
			fmt.Println("cancelled:", err)
			break
		}
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			defer func() { <-sem }()
			fmt.Printf("task %d running\n", id)
			time.Sleep(2 * time.Second) // simulate work
		}(i)
	}
	wg.Wait()
}
```

If the context expires while waiting for a slot, the loop breaks instead of blocking forever.
golang.org/x/sync/semaphore
The official weighted semaphore package. More flexible than a channel — supports acquiring multiple slots at once.
```
go get golang.org/x/sync/semaphore
```

```go
import "golang.org/x/sync/semaphore"
```
```go
func main() {
	ctx := context.Background()
	sem := semaphore.NewWeighted(3) // max 3 concurrent
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		if err := sem.Acquire(ctx, 1); err != nil {
			fmt.Println("acquire error:", err)
			break
		}
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			defer sem.Release(1)
			fmt.Printf("task %d running\n", id)
			time.Sleep(time.Second) // simulate work
		}(i)
	}
	wg.Wait()
}
```

`Acquire(ctx, n)` takes n slots. `Release(n)` returns them. This lets you model tasks that need different amounts of resources.
Weighted Semaphore
Some tasks are heavier than others. A weighted semaphore lets heavy tasks take more slots.
```go
func main() {
	ctx := context.Background()
	sem := semaphore.NewWeighted(10) // 10 total units
	tasks := []struct {
		name   string
		weight int64
	}{
		{"light-1", 1},
		{"light-2", 1},
		{"heavy-1", 5},
		{"medium-1", 3},
		{"heavy-2", 5},
		{"light-3", 1},
	}
	var wg sync.WaitGroup
	for _, task := range tasks {
		if err := sem.Acquire(ctx, task.weight); err != nil {
			break
		}
		wg.Add(1)
		go func(name string, w int64) {
			defer wg.Done()
			defer sem.Release(w)
			fmt.Printf("%s (weight %d) running\n", name, w)
			time.Sleep(time.Second) // simulate work
		}(task.name, task.weight)
	}
	wg.Wait()
}
```

A heavy task (weight 5) blocks half the capacity. Light tasks (weight 1) can fill the remaining slots. This models real scenarios like database connection pools or memory-limited processing.
Practical Example: Bounded File Processing
Process files concurrently, but limit to 5 at a time to avoid opening too many file handles.
```go
func processFile(ctx context.Context, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	// simulate processing
	time.Sleep(200 * time.Millisecond) // simulate file I/O
	fmt.Println("processed:", path)
	return nil
}

func processAll(ctx context.Context, paths []string, maxConcurrent int64) error {
	sem := semaphore.NewWeighted(maxConcurrent)
	var wg sync.WaitGroup
	errCh := make(chan error, len(paths))
	for _, path := range paths {
		if err := sem.Acquire(ctx, 1); err != nil {
			wg.Wait() // let in-flight goroutines finish before returning
			return err
		}
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			defer sem.Release(1)
			if err := processFile(ctx, p); err != nil {
				errCh <- fmt.Errorf("%s: %w", p, err)
			}
		}(path)
	}
	wg.Wait()
	close(errCh)
	for err := range errCh {
		return err // return first error
	}
	return nil
}
```

Simple, effective, and prevents resource exhaustion.
Key Takeaways
- Buffered channel = simple semaphore. Buffer size = concurrency limit
- `sem <- struct{}{}` to acquire, `<-sem` to release
- Always use context with semaphores to avoid blocking forever
- `golang.org/x/sync/semaphore` for weighted semaphores: tasks can take multiple slots
- Semaphores limit concurrency without the setup of a worker pool
- Use semaphores for bounding I/O (file handles, HTTP connections, DB queries)
- Use worker pools when goroutines are long-lived and process many jobs