09 - sync & atomic Primitives
The sync and sync/atomic packages have more than just Mutex and WaitGroup. Most Go developers don't explore beyond those two. That's a mistake — these lesser-known primitives solve specific concurrency problems cleanly.
sync.Once
Run something exactly once, no matter how many times or from where it's called — different goroutines, different packages, different functions. You'll use this more than you'd expect.
```go
var (
    instance *Database
    once     sync.Once
)

func GetDB() *Database {
    once.Do(func() {
        fmt.Println("initializing database...")
        instance = &Database{conn: connect()}
    })
    return instance
}
```

`once.Do` guarantees the function runs exactly once. All other callers block until it completes, then get the result. Perfect for lazy initialization — database connections, config loading, singletons.
```go
func main() {
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            db := GetDB() // only the first call initializes
            _ = db
        }()
    }
    wg.Wait()
    // "initializing database..." prints exactly once
}
```

Common misuse: creating a new sync.Once per call. This defeats the purpose — each Once tracks its own state.
❌ Wrong — initializes every time:

```go
func GetDB() *Database {
    var once sync.Once // new Once each call!
    once.Do(func() {
        instance = &Database{conn: connect()} // runs every time
    })
    return instance
}
```

Always declare sync.Once at the package level, not inside the function.
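Go 1.21 added `sync.OnceValue`, which packages this pattern up for you: it turns a function into its memoized, run-once form. A minimal sketch — the config-loading function and the `loads` counter here are made up for illustration:

```go
package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

// loads counts how many times the wrapped function actually ran.
var loads atomic.Int32

// getConfig runs the wrapped function once; every caller
// (from any goroutine) gets the same cached result.
var getConfig = sync.OnceValue(func() string {
    loads.Add(1)
    fmt.Println("loading config...")
    return "config-data"
})

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            _ = getConfig() // "loading config..." prints exactly once
        }()
    }
    wg.Wait()
    fmt.Println(getConfig()) // config-data
}
```

There's also `sync.OnceValues` for functions returning two values (commonly a result and an error).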
sync.RWMutex
A read-write mutex. Multiple readers can hold the lock simultaneously, but writers get exclusive access.
Think of a bank account: during the day, thousands of users check their balance (reads). Late at night, a batch job processes transactions and updates balances (writes). You don't want balance checks to block each other, but when a transaction update runs, nobody should read stale data.
```go
type Account struct {
    mu      sync.RWMutex
    balance int
}

func (a *Account) Balance() int {
    a.mu.RLock()
    defer a.mu.RUnlock()
    return a.balance // many goroutines can read at once
}

func (a *Account) ProcessTransaction(amount int) {
    a.mu.Lock()
    defer a.mu.Unlock()
    a.balance += amount // exclusive access, all readers wait
}
```

RLock / RUnlock for reads. Lock / Unlock for writes. When a writer holds the lock, all readers and writers block. When readers hold the lock, other readers proceed but writers block.
The same pattern works for caching — many goroutines read from the cache, occasional writes to update it:
```go
type SafeCache struct {
    mu    sync.RWMutex
    items map[string]string
}

func (c *SafeCache) Get(key string) (string, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()
    val, ok := c.items[key]
    return val, ok
}

func (c *SafeCache) Set(key, value string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.items[key] = value
}
```

| | Mutex | RWMutex |
|---|---|---|
| Readers concurrent | ❌ No | ✅ Yes |
| Writer exclusive | ✅ Yes | ✅ Yes |
| Use when | Writes are frequent | Reads far outnumber writes |
Use RWMutex when your data is read-heavy. If reads and writes are roughly equal, a regular Mutex is simpler and has less overhead.
sync.Map
A concurrent map that doesn't need external locking. Built into the standard library.
```go
var cache sync.Map

// Store
cache.Store("key1", "value1")

// Load
val, ok := cache.Load("key1")
if ok {
    fmt.Println(val.(string))
}

// LoadOrStore — load if exists, store if not
actual, loaded := cache.LoadOrStore("key2", "default")
fmt.Println(actual, loaded) // "default", false (was stored)

// Delete
cache.Delete("key1")

// Range — iterate over all entries
cache.Range(func(key, value any) bool {
    fmt.Printf("%s: %s\n", key, value)
    return true // return false to stop
})
```

Both sync.Map and map + RWMutex give you concurrent map access. The difference is internal: RWMutex still acquires a read lock on every read, while sync.Map serves most reads from an internal read-only copy of the map — no locking on the common read path.
In practice, use map + RWMutex by default. It's type-safe and straightforward. sync.Map exists for rare, high-concurrency cases where even read-lock overhead matters — you'll know when you need it because profiling will tell you. We'll cover profiling in the upcoming Go in Practice course.
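One drawback of sync.Map is its `any`-typed API. If you do reach for it, a thin generic wrapper restores type safety. This `TypedMap` is a hypothetical sketch, not a standard library type:

```go
package main

import (
    "fmt"
    "sync"
)

// TypedMap hides sync.Map's any-typed API behind
// type-safe methods using generics (Go 1.18+).
type TypedMap[K comparable, V any] struct {
    m sync.Map
}

func (t *TypedMap[K, V]) Store(key K, value V) {
    t.m.Store(key, value)
}

func (t *TypedMap[K, V]) Load(key K) (V, bool) {
    v, ok := t.m.Load(key)
    if !ok {
        var zero V
        return zero, false
    }
    return v.(V), true // type assertion is safe: only Store puts values in
}

func main() {
    var users TypedMap[int, string]
    users.Store(1, "alice")
    name, ok := users.Load(1)
    fmt.Println(name, ok) // alice true
}
```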
sync.Pool
Reuse temporary objects to reduce garbage collection pressure. This is a performance tool — you won't need it often, but when you do, it makes a big difference.
```go
var created atomic.Int64

var bufferPool = sync.Pool{
    New: func() any {
        created.Add(1)
        return new(bytes.Buffer)
    },
}

func processRequest(data []byte) string {
    buf := bufferPool.Get().(*bytes.Buffer)
    defer func() {
        buf.Reset()
        bufferPool.Put(buf)
    }()
    buf.Write(data)
    buf.WriteString(" processed")
    return buf.String()
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            processRequest([]byte(fmt.Sprintf("request-%d", id)))
        }(i)
    }
    wg.Wait()
    fmt.Printf("1000 requests, only %d buffers created\n", created.Load())
}
```

Get retrieves an object from the pool (or creates one via New). Put returns it. The pool is goroutine-safe. The output shows that far fewer than 1000 buffers were allocated — the rest were reused.
Important: the GC can clear the pool at any time. Don't rely on objects persisting. The pool doesn't prevent garbage collection — it reduces how often it needs to happen by reusing objects instead of allocating new ones every time.
Why not just use a global variable? A global gives you one shared object — with concurrent access, you'd need a mutex and only one goroutine can use it at a time. sync.Pool hands out multiple objects to different goroutines simultaneously, no locking needed.
Common uses:
- bytes.Buffer pools for string building
- Temporary slices for processing
- Encoder/decoder objects
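Pooling slices has a subtlety worth knowing: storing a plain `[]byte` in the pool's `any` slot allocates a new interface value on every Put, which partly defeats the purpose. The common workaround is to pool a pointer to the slice. A sketch under that assumption:

```go
package main

import (
    "fmt"
    "sync"
)

// slicePool hands out *[]byte rather than []byte: a pointer fits
// in the pool's any slot without an extra allocation per Put.
var slicePool = sync.Pool{
    New: func() any {
        b := make([]byte, 0, 1024)
        return &b
    },
}

func process(data []byte) int {
    bp := slicePool.Get().(*[]byte)
    buf := (*bp)[:0] // reuse capacity, reset length
    defer func() {
        *bp = buf // keep any growth from append
        slicePool.Put(bp)
    }()
    buf = append(buf, data...)
    return len(buf)
}

func main() {
    fmt.Println(process([]byte("hello"))) // 5
}
```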
sync/atomic
A regular count++ is not safe when multiple goroutines do it at the same time. It's actually three steps: read the value, add one, write it back. Two goroutines can read the same value, both add one, and you lose an increment.
Atomic operations solve this with a single hardware instruction — no locks, no race conditions.
```go
func main() {
    var counter atomic.Int64
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter.Add(1) // safe — single hardware instruction
        }()
    }
    wg.Wait()
    fmt.Println(counter.Load()) // always 1000
}
```

Without atomic, counter could end up less than 1000 due to lost increments. With atomic, it's guaranteed correct.
Atomic Types (Go 1.19+)
```go
var counter atomic.Int64
var flag atomic.Bool
var config atomic.Value

// Int64
counter.Add(1)
counter.Add(-1)
fmt.Println(counter.Load()) // read

// Bool
flag.Store(true)
if flag.Load() {
    fmt.Println("flag is set")
}

// Value — store any type (but always the same type)
type Config struct {
    Debug   bool
    Workers int
}
config.Store(Config{Debug: true, Workers: 4})
cfg := config.Load().(Config)
fmt.Println(cfg.Workers)
```

Compare-And-Swap (CAS)
CAS answers: "If the value is still what I think it is, update it. Otherwise, someone else got there first." It's how you build lock-free logic when multiple goroutines race to change the same value.
A practical example — only one goroutine should initialize a resource:
```go
var initialized atomic.Int32 // 0 = not started, 1 = done

func maybeInit() {
    // Only the goroutine that swaps 0→1 does the work
    if initialized.CompareAndSwap(0, 1) {
        fmt.Println("I'm initializing!")
        // ... do expensive setup
    } else {
        fmt.Println("already initialized, skipping")
    }
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            maybeInit()
        }()
    }
    wg.Wait()
}
```

Five goroutines race to initialize. Exactly one prints "I'm initializing!" — the one that successfully swapped 0 to 1. The rest see that the value is already 1 and skip. No mutex needed.
Atomic vs Mutex
| | Mutex | Atomic |
|---|---|---|
| Speed | Slower (lock/unlock overhead) | Faster (hardware instruction) |
| Complexity | Can protect any critical section | Single variable operations only |
| Use case | Multiple variables, complex logic | Counters, flags, simple state |
Rule: if you're protecting a single integer or boolean, use atomic. If you're protecting multiple variables or complex logic, use a mutex.
sync.Cond
A condition variable. Goroutines wait until a condition is signaled. You'll rarely use this directly — channels are usually cleaner — but it's good to know it exists.
```go
var (
    mu    sync.Mutex
    cond  = sync.NewCond(&mu)
    ready bool
)

func waiter(id int, wg *sync.WaitGroup, startWg *sync.WaitGroup) {
    defer wg.Done()
    cond.L.Lock()
    startWg.Done() // signal that this waiter is ready
    for !ready {
        cond.Wait() // releases lock, waits, re-acquires lock
    }
    cond.L.Unlock()
    fmt.Printf("waiter %d proceeding\n", id)
}

func main() {
    var wg, startWg sync.WaitGroup
    for i := 0; i < 3; i++ {
        wg.Add(1)
        startWg.Add(1)
        go waiter(i, &wg, &startWg)
    }
    startWg.Wait() // all waiters are locked and waiting
    cond.L.Lock()
    ready = true
    cond.L.Unlock()
    cond.Broadcast() // wake all waiters
    wg.Wait()
}
```

Signal() wakes one waiter. Broadcast() wakes all. In practice, channels are usually cleaner for signaling. sync.Cond is for cases where you need to wake goroutines based on a shared condition.
Quick Reference
| Primitive | Purpose | When to use |
|---|---|---|
| `sync.Once` | Run once | Lazy initialization, singletons |
| `sync.RWMutex` | Read-write locking | Read-heavy shared data |
| `sync.Map` | Concurrent map | Stable keys, disjoint access |
| `sync.Pool` | Object reuse | Reduce GC pressure, temp buffers |
| `atomic.Int64` | Lock-free counter | Simple counters, metrics |
| `atomic.Bool` | Lock-free flag | Feature flags, shutdown signals |
| `atomic.Value` | Lock-free config | Hot-reloadable configuration |
| `sync.Cond` | Condition variable | Wait for shared condition |
Key Takeaways
- `sync.Once` guarantees exactly-once execution — use for initialization
- `sync.RWMutex` allows concurrent readers — use when reads dominate writes
- `sync.Map` is for specific patterns — usually `map` + `RWMutex` is better
- `sync.Pool` reduces allocations — objects can be cleared by GC at any time
- `sync/atomic` is faster than a mutex for single-variable operations
- CAS (`CompareAndSwap`) enables lock-free state transitions
- Use atomic for counters and flags, a mutex for complex critical sections
- `sync.Cond` exists, but channels are usually cleaner for signaling