15 - Makefile, Docker & Deployment
You've built a bookmarks API with routing, middleware, error handling, a database, templates, embedded assets, graceful shutdown, and profiling. Now ship it. This lesson covers the three things every Go project needs for deployment: a Makefile for local workflows, a Dockerfile for containerization, and docker-compose for running the full stack.
Makefile
A Makefile gives your team one place to find every command. No more "how do I run the linter?" questions in Slack.
```make
# Makefile
.PHONY: build run test lint clean

APP_NAME  := bookmarks
BUILD_DIR := ./bin

build:
	go build -o $(BUILD_DIR)/$(APP_NAME) .

run: build
	$(BUILD_DIR)/$(APP_NAME)

test:
	go test -v -race ./...

lint:
	go vet ./...
	staticcheck ./...

clean:
	rm -rf $(BUILD_DIR)
```

Usage:

```sh
make build   # compile the binary
make run     # build and run
make test    # run all tests with the race detector
make lint    # vet + staticcheck
make clean   # remove build artifacts
```

`.PHONY` tells make these aren't file targets. Without it, if a file named `test` exists, `make test` would do nothing.
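One common extension, shown here as an optional sketch rather than part of the lesson's Makefile: pass `-ldflags="-s -w"` to the build to strip the symbol table and DWARF debug info, which shrinks the binary noticeably.

```make
build:
	go build -ldflags="-s -w" -o $(BUILD_DIR)/$(APP_NAME) .
```

The trade-off is that stripped binaries are harder to inspect with debuggers, so keep the plain `go build` for local development if you rely on delve.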
The `-race` flag on tests enables the race detector. It catches concurrent access bugs. Always use it in CI.
Multi-Stage Dockerfile
Go's single-binary output makes Docker images tiny. Use a multi-stage build: compile in a full Go image, copy the binary into a minimal runtime image.
```dockerfile
# Dockerfile
FROM golang:1.24-alpine AS builder

WORKDIR /src

COPY go.mod go.sum ./
RUN go mod download

COPY . .
RUN CGO_ENABLED=0 go build -o /bin/bookmarks .

FROM alpine:3.21

COPY --from=builder /bin/bookmarks /bookmarks

EXPOSE 8080
ENTRYPOINT ["/bookmarks"]
```

Walk through it:

- Start from `golang:1.24-alpine` for the build stage
- Copy `go.mod` and `go.sum` first and run `go mod download`. This layer is cached. Dependencies don't change often, so rebuilds are fast
- Copy the rest of the source and build with `CGO_ENABLED=0` for a fully static binary
- The runtime image is `alpine`: tiny (~7MB), has a shell for debugging, and includes CA certificates for HTTPS
- Copy the binary from the builder stage
The final image is around 15-20MB. Compare that to a Node.js image at 300MB+.
Choosing a runtime image:
| Image | Size | Shell | Best for |
|---|---|---|---|
| `alpine` | ~7MB | ✅ Yes | Most projects: small, debuggable |
| `gcr.io/distroless/static` | ~2MB | ❌ No | Maximum security: nothing to exploit |
| `scratch` | 0MB | ❌ No | Absolute minimum, but no CA certs, no timezone data |
Alpine is the practical default. If you need to `docker exec` into a container to debug, you can. For security-hardened production where you never shell in, switch to distroless.
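If you do switch, only the runtime stage changes. A sketch, assuming the same builder stage as above (the `:nonroot` tag is one common choice for running as an unprivileged user):

```dockerfile
FROM gcr.io/distroless/static:nonroot

COPY --from=builder /bin/bookmarks /bookmarks

EXPOSE 8080
ENTRYPOINT ["/bookmarks"]
```

Since distroless has no shell, `docker exec` debugging is off the table; diagnose through logs and the `/health` endpoint instead.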
.dockerignore
Keep build context small:
```
# .dockerignore
bin/
*.md
.git/
.env*
docs/
```

Without this, Docker sends everything to the daemon, including your `.git` directory and documentation. Slower builds, larger context, no benefit.
Docker Compose
Run the API and Postgres together:
```yaml
# docker-compose.yml
services:
  db:
    image: postgres:17-alpine
    environment:
      POSTGRES_DB: bookmarks
      POSTGRES_USER: bookmarks
      POSTGRES_PASSWORD: localdev
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U bookmarks"]
      interval: 5s
      timeout: 3s
      retries: 5

  api:
    build: .
    ports:
      - "8080:8080"
    environment:
      PORT: "8080"
      DATABASE_URL: "postgres://bookmarks:localdev@db:5432/bookmarks?sslmode=disable"
    depends_on:
      db:
        condition: service_healthy

volumes:
  pgdata:
```

Key details:

- `depends_on` with `condition: service_healthy` means the API waits for Postgres to be ready, not just started. Without the health check, the API might start before Postgres accepts connections
- The `DATABASE_URL` uses `db` as the hostname. Docker Compose creates a network where services reach each other by name
- `pgdata` is a named volume. Data survives container restarts
- `sslmode=disable` because it's local dev. Don't do this in production
Run it:

```sh
docker compose up --build
```

The API is at http://localhost:8080. Postgres is at localhost:5432. Stop with Ctrl+C. Tear everything down with:

```sh
docker compose down      # stop containers
docker compose down -v   # stop and delete volumes (wipes DB)
```

Environment Variables
The API reads config from environment variables (lesson 2). Docker Compose sets them in the environment block. For production, you'd use your platform's secret management instead of hardcoded values.
For local development with different configs, use an .env file:
```
# .env
PORT=8080
DATABASE_URL=postgres://bookmarks:localdev@localhost:5432/bookmarks?sslmode=disable
```

Docker Compose picks up `.env` automatically. Don't commit this file.
Health Checks
The Postgres health check uses `pg_isready`. Add one for the API too:

```yaml
  api:
    # ...
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:8080/health"]
      interval: 10s
      timeout: 3s
      retries: 3
```

Since we're using alpine, `wget` is available. This hits the `/health` endpoint we already have. In production with Kubernetes, you'd use liveness and readiness probes instead; they hit your `/health` endpoint directly without needing anything inside the container.
Putting It All Together
The final project structure:
```
bookmarks/
├── go.mod
├── go.sum
├── Makefile
├── Dockerfile
├── .dockerignore
├── docker-compose.yml
├── .env
├── main.go
├── config.go
├── handler.go
├── handler_html.go
├── store.go
├── middleware.go
├── templates/
│   ├── layout.html
│   └── list.html
└── static/
    └── style.css
```

Build and run locally:

```sh
make run
```

Build and run with Docker:

```sh
docker compose up --build
```

Run tests:

```sh
make test
```

That's the whole project. A REST API with HTML views, structured logging, graceful shutdown, and a containerized deployment. All built on the Go standard library with one external dependency: the Postgres driver.
Applying to Our Project
Add the four deployment files to the project root:
- `Makefile` for build, run, test, lint, clean
- `Dockerfile` with multi-stage build
- `.dockerignore` to keep the build context small
- `docker-compose.yml` with the API and Postgres
Test the full flow:
```sh
docker compose up --build
curl http://localhost:8080/health
curl http://localhost:8080/api/bookmarks
```
Deploying for Real
Docker Compose is for local dev. To get this running in production, you have options:
- VPS (DigitalOcean, Hetzner, Linode): SSH in, install Docker, run `docker compose up -d`. Cheapest option. You manage updates and uptime yourself.
- Fly.io / Railway: push your Dockerfile and they handle the rest. `fly launch` reads your Dockerfile and deploys. Easiest path from zero to production.
- Cloud containers (AWS ECS, Google Cloud Run): push your image to a registry, configure the service. More setup, but scales automatically.
For a side project or small API, a $5 VPS or Fly.io's free tier is plenty. Don't over-engineer the infrastructure.
CI/CD
Automate the boring parts. A minimal CI pipeline runs on every push:
```sh
make lint
make test
docker build -t bookmarks .
```

If all three pass, you're safe to deploy. GitHub Actions, GitLab CI, or any CI tool can run this. The Makefile you already have does the heavy lifting; CI just calls it.
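As a sketch, a minimal GitHub Actions workflow wiring up those three steps might look like this. The file path, action versions, and the staticcheck install step are assumptions, so adjust them to your setup:

```yaml
# .github/workflows/ci.yml
name: ci
on: [push]

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.24"
      # make lint calls staticcheck, which isn't preinstalled on runners
      - run: go install honnef.co/go/tools/cmd/staticcheck@latest
      - run: make lint
      - run: make test
      - run: docker build -t bookmarks .
```

Because the steps just call `make`, the pipeline stays identical to what you run locally, which makes CI failures easy to reproduce.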
Key Takeaways
- A Makefile centralizes build, test, lint, and run commands. Use `.PHONY` for non-file targets
- Multi-stage Dockerfiles keep images small. Build in `golang`, run in `alpine` (debuggable) or `distroless` (hardened)
- `CGO_ENABLED=0` produces a static binary that runs anywhere without libc
- Copy `go.mod`/`go.sum` first in the Dockerfile to cache dependency downloads
- Docker Compose runs the full stack locally. Use health checks so services start in the right order
- Use named volumes for database persistence across container restarts
- Don't hardcode secrets in compose files. Use `.env` for local dev, platform secrets for production
- The final project is a single binary backed by Postgres, containerized and ready to ship