08 - Database with database/sql
Time to store bookmarks for real. Go's database/sql package is the standard interface for SQL databases. It handles connection pooling, prepared statements, and transactions. You bring the driver, it does the rest.
The Driver
database/sql is just an interface. You need a driver for your specific database. For PostgreSQL, there are two main drivers:
| | lib/pq | pgx |
|---|---|---|
| Install | `go get github.com/lib/pq` | `go get github.com/jackc/pgx/v5/stdlib` |
| Import | `_ "github.com/lib/pq"` | `_ "github.com/jackc/pgx/v5/stdlib"` |
| Driver name | `"postgres"` | `"pgx"` |
| Imported by | 54,000+ packages | 3,000+ packages |
| Strengths | Simple, massive adoption | More features (COPY, LISTEN/NOTIFY, batch) |
Both work behind database/sql — your SQL code is identical either way. We'll use lib/pq in this course. If you ever need to switch, change the import and the driver name in sql.Open. That's it. The rest of your code stays the same. That's the point of database/sql.
How to evaluate a driver (or any dependency): check pkg.go.dev for the last published date, import count, and whether it's actively maintained. A package with recent releases and thousands of importers is a safe bet.
```sh
go get github.com/lib/pq
```

Yes, this is an external package. Rule 1 says use the stdlib. But database/sql was designed to work with drivers. The driver is the one exception everyone agrees on. You can't talk to Postgres without one.
Import it with a blank identifier to register the driver:
```go
import (
	"database/sql"

	_ "github.com/lib/pq"
)
```

The `_` import runs the package's `init()` function, which registers the driver with database/sql.
Opening a Connection
```go
func OpenDB(dsn string) (*sql.DB, error) {
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, fmt.Errorf("open db: %w", err)
	}

	db.SetMaxOpenConns(25)
	db.SetMaxIdleConns(5)
	db.SetConnMaxLifetime(5 * time.Minute)

	if err := db.Ping(); err != nil {
		db.Close() // don't leak the pool if the database is unreachable
		return nil, fmt.Errorf("ping db: %w", err)
	}
	return db, nil
}
```

`sql.Open` doesn't actually connect. It just validates the driver name and DSN. `db.Ping` makes the real connection. Always ping after opening.
The pool settings matter:
- `MaxOpenConns`: total connections to the database. Set this. The default is unlimited, which can overwhelm your database
- `MaxIdleConns`: connections kept alive when idle. Saves the cost of reconnecting
- `ConnMaxLifetime`: how long a connection lives before being recycled. Prevents stale connections
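For reference, a typical Postgres DSN in URL form looks like this. The credentials, host, and database name are placeholders for your own setup (lib/pq also accepts the `key=value` form, e.g. `host=localhost dbname=bookmarks`):

```
postgres://user:password@localhost:5432/bookmarks?sslmode=disable
```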
The Bookmark Model
Before writing Go code, create the table. Run this in your Postgres client (psql, pgAdmin, or any SQL tool):
```sql
CREATE TABLE IF NOT EXISTS bookmarks (
    id         SERIAL PRIMARY KEY,
    url        TEXT NOT NULL,
    title      TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```

Now define the matching Go struct:
```go
type Bookmark struct {
	ID        int       `json:"id"`
	URL       string    `json:"url"`
	Title     string    `json:"title"`
	CreatedAt time.Time `json:"created_at"`
}
```

The BookmarkStore
Group all database operations in a struct:
```go
type BookmarkStore struct {
	db *sql.DB
}

func NewBookmarkStore(db *sql.DB) *BookmarkStore {
	return &BookmarkStore{db: db}
}
```

CRUD Operations
Create:
```go
func (s *BookmarkStore) Create(ctx context.Context, url, title string) (Bookmark, error) {
	var b Bookmark
	err := s.db.QueryRowContext(ctx,
		"INSERT INTO bookmarks (url, title) VALUES ($1, $2) RETURNING id, url, title, created_at",
		url, title,
	).Scan(&b.ID, &b.URL, &b.Title, &b.CreatedAt)
	if err != nil {
		return Bookmark{}, errx.WrapError(err)
	}
	return b, nil
}
```

`QueryRowContext` is for single-row results. Always use the Context variants so requests can be cancelled.
Get by ID:
```go
func (s *BookmarkStore) GetByID(ctx context.Context, id int) (Bookmark, error) {
	var b Bookmark
	err := s.db.QueryRowContext(ctx,
		"SELECT id, url, title, created_at FROM bookmarks WHERE id = $1", id,
	).Scan(&b.ID, &b.URL, &b.Title, &b.CreatedAt)
	if errors.Is(err, sql.ErrNoRows) {
		return Bookmark{}, ErrNotFound
	}
	if err != nil {
		return Bookmark{}, errx.WrapError(err)
	}
	return b, nil
}
```

`sql.ErrNoRows` is the sentinel error for "no results." Convert it to your own `ErrNotFound` so your handlers don't depend on database/sql.
List all:
```go
func (s *BookmarkStore) List(ctx context.Context) ([]Bookmark, error) {
	rows, err := s.db.QueryContext(ctx,
		"SELECT id, url, title, created_at FROM bookmarks ORDER BY created_at DESC",
	)
	if err != nil {
		return nil, errx.WrapError(err)
	}
	defer rows.Close()

	var bookmarks []Bookmark
	for rows.Next() {
		var b Bookmark
		if err := rows.Scan(&b.ID, &b.URL, &b.Title, &b.CreatedAt); err != nil {
			return nil, errx.WrapError(err)
		}
		bookmarks = append(bookmarks, b)
	}
	return bookmarks, rows.Err()
}
```

Always `defer rows.Close()`. Always check `rows.Err()` after the loop. The loop can exit early if scanning fails, but `rows.Err()` catches errors from the database itself (network issues, cancelled context).
Delete:
```go
func (s *BookmarkStore) Delete(ctx context.Context, id int) error {
	result, err := s.db.ExecContext(ctx, "DELETE FROM bookmarks WHERE id = $1", id)
	if err != nil {
		return errx.WrapError(err)
	}
	rows, _ := result.RowsAffected()
	if rows == 0 {
		return ErrNotFound
	}
	return nil
}
```

`ExecContext` is for statements that don't return rows. Check `RowsAffected` to know if anything was actually deleted.
Transactions
When you need multiple operations to succeed or fail together:
```go
func (s *BookmarkStore) CreateBatch(ctx context.Context, bookmarks []Bookmark) error {
	tx, err := s.db.BeginTx(ctx, nil)
	if err != nil {
		return errx.WrapError(err)
	}
	defer tx.Rollback()

	for _, b := range bookmarks {
		_, err := tx.ExecContext(ctx,
			"INSERT INTO bookmarks (url, title) VALUES ($1, $2)", b.URL, b.Title,
		)
		if err != nil {
			return errx.WrapError(err)
		}
	}
	return tx.Commit()
}
```

`defer tx.Rollback()` is safe even after a commit. If Commit succeeds, Rollback is a no-op. This pattern guarantees cleanup.
Why Not an ORM?
GORM, ent, sqlc, sqlx. There are many options. Here's the thing: database/sql is 90% of what you need. You write SQL directly, you know exactly what queries run, and you don't fight an abstraction layer.
ORMs add magic. Magic is great until it does something you don't expect. A query that looks simple generates three JOINs and a subquery. A save operation triggers cascading updates you didn't ask for.
Write SQL. It's a skill worth having. If your queries get complex, consider sqlc (generates Go code from SQL) or sqlx (adds struct scanning). But start with database/sql.
Applying to Our Project
Wire the database into main.go:
```go
func main() {
	cfg := LoadConfig()
	slogr.Setup(cfg.LogLevel, "text")

	db, err := OpenDB(cfg.DatabaseURL)
	if err != nil {
		slog.Error("database connection failed", "err", err)
		os.Exit(1)
	}
	defer db.Close()

	store := NewBookmarkStore(db)

	mux := http.NewServeMux()
	registerBookmarkRoutes(mux, store)

	slog.Info("listening", "addr", ":"+cfg.Port)
	log.Fatal(http.ListenAndServe(":"+cfg.Port, mux))
}
```

Pass the store to your route registration:
```go
func registerBookmarkRoutes(mux *http.ServeMux, store *BookmarkStore) {
	mux.HandleFunc("GET /bookmarks", func(w http.ResponseWriter, r *http.Request) {
		bookmarks, err := store.List(r.Context())
		if err != nil {
			slog.Error("list bookmarks", "err", err)
			writeError(w, http.StatusInternalServerError, "internal error")
			return
		}
		writeJSON(w, http.StatusOK, bookmarks)
	})

	mux.HandleFunc("GET /bookmarks/{id}", func(w http.ResponseWriter, r *http.Request) {
		id, err := strconv.Atoi(r.PathValue("id"))
		if err != nil {
			writeError(w, http.StatusBadRequest, "invalid id")
			return
		}

		bookmark, err := store.GetByID(r.Context(), id)
		if errors.Is(err, ErrNotFound) {
			writeError(w, http.StatusNotFound, "bookmark not found")
			return
		}
		if err != nil {
			slog.Error("get bookmark", "err", err)
			writeError(w, http.StatusInternalServerError, "internal error")
			return
		}
		writeJSON(w, http.StatusOK, bookmark)
	})
}
```

Handlers use `r.Context()` so database queries are cancelled if the client disconnects.
Key Takeaways
- `database/sql` is the stdlib interface. Bring a driver (`lib/pq` for Postgres)
- Always set `MaxOpenConns`. The default (unlimited) will bite you
- `QueryRowContext` for single rows, `QueryContext` for multiple, `ExecContext` for no results
- Convert `sql.ErrNoRows` to your own sentinel error at the store layer
- Always `defer rows.Close()` and check `rows.Err()` after scanning
- `defer tx.Rollback()` is safe after commit. Use it as a cleanup guarantee
- Pass `r.Context()` to database calls so they respect request cancellation
- Start with raw SQL. Add tools like sqlc or sqlx only when you need them