09 - HTTP Client
Go's net/http package is both a server and a client. The server side is solid. The client side has a footgun that catches everyone at least once.
The Default Client Problem
```go
resp, err := http.Get("https://example.com")
```

This uses http.DefaultClient. It has no timeout. None. If the remote server hangs, your goroutine hangs forever. In a web server, that's a connection leak waiting to happen.
Never use http.DefaultClient in production.
Build a Proper Client
```go
client := &http.Client{
    Timeout: 10 * time.Second,
}
```

That's the minimum. Timeout covers the entire request: DNS, connect, TLS handshake, sending the request, reading the response. If anything takes longer than 10 seconds total, the request is cancelled.
For more control:
```go
client := &http.Client{
    Timeout: 10 * time.Second,
    Transport: &http.Transport{
        MaxIdleConns:        100,
        MaxIdleConnsPerHost: 10,
        IdleConnTimeout:     90 * time.Second,
    },
}
```

The Transport manages connection pooling. These aren't request timeouts; they control how idle connections are reused:
- MaxIdleConns: total keep-alive connections pooled across all hosts
- MaxIdleConnsPerHost: keep-alive connections pooled per host
- IdleConnTimeout: how long an unused connection sits in the pool before being closed
Reusing connections avoids the overhead of TCP and TLS handshakes on every request. The Timeout on the client (10s) is the only one that limits how long a request takes.
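You can watch pooling in action with net/http/httptrace, which reports whether a request reused a pooled connection. A quick sketch; the target URL is just a placeholder:

```go
package main

import (
    "fmt"
    "io"
    "net/http"
    "net/http/httptrace"
    "time"
)

func main() {
    client := &http.Client{Timeout: 10 * time.Second}

    trace := &httptrace.ClientTrace{
        // GotConn fires when a connection is chosen for the request;
        // Reused tells us whether it came out of the idle pool.
        GotConn: func(info httptrace.GotConnInfo) {
            fmt.Println("connection reused:", info.Reused)
        },
    }

    for i := 0; i < 2; i++ {
        req, err := http.NewRequest(http.MethodGet, "https://example.com", nil)
        if err != nil {
            panic(err)
        }
        req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

        resp, err := client.Do(req)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        // Drain and close so the connection goes back to the pool.
        io.Copy(io.Discard, resp.Body)
        resp.Body.Close()
    }
}
```

On the second iteration you should typically see reused: true, because the drained-and-closed connection went back into the pool.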
Context Cancellation
Timeout is a hard limit on the client — every request gets the same deadline. Context gives you per-request control. Use http.NewRequestWithContext to attach a context to a single request:
```go
func fetch(ctx context.Context, client *http.Client, url string) ([]byte, error) {
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    if err != nil {
        return nil, fmt.Errorf("create request: %w", err)
    }

    resp, err := client.Do(req)
    if err != nil {
        return nil, fmt.Errorf("fetch %s: %w", url, err)
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("fetch %s: status %d", url, resp.StatusCode)
    }

    return io.ReadAll(resp.Body)
}
```

If the context is cancelled (request timeout, user disconnect), the HTTP request is aborted immediately. This is how you prevent hanging requests in your handlers.
We wrap this in a fetch function because the pattern — create request, check status, read body, handle errors — repeats every time you call an external API. Extract it once, reuse it everywhere.
Reading Response Bodies
Always close the body. Always.
```go
resp, err := client.Do(req)
if err != nil {
    return err
}
defer resp.Body.Close()
```

If you don't close it, the underlying TCP connection can't be reused. Your connection pool fills up and new requests start failing.
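One detail worth knowing: closing alone isn't always enough for reuse. Go returns a keep-alive connection to the pool only if the body was read to EOF first, so when you abandon a response early (say, on a bad status), drain it before closing. A tiny sketch, where drainAndClose is a hypothetical helper name:

```go
// drainAndClose reads what's left of the body (capped, so a hostile
// server can't feed us gigabytes) and closes it, letting the
// underlying connection go back into the pool for reuse.
func drainAndClose(body io.ReadCloser) {
    io.Copy(io.Discard, io.LimitReader(body, 64<<10)) // drain at most 64KB
    body.Close()
}
```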
For JSON responses:
```go
var result struct {
    Title string `json:"title"`
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
    return fmt.Errorf("decode response: %w", err)
}
```

For large responses, limit what you read:
```go
body, err := io.ReadAll(io.LimitReader(resp.Body, 1<<20)) // 1MB max
```

Without LimitReader, a malicious server could send gigabytes and exhaust your memory.
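Note that io.LimitReader truncates silently: you get the first 1MB and no error. If an oversized response should fail instead, read one byte past the cap and check. A sketch, with readAllLimited as a made-up helper:

```go
// readAllLimited reads at most max bytes and errors out instead of
// silently truncating an oversized body.
func readAllLimited(r io.Reader, max int64) ([]byte, error) {
    // Read one byte past the cap so we can tell "exactly max" from "too big".
    body, err := io.ReadAll(io.LimitReader(r, max+1))
    if err != nil {
        return nil, fmt.Errorf("read body: %w", err)
    }
    if int64(len(body)) > max {
        return nil, fmt.Errorf("body exceeds %d bytes", max)
    }
    return body, nil
}
```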
Retry with Backoff
Network requests fail. Servers return 503. Connections reset. These are often temporary — if you try again in a moment, it works.
A naive retry would hammer the server immediately. Backoff adds a growing delay between attempts; the schedule below is quadratic (100ms, 400ms, 900ms), a simple stand-in for the exponential backoff you'd typically use in production. This gives the server time to recover instead of making things worse.
```go
func fetchWithRetry(ctx context.Context, client *http.Client, url string, maxRetries int) ([]byte, error) {
    var lastErr error
    for i := 0; i <= maxRetries; i++ {
        if i > 0 {
            // Wait longer between each attempt: 100ms, 400ms, 900ms...
            wait := time.Duration(i*i) * 100 * time.Millisecond
            select {
            case <-time.After(wait):
            case <-ctx.Done():
                return nil, ctx.Err() // Caller gave up, stop retrying
            }
        }

        data, err := fetch(ctx, client, url)
        if err == nil {
            return data, nil // Success, return the response
        }
        lastErr = err
        slog.Warn("fetch retry", "url", url, "attempt", i+1, "err", err)
    }
    return nil, fmt.Errorf("after %d retries: %w", maxRetries, lastErr) // All attempts failed
}
```

The select on ctx.Done() ensures retries stop if the context is cancelled. No point retrying if the caller already gave up.
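One refinement: if many clients fail at once, a fixed schedule makes them all retry in lockstep and hit the server in waves. Adding random jitter to the wait spreads the retries out. A sketch of just the wait calculation, a drop-in for the wait line above (rand is math/rand, i is the loop's attempt counter):

```go
// Quadratic base delay plus up to 50% random jitter, so a fleet of
// clients doesn't retry in synchronized bursts.
base := time.Duration(i*i) * 100 * time.Millisecond
jitter := time.Duration(rand.Int63n(int64(base)/2 + 1))
wait := base + jitter
```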
Practical Example: Checking Bookmark URLs
Let's put this all together. Our bookmarks API accepts URLs from users — but how do we know the URL is actually valid and reachable? We can make a quick HTTP request to check before saving it.
```go
func checkURL(ctx context.Context, client *http.Client, url string) (bool, error) {
    req, err := http.NewRequestWithContext(ctx, http.MethodHead, url, nil)
    if err != nil {
        return false, fmt.Errorf("create request: %w", err)
    }

    resp, err := client.Do(req)
    if err != nil {
        return false, nil // URL unreachable, but that's not a business error
    }
    defer resp.Body.Close()

    return resp.StatusCode < 400, nil
}
```

Use HEAD instead of GET. It fetches headers only, no body. Faster and cheaper for both sides.
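One wrinkle: some servers reject HEAD with 405 Method Not Allowed even though the URL is perfectly fine. A variation that falls back to GET in that case; checkURLWithFallback is a hypothetical name, not part of the project code:

```go
func checkURLWithFallback(ctx context.Context, client *http.Client, url string) bool {
    for _, method := range []string{http.MethodHead, http.MethodGet} {
        req, err := http.NewRequestWithContext(ctx, method, url, nil)
        if err != nil {
            return false // malformed URL
        }
        resp, err := client.Do(req)
        if err != nil {
            return false // unreachable
        }
        alive := resp.StatusCode < 400
        refused := resp.StatusCode == http.StatusMethodNotAllowed
        io.Copy(io.Discard, io.LimitReader(resp.Body, 1<<20)) // drain for connection reuse
        resp.Body.Close()
        if !refused {
            return alive
        }
        // 405: the server dislikes HEAD; loop again with GET.
    }
    return false
}
```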
Applying to Our Project
Add a URL checker to the bookmark creation flow:
```go
var httpClient = &http.Client{Timeout: 5 * time.Second}

func createBookmark(w http.ResponseWriter, r *http.Request) {
    var input struct {
        URL   string `json:"url"`
        Title string `json:"title"`
    }
    if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
        writeError(w, http.StatusBadRequest, "invalid JSON")
        return
    }

    alive, err := checkURL(r.Context(), httpClient, input.URL)
    if err != nil {
        slog.Warn("url check failed", "url", input.URL, "err", err)
    }
    if !alive {
        slog.Info("bookmark url unreachable", "url", input.URL)
    }

    bookmark, err := store.Create(r.Context(), input.URL, input.Title)
    if err != nil {
        slog.Error("create bookmark", "err", err)
        writeError(w, http.StatusInternalServerError, "internal error")
        return
    }

    writeJSON(w, http.StatusCreated, bookmark)
}
```

We check the URL but still save the bookmark even if the check fails. The URL might be temporarily down, or behind a firewall that blocks HEAD requests. Log it, don't block on it.
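Because checkURL takes a client and a URL, it's easy to test against a local server from net/http/httptest instead of the real network. A sketch of such a test; the handler and paths are invented for illustration:

```go
func TestCheckURL(t *testing.T) {
    // A throwaway local server: 200 on /ok, 404 everywhere else.
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.URL.Path == "/ok" {
            return // writing nothing sends 200
        }
        w.WriteHeader(http.StatusNotFound)
    }))
    defer srv.Close()

    client := &http.Client{Timeout: 2 * time.Second}

    if alive, err := checkURL(context.Background(), client, srv.URL+"/ok"); err != nil || !alive {
        t.Fatalf("want alive=true, got alive=%v err=%v", alive, err)
    }
    if alive, err := checkURL(context.Background(), client, srv.URL+"/missing"); err != nil || alive {
        t.Fatalf("want alive=false, got alive=%v err=%v", alive, err)
    }
}
```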
Key Takeaways
- Never use http.DefaultClient in production. It has no timeout
- Set Timeout on your http.Client. It covers the entire request lifecycle
- Use http.NewRequestWithContext for per-request cancellation
- Always defer resp.Body.Close(). Unclosed bodies leak connections
- Use io.LimitReader to cap response size from untrusted sources
- Retry with backoff for transient failures. Respect context cancellation in the retry loop
- Use HEAD requests when you only need to check if a URL is alive