04. Writing Prompts That Work Everywhere
Universal Principles
Some prompting techniques are model-specific. But the fundamentals work on every model: GPT, Claude, Gemini, Llama, Qwen, DeepSeek. These are the principles that transfer no matter what you're using.
Be Specific
The single most important rule. Vague prompts get vague results on every model.
❌ "Help me with my code"
✅ "This Go function should return the sum of even numbers in the slice,
but it returns 0 for [2, 4, 6]. Find the bug."
❌ "Write a good email"
✅ "Write a 3-sentence email declining a meeting on Thursday.
Tone: professional but friendly. Suggest rescheduling to next week."

Specificity means telling the model exactly what you want, in what format, with what constraints.
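To make the first example concrete: the buggy function that prompt describes might look like the sketch below. This is a hypothetical reconstruction, and the bug shown is just one plausible cause of the behavior the prompt reports.

```go
package main

import "fmt"

// sumEvens is a hypothetical version of the function from the prompt above.
// The bug: it tests n%2 == 1 (odd) instead of n%2 == 0 (even), so for an
// all-even slice like [2, 4, 6] nothing is added and it returns 0.
func sumEvens(nums []int) int {
	sum := 0
	for _, n := range nums {
		if n%2 == 1 { // BUG: should be n%2 == 0
			sum += n
		}
	}
	return sum
}

func main() {
	fmt.Println(sumEvens([]int{2, 4, 6})) // prints 0, expected 12
}
```

A specific prompt like the one above hands the model exactly this context: the intent, the observed behavior, and the failing input.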
Structure Your Input
Models process text sequentially. Clear structure helps every model parse your intent faster and more accurately.
Task: Review this code for bugs.
Language: Go
Context: This is a rate limiter for an HTTP API.
Focus: Thread safety and edge cases.
Code:
func (r *RateLimiter) Allow() bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	now := time.Now()
	if now.Sub(r.lastReset) > r.window {
		r.count = 0
		r.lastReset = now
	}
	if r.count >= r.limit {
		return false
	}
	r.count++
	return true
}

Labels like "Task:", "Context:", and "Code:" work universally. The model knows exactly what each section is and how to use it.
Constrain the Output
Tell the model what to produce AND what not to produce.
Respond with:
- A one-line summary of the bug
- The fixed code
- Nothing else. No explanations, no alternatives.

Without constraints, models ramble. Every model does this. Explicit constraints fix it across the board.
Show, Don't Tell
One example is worth 100 words of description. This works on every model because the model infers the pattern from your examples.
Convert these to slug format:
"Hello World" → "hello-world"
"My First Post!" → "my-first-post"
"What is AI?" → "what-is-ai"
Now convert: "Go Concurrency Patterns"

The model sees the pattern and follows it. No need to explain "lowercase, replace spaces with hyphens, remove punctuation." The examples say it all.
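The rule the model infers from those three examples can be written down explicitly. A minimal Go sketch of that inferred rule (lowercase, spaces to hyphens, punctuation dropped):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// slugify applies the pattern the examples above demonstrate:
// lowercase everything, turn spaces into hyphens, drop punctuation.
func slugify(s string) string {
	var b strings.Builder
	for _, r := range strings.ToLower(s) {
		switch {
		case unicode.IsLetter(r) || unicode.IsDigit(r):
			b.WriteRune(r)
		case r == ' ':
			b.WriteRune('-')
		}
	}
	return b.String()
}

func main() {
	fmt.Println(slugify("Go Concurrency Patterns")) // go-concurrency-patterns
}
```

Notice how much longer the code is than the three examples. That's the point: for the model, the examples alone carry the full specification.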
One Task Per Prompt
Models handle one clear task better than three bundled together.
❌ "Summarize this article, translate it to Spanish,
and suggest 5 tweet-length quotes from it."
✅ Three separate prompts:
1. "Summarize this article in 3 sentences."
2. "Translate this summary to Spanish."
3. "Extract 5 tweet-length quotes (under 280 chars) from this article."

Bundled tasks lead to one being done well and the others being rushed or incomplete. Split them.
Give Context, Not Everything
Models have limited attention. More context isn't always better. In fact, too much context dilutes the model's focus.
❌ Pasting your entire 2000-line file + "find the bug"
(Model tries to consider everything, gets confused or focuses on the wrong part)
✅ Pasting the 30-line function + the error message + what you expected
(Model focuses on exactly what matters)

The rule: include everything the model needs to answer correctly, exclude everything it doesn't.
Use Roles Effectively
Every model supports system/user roles. Use them consistently:
System: WHO the model is and HOW it should behave
User: WHAT you want it to do right now

System: "You are a senior Go developer. You give concise code reviews.
You focus on bugs and performance, not style."
User: "Review this function: [code]"

The system prompt persists across the conversation. The user prompt is the current task.
The Universal Prompt Template
This structure works on any model:
- Role/Context — who are you, what's the situation
- Task — what exactly to do
- Format — how to structure the output
- Constraints — what to avoid
- Examples — if needed
- Input — the actual data to process
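Because the structure is fixed, the six parts above can be assembled mechanically. A minimal Go sketch; the function name and layout here are illustrative, not a standard:

```go
package main

import (
	"fmt"
	"strings"
)

// buildPrompt assembles the six template parts in order, skipping any
// that are empty, so the same helper works for simple and full prompts.
func buildPrompt(role, task, format, constraints, examples, input string) string {
	sections := []string{role, task, format, constraints, examples, input}
	var kept []string
	for _, s := range sections {
		if s != "" {
			kept = append(kept, s)
		}
	}
	return strings.Join(kept, "\n")
}

func main() {
	p := buildPrompt(
		"You are a technical writer for developer documentation.",
		"Task: Write a description for this API endpoint.",
		"Format:\n- One sentence summary\n- Parameters table\n- Example request and response",
		"Constraints:\n- No marketing language\n- Keep under 200 words",
		"", // no examples needed for this task
		"Endpoint: POST /api/users",
	)
	fmt.Println(p)
}
```
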
Example:
You are a technical writer for developer documentation.
Task: Write a description for this API endpoint.
Format:
- One sentence summary
- Parameters table (name, type, required, description)
- Example request and response
Constraints:
- No marketing language
- Assume the reader knows HTTP basics
- Keep under 200 words
Endpoint: POST /api/users
Creates a new user account. Accepts name, email, and password.
Returns the created user object with an ID.

Iteration, Not Perfection
Your first prompt won't be perfect. That's fine. Iterate:
- Write a prompt
- See the output
- Identify what's wrong (too long? wrong format? missing detail?)
- Add a constraint or example to fix it
- Repeat
Attempt 1: "Summarize this article"
→ Too long, includes opinions
Attempt 2: "Summarize this article in 3 bullet points. Facts only, no opinions."
→ Good length, but too technical for the audience
Attempt 3: "Summarize this article in 3 bullet points.
Facts only. Write for a non-technical audience."
→ Perfect.

Each iteration adds one constraint. Don't rewrite the whole prompt. Just add what's missing.
Key Takeaways
- Be specific. Vague prompts get vague results on every model.
- Structure input with clear labels (Task, Context, Code, Format).
- Constrain output. Tell the model what NOT to do.
- Show examples instead of describing formats in words.
- One task per prompt. Split complex requests into separate calls.
- Give relevant context only. More isn't always better.
- Use the universal template: Role → Task → Format → Constraints → Examples → Input.
- Iterate by adding constraints, not rewriting from scratch.