Why this matters
A good prompt is a specification. The clearer the spec, the better the output. This module gives you a five-part formula that works for any task, with any model.
The formula
Every strong prompt has five parts:
- Context — Who you are, what you’re doing, what the model needs to know
- Specific Details — The actual content of the task
- Intent — What outcome you want
- Desired Format — How the output should be structured
- Constraints — Length, tone, what to avoid
You don’t need every part for every prompt. But missing parts tend to be where output quality breaks down.
Walking through the formula
Bad prompt:
“Write a marketing email.”
Better prompt, with the formula applied:
[Context] I run a small mobile car detailing business in western North Carolina. My customers are mostly homeowners aged 35-65 who value convenience.
[Specific Details] I’m launching a winter package: undercarriage rust protection, interior deep clean, and headlight restoration for $199 (regular price $275).
[Intent] Write a marketing email to my existing customer list announcing this package.
[Desired Format] Subject line + 3 short paragraphs + a clear call-to-action button text.
[Constraints] Conversational tone. Under 200 words. No emojis. Avoid the word “exclusive.”
The second prompt produces email copy you can send. The first produces generic filler.
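If you reuse prompts in scripts or automations, the five parts map naturally onto a simple template. Here is a minimal sketch in Python; the `build_prompt` helper and its parameter names are illustrative, not part of the formula itself:

```python
def build_prompt(context="", details="", intent="", desired_format="", constraints=""):
    """Assemble a five-part structured prompt. Parts left empty are skipped,
    since not every prompt needs every part."""
    parts = [
        ("Context", context),
        ("Specific Details", details),
        ("Intent", intent),
        ("Desired Format", desired_format),
        ("Constraints", constraints),
    ]
    return "\n".join(f"[{label}] {text}" for label, text in parts if text)

# Example: the winter-package email from above, condensed.
prompt = build_prompt(
    context="I run a small mobile car detailing business in western North Carolina.",
    details="Winter package: rust protection, interior deep clean, headlight restoration, $199.",
    intent="Write a marketing email to my existing customer list announcing this package.",
    desired_format="Subject line + 3 short paragraphs + call-to-action button text.",
    constraints="Conversational tone. Under 200 words. No emojis.",
)
print(prompt)
```

The payoff of templating is repeatability: the same spec, filled with different details, produces consistent output across runs and tasks.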
Prompts as specifications, not conversations
A common mistake: treating AI like a coworker you can hint at, then iterate with. That works for casual use, but for anything you want to repeat or rely on, write the prompt as a specification.
A specification is something you could hand to ten different writers and get ten similar results. A casual conversation is something only you understand.
This shift, from "chatting with AI" to "writing specs for AI," is one of the highest-leverage mindset changes you can make.
Live example: vague to structured
Vague:
“Help me with my LinkedIn profile.”
Structured:
I’m a former Marine and current IT support engineer transitioning into AI engineering roles. I’ve built three AI projects, learned n8n automation, and run a local LLM lab.
Rewrite my LinkedIn headline to position me for remote AI engineer roles. Output 5 options, each under 150 characters. Use confident but not arrogant tone. Avoid buzzwords like “passionate” and “synergy.”
The first prompt gives you a template. The second gives you actually usable headlines.
Common mistakes and how to fix them
Mistake 1: Stacking contradictory tones
“Write something friendly, professional, brief, thorough, casual, and authoritative.”
The model will pick one and ignore the rest. Pick a clear tone.
Mistake 2: Asking for too many things at once
“Give me a marketing plan, a budget, a hiring plan, and a 5-year forecast.”
Break it into separate prompts. Each one gets sharper output.
Mistake 3: Vague success criteria
“Make it good.”
The model has no idea what “good” means to you. Spell it out: “good = under 200 words, no jargon, ends with a question.”
Mistake 4: Forgetting the audience
“Explain quantum computing.”
To whom? A child? A physicist? A CEO? Specify the audience and you’ll get the right level.
Mistake 5: Skipping context entirely
“Is this email good?”
Good for what? Sent to whom? With what goal? Context isn’t optional.
Key takeaways
- Five-part formula: Context, Specific Details, Intent, Desired Format, Constraints
- Treat prompts as specifications, not casual messages
- The clearer the spec, the more reliable the output
- Most bad output is the result of missing context, not bad models
Quick Check
1. Which of these is NOT one of the five parts of the Structured Prompt Formula?
2. "Treat prompts as specifications, not conversations" means:
3. Which of these is a clear specificity upgrade?
4. What's the issue with "Write something friendly, professional, brief, thorough, and authoritative"?
5. Most bad AI output comes from: