- 1 Build Your Repeatable Email Prompt
Objective. Build one reusable prompt template for the email you write most, use it five times in a week, and track what changed.
Concepts to keep in mind
- Templates beat one-off prompts. The lift comes from not having to think about prompt structure every time you write a routine email.
- Pick your most common email type first — the one you write 3+ times a week. Status updates, declines, follow-ups, customer replies. The frequency is what makes the template pay back.
- Bullet-to-draft is the workhorse pattern. You think in bullets, AI expands to prose, you edit. Faster than dictating from blank.
- Track edit percentage. If you keep less than ~50% of the AI draft, the template needs work. If you keep 90%+, the AI is leaning on a pattern that may eventually feel off.
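Edit percentage doesn't have to be a gut feel. A minimal sketch of one way to estimate it, using Python's standard `difflib` to compare the AI draft against what you actually sent (the function name and sample strings are illustrative, not from any particular tool):

```python
import difflib

def kept_percentage(ai_draft: str, final_email: str) -> float:
    """Rough 0-100 estimate of how much of the AI draft survived editing."""
    ratio = difflib.SequenceMatcher(None, ai_draft, final_email).ratio()
    return round(ratio * 100, 1)

draft = "Hi team, here is the weekly status update on the migration."
sent = "Hi team, here is this week's status update on the migration."
# Below ~50: rework the template. Above ~90: check you're not on autopilot.
print(kept_percentage(draft, sent))
```

This is a character-level similarity, so treat it as a trend line across the week, not a precise measurement.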
Scenario
It’s Monday morning and you have six emails to write before lunch. Most of them follow patterns you’ve written hundreds of times. Today you build the template that drafts them for you for the rest of the year.
Prompts to try
The template skeleton — fill the bracketed parts once for your most common email type, then save it:
```
[Context] I'm a [your role] at [type of org]. I write emails of this type [N] times per week to [audience].

[Specific Details] The bullets I want turned into a draft:
- [bullet 1]
- [bullet 2]
- [bullet 3]

[Intent] Turn these bullets into a [warm / neutral / direct] email that I can send with light editing.

[Desired Format]
- Subject line
- 1 short greeting line
- 2–4 short paragraphs (no walls of text)
- Closing line that ends with a clear next step (or no next step needed, if it's a status update)

[Constraints]
- Tone: [your default — e.g., warm but efficient]
- No filler ("I hope this email finds you well", "Just wanted to reach out")
- Do not invent context, dates, or commitments not in my bullets
- Under [N] words
```
Tone-shift variant — append this when the situation calls for it:
```
Rewrite this draft in a [more direct / softer / more formal] tone. Do not change the substance, only the register.
```
Reply pattern — for inbound emails:
```
Below is an email I received and the bullets of my reply. Draft my reply.

[Paste their email]

My bullets:
- [bullet]
- [bullet]

[Same Format/Constraints as above]
```
Deliverable
By Friday:
- Your finalized template, saved somewhere you'll actually find it (Notes app, Notion, a `prompt.txt` on the desktop)
- A 3-bullet retro: how much time you saved per email (rough), edit % across the 5 uses, whether anyone noticed the difference
Signs of success
- You used the template at least five times this week without re-reading it.
- Your edit percentage settled into a range (likely 60–80%). The AI drafts are useful but you’re still editing — that’s the right place.
- One coworker noticed something about your emails this week. Could be good (“you’re faster”) or revealing (“you wrote like a robot once”) — either way you got a signal.
- You already know which second email type you’re going to template next week. The habit is taking.
Deliverable. A finalized email prompt template (plain text or note-app entry) plus a 3-bullet end-of-week retro.
- 2 Meeting Prep and Notes Workflow
Objective. Apply pre-meeting and post-meeting prompts to every meeting for one week and document what changed.
Concepts to keep in mind
- The prep is a compounding edge. Walking into a meeting with a one-page brief beats walking in cold every time — and AI generates the brief in 90 seconds.
- Action-item extraction catches things you’d miss. Especially in meetings you ran — your attention was on running, not capturing.
- Failure is data. Note where the prompts produce bad output. That’s where the next iteration improves.
- Don’t dump confidential meeting notes into a personal AI account. If notes contain client names, salaries, internal numbers — use whatever your org has approved, or sanitize first.
Scenario
Your week has six meetings. Some are critical, some are recurring slogs. Today you set up two prompts you’ll run on each — one before, one after — and see what changes.
Prompts to try
Pre-meeting prep:
```
[Context] I have a meeting in [N] minutes about [topic] with [who's there + their role]. The goal of the meeting is [stated goal] and what I personally need out of it is [your goal — may differ].

[Specific Details] Background I'd want walking in:
- [bullet — e.g., last time we met they said X]
- [bullet — e.g., the related project status is Y]
- [bullet — anything else relevant]

[Intent] Prepare me to walk in sharp. Surface what I might be missing.

[Desired Format]
- One-sentence framing of what this meeting actually is (not what the calendar invite says)
- Three questions I should be ready to answer
- Three questions I should consider asking
- One thing I should NOT say (based on the goal)
- A 2-line opener I could use if I'm starting the meeting

[Constraints] Honest, not flattering. If you think my goal is wrong, say so.
```
Post-meeting action items:
```
[Context] I just left a meeting about [topic] with [attendees]. Below are my notes (rough).

[Specific Details] [Paste raw notes — typos, fragments, all of it]

[Intent] Extract the actual decisions and action items. I'll forget half by tomorrow if you don't.

[Desired Format]
- Decisions made (bulleted; flag any that felt soft as "[unconfirmed]")
- Action items: who, what, by when (if a "when" wasn't stated, mark "[no due date set]")
- Things mentioned that *should* have been action items but weren't (the "loose ends" list)
- One question I should follow up on this week

[Constraints] Don't invent action items. If something is ambiguous in my notes, mark it [unclear] rather than guessing.
```
Deliverable
By Friday, one page covering:
- How many meetings you ran the prep prompt on (target: all of them)
- One specific moment where the prep changed your behavior in a meeting
- One action item the post-meeting prompt caught that you would’ve forgotten
- One place a prompt failed (output was off, hallucinated, or just unhelpful) — and what you’d change
Signs of success
- At least one meeting felt different because you walked in prepared.
- The action-item extraction caught something real. (If it didn’t, your notes were too thin — capture more next time, or run it on a meeting transcript.)
- You can name the failure mode of each prompt. (“Pre-meeting tends to over-prepare for collaborative meetings; works best for ones with a stated decision.”) That’s the iteration muscle.
- You’re already adjusting one of the two prompts for next week.
Deliverable. A short writeup (one page max) covering what you changed, what worked, what you'd modify next iteration.
- 3 Research Synthesis on a Real Decision
Objective. Run a multi-source synthesis prompt on a real decision you're working through, verify two of AI's claims, and decide.
Concepts to keep in mind
- AI synthesis is fastest where the sources disagree. That’s where the lift is — surfacing contradictions, not summarizing consensus.
- Verify the claims that would change your decision. You can’t verify everything; pick the load-bearing ones.
- Hallucinations are confident. If a claim feels surprising and well-supported, that’s exactly when to verify.
- The decision is yours. AI synthesizes inputs; humans weigh tradeoffs. Don’t outsource the call.
Scenario
You’re deciding something this month — a vendor, a new tool, a hire, a strategy change, a personal financial move. You’ve been gathering inputs but haven’t synthesized them. Today you do, with AI helping.
Source material
Pick one real decision in front of you. Then gather 3–5 inputs:
- Articles, reports, vendor docs, transcripts of conversations with people who’ve decided this before
- Internal docs (sanitize first if confidential)
- Reddit / forum threads on the same decision
- Your own notes / pros-cons list
Prompts to try
Multi-source synthesis:
```
[Context] I'm deciding [the decision], with criteria [your criteria — e.g., total cost, time to value, risk, fit with existing tools]. Below are [N] inputs I've gathered.

[Specific Details]
Source 1: [title or one-line summary]
[paste content or paraphrase]
Source 2: [title]
[content]
[... etc]

[Intent] Synthesize across these sources. I want to see *agreements* and *disagreements* clearly, not blended.

[Desired Format]
- Where the sources agree: 3–5 bullets
- Where they disagree: 2–4 disagreements, framed as "Source X says A, Source Y says B"
- Claims that would change my decision if true: 2–3, each marked [verifiable] with one specific way to verify
- The two open questions the inputs don't answer
- Tentative recommendation with one line of reasoning (you can be wrong)

[Constraints] Do not blend disagreements into "both/and." Surface them. If a source's claim contradicts another, name it.
```
Verification follow-up (use after picking which claim to verify):
```
I'm verifying this claim from Source X: "[paste the exact claim]"

Help me design a 5-minute verification: what to search for, where to look, what evidence would confirm or refute it.
```
Explain-like-I'm-reviewing (when you want a reality check):
```
I'm leaning toward [tentative decision]. Argue the strongest case *against* it. Use the sources I gave you. Don't be polite — I need the actual counterargument, not a hedge.
```
Deliverable
A short note (one page) with:
- The decision and your criteria
- The synthesis output (agreements / disagreements / verifiable claims)
- The two claims you actually verified, and what you found
- Your final decision
- One sentence: did AI’s synthesis match your own reading, or did it shift you?
Signs of success
- AI surfaced at least one disagreement you hadn’t fully registered.
- You verified at least two claims yourself. At least one of them was off in some way (overstated, missing context, slightly wrong number).
- The decision is yours, not AI’s — but the synthesis made it faster to make.
- You’d run this pipeline again on the next real decision instead of stewing on it for two weeks.
- You don’t trust AI synthesis blindly going forward. (Hopefully: yes, that’s the goal.)
Deliverable. Your final decision plus a short note on how AI shaped it (or didn't).
- 4 Build Your Role-Specific Prompt Set
Objective. Build three prompts customized for your role, use them across one workweek, and refine based on what didn't work.
Concepts to keep in mind
- Generic prompts are throw-away. Role-specific prompts are tools. The lift comes from baking your domain context in once, so you don’t rewrite it each time.
- Reps reveal failure modes. A prompt looks great on paper, then fails the third time you use it on a real input. That’s what the week of use is for.
- Edit-pass prompts are underrated. “Rewrite this in the voice of [you]” is often more useful than another draft prompt.
- The teach-a-coworker test. If you couldn’t hand the prompt to someone in your role and have it work for them, it’s not a tool yet — it’s still a one-off.
Scenario
Pick three of your most repeated tasks at work. Build a prompt for each. Use them all week. By Friday you’ll have three tools you’ll use forever (or know exactly why they didn’t survive contact with reality).
Source material
Pick three tasks you do repeatedly, customized for your role. Examples by track:
- PM: writing a launch brief · distilling a customer interview into themes · turning a design doc into a status update
- Data: turning a plain-English question into proper SQL · writing a chart caption that explains the takeaway · reviewing your own analysis for missed angles
- Customer-facing: drafting a support reply that’s empathetic but firm · summarizing a long ticket thread · writing a follow-up after a tough call
- Ops: writing an incident summary · turning a checklist into an SOP doc · drafting a vendor request
(Or your own — three tasks that recur in your specific role.)
Prompts to try
The role-prompt template — fill the bracketed parts for each of your three tasks:
```
[Context] I'm a [role] at [type of org]. I do this task [N] times per [week/month]. The audience for the output is [who reads it]. The thing they care most about is [reader's actual need, not the formal goal].

[Specific Details] The input I'm working from:
[paste the input — meeting notes, raw data, customer message, half-formed idea, etc.]

[Intent] Produce [the artifact this task creates: brief / reply / SOP / status update / etc.]

[Desired Format] [Be specific to the artifact. e.g., for a status update: "Subject + 3 bullets + 1-line risk callout"]

[Constraints]
- Tone: [your default]
- [Anything specific to your role — e.g., "no marketing-speak," "must be skimmable in 30 seconds," "always include the open questions section"]
- Do not invent facts not in the input. If something is missing, list it as "missing" rather than filling it in.
```
Edit-pass prompt — the second-most-used in any role kit:
```
Rewrite the draft below in the voice of [your default tone — e.g., "warm but efficient," "direct but not curt," "scholarly but readable"]. Do not change the substance, only the register. Cut filler. If a sentence is doing two jobs, split it.

[paste draft]
```
Self-review prompt — for catching your own blind spots:
```
I just drafted [the artifact]. Below is the draft. Below that is the audience and what they need.

[Draft]
[Audience and need]

What did I miss? What would the audience push back on? What's one sentence I should cut, and why?
```
Deliverable
By Friday, save:
- Three prompts (in a Notes app, Notion, a `prompts.md` somewhere you'll find them)
- One-paragraph note on what you'd teach a coworker in your role: which prompt earned its keep, which needed surgery, which you'd skip teaching them entirely
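If the prompts live in a file, the bracketed parts can be filled in mechanically instead of by hand each time. A minimal sketch using Python's standard `string.Template`; the placeholder names and sample values here are illustrative, not prescribed by the exercise:

```python
from string import Template

# Trimmed role-prompt skeleton with $-placeholders standing in for the brackets.
# Placeholder names are invented for this example.
ROLE_PROMPT = Template(
    "[Context] I'm a $role at a $org. I do this task $freq times per week. "
    "The audience for the output is $audience.\n"
    "[Intent] Produce a $artifact.\n"
    "[Constraints] Tone: $tone. Do not invent facts not in the input."
)

filled = ROLE_PROMPT.substitute(
    role="product manager",
    org="mid-size SaaS company",
    freq="3",
    audience="the leadership channel",
    artifact="status update",
    tone="warm but efficient",
)
print(filled)
```

`substitute` raises a `KeyError` if you forget a placeholder, which doubles as a check that the template was filled completely.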
Signs of success
- All three prompts got used at least 3 times this week.
- One of them clearly survived (you’ll keep using it). One probably needed surgery (you rewrote it mid-week). One may not have survived — that’s fine, kill it.
- The “teach a coworker” note is concrete: you can name which prompt you’d hand them and why.
- The role-prompt template itself becomes the meta-tool: when a fourth recurring task shows up next week, you build prompt #4 from this template in five minutes.
Deliverable. Three finalized prompts saved where you'll find them, plus a one-paragraph note on what you'd teach a coworker in the same role.
- 5 Daily Triage System
Objective. Run AI through one capture surface (inbox, notes, scratchpad) twice a day for a week, sorting items into four options, and surfacing the highest-priority next action.
Concepts to keep in mind
- Triage is judgment with structure. AI doesn’t make the call — it forces you to label every item, which is the work most people avoid.
- Four options, not five. Big step / Little step / Keep / Delete. More categories = decision fatigue.
- The “next priority” call surfaces what matters. After labeling, AI proposes one priority. You override it if needed; the proposal alone breaks the freeze.
- Twice a day is the minimum cadence. Once a day = backlog grows between runs. Twice keeps the surface light.
- Volume of “Delete” is the most useful signal. It’s the quiet measure of how much noise you’ve been carrying.
Scenario
You have at least one place where stray items pile up — an inbox you don’t fully clear, a notes app, a scratchpad, a Slack DM with yourself. Today you set up the triage prompt; this week you live with it.
Source material
Pick one capture surface to start. Don’t try to triage three at once.
- Personal inbox (work or personal) — stuff you haven’t actioned but haven’t archived
- A notes app where stray ideas land
- A scratchpad / `.txt` file you dump into during the day
- A specific Slack channel that piles up
Prompts to try
The triage prompt (your reusable template):
```
[Context] I'm running a twice-a-day triage on my [capture surface]. I want every item sorted into one of four options, then a single highest-priority next action surfaced.

[Specific Details] Items currently sitting on the surface:
1. [item — paste subject line, snippet, or a one-line description]
2. [item]
3. [item]
[... up to all of them]

[Intent] For each item, return ONE of these labels and a one-line rationale:
1. **Big step** — a meaningful next action that would take more than 5 min and matters this week
2. **Little step** — a quick under-5-minute task to clear it
3. **Keep in [location]** — file it, no action needed
4. **Delete** — not worth holding

After labeling all items, return ONE recommended next action: the single highest-leverage thing on this list to do *now*. Pick from the Big Step or Little Step categories only.

[Desired Format] A table:
| # | Item (1-line) | Label | Rationale |
Then, below the table, the recommended next action with one sentence on why.

[Constraints]
- If an item is ambiguous, default to "Little step" or ask me one clarifying question. Don't guess.
- "Keep" must include where to file it. "Delete" must be confident — if it's borderline, choose "Keep" instead.
- The recommended next action cannot be a meta-task ("review your inbox") — it must be a concrete thing.
```
Override pattern — when AI's pick isn't right:
```
Override: [reason — e.g., "the recommended next action requires waiting on someone else; pick the next-best one I can act on alone"]
```
End-of-week pattern — Friday only:
```
This week I ran 10 triage sessions on [surface]. The labels distributed roughly: [count Big / Little / Keep / Delete]. What does that distribution tell you about my workload shape, and what should I change next week?
```
Deliverable
By Friday:
- Your finalized triage prompt template, saved
- Three-bullet retro:
- Surprise — what you noticed about your work that you didn’t expect (often the Delete count, sometimes the Little Step backlog)
- Change — one thing you’d modify in the prompt for next week
- Keep going? — yes/no/modified, with one line on why
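The Friday label counts are easy to tally if you jot down each session's labels as you go. A minimal sketch using Python's `collections.Counter`; the session data below is invented for illustration:

```python
from collections import Counter

# One list of labels per triage session; the data is invented for illustration.
sessions = [
    ["Big step", "Little step", "Delete", "Delete"],
    ["Little step", "Keep", "Delete"],
    ["Big step", "Little step", "Little step", "Delete"],
]

counts = Counter(label for session in sessions for label in session)
total = sum(counts.values())
for label in ("Big step", "Little step", "Keep", "Delete"):
    print(f"{label}: {counts[label]} ({100 * counts[label] / total:.0f}%)")
```

The resulting percentages are the "workload shape" the end-of-week prompt asks about, computed instead of eyeballed.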
Signs of success
- You ran it at least 8 times in 7 days. (Two skipped sessions is normal; more than that means the cadence is wrong, not the prompt.)
- The Delete column held a real count by midweek. The cleanup happened.
- At least once, the recommended next action was something you would not have picked on your own — and turned out to be right.
- You can name your workload shape (“mostly little steps,” “all big steps stacking up,” “I’m drowning in Keep”) in one sentence by Friday.
- You either keep it as is, modify the prompt, or kill it on purpose. Any of those three is a successful outcome.
Deliverable. Your finalized triage prompt template plus a 3-bullet end-of-week retro.