Why this matters
AI is an incredible research assistant — and a dangerous one if you trust it blindly. This module covers how to use AI for research the right way: as a starting point that you verify, refine, and challenge.
Deep research workflows: Claude + Perplexity
A two-tool workflow that beats either tool alone:
- Start with Perplexity — for grounding in current, cited information
- “What are the current best practices for [topic]?”
- “Compare [option A] and [option B] in 2026.”
- Save the sources Perplexity cites.
- Move to Claude — for synthesis, analysis, and applied thinking
- Paste Perplexity’s output (or upload as a document)
- Ask Claude to identify gaps, challenge assumptions, apply the info to your specific situation, or draft something based on it
This combo gives you accurate raw material (Perplexity) plus deep applied thinking (Claude). Either tool alone leaves money on the table.
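The hand-off in step 2 is mechanical enough to template. Here is a minimal Python sketch of how you might wrap Perplexity's output into a synthesis prompt for Claude; the function name and wording are illustrative, not any real API:

```python
def build_synthesis_prompt(perplexity_output: str, situation: str) -> str:
    """Wrap Perplexity's cited research in a synthesis request for Claude."""
    return (
        "Here is research gathered from Perplexity, including its sources:\n\n"
        f"{perplexity_output}\n\n"
        "Based on this material:\n"
        "1. Identify gaps in the research.\n"
        "2. Challenge any questionable assumptions.\n"
        f"3. Apply the findings to my situation: {situation}\n"
        "4. Draft a short recommendation based on it.\n"
    )

prompt = build_synthesis_prompt(
    "Best practice summary... [Source: example.com]",
    "a two-person consultancy with no marketing budget",
)
```

You then paste the resulting string (or the saved sources document) into Claude as one message.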
Iterative refinement
The first AI response is rarely the best one. Good researchers treat AI like a thinking partner — they push back, ask for more, narrow the focus.
Useful follow-up patterns:
Going deeper:
- “Go deeper on point 2.”
- “Provide sources or data for this claim.”
- “What are the strongest counterarguments?”
Adjusting context:
- “Now consider this from a beginner’s perspective.”
- “Apply this to a small business context.”
- “How would this change in three years?”
Testing the answer:
- “What might be biased or incomplete here?”
- “Play devil’s advocate.”
- “What would an expert disagree with in this answer?”
Synthesizing:
- “Combine all of the above into one takeaway.”
- “Create a comparison table.”
- “Summarize this at a 5th-grade level.”
The pattern: don’t accept the first answer. Refine until it’s actually useful.
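If you reuse these follow-ups a lot, it can help to keep them in one place. A tiny sketch, with a hypothetical helper and only a few of the patterns above as examples:

```python
# A small library of reusable follow-up prompts, keyed by intent.
FOLLOW_UPS = {
    "deepen": "Go deeper on point {n}.",
    "test": "What might be biased or incomplete here?",
    "synthesize": "Combine all of the above into one takeaway.",
}

def follow_up(kind: str, **kwargs) -> str:
    """Look up a follow-up pattern and fill in any placeholders."""
    return FOLLOW_UPS[kind].format(**kwargs)

follow_up("deepen", n=2)  # "Go deeper on point 2."
```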
Source comparison and bias detection
When AI gives you research, ask:
- “What sources support this?”
- “What are the limitations of these sources?”
- “Is there a contradicting view?”
- “Who funds or publishes the dominant sources here?”
For Perplexity specifically: look at the cited sources. If they’re all from the same publisher, the same political perspective, or the same time period, that’s a signal. Get more variety.
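The "same publisher" signal is easy to check by hand, and you can even rough it out in code if you paste the citation URLs into a script. A heuristic sketch, assuming the helper name and the idea of a diversity ratio are my own, not a Perplexity feature:

```python
from urllib.parse import urlparse

def citation_diversity(urls: list[str]) -> float:
    """Fraction of distinct publishers among cited URLs (1.0 = all different)."""
    domains = {urlparse(u).netloc.removeprefix("www.") for u in urls}
    return len(domains) / len(urls) if urls else 0.0

citations = [
    "https://www.example.com/a",
    "https://example.com/b",
    "https://other.org/c",
]
citation_diversity(citations)  # 2 distinct publishers across 3 citations
```

A low ratio doesn't prove the answer is wrong; it just means you're hearing one perspective and should ask for more variety.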
The verification habit
A powerful prompt addition you can build into your habits:
“Give me the answer, then list two things I should double-check before trusting it.”
This forces the model to surface its own weak points. It can’t pretend to be 100% confident — it has to flag uncertainty.
Other variations:
- “What’s the strongest reason this answer might be wrong?”
- “If you had to bet against this, what would you say?”
- “What would change this conclusion?”
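If you use this habit often, it helps to bake the suffix in once so every research question carries it. A tiny sketch; the names are illustrative:

```python
VERIFY_SUFFIX = (
    "\n\nGive me the answer, then list two things I should "
    "double-check before trusting it."
)

def with_verification(question: str) -> str:
    """Append the verification-habit suffix to any research question."""
    return question.rstrip() + VERIFY_SUFFIX

with_verification("What are the current best practices for email deliverability?")
```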
A truth principle: AI confidence is a writing style, not a measurement. The model sounds equally confident whether it’s right or hallucinating. Build the verification habit into every important task.
Try it: spot the hallucination
Same prompt, three different AI outputs. One of them is confidently wrong. Pick the hallucinating one and notice the tells.
Tell me three facts about penguins.
Key takeaways
- Combine Perplexity (grounding) with Claude (synthesis) for serious research
- Iterate — first answers are starting points, not endings
- Always check sources, especially when they all agree
- Build verification into your prompts: “list two things to double-check”
Quick Check
1. In the Claude + Perplexity research workflow, what does each tool do?
2. The best way to handle the first AI answer to a research question is to:
3. A useful follow-up prompt to surface model uncertainty is:
4. If every source Perplexity cites comes from the same publisher, you should:
5. The "truth principle" from this module is: