Module 6 of 9

Module 6: Research and Critical Thinking

Why this matters

AI is an incredible research assistant — and a dangerous one if you trust it blindly. This module covers how to use AI for research the right way: as a starting point that you verify, refine, and challenge.

Deep research workflows: Claude + Perplexity

A two-tool workflow that beats either tool alone:

  1. Start with Perplexity — for grounding in current, cited information

    • “What are the current best practices for [topic]?”
    • “Compare [option A] and [option B] in 2026.”
    • Save the sources Perplexity cites.
  2. Move to Claude — for synthesis, analysis, and applied thinking

    • Paste Perplexity’s output (or upload as a document)
    • Ask Claude to: identify gaps, challenge assumptions, apply the info to your specific situation, draft something based on it

This combo gives you accurate raw material (Perplexity) plus deep applied thinking (Claude). Either tool alone leaves money on the table.
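The hand-off from step 1 to step 2 can be wired into your own tooling as a small prompt builder. This is a sketch, not a fixed recipe: the function name, wording, and structure are all illustrative assumptions.

```python
def build_claude_prompt(perplexity_output: str, situation: str) -> str:
    """Wrap Perplexity's cited research in a synthesis prompt for Claude.

    The wording and section markers here are illustrative, not a fixed API.
    """
    return (
        "Below is research gathered from Perplexity, with citations.\n\n"
        f"--- RESEARCH ---\n{perplexity_output}\n--- END RESEARCH ---\n\n"
        f"My situation: {situation}\n\n"
        "Please: 1) identify gaps in this research, "
        "2) challenge its assumptions, "
        "3) apply it to my situation, and "
        "4) draft a short plan based on it."
    )
```

However you phrase it, the structure is the point: cited raw material first, your context second, and an explicit list of the analysis steps you want.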

Iterative refinement

The first AI response is rarely the best one. Good researchers treat AI like a thinking partner — they push back, ask for more, narrow the focus.

Useful follow-up patterns:

Going deeper:

  • “Expand on point 2 — what’s the evidence behind it?”
  • “What are the edge cases this advice doesn’t cover?”

Adjusting context:

  • “Redo this for a small team with no budget.”
  • “How does this change for my industry?”

Testing the answer:

  • “What’s the strongest argument against this?”
  • “Where would an expert disagree with you?”

Synthesizing:

  • “Summarize the key trade-offs in a table.”
  • “Combine everything above into a one-page brief.”

The pattern: don’t accept the first answer. Refine until it’s actually useful.

Source comparison and bias detection

When AI gives you research, ask:

  • Where does this information come from, and how recent is it?
  • Does it represent more than one perspective?
  • Who benefits if I accept this framing?
  • What counterexamples or dissenting views are missing?

For Perplexity specifically: look at the cited sources. If they’re all from the same publisher, the same political perspective, or the same time period, that’s a signal. Get more variety.
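A quick way to spot the “same publisher” signal is to tally cited URLs by domain. A minimal sketch, assuming you have the citation URLs as a plain list; the function names and the 60% threshold are arbitrary choices for illustration:

```python
from collections import Counter
from urllib.parse import urlparse

def source_diversity(urls: list[str]) -> dict[str, int]:
    """Count cited sources per domain; one dominant domain is a bias signal."""
    domains = [urlparse(u).netloc.removeprefix("www.") for u in urls]
    return dict(Counter(domains))

def looks_one_sided(urls: list[str], threshold: float = 0.6) -> bool:
    """Flag the list if any single domain supplies more than `threshold` of citations."""
    counts = source_diversity(urls)
    return max(counts.values()) / len(urls) > threshold
```

This only catches publisher concentration, not shared political perspective or time period — those still need a human look at the sources themselves.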

The verification habit

A powerful prompt addition you can build into your habits:

“Give me the answer, then list two things I should double-check before trusting it.”

This forces the model to surface its own weak points. It can’t pretend to be 100% confident — it has to flag uncertainty.
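If you prompt models programmatically, the habit is easy to make automatic: append the self-check request to every important prompt. A tiny sketch; the constant name and wording are illustrative, not a standard.

```python
VERIFY_SUFFIX = (
    "\n\nGive me the answer, then list two things I should "
    "double-check before trusting it."
)

def with_verification(prompt: str) -> str:
    """Append the self-check request so the prompt surfaces its own weak points."""
    return prompt.rstrip() + VERIFY_SUFFIX
```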

Other variations:

  • “Rate your confidence in each claim, and say why.”
  • “Which part of this answer are you least sure about?”
  • “What source would confirm or refute this?”

A truth principle: AI confidence is a writing style, not a measurement. The model sounds equally confident whether it’s right or hallucinating. Build the verification habit into every important task.

Try it: spot the hallucination

Same prompt, three different AI outputs. One of them is confidently wrong. Pick the hallucinating one and notice the tells.

Prompt

Tell me three facts about penguins.

Output A: “Penguins are flightless birds, and almost all species live in the Southern Hemisphere. They use their wings as flippers to swim.”

Output B: “Emperor penguins are the largest living penguin species. Males incubate a single egg through the Antarctic winter, balancing it on their feet.”

Output C: “Penguins are native to the Arctic, where they nest alongside polar bears. Their thick blubber protects them from the North Pole’s cold.”

The tell: Output C reads just as fluent and confident as the others, but wild penguins live almost entirely in the Southern Hemisphere — penguins and polar bears never meet in the wild. Confidence is the style, not the evidence.

Key takeaways

  • Use Perplexity for current, cited raw material; use Claude for synthesis and applied thinking.
  • Don’t accept the first answer — refine with follow-ups until it’s actually useful.
  • Check source variety: uniform publishers, perspectives, or time periods are a bias signal.
  • AI confidence is a writing style, not a measurement — build verification into every important task.

Quick Check

1. In the Claude + Perplexity research workflow, what does each tool do?

2. How should you handle the first AI answer to a research question?

3. What follow-up prompt can you use to surface a model’s uncertainty?

4. If every source Perplexity cites comes from the same publisher, what should you do?

5. What is the “truth principle” from this module?