Module 1 of 9

Module 1: What Is Generative AI?

Introduction

Before we start — a note on pacing. This course covers a lot. You’re not expected to retain it all on the first read. AI fluency comes from running these patterns on real work, not memorizing them. Pick what sticks, build the habit on one or two pieces, and come back for the rest when the next problem calls for it. The Practice Projects at the end are designed to help you do exactly that.

Why this matters

Most people use AI without understanding what it actually is. They treat it like a search engine, a magic 8-ball, or a person. None of those mental models work well. If you understand what AI actually does — and what it doesn’t — you’ll get dramatically better results from every interaction.

The four pillars of AI literacy

AI literacy is four things. Every module in this course builds toward one of them:

1. A correct mental model of how AI works. Not magic. Not a person. A pattern-prediction machine that’s only as good as the prompt you give it.

2. The ability to write clear specifications. Prompts as specs. Five-part formula. Iterate to refine.

3. A current map of the tool landscape. Which tool wins which job. When to pay for a wrapper. When to stick with a frontier model.

4. The judgment to verify and challenge AI output. Confidence ≠ correctness. Verification is your job. Always ask what to double-check.

This is what separates someone who uses AI from someone who is fluent in AI. The first group is everywhere now. The second group is rare — and getting hired, promoted, and trusted to do real work.

What AI is (and isn’t)

Generative AI is a class of software that creates new content — text, images, audio, video — based on patterns learned from massive amounts of training data.

It is not:

- a search engine looking up stored answers
- a database of verified facts
- a person who understands what you mean

It is:

- a pattern-prediction machine: it generates the most plausible continuation of whatever you give it, based on patterns in its training data

LLMs vs diffusion models

Two main families of generative AI power almost everything you’ll use:

Large Language Models (LLMs) — text generation. They write by predicting the next word, one at a time (more on this below).

Diffusion Models — image and video generation. They start from random noise and refine it, step by step, into a coherent image.

You don’t need to know the math. You need to know that they’re fundamentally different processes — and that’s why text AI and image AI behave so differently.
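The contrast between the two processes can be caricatured in a few lines. These are toy stand-ins, not real model code — the "model" in each case is just a plain function passed in:

```python
def llm_style(prompt, pick_next_token, steps):
    """Autoregressive (LLM-like): extend the output left to right,
    one token at a time, each choice conditioned on everything so far."""
    out = list(prompt)
    for _ in range(steps):
        out.append(pick_next_token(out))
    return out

def diffusion_style(noise, denoise_step, steps):
    """Diffusion-like: start from pure noise and refine the WHOLE
    output a little on every step until a coherent result emerges."""
    image = noise
    for _ in range(steps):
        image = denoise_step(image)
    return image

# Toy usage: the "denoiser" here just halves every pixel value each step.
print(llm_style(["a"], lambda out: "x", steps=3))
print(diffusion_style([16, 8], lambda img: [v // 2 for v in img], steps=3))
```

The shape of each loop is the point: one builds a sequence piece by piece and never revises it; the other revises the entire output on every pass.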

How prediction works

When you type a question into a Large Language Model like ChatGPT, the model isn’t “thinking” about your question. It’s calculating: given all the words that came before, what’s the most likely next word (strictly, the next token — a word or word fragment)? Then it does that again. And again. One word at a time, building a response.
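That loop can be sketched with a toy vocabulary and hand-made probabilities. Everything in this table is invented for illustration — real models learn probabilities over tens of thousands of tokens:

```python
# Invented next-word probabilities, keyed by the words seen so far.
NEXT_WORD = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "cat", "sat"): {"down": 0.9, "up": 0.1},
}

def generate(prompt, max_words=3):
    words = list(prompt)
    for _ in range(max_words):
        options = NEXT_WORD.get(tuple(words))
        if not options:  # no pattern learned for this context
            break
        # Greedy decoding: always append the single most probable word.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate(("the",)))  # -> "the cat sat down"
```

Notice there is no step where the code checks whether "the cat sat down" is *true* — only whether each word is *likely*. That gap is where the next section's problem comes from.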

This is why:

- AI sounds equally confident whether it’s right or wrong — the prediction optimizes for fluency, not accuracy
- small changes to your prompt can change the output dramatically
- the same prompt can produce different answers on different runs

Why hallucinations happen

A hallucination is when AI produces output that sounds confident but is factually wrong — a fake citation, a made-up statistic, a non-existent product feature.

Hallucinations happen because the model is generating plausible-sounding text, not retrieving facts. If a real fact and a fake fact would both sound equally plausible in context, the model might generate either one.
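A toy illustration of why this happens: if you score continuations only on how typical they sound, a fabricated detail can score as high as a true one. The scores below are invented for the example:

```python
# Hypothetical plausibility scores for two continuations of
# "The paper was published in ..." — both read as fluent English,
# so a system optimizing plausibility alone cannot separate them.
continuations = {
    "Nature in 2019": 0.48,   # true (in this toy example)
    "Science in 2018": 0.47,  # fabricated, but equally fluent
}

best = max(continuations, key=continuations.get)
# The margin is tiny; sampling randomness or a slightly different
# prompt could flip the choice — and that flip is a hallucination.
print(best)
```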

How to handle hallucinations:

- Verify anything that matters — names, numbers, citations, dates — against a source you trust
- Ask the model what you should double-check
- Treat confidence as tone, not evidence: confidence ≠ correctness

Key takeaways

- Generative AI creates new content by predicting patterns learned from training data — it isn’t searching, remembering, or thinking
- LLMs generate text one token at a time; diffusion models generate images through a fundamentally different process
- Hallucinations are a by-product of plausibility-driven generation, so verification is always your job

Quick Check

1. How is generative AI best described?

2. Which family of model is primarily used for text generation?

3. When ChatGPT writes a response, what is it actually doing under the hood?

4. What is a "hallucination" in AI output?

5. What is the correct way to respond to a confident-sounding AI answer?