How to Make AI Stop Making Stuff Up
Teaching students to ask better questions in the age of prediction engines
I teach film and media for a living, which means I’m always asking students to look closely—not just at what a film shows, but at what it assumes. Lately, that same ethos has migrated into my teaching about AI. Students will come to office hours, sigh dramatically, and say, “I tried using AI the way you told us to… but it made up things that weren’t real.”
My first question is always: “Did you ask it why?”
That question usually startles them. It shifts the moment away from “the AI misbehaved” and back toward human agency. Because underneath all the hype, a language model is not a librarian, not an oracle, not a mind-reading assistant. It’s a pattern engine. If the pattern is unclear, it predicts anyway. Asking “why did you say that?” forces a reflective loop—one that highlights the limits of the tool and the responsibilities of the user.
And this is the part students rarely realize: when an AI hallucinates, the fault isn’t moral. It’s architectural. It’s doing exactly what it was built to do—keep generating text. The trick is to design the conditions in which it stops guessing and starts grounding itself in evidence. That’s where the humans come in.
Here are the habits I teach my students. These aren’t magic spells; they’re literacy skills. They help turn AI from an overconfident improviser into a useful research companion. Accuracy isn’t a personality trait; it’s a relationship between the question, the constraints, and the material you feed the machine.
1. Give the model something real to hold onto
Hallucinations happen when AI has to fill a void. Paste the excerpt, article, dataset, or scene description you want it to use. Then say:
“Use ONLY the text provided. If something isn’t in the text, say so.”
This one sentence eliminates most guesswork.
2. Shrink the playground
The broader the prompt, the wilder the predictions.
The narrower the domain, the cleaner the answer.
“Explain how Naruse uses space in When a Woman Ascends the Stairs” is tight.
“Tell me about character development” is an open field full of creative weeds.
Bounded questions yield bounded answers.
3. Make uncertainty acceptable
AI gets strange when it thinks you expect omniscience.
Give it permission to say “I don’t know.”
“If you’re not sure, just tell me.”
Instant improvement.
4. Ask it to check its own work
Models catch many of their own errors if you ask them to.
“Review your answer and remove anything not supported by the text.”
This second pass forces the model to align with evidence rather than vibes.
5. Don’t request invented citations
Without search tools, a model can’t look anything up; asked for citations anyway, it will invent plausible-looking ones to satisfy your request. Avoid this entirely with:
“Cite only the text I pasted, or ask me for a source if you need one.”
No phantom journals, no imaginary scholars.
6. The Evidence Sandwich
For high-accuracy work, structure your prompt like this:
Provide the source.
Ask for analysis based only on that source.
Ask the model to verify and revise its answer.
It’s the AI version of writing a draft, then proofreading with a red pen.
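For students comfortable with a little code, the sandwich is easy to reproduce programmatically. Here is a minimal sketch in Python that builds the two prompts; the function names and wording are my own illustration, not any library’s API, and you would pass the resulting strings to whatever model interface you use.

```python
def evidence_prompt(source_text: str, question: str) -> str:
    """Steps 1 and 2 of the sandwich: provide the source,
    then ask for analysis grounded only in that source."""
    return (
        "SOURCE TEXT:\n"
        f"{source_text}\n\n"
        f"TASK: {question}\n"
        "Use ONLY the source text above. If something isn't in the text, say so. "
        "If you're not sure, just tell me."
    )


def verification_prompt(draft_answer: str) -> str:
    """Step 3: ask the model to proofread its own first draft
    against the evidence before you trust it."""
    return (
        f"Here is your draft answer:\n{draft_answer}\n\n"
        "Review your answer and remove anything not supported by the source text."
    )


# The two prompts you would send in sequence:
first_pass = evidence_prompt(
    source_text="(paste the excerpt, article, or scene description here)",
    question="Summarize what this text says about early cinema.",
)
second_pass = verification_prompt(
    draft_answer="(the model's first reply goes here)",
)
```

The point of the sketch is the structure, not the exact wording: source first, bounded task second, self-check third.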
7. Avoid vibe-based prompts when you want facts
“Tell me something interesting about early cinema” invites invention.
“Summarize what THIS TEXT says about early cinema” forces discipline.
Accuracy comes from constraints, not charisma.
8. The one rule that covers everything
AI guesses when you leave blank space.
AI stops guessing when you give it materials.
This is the shift our students have to make: it’s not about taming the machine; it’s about architecting the question.
Conclusion: The Pedagogy of Better Questions
What I want students to understand is that accuracy isn’t a mystery, and hallucinations aren’t proof of AI’s unreliability—they’re proof of ours when we don’t give it the conditions to succeed. The model is only as precise as the boundaries we set and the evidence we provide. Teaching AI literacy is teaching question literacy: specificity, grounding, transparency, and the humility to accept “I don’t know” as a valid answer.
If students learn anything from this moment in technological history, I hope it’s this: the better we design the question, the truer the answer becomes.


