There is one fear running through every staff room in 2026 — that AI will hollow out learning. A student types a homework prompt, the answer arrives in ninety seconds, and the part where the brain does the work is skipped. Teachers see the symptoms. Parents worry. Universities scramble.
The students who are learning the most with AI right now are doing something almost the opposite. They are not asking it for answers. They are asking it to ask them more questions. The pattern works, it is teachable, and it produces real learning — sometimes faster than a traditional tutor would.
1. The “be my tutor” prompt
The core prompt is short. Variations of it have appeared, independently, among students across subjects. The common shape:
“Be my tutor for [topic]. Ask me one question at a time. Don’t give me the answer. If I’m wrong, tell me I’m wrong, and ask me what I think is going wrong in my reasoning. Keep going until I can explain it back to you in my own words.”
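For students comfortable with a little code, the same prompt drops straight into any chat-style API as a system message, so it shapes every turn of the conversation. A minimal sketch, assuming the OpenAI Python SDK; the model name and topic are placeholders, and any comparable chat API follows the same shape:

```python
# Minimal Socratic-tutor loop. Assumes the OpenAI Python SDK
# (pip install openai); the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TUTOR_PROMPT = (
    "Be my tutor for {topic}. Ask me one question at a time. "
    "Don't give me the answer. If I'm wrong, tell me I'm wrong, "
    "and ask me what I think is going wrong in my reasoning. "
    "Keep going until I can explain it back to you in my own words."
)

def tutor_session(topic: str) -> None:
    # The prompt goes in as a system message, so every reply obeys it.
    history = [{"role": "system", "content": TUTOR_PROMPT.format(topic=topic)}]
    while True:
        reply = client.chat.completions.create(
            model="gpt-4o", messages=history
        ).choices[0].message.content
        print(f"Tutor: {reply}\n")
        answer = input("You: ")
        if answer.strip().lower() in {"quit", "exit"}:
            break
        # Keep the full transcript so the tutor remembers earlier turns.
        history += [{"role": "assistant", "content": reply},
                    {"role": "user", "content": answer}]

tutor_session("quadratic equations")  # illustrative topic
```

Nothing here is clever. The entire behaviour lives in the prompt.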
That single instruction reorganizes the conversation. The model stops being an answer machine and becomes a patient interrogator. It is endlessly willing to walk through the same idea five different ways. It does not roll its eyes when you take three tries to get a definition right.
For the student, the work feels harder. That is not a bug. That is the learning.
2. Why Socratic beats lookup
The reason this method works is not new. Socratic teaching, leading a student to discover knowledge through guided questioning, has been recognized as effective for roughly two and a half thousand years. What is new is that every student now has access to a tireless tutor willing to apply it.
The cognitive science backs it up. Active recall, where the learner has to produce an answer rather than recognize one, yields dramatically better long-term retention. Spaced repetition, where the same idea is revisited at increasing intervals, deepens it further. A well-prompted AI tutor can deliver both, without a human’s limited patience or schedule.
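The scheduling half of that pair is easy to picture in code. A toy sketch, assuming a Leitner-style rule where the review interval doubles on a successful recall and resets on a failure; real systems (SuperMemo’s SM-2, Anki) are more elaborate, but the principle is the same:

```python
# Toy spaced-repetition scheduler: the gap between reviews doubles
# on success and resets on failure. An illustration of the principle,
# not any particular app's algorithm.
from datetime import date, timedelta

class Card:
    def __init__(self, question: str) -> None:
        self.question = question
        self.interval_days = 1      # gap until the next review
        self.due = date.today()     # new cards are due immediately

    def review(self, recalled: bool) -> None:
        # Active recall happens first: the learner produces the
        # answer, then reports whether they actually got it right.
        if recalled:
            self.interval_days *= 2  # stretch the gap
        else:
            self.interval_days = 1   # start over
        self.due = date.today() + timedelta(days=self.interval_days)

card = Card("State the quadratic formula.")
card.review(recalled=True)   # due again in 2 days
card.review(recalled=True)   # then 4 days, then 8, and so on
```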
3. Subjects where it works best
Some subjects are particularly well-suited to this approach in 2026:
- Maths. The model can generate fresh problems endlessly, accept your worked solution, point out the exact step you got wrong, and ask what you think went wrong before showing you. Best for algebra, calculus, and probability.
- Languages. Modern AI can converse in nearly any major language at most proficiency levels. Set it to “respond only in [language] at A2 level, correct me at the end of each message” and you have a tireless conversation partner (a setup sketch follows this list).
- Science. Biology, chemistry, and physics concepts respond well to the explain-it-back loop. The model can ask “what would happen if we changed X?” in a way that exposes shallow understanding.
- History. Less rote, more argumentative. The model can role-play opposing historical perspectives, helping students see how the same event looks different from different vantage points.
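The language setup in the list above is just another system prompt, with the language and level as parameters. A hypothetical variant that drops into the same chat loop as the earlier tutor sketch:

```python
# System prompt for the language-partner setup; the language and
# CEFR level are whatever the learner chooses. Reuses the chat loop
# from the tutor sketch above.
LANGUAGE_PARTNER_PROMPT = (
    "Respond only in {language} at {level} level. "
    "Keep your messages short and conversational. "
    "At the end of each message, correct any mistakes I made "
    "in my previous message, in English."
)

prompt = LANGUAGE_PARTNER_PROMPT.format(language="French", level="A2")
```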
Subjects where the method needs more care: anything where current factual accuracy matters (the model may be slightly out of date), and high-stakes subjects like medicine where the cost of misunderstanding is real.
4. The trap: outsourcing thinking
The failure mode is real and worth naming. A student who uses AI well still has to do the work. A student who uses AI badly skips the struggle — they ask for the answer, paste it into their homework, and move on. They get full marks today and learn nothing for the exam in six weeks.
The way to tell the difference, for the student: can you explain the answer to someone else without looking at it? That is the test. If you cannot, the AI did the learning, and you are about to fail the next assessment.
The way to tell the difference, for the teacher: assessments that require live explanation, in person, of the reasoning behind an answer. Oral exams, whiteboard problems, in-class write-ups. These are getting more common in 2026, and for good reason.
5. For teachers: teach the prompt
The most useful thing a teacher can do this year is teach students the prompt. Not pretend AI does not exist. Not ban it. Show students, explicitly, how to set the model up as a tutor — and how to recognize when they are using it as a crutch.
A handout-sized version:
- The prompt template above.
- One worked example showing the right kind of conversation.
- One example of the wrong kind — asking for the answer directly.
- The “explain it back” test.
- A reminder that the model is sometimes wrong, especially on recent or specialized topics, and that its answers need verifying.
Some schools have made this a five-minute intro on the first day of each term. The students who learn the method early treat AI fundamentally differently from those who do not.
6. For parents: setting good defaults
If you have school-age children, the conversation worth having is not about whether they use AI. They will. It is about how. A few defaults worth establishing:
- The model is the tutor, not the answer. The work still has to be done.
- For each piece of homework, the family rule is: explain it back, in your own words, before submitting.
- Disclose AI use to teachers. Hiding it is not allowed; using it well is.
- For younger children, prefer family-tier or constrained models over general-purpose ones.
This is not about restricting the tool. It is about teaching the practice. The students who come out of this generation having mastered it will have a real advantage. The ones who do not will have learned a different lesson, and a worse one.
7. A note on assessment
The bigger question, behind all of this, is what assessment is for. If a homework assignment can be done by a model in ninety seconds, the assignment was probably testing the wrong thing in the first place. Schools that are adapting well are using AI as an excuse to ask harder, more interesting questions. The schools that are pretending nothing has changed will keep being surprised by their own grading curves.