Generative AI will grade, draft, and drill. It is efficient at repetition and indifferent to meaning. This essay distinguishes drill from education: machines can assist practice; they cannot bestow judgment. I outline a “human-in-front” pedagogy that uses AI for scaffolded rehearsal while reserving interpretation, critique, and ethical reasoning for human teachers. Assessment must pivot from product to process, from answers to provenance.
The Tireless Adjunct
If you could hire an adjunct who never sleeps, answers emails instantly, drafts serviceable prose on demand, grades without complaint, and charges less than a coffee budget, would you? Universities already have. Generative AI is the tireless colleague in the corner—always available, occasionally brilliant, incurably literal, and completely indifferent to meaning. It optimizes the next word, not the best reason.
This distinction matters. Drill and education are not the same species. Drill repeats until fluency appears; education decides when fluency becomes judgment. Drill is efficient; education is expensive in time and attention. Machines thrive on repetition and approximation; teachers traffic in interpretation and accountability. When we forget the difference, we confuse speed with progress and outsource the very thing degrees are supposed to certify: the capacity to justify decisions under uncertainty.
What machines can do superbly, at scale, is practice. They can generate endless variations of the same problem, adapt difficulty, give immediate hints, and correct syntax with enviable patience. What machines cannot do is accept responsibility for error, weigh consequences, or decide what ought to matter. They do not possess standards; they imitate them. They cannot tell you when to stop trusting them. They will apologize if prompted. The apology costs them nothing.
If we structure courses as contests of output, machines will increasingly win. If we structure them as cultivations of judgment, humans remain indispensable. We must retire the illusion that fluent, AI-generated text equals mastery. The pivot is not to ban the machine but to design it into the workshop without letting it sit in the chair of judgment.
Human-in-Front Pedagogy
Design, not lament, is the antidote. Call this “human-in-front” pedagogy: humans decide the purpose and standards; machines assist practice within those boundaries. Then we assess not only what appears on the page but how it came to be there.
Here is a compact architecture for this approach:
- Declare the purpose of AI per assignment. For every task, specify whether and how AI may be used. If permitted, state the use cases (brainstorming, drill) and the forbidden zones (fabricating sources, masking ignorance). If prohibited, justify it pedagogically (“foundational skill rehearsal”).
- Make provenance primary. Shift assessment from answers to how answers were produced. Require version histories, prompt logs, and brief rationale memos (“What did you ask the model? What did it get wrong? What did you change and why?”). Grade the process substantially; one possible shape for a prompt-log entry is sketched after this list. If product is 100% of the grade, you have invited a model to class.
- Normalize oral defense. A five-minute viva can rescue hours of suspicion. Randomly sample students for brief defenses: explain choices, justify sources, respond to a curveball. A design student, for instance, might use AI to propose three layouts, but must then present an ethical brief to peers: “Who benefits, who is burdened, what trade-offs are accepted?” The machine can simulate the defense; it cannot pass it.
- Use AI for drill—on purpose and in public. Build “AI practice blocks” into courses. In statistics, have students use AI to generate datasets with specified properties (missingness, messy outliers) and then diagnose and correct the flaws themselves; a sketch of such an exercise also follows this list. The model is a mischievous collaborator; the student is the analyst who knows when not to trust the spreadsheet’s charm.
- Teach adversarial reading. Our students will live with systems that are fluent and wrong with unsettling confidence. Incorporate “error safaris”: assignments that ask students to elicit mistakes from models and catalogue failure modes. In a literature class, an AI might draft two competing interpretations; the student must then adjudicate, citing textual evidence and writing a short note: “What essential move did the model miss?”
- Treat prompts as hypotheses. Require students to pre-register a plan before using AI: what they seek, what constitutes a good answer, what sources they will use to verify. Prompting is not magic; it is method.
- Keep human-only zones, by design. Reserve portions of assessment for unaided performance, especially early in a sequence. Think of instrument ratings: sometimes you must fly without autopilot because one day the sky will insist. The point is not purity; it is calibration.
- Ban the secret grader. Use AI to draft feedback on low-stakes work; never let it be the sole grader on consequential assessments. If a machine proposes, a human must dispose—reading, adjusting, taking responsibility. The student’s name deserves more than an algorithm’s confidence score.
- Write an honesty policy for the machine age. Update academic integrity statements to include AI-specific norms: disclosure of assistance, prohibition on fabricated sources, and consequences calibrated to intent. Make the policy teachable, with examples.
- Provide equitable access or throttle expectations. If assignments assume AI assistance, ensure access to adequate tools or design alternatives. The worst inequity is to grade students on their ability to purchase a better autocomplete.
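What a “prompt log” might look like in practice is left open above; as one illustration only, here is a minimal sketch of a structured log entry a course could ask students to submit alongside their drafts. The field names and format are hypothetical, not a standard the essay prescribes.

```python
# A hypothetical structure for one prompt-log entry, pairing each AI
# interaction with the student's own account of what the model got wrong
# and what changed as a result. Field names are illustrative only.
from dataclasses import dataclass, asdict
from datetime import datetime
import json

@dataclass
class PromptLogEntry:
    timestamp: str          # when the model was consulted
    purpose: str            # declared use case (brainstorming, drill, ...)
    prompt: str             # what the student asked
    model_error_noted: str  # what the model got wrong, in the student's words
    change_made: str        # what the student changed and why

log = [
    PromptLogEntry(
        timestamp=datetime(2025, 3, 4, 14, 30).isoformat(),
        purpose="brainstorming",
        prompt="Suggest three counterarguments to my thesis on provenance grading.",
        model_error_noted="Invented a citation; two counterarguments overlapped.",
        change_made="Kept one counterargument, verified the source, rewrote the rest.",
    )
]

# Submitted alongside the draft's version history and rationale memo.
print(json.dumps([asdict(entry) for entry in log], indent=2))
```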
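To make the statistics drill concrete, here is a minimal sketch of the student's half of the exercise, assuming Python with numpy and pandas; the “AI-generated” dataset is simulated locally so the example is self-contained.

```python
# A minimal sketch of the diagnostic half of an "AI practice block".
# The messy dataset is simulated here as a stand-in for one an AI
# assistant might generate on request (injected missingness, outliers).
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Stand-in for the AI-generated data: 200 exam scores, deliberately damaged.
scores = rng.normal(loc=72, scale=10, size=200)
scores[rng.choice(200, size=15, replace=False)] = np.nan                   # missingness
scores[rng.choice(200, size=5, replace=False)] = rng.uniform(300, 400, 5)  # implausible outliers
df = pd.DataFrame({"score": scores})

# Step 1: diagnose. How much is missing, and what looks implausible?
print("missing values:", int(df["score"].isna().sum()))
q1, q3 = df["score"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = (df["score"] < q1 - 1.5 * iqr) | (df["score"] > q3 + 1.5 * iqr)
print("flagged outliers:", int(outliers.sum()))

# Step 2: correct, and defend the correction. Dropping flagged rows and
# median-imputing the gaps is one defensible choice, not the only one.
cleaned = df.loc[~outliers, "score"]
cleaned = cleaned.fillna(cleaned.median())
print(cleaned.describe())
```

The point of the block is not the cleaning recipe but the moment of choice: the student has to say why dropping, winsorizing, or imputing is the right call for this dataset, and say it to a human.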
Of course, this adds workload—initially. But provenance grading reduces disputes and shrinks the time spent investigating “suspiciously good” work. The time saved on detection can fund design. Some students will game the process artifacts, which is precisely why oral defenses and random spot-checks must exist.
This is also an equity issue. As a practice assistant, AI can level rehearsal opportunities; as an invisible ghostwriter, it disguises inequities and then punishes them later when unaided performance is required. Human-in-front pedagogy uses AI to equalize practice while keeping final judgments anchored in demonstrated understanding.
The deeper challenge is translation without capture: bringing the machine into the classroom without letting it redefine what teaching is for. Education is a dance of double contingency: teachers and students act while anticipating each other’s moves. Trust reduces this complexity; surveillance tries to replace it. AI tempts us to swap trust for telemetry. But telemetry does not know what silence in a seminar means or whether an elegant proof hides a conceptual hole. Meaning lives in the risk of interpretation. The human must remain in front not because of nostalgia but because of accountability.
Drill vs. Judgment
Picture, as a cautionary fable, a future in which we outsource grading to machines, practice to machines, drafting to machines, and, finally, goal-setting to the dashboard that optimizes engagement. The system will be efficient and empty. Graduates will carry portfolios of polished products and little memory of why they built them.
There is a more durable future. In it, AI is the workshop assistant: fetching materials, sketching options, posing drills, playing devil’s advocate on command. Teachers run the shop. Students learn to explain their choices to a human who can be persuaded—and persuaded to say no. Assessment asks not only “What did you make?” but “How did you make it, and why is it fit for purpose?”
Machines will keep improving at what they are built to do: generate plausible continuations. Our task is to improve at what we are built to do: generate reasons. Let the machine handle the drill. Keep the judgment human. And when it comes time to grade, resist the seduction of “looks right” and insist on the oldest button the university knows how to press: Because.
—
#re_v5 (Article 5 of 10 on global higher education issues: AI)