The Steering Problem: Why Universities Are Teaching the Wrong Lessons About AI

Universities are treating AI as a crisis of cheating, when it’s really a crisis of assessment. The essay was never the learning itself, just a convenient proxy for it—and that proxy is now broken. Our job is to stop defending the old artifact and start teaching the far harder and more valuable skill of steering a powerful tool that cannot think. The future of education is not in the final answer, but in the architecture of the question.

The modern university is in a quiet state of panic over artificial intelligence, and it is responding with the only tools it knows: policing, prohibition, and performance metrics. The fear is palpable: that students will use this powerful new instrument to think for them, rendering our most cherished forms of assessment obsolete. This fear, while understandable, is a profound misreading of the situation. The crisis is not one of cheating, but of comprehension. By focusing on the danger of a machine that can write an essay, we are ignoring the fundamental nature of the tool itself. The problem is not that AI can think; the problem is that it cannot. Therefore, the university’s urgent new mission is not to build defenses against AI, but to become the one place on earth where humans learn the art of steering this profoundly powerful, and profoundly unintelligent, new cognitive engine.

A Summary of the Noise
The current debate surrounding AI in education is a flurry of earnest, well-intentioned, and largely irrelevant activity. We see universities investing in ever-more-sophisticated plagiarism detectors, a technological arms race they are destined to lose. We see faculty spending countless hours crafting elaborate policies on “responsible AI use,” turning syllabi into legal documents. We see panicked calls for a return to in-class, handwritten essays, a nostalgic retreat from a future that has already arrived. The entire conversation is framed as a moral problem, a battle to protect academic integrity from a flood of synthetic text. This is a system observing a new phenomenon and immediately classifying it as a threat to its existing operations. It is a perfectly normal immune response, but one that mistakes a revolutionary tool for a mere disease.

The Possible Futures
A deeper analysis reveals that the university system is currently facing a choice between three distinct futures, or “basins of attraction.”

  1. Systemic Collapse: The Policing Quagmire. This is the future we are currently building. It is a world of escalating suspicion, where the university’s primary function shifts from education to forensic verification. Lecturers become detectives, students become suspects, and trust, the essential currency of education, evaporates. The institution dedicates immense resources to proving authorship, while the actual process of learning withers under the weight of surveillance. This high-cost, low-trust model is unsustainable and represents the total failure of the university to adapt.
  2. Systemic Adaptation: The Augmented University. In this more optimistic future, the university successfully integrates AI as a helpful, if misunderstood, tool. AI becomes a better search engine, a more efficient proofreader, a helpful brainstorming partner. Students are taught to “use AI responsibly,” and it is folded into the curriculum as a useful accessory. The university preserves its structure and its methods, treating AI as just another technological upgrade, like the calculator or the word processor. This path ensures survival, but it is a failure of imagination that completely misses the tool’s transformative potential.
  3. Systemic Transformation: The University as a Steering Academy. This future sees the university undergo a radical redefinition of its purpose. It observes that AI has made the production of content cheap, but the structuring of inquiry more valuable than ever. The university ceases to be a place where students are sent to acquire information and instead becomes a place where they learn to command it. The core curriculum shifts from teaching what to think to teaching how to construct a framework for thinking. The professor is no longer the sage on the stage, but the master navigator, and the student is the apprentice pilot learning to steer a powerful, non-sentient vessel through oceans of data.

The Deeper Logic: A Map of Resonances
The reason the first two futures represent a form of failure is that they are based on a fundamental misunderstanding of what AI has actually broken. The harmonic principle that resonates across all these futures is the collapse of the proxy. For centuries, the written essay has served as the university’s primary proxy for learning. It was never the learning itself, but a convenient, scalable signal used to infer qualities like critical thinking, research skills, and diligence. AI has not broken learning; it has broken the reliability of this proxy. It can now generate a near-perfect signal without the underlying process having taken place.

The university’s panic is the panic of an institution that has just discovered its primary measurement tool is faulty. This is where the work of the sociologist Niklas Luhmann becomes so useful. A social system, like a university, cannot be directly controlled from the outside. You cannot simply command it to change. You can only “steer” it by altering the information in its environment to which it must respond. The university is currently responding to AI as a threat to its core function of validation. The strategic challenge is to re-frame AI not as a threat to validation, but as a new medium for validation.

The True Lever for Change
The true lever for change, therefore, is not to ban or police the tool, but to shift the entire locus of assessment from the static product to the dynamic process. We must stop grading the essay and start grading the “thinking” that produced it.

This means a radical shift in practice. It means the end of the take-home essay as a primary assessment tool and the rise of live, dynamic demonstrations of competence. Imagine a final exam where a student is given a novel problem and required to use an LLM to help solve it in real time, submitting their prompt history and delivering a live oral defense of their process as the final deliverable. The assessment rests not on the polish of the final text, but on the intellectual rigor of the student’s prompts, their ability to identify and correct the AI’s errors, and their capacity to synthesize the machine’s output into a coherent, original argument.
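To make this concrete, here is a minimal sketch in Python of what a process-oriented grading record for such an exam might look like. It is an illustration, not an implementation: the class names (PromptExchange, ProcessGrade), the three scoring dimensions, and the weights are all hypothetical assumptions about how one might encode “grading the process.”

```python
from dataclasses import dataclass, field

@dataclass
class PromptExchange:
    """One turn in the student's transcript with the LLM."""
    student_prompt: str
    model_response: str
    flagged_model_error: bool = False  # did the student catch a mistake here?

@dataclass
class ProcessGrade:
    """Grades the transcript and defense, never the polished final text.

    The three dimensions and their weights are illustrative assumptions,
    not a validated rubric.
    """
    transcript: list[PromptExchange] = field(default_factory=list)
    prompt_rigor: float = 0.0       # 0..1: structure and depth of the questions asked
    error_detection: float = 0.0    # 0..1: share of model errors caught and corrected
    synthesis_quality: float = 0.0  # 0..1: oral defense and originality of the argument

    WEIGHTS = {"prompt_rigor": 0.3, "error_detection": 0.3, "synthesis_quality": 0.4}

    def score(self) -> float:
        """Weighted average of the three process dimensions."""
        return sum(self.WEIGHTS[name] * getattr(self, name) for name in self.WEIGHTS)

# Hypothetical usage: a one-turn transcript plus instructor ratings.
exam = ProcessGrade(
    transcript=[PromptExchange("Outline the competing causes of X...", "...",
                               flagged_model_error=True)],
    prompt_rigor=0.8, error_detection=0.9, synthesis_quality=0.7,
)
print(f"{exam.score():.2f}")  # 0.79
```

The design choice worth noticing is that the polished essay text has no field in this record at all: only the transcript and the defense are graded.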

This leads to the counter-intuitive intervention: to improve academic rigor, the university must make the sophisticated use of AI a mandatory component of education. The most critical action a university leader could take right now is to defund their plagiarism detection software and reinvest that money in training faculty to become expert prompters and evaluators of AI-assisted work. The goal is to create an environment where using AI poorly is impossible to hide, and using it brilliantly is the highest form of academic achievement.

The Counter-Argument
Of course, one must acknowledge the strongest counter-argument to this vision. Let us call it the “Lens of Foundational Competency.” Its core axiom is that one cannot learn to effectively steer a tool until one has first mastered the fundamental skill that the tool automates. You cannot become a skilled architect by only using CAD software if you have never learned to draw by hand; you cannot become a good mathematician by only using a calculator if you do not understand arithmetic. This lens predicts a future where students who learn only to “steer” AI lack the foundational competencies in writing, research, and argumentation. They become skilled pilots with no knowledge of maps or destinations, capable of generating beautiful prose about subjects they do not truly understand. This is a serious risk, one that must be mitigated by ensuring that the teaching of steering is always coupled with the teaching of these foundational principles.

Conclusion
The arrival of powerful AI does not represent the end of academic rigor, but an opportunity to finally achieve it. For too long, we have settled for the proxy of a well-written essay. Now, the hollowness of that proxy has been exposed. We have a choice: we can either retreat into a defensive crouch, attempting to preserve an obsolete model of validation, or we can embrace our new role. The university of the future will be judged not on its ability to prevent students from using AI, but on its success in producing a new generation of thinkers who can masterfully and ethically steer it. The task is not to protect the old knowledge, but to cultivate the new wisdom required to command it.
