
When Research Eats Teaching: The Metric That Devoured the Classroom

Universities say teaching matters, but incentives say otherwise. Publication counts, grant revenue, and rankings pull attention away from pedagogy. This piece documents how organizations become what they measure and offers a counter-incentive architecture: visible teaching portfolios, peer review of instruction, and time-protected courses that count for promotion. If we do not change the scoreboard, we merely narrate decline.

There is a horror film playing on many campuses. The creature is not supernatural; it is a spreadsheet. It lumbers down corridors labeled Impact, Grants, and Rankings, and anything that is not immediately countable gets quietly stepped on. Administrators swear that teaching matters. The budget whispers otherwise. In organizations, the whisper wins.

The Metric That Devoured the Classroom

What you measure becomes what you can decide. Universities are organizations, and organizations reduce complexity by turning numbers into premises for action. This is not wicked; it is how systems survive. But when the chosen numbers map only one part of the mission, the rest becomes eloquent rhetoric floating above a very decisive floor. Publication counts, grant revenue, and citation indices are crisp and comparable. Teaching is locally variable, temporally delayed, and scandalously narrative. Guess which one travels fastest to a board meeting.

We often console ourselves with a refined myth: research and teaching naturally nourish each other. Sometimes they do. Often they do not, because they live on different clocks. Research rewards novelty and speed; teaching requires stabilization and patience. Put these on a collision course with the currency of time, and the outcome is predictable. Wednesdays that might have held office hours now host writing sprints. Junior faculty are told quietly, pragmatically: “love your students, but love them later, after tenure.”

The paradox deepens. The more the institution proclaims “teaching excellence” with awards and glossy brochures, the more likely it is that excellence has migrated into ceremony because it cannot find shelter in decisions. This is condensation: the heavier the word “excellence” becomes in public, the lighter it is in budgets. We build ornate shrines to the gods we no longer obey.

This steady consumption eats preparation, as designing an honest course requires long stretches of uninterrupted time. It eats presence, as good teaching is not a performance but a relationship that depends on availability. It eats risk, because who risks rebuilding a course in week six when a quarterly output report is due? And it eats memory, as teaching’s iterative routines are replaced by the portable, modular logic of a paper or a grant.

Redesigning the Scoreboard

There is an exit. But it does not begin with exhortation. It begins with redesigning the scoreboard so that teaching is not a speech act but a decision premise. Think of it as counter-incentive architecture: changing what is visible, what is rewarded, and what is protected, until time flows differently.

Here is a minimal design for that new scoreboard:

  1. Mandate visible teaching portfolios. A portfolio must be more than a syllabus. It is a living record of design and effect: annotated syllabi, assignment prompts with rubrics, and examples of student work with commentary.
  2. Focus portfolios on iteration. Document revisions across offerings. What did you learn from round one? What did you change? Teaching quality is not a personality trait; it is a design practice.
  3. Prioritize downstream evidence. The unit of analysis is capability, not charm. Portfolios must include performance of students in subsequent courses, external assessments, or juried capstones.
  4. Institute trained peer review of instruction. Academic fields trust peer review for research; we should afford teaching the same dignity. Use calibrated observers from across departments, with rubrics that attend to clarity, challenge, and feedback quality.
  5. Ground peer review in narrative judgments. Observers, drawing on classroom visits and course materials, should write letters akin to external tenure reviews, addressing both design and delivery. These letters must carry weight, not as anecdotes but as reasoned assessments.
  6. Designate and protect “anchor courses.” Identify foundational courses per program, cap their enrollments, and assign experienced faculty to them.
  7. Protect anchor course time in contracts. No meetings during designated windows; no service creep; no research deadlines scheduled to collide.
  8. Grant course design sabbaticals. Offer micro-sabbaticals—one term every X years—to redesign a core course from the ground up. Treat this as a capital investment.
  9. Set non-substitutable weights in promotion. Fix a floor for teaching in tenure and promotion (e.g., 30-40%) that cannot be offset by exceptional research success; a minimal scoring sketch follows this list. Make the standard explicit: no portfolio, no promotion.
  10. Align the budget to teaching quality. Create a central “teaching quality fund” that rewards departments for documented capability gains (not enrollment alone). Use it to offset the revenue loss from capped anchor courses.
  11. Practice index hygiene for teaching. Publicly declare that raw student satisfaction scores will not be used as proxies for learning in promotion files.
  12. Publish a transparent teaching ledger. Annually report teaching portfolios completed, peer reviews conducted, and anchor courses supported, alongside research outputs. Let teaching enter the same public ledger as publications.
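
To make item 9's non-substitutability concrete, here is a minimal sketch (in Python, purely illustrative) of how a promotion score with a hard teaching floor differs from an ordinary weighted average. The function names, weights, the 0-1 scale, and the floor value are assumptions made for the example, not a proposed standard.

    # Illustrative sketch only: names, weights, and the floor are assumptions.
    def compensatory_score(teaching, research, w_teach=0.35, w_research=0.65):
        """Weighted average: strong research can fully offset weak teaching."""
        return w_teach * teaching + w_research * research

    def non_substitutable_score(teaching, research, floor=0.6,
                                w_teach=0.35, w_research=0.65):
        """Same weights, but a file below the teaching floor cannot pass,
        however strong the research record."""
        if teaching < floor:
            return 0.0  # below the floor: the file is not promotable this round
        return w_teach * teaching + w_research * research

    # A stellar researcher with a neglected teaching portfolio (0-1 scale):
    weak_teaching, strong_research = 0.4, 0.95
    print(compensatory_score(weak_teaching, strong_research))       # ~0.76: research rescues the file
    print(non_substitutable_score(weak_teaching, strong_research))  # 0.0: the floor holds

The design point is the second print line: once teaching falls below the floor, no weight in the research column can rescue the file, which is what "cannot be offset" means in practice.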

This will slow research, skeptics will say. Inevitably, a little. But the current arrangement slows education a great deal. Society does not ask the university to optimize one good at the expense of another; it asks for both in reasonable combination. Others will worry about equity. Good. This redesign helps. Portfolios favor reflective practice over charisma; peer review, when trained, dampens bias; protected time democratizes the ability to teach well instead of reserving it for those who already have spare hours to give.

This is a problem of translation. Modern universities are functionally differentiated systems. They must translate external pressures (from markets, politics, media) into internal programs. The question is whether the program still speaks the university’s language: truth/not-truth in research, qualification/non-qualification in education. These codes are inconvenient and slow. That is why they require protection.

The Alternate Tenure Letter

Beware the ceremonial response: more awards, more slogans, new centers with inspirational names. The test is simple: Has any decision changed that will cost someone something? Has anyone received time in exchange for teaching, not in addition to it? Are there cases where brilliant research did not rescue indifferent teaching? If the answer to all three is no, the monster is merely amused.

I sometimes imagine an alternate tenure letter, written by a future student rather than a journal editor. “Professor X wasted my time in the moment and saved me years later. Their course was an obstacle; now it is an instrument.” This letter does not exist in our files, because we did not design a place for it. We can.

If we do not change the scoreboard, we will continue to narrate decline in eloquent memos while the creature eats lunch. But scoreboards can be redesigned. Make teaching visible enough to be discussed, serious enough to be judged, and valuable enough to be protected. Then close the door—not to students, but to the metric that devoured the classroom.


#re_v5 (Article 3 of 10 on global higher education issues: Research)