The ECP Label: South Africa’s Higher Education Lifeline or a Source of Stigma? A Luhmannian View

Extended Curriculum Programmes (ECPs) in South Africa aim to address educational disparities and promote equity in higher education. However, their structure may unintentionally perpetuate stigma and disadvantage, raising questions about their implementation and impact.

I. The ECP Paradox: A Lifeline or a Label?

A. Introducing Extended Curriculum Programmes (ECPs) in South Africa

Extended Curriculum Programmes (ECPs) in South Africa represent a significant systemic intervention within the higher education landscape. Supported by the national Department of Higher Education and Training (DHET), these programmes are explicitly designed to improve graduation and throughput rates for students identified as “educationally disadvantaged, underprepared, unprepared, and at-risk”. Their historical roots trace back to the 1980s, when various academic support or bridging courses emerged. These initiatives were later formalised in the early 2000s with the clear objective of promoting access and success for students who had been historically denied entry to quality higher education, a legacy of the apartheid era. A common feature of ECPs is the extension of the standard degree duration; for instance, a Bachelor’s degree typically completed in three years might be extended to four, allowing for the incorporation of foundational academic support.

The institutional rationale behind ECPs is firmly grounded in principles of social justice and equity. They aim “to create the curriculum space needed to enable talented but underprepared students to achieve sound foundations for success in higher education”. In a post-apartheid South Africa grappling with deep-seated educational disparities stemming from decades of unequal schooling, ECPs are conceptualised as a crucial mechanism for redress. They signify an institutional acknowledgement of the systemic inequalities that persist in the primary and secondary education systems, and an attempt to level the playing field at the point of entry into university.

B. The Inherent Tension: Good Intentions, Problematic Structures?

Despite the laudable goals of equity and enhanced access that underpin Extended Curriculum Programmes, a fundamental tension arises from their very structure. While ECPs are intended to be supportive lifelines, the central argument of this analysis is that the act of separating students into distinct ECP streams and, consequently, labeling them as “ECP students” can generate unintended and significant negative psychological consequences. This separation, often based on perceptions of academic deficit, may inadvertently create new forms of disadvantage.

This concern is not merely speculative. Critical observations have been made that “ECPs, as they stand, may further perpetuate racial differences as opposed to creating equal opportunities for success at university”. This suggests that the mechanism of differentiation, even if well-intentioned, can reinforce existing societal cleavages. The paradox, therefore, lies in the methodology: a programme designed for inclusion and to advance social justice might, through its operational reliance on separation and labeling, foster experiences of exclusion and psychological harm. This challenges the core mission of ECPs, suggesting that the way “support” is conceptualized and delivered—if predicated on a distinction that itself becomes a source of stigma—may be inherently flawed. The very “social justice” framing of ECPs creates an immediate ethical and practical dilemma if the chosen method undermines the psychological well-being of the students it aims to serve. This points to a deeper question about whether the underlying model of support is based on a deficit view of students which, by its nature, leads to “othering” and its associated negative impacts, regardless of the programme’s stated intentions.

II. The Weight of a Label: Stigma and the Psychological Scars for ECP Students

A. Documented Experiences of Stigmatization

The designation “ECP student” is not a neutral identifier within the university environment; it often acts as a potent social marker, carrying with it a significant burden of stigma. Research indicates that students enrolled in ECPs frequently report experiencing stigmatization, primarily from their peers in mainstream programmes, and sometimes implicitly from the institutional environment itself. These experiences include being perceived as academically inferior, less capable, or, as some students articulate, still operating at a “high school level”. Such perceptions contribute to profound feelings of exclusion, alienation, being underrated, and a diminished sense of belonging within the broader university community.

This stigmatization is not merely a series of isolated interpersonal incidents but appears to be a systemic issue rooted in the programme’s structural distinctness. The label “ECP” becomes a visible signifier of difference, one that is often interpreted negatively within the university’s social ecosystem. The problem is recognized beyond student experiences; even academic staff facilitating ECPs have noted “stigmatisation and lack of confidence” among their students. Furthermore, the challenge of “information asymmetry” regarding ECPs—where students and potentially others lack a comprehensive understanding of the programmes’ nature, criteria, and objectives—can create a vacuum that negative stereotypes readily fill. When the purpose and function of ECPs are not clearly communicated and understood, the institutional act of separating students can be easily misinterpreted by others as a definitive signal of deficiency, irrespective of the institution’s supportive intent. This ambiguity allows negative perceptions, such as that of ECP students being “academically inadequate,” to flourish and become entrenched.

B. The Psychological Fallout: Stress, Anxiety, and Diminished Self-Worth

The social experience of being labeled and stigmatized translates into tangible psychological distress for many ECP students. The constant navigation of an environment where one might be perceived as “lesser” contributes to a range of negative emotional and cognitive outcomes. Studies and student reports highlight increased levels of stress and anxiety, a negative self-perception often manifesting as students “feeling stupid” compared to their mainstream counterparts, pervasive insecurity, and diminished motivation.

These psychological burdens are not indicative of individual failings or inherent weaknesses. Rather, they are understandable responses to a challenging and often invalidating social and academic milieu. Research points to “consistently high levels of mental health issues” among students in extended programmes, noting that these students, already dealing with “self-esteem and capability challenges intensified by peer and institutional attitudes, might face heightened susceptibility to mental health issues”. The environment created by the ECP structure, intended to be a scaffold for academic success, can paradoxically become a source of chronic stress that erodes students’ confidence and overall mental well-being. The feeling of being an “outcast” or not truly belonging can be deeply corrosive to a student’s academic journey. This psychological distress is likely a critical mediating factor between the ECP label and its associated stigma, and the adverse academic outcomes, such as high dropout rates, observed among these students. It suggests that the difficulties faced by ECP students extend beyond their initial “underpreparedness”; they are also actively navigating an environment that can make them feel inadequate, which directly impacts their capacity to engage, persist, and succeed.

III. Through Luhmann’s Looking Glass: How We “See” ECP Students

A. Luhmann’s Second-Order Observation: An Accessible Explanation

To understand the complex dynamics of how ECP students are perceived and how they, in turn, perceive themselves, the sociological theory of Niklas Luhmann, particularly his concept of “second-order observation,” offers valuable insights. In essence, second-order observation is the act of observing how others observe. It moves beyond a direct, first-order observation of an object or event (e.g., “this student is in an ECP”) to an observation of the observation process itself (e.g., “how does the institution/mainstream student observe and categorize this ECP student, and based on what distinctions?”). It involves “watching the watchers” and understanding the assumptions, distinctions, and frameworks that shape their perceptions and constructions of reality. Luhmann’s theory posits that all social systems, including universities, construct their social reality based on such observations of observations. This lens allows for an analysis that goes beyond simply stating “stigma exists” to dissecting how the social system of the university, encompassing its structures, staff, and students, collectively constructs and perpetuates the meaning attached to being an “ECP student.”

B. The “ECP Student” as a Systemic Distinction

Social systems, according to Luhmann, operate and make sense of the world by drawing distinctions. A distinction creates a form with two sides: the “marked” side (that which is focused upon) and the “unmarked” side (the background or the norm). Within the South African higher education system, the ECP/mainstream divide functions as such a primary distinction. This act of differentiation “marks” ECP students, rendering them observable as different from their “mainstream” counterparts.

The institution, as a system, typically makes a first-order observation of these students through the lens of “underpreparedness,” “at-risk,” or in need of foundational support due to “the poor quality of their previous educational experiences”. This initial observation and the subsequent categorization are fundamental to the ECP’s existence. The ECP label, therefore, is not a neutral descriptor; it is a powerful systemic marker. It signifies that the student has been observed by the system according to a particular set of criteria (often related to prior academic performance or socio-economic background) and placed into a distinct category. This categorization then shapes how these students are perceived, resourced, and interacted with within the institutional environment.

C. The Vicious Cycle: Observing the Observation

The psychological impact on ECP students is significantly amplified through their own engagement in second-order observation. They are not passive recipients of the “ECP” label; they actively observe how the institution (including lecturers, administrative systems, and support services) and their mainstream peers observe, categorize, and treat them based on this distinction. They perceive what others say, and critically, what they do not say or how they act, in relation to their ECP status.

When ECP students observe that they are being “observed as lesser,” “different,” or “remedial” by others, this awareness reinforces and internalizes the stigma and feelings of alienation. The very structure of ECPs—separate classes, sometimes different campuses or administrative processes, extended programme durations—functions as ongoing communication from the institution that constantly re-affirms this distinction. This is analogous to how systems in other specialized fields, like special education, take responsibility for and categorize individuals based on observed differences, thereby shaping their experience within that system.

The university system, by creating the distinct category of “ECP student,” may unintentionally develop a “blind spot” concerning the impact of this categorization process itself. The institutional focus tends to be on the “marked” student—the individual perceived as needing assistance—potentially obscuring how the very act of marking, labeling, and separating becomes a significant part of the problem. The system’s primary distinction (ECP vs. mainstream) becomes a potent social signal, communicating difference in a way that can contribute to the very issues (such as low self-esteem and alienation) it aims to mitigate. The focus remains on the student’s perceived deficit, rather than on the system’s role in constructing the social meaning and consequences of that perceived deficit.

Ultimately, the university as a social system communicates “ECP status” as a meaningful category. This meaning is not unilaterally imposed but is co-constructed. The institution defines it through its policies, structures, and resource allocation. Mainstream students interpret and enact this meaning through their interactions and attitudes, often leading to stigmatization. ECP students, in turn, internalize this socially constructed meaning through their lived experiences and their second-order observations of these institutional and peer dynamics. Thus, the “meaning” of being an ECP student—often laden with negative connotations of deficiency—becomes a social reality with profound and tangible psychological consequences, shaping identity, self-worth, and the overall university experience.

IV. The Numbers Don’t Lie: Academic Precarity and the Dropout Dilemma

A. Examining ECP Dropout and Throughput Rates

The assertion that ECP students drop out at significantly higher rates than their mainstream counterparts warrants careful examination of available data. While a precise, system-wide figure confirming that ECP students drop out at “twice the rate of mainstream students” is not directly substantiated by available national data for the South African higher education sector as a whole, the existing evidence paints a concerning picture of academic precarity for students in these programmes.

General statistics for South African higher education already indicate low throughput and high attrition. Reports suggest that fewer than half of those who enrol for a degree ever graduate, and other sources similarly indicate that up to 50% of students do not complete their qualifications.

More specific data on ECP outcomes, though often institution-specific, reveals significant challenges. A quantitative evaluation of a STEM ECP at one research-intensive South African university (for cohorts from 2010-2016) found an overall graduation rate of 48.9% for Bachelor’s degrees. While this figure itself is below ideal, what is particularly alarming are the persistent racial disparities in dropout rates within this ECP. The study reported that higher percentages of Coloured (48%) and Black African (54%) ECP students dropped out compared to White ECP students (38%). Most strikingly, Xhosa-speaking ECP students within this cohort experienced a dropout rate of 69%. In another context, a Health Sciences ECP at the Central University of Technology (CUT) reported that for the 2007 cohort, 58% graduated within the extended timeframe, while 21% of those who articulated to mainstream eventually dropped out.

A DHET report from 2020, which aimed to compare dropout rates for regular 3-year programmes and ECPs for the 2013 cohort, presents a table (Table 5 in the document) that unfortunately lacks the specific ECP dropout figures for the years displayed, only providing data for “Regular 3 Year” programmes (e.g., 19.9% dropout in 2014, 26% in 2015 for the 2013 cohort). This data gap at a national comparative level makes a comprehensive, direct comparison challenging based on the available materials.

The following table, derived from the study of a STEM ECP at a research-intensive university, illustrates the stark internal disparities in dropout rates within that specific programme:

Student Group (within STEM ECP, 2010-2016 Cohorts)      Dropout Rate (%)
Coloured                                                48
Black African                                           54
Xhosa-speaking (subset of Black African students)       69
White                                                   38

This data underscores that even within programmes designed to offer enhanced support, significant inequalities in outcomes persist, particularly affecting students from historically disadvantaged racial and linguistic backgrounds. This suggests that the ECP model, as currently implemented, may not be uniformly effective in mitigating pre-university disadvantage and, in some instances, might interact with other systemic factors to reproduce inequitable outcomes. The lack of readily available, clear, and comprehensive comparative data on ECP versus mainstream dropout rates across the entire South African HE system is itself a significant concern. Such data is crucial for a thorough evaluation of ECP effectiveness nationally and for advocating evidence-based systemic changes. Without this overarching data, the true extent of the problem may be masked, hindering efforts to drive large-scale policy reform.

B. Connecting Psychological Distress to Academic Outcomes

The psychological burdens carried by ECP students—stigma, low self-esteem, anxiety, and feelings of alienation—are not isolated from their academic trajectories. There is a strong basis to argue that these psychosocial challenges are key contributing factors to the observed high dropout and low throughput rates. A student who is constantly battling feelings of inadequacy, exclusion, and the weight of a negative label will inevitably find it more difficult to engage fully with their academic work, persist through challenges, and ultimately succeed.

Studies note that the difficulties ECP students face, including stigmatization, can negatively affect their academic performance and their perception of how their current studies relate to future success. The heightened susceptibility to mental health issues, intensified by peer and institutional attitudes towards their ECP status, logically impacts academic persistence. Even when ECP students show gradual improvement in performance, as noted by some facilitators, this often occurs alongside initial negative attitudes towards being in the ECP, implying a psychological struggle that requires substantial support and resilience to overcome. The academic precarity of many ECP students cannot, therefore, be solely attributed to their initial levels of academic “underpreparedness.” The ongoing psychological impact of being labeled, separated, and often stigmatized within the university environment likely plays a crucial, and deeply detrimental, role in their academic journeys.

V. Re-Coding Success: Towards Truly Inclusive Higher Education in South Africa

A. The Argument for Systemic Re-evaluation

The evidence suggests that the dominant model of Extended Curriculum Programmes in South Africa, while born from a commitment to equity, warrants a fundamental re-evaluation. By focusing on identifying, separating, and labeling students based on a perceived deficit model, these programmes may inadvertently contribute to the very psychological and academic challenges they aim to ameliorate. The act of distinguishing students as “ECP” communicates a difference that is frequently interpreted negatively within the university’s social system. This can perpetuate a cycle where the label itself becomes a barrier, fostering stigma, diminishing self-worth, and ultimately impacting academic performance and persistence.

Critiques have pointed out that “ECPs, as they stand, may further perpetuate racial differences as opposed to creating equal opportunities” and that a more effective approach would be to see “all students as having different learning needs”. This perspective challenges the notion of “underpreparedness” as an inherent characteristic of the student, shifting focus towards the educational system’s capacity to respond to diversity. Early academic development work in South Africa recognized the limitations of approaches based on “remediation and its associations of inferiority,” yet deficit-oriented views can persist. The solution, therefore, may not lie in merely tweaking existing ECP structures but in rethinking the foundational approach to supporting student diversity and addressing educational disadvantage across the higher education sector.

B. Exploring Alternatives: Flexible Curricula and Universal Design

Moving beyond models that risk stigmatization requires exploring alternative approaches that embed support and inclusivity within the mainstream educational experience. Two promising avenues are flexible mainstream curricula and Universal Design for Learning (UDL).

Flexible mainstream curricula involve designing courses and programmes to be inherently more inclusive and responsive to a diverse range of learning needs from the outset. This means building in varied teaching strategies, assessment methods, and support mechanisms that benefit all students, rather than singling out a particular group for separate intervention.

Universal Design for Learning (UDL) offers a comprehensive framework for creating learning environments, materials, and assessments that are accessible and effective for everyone, thereby reducing the need for separate “remedial” or “extended” tracks. UDL principles advocate for providing multiple means of representation (how information is presented), multiple means of action and expression (how students demonstrate learning), and multiple means of engagement (how students are motivated and involved in learning). The relevance of UDL to the South African higher education context, characterized by significant student diversity and educational inequalities, is increasingly recognized as a means to foster genuinely inclusive learning environments.

Broader Academic Development (AD) initiatives also play a crucial role. These encompass a holistic approach focusing on student development, staff development, curriculum development, and institutional development, all aimed at enhancing the quality and effectiveness of teaching and learning, with a particular focus on equity of access and outcomes. Such approaches shift the focus from “fixing the student” to creating more inherently inclusive, supportive, and effective educational systems for all learners. This represents a fundamental shift in the observation of student diversity: instead of viewing difference primarily as a deficit requiring separate remediation, these alternatives observe difference as a normal and valuable aspect of any student cohort. Consequently, the responsibility shifts to the system—the curriculum, pedagogy, and institutional structures—to become flexible and accommodating.

C. Concluding Call to Action: A Luhmannian Re-Coding

The challenge for South African higher education is to move towards models of student support that are truly empowering and equitable, without inadvertently creating new forms of marginalization. This requires a systemic re-evaluation of how student diversity and preparedness are understood and addressed. From a Luhmannian perspective, this means fundamentally changing the “codes” and “distinctions” that the educational system uses to make sense of its student population and to organize its operations.

Instead of the dominant ECP/mainstream distinction, which carries inherent risks of stigmatization and negative psychological consequences, institutions should actively explore and implement ways to “re-code” support as an integral and invisible part of a flexible, universally designed, and high-quality educational experience for every student. This involves fostering institutional cultures that value diversity not as a problem to be managed through separation, but as a strength that enriches the learning environment for all. Educational anti-stigma interventions are also crucial to challenge and correct misinformed perceptions about students who may require additional academic pathways or support.

Successfully implementing such systemic changes—embracing flexible curricula, embedding UDL principles, and de-emphasizing separate, potentially stigmatizing programmes—is not a simple task. It necessitates a significant cultural shift within higher education institutions. This includes substantial investment in ongoing academic staff development to equip educators with the pedagogical skills and inclusive mindsets required. It also demands a concerted effort to challenge and dismantle entrenched “deficit discourses” about students, which can be deeply ingrained in institutional practices and attitudes. Furthermore, it requires a critical look at resource allocation to ensure that inclusive mainstream models are adequately supported.

A true commitment to equity and student well-being requires moving beyond models that, however well-intentioned, may reproduce the very inequalities and psychological harms they seek to overcome. The ultimate goal is to create a higher education system that is more observant of its own impact on students and more adaptive to the diverse needs of all learners, without resorting to labels that can wound and exclude. This is not merely a technical or structural adjustment but a call for profound institutional and cultural transformation, ensuring that every student has the opportunity to thrive, both academically and psychologically.

The AI “Assessment Arms Race”: An Unfolding Dance of Adaptation in Higher Education

Higher education is caught in an “assessment arms race” where students use AI for assignments, and universities develop new methods to counter it. This essay explains this dynamic through Niklas Luhmann’s “double contingency” theory, showing how this unpredictable back-and-forth isn’t just a problem but a constant driver of change. It explores how universities adapt, and predicts a future of perpetual adaptation rather than a stable resolution.

The “assessment arms race” in higher education is a long-standing challenge that has taken on a striking modern form. The phenomenon predates AI and has evolved alongside successive technological advancements: it refers to the ongoing escalation around assessment, in which students seek new ways of completing their work and universities, in turn, develop more sophisticated pedagogical approaches to counter them. The contemporary phase is particularly marked by students increasingly using artificial intelligence (AI) for academic assignments, and by universities responding to this novel tool. This back-and-forth, where each side’s actions depend on and react to the other’s, illustrates what the sociologist Niklas Luhmann called “double contingency.” It is a dynamic, unpredictable dance that paradoxically drives significant change and adaptation within the academic world.

Double Contingency: Understanding Unpredictable Interactions

Imagine two “black boxes”—individuals or complex systems like universities—trying to interact. They influence each other’s behavior, but neither can fully understand or predict what’s happening inside the other. This is double contingency. Early thinkers like Talcott Parsons saw this as a problem of needing shared understanding to make interactions stable. However, Luhmann went further, suggesting that this inherent unpredictability isn’t just a problem to be solved, but a fundamental source of dynamic change. Without a pre-existing agreement, this “pure circle of self-referential determination” introduces an element of chance and makes any apparent consensus fragile.

This unpredictability isn’t just an external issue; it’s a built-in feature of the system itself. The strange truth is that while true communication requires overcoming this uncertainty, resolving it doesn’t mean perfect harmony or mutual understanding in the traditional sense. Instead, this dynamic acts like a “catalyst,” forcing constant, often surprising, decisions. It allows a continually evolving social order to emerge, where instability itself becomes the foundation of stability.

Luhmann saw society not as a collection of individuals or actions, but as a system made of communications. Communication, in his view, is a three-part process: selecting information from many possibilities, choosing an intentional way to express it (utterance), and then interpreting the difference between what was said and the information conveyed (understanding). When these three elements successfully combine, they create “connections within the system.” This process is “autotelic,” meaning communication primarily serves to reproduce itself, not necessarily to achieve perfect understanding or an external goal.

Solving the problem of double contingency fundamentally involves forming systems by stabilizing “expectations” rather than specific behaviors. These expectations become crucial for new systems to form and to enable ongoing communication and action. The very act of one system observing another, and being observed in return, creates a self-referential loop. This often leads to the development of “trust” or “distrust”—essential strategies that allow social systems to overcome the anxiety of unpredictable interactions. Trust, in this context, isn’t just a feeling; it’s a fundamental structure that emerges from double contingency, allowing systems to form and continue despite inherent risks.

Luhmann’s groundbreaking idea of “interpenetration” helps explain how double contingency is even possible. Interpenetration happens when two systems share their own complexities with each other, allowing each to build upon the other’s capabilities. This allows for more freedom despite increased reliance, as systems find common ground (like shared actions) but interpret and connect them in ways specific to their own internal workings. This continuous, moment-by-moment processing of unpredictable interactions is how meaningful social order is constantly renewed.

Universities as Adapting Systems

From Luhmann’s perspective, universities are complex social systems—specifically, organizations within the broader systems of science and education. They maintain their identity by distinguishing themselves from their environment and reproducing their own operations. The “assessment arms race” is an internal example of double contingency within the university system, where student academic output and the institution’s evaluation of learning are mutually dependent and unpredictable.

Universities operate based on their own internal rules and distinctions, like “truth/untruth” in scientific research or “pass/fail” in assessment. They are “operationally closed,” meaning their core activities (teaching, research, assessment) continuously fuel more activities of the same kind. However, this internal closure requires “structural coupling” with their environment, which includes individual people (students, faculty) and other social systems (like the tech industry producing AI tools).

The “arms race” constantly “irritates” the university system. Student AI use and the university’s reactions are fleeting “events” that force the system to find stability by constantly adapting. This ongoing irritation stimulates the university’s internal operations, but it doesn’t dictate exactly how the university must change. To cope, the university must “self-observe,” understanding its own boundaries and shaping its reality based on its unique perspectives.

A university’s ability to adapt depends on its capacity to increase its internal complexity to match the “hypercomplex environment” of evolving AI capabilities and student strategies. This means developing more sophisticated internal structures, such as better assessment guidelines, flexible teaching methods, and improved faculty training. The current shift toward evaluating uniquely human cognitive processes in assessment is a direct response to the complexity introduced by AI. The goal is to maintain a “complexity differential,” where the university’s internal structures are intricate enough to manage, but not overwhelmed by, the external environment, thus ensuring its continued identity and ability to make choices.

Furthermore, the “arms race” highlights how uncertain expectations are within academia. Faculty expect original student work; students expect assessments to reflect their learning. AI disrupts these expectations, pushing both sides into a “reflexive anticipation” where each tries to guess what the other expects of them. This creates a need for new “structures of expectation,” which are essential for the university’s ongoing self-reproduction. As a social system, the university is continuously forced to adjust its fundamental models because its core “substance” (like the integrity of its assessment process) is constantly changing and must be re-established.

Predicting the Future of the ‘Arms Race’

Based on Luhmann’s theory, we can make several predictions about where the “assessment arms race” is headed:

  • Continuous Instability and Internal Drive: This “arms race” isn’t a temporary phase leading to a stable outcome. Instead, it’s a perpetually “restless” and “unpredictable” dynamic. The university’s response to AI (e.g., new detection methods) becomes another “irritation” that fuels further AI innovation by students, and vice versa. This mutual self-disruption is, ironically, the “only source of its stability.” Expect continuous cycles of adaptation rather than a final “solution.”
  • Increased System Complexity and Specialization: The university system will likely become more complex internally to manage the external complexity of AI. This could lead to more specialized departments for AI-integrated teaching, academic integrity, or digital literacy. Functional systems, like education, tend to adopt “essentially unstable criteria,” constantly adapting rather than sticking to rigid standards.
  • Shift from Direct Control to Managing Uncertainty: Trying to achieve perfect control (e.g., foolproof detection) will become increasingly pointless due to the inherent unpredictability of self-referential systems. Instead, universities will become better at managing “uncertainty” as a natural condition, rather than eliminating it. This means moving away from preventing negative outcomes and focusing more on creating adaptable learning environments that can absorb and channel constant “irritations.”
  • Redefining Knowledge and Learning: The ongoing challenge from AI will likely force a fundamental re-evaluation of what “knowledge” and “learning” truly mean within the university. As AI excels at regurgitating existing information, there will be a greater emphasis on uniquely human cognitive processes—like critical analysis, creative problem-solving, ethical reasoning, and the ability to navigate ambiguous, rapidly changing situations. Assessments will need to prioritize these skills, which are less easily replicated by current AI. This evolutionary pressure will drive pedagogical innovation, moving beyond rote learning to higher-order thinking.
  • Temporal Decoupling and “Eigentime”: The university system will further develop its “eigentime,” its own internal pace for operations. This means the speed of policy changes, teaching innovations, and assessment cycles will increasingly diverge from the rapid, external pace of AI development. The system will build structures (like institutional memory and future expectations) to manage these different timeframes, allowing it to speed up or slow down its reactions independently.
  • Evolution, Not Planning, Shapes the Future: Ultimately, the future of this “arms race” won’t be determined by rational planning or a predefined end goal, but by ongoing social evolution. Society cannot predict or plan its own future; it relies on “blind variation and selective retention”—trying things out and keeping what works. The evolution of the social system confirms itself, leading to continuous adaptation without necessarily achieving an optimal fit or complete control. This means that while specific problems will be addressed, the fundamental dynamic of mutual contingency and adaptation will persist, transforming the very nature of academic life in unpredictable ways. The “future is decided not by decision but by evolution.”

Deepening Institutional Research Through a Systems-Theoretical Lens

This post explores how Niklas Luhmann’s sociological concepts of operational closure (systems maintain themselves through internal communication) and structural coupling (systems interact via stable connections and ‘irritations’) offer a valuable lens for understanding Institutional Research (IR) in universities. We examine how IR, viewed this way, functions not by direct control but by providing essential, structured information (data transformed into meaning). This enables different university units to observe themselves, make informed decisions based on their own internal logic, bridge internal and external demands, and ultimately support the university’s overall adaptation and self-organization within the complex higher education environment.


How can we better understand the vital role of Institutional Research (IR) within the complex ecosystem of a university? Two concepts from sociologist Niklas Luhmann – operational closure and structural coupling – offer a powerful framework. Thinking about IR through this lens helps clarify its essential function: providing critical information that allows different parts of the university to adapt, make informed decisions, and ultimately, help the university understand itself.

What are Operational Closure and Structural Coupling?

Before diving into IR, let’s unpack these two key ideas:

Operational Closure: Imagine a system, like a university or even society itself, that constantly creates and renews itself using its own internal processes. For social systems, the fundamental building block is communication – the ongoing cycle of sharing information, expressing it, and understanding it. Operational closure means that a system’s internal operations primarily connect to other internal operations. It’s like a closed loop where the system sustains itself through its own network of communication and decisions. This self-contained nature allows the system to develop internal complexity and act autonomously. Crucially, this internal closure is what enables the system to interact with its environment, but always on its own terms, reacting based on its internal structures and logic.

Structural Coupling: This describes how two or more independent, operationally closed systems (like different departments in a university, or the university and an external agency) establish stable connections. Think of it as a structured interface that allows systems to “irritate” or influence each other without actually controlling one another’s internal workings. One system sends a signal or stimulus (an “irritation”), and the receiving system responds based on its own internal rules and possibilities. These couplings allow systems to connect with a complex world without needing to replicate all that complexity internally. For universities and the people within them, meaning and communication are often the key mediums for these couplings.

Applying the Concepts to Institutional Research (IR)

Now, let’s see how these ideas illuminate the role of IR within a university:

The university itself can be seen as an operationally closed social system, reproducing itself through communication (meetings, policies, emails, decisions) and relying on internal distinctions (like academic vs. administrative). The people within it (students, faculty, staff) are operationally closed psychic systems, processing meaning internally. These systems interpenetrate – they rely on each other but operate distinctly. IR functions within this complex web, acting as a critical internal component and interface.

Here’s how operational closure and structural coupling help deepen our understanding of IR’s core contributions, based on established principles:

Supporting Essential Operations: IR functions are vital for the university system’s ongoing operation (its autopoiesis or self-reproduction) and adaptation. By providing necessary decision support, planning data, and reporting, IR acts as an internal necessity for the operationally closed university to navigate its environment and maintain its functions like teaching and research.

Transforming Data into Meaning: IR’s primary activity isn’t just data delivery; it’s meaning creation. It converts raw data into information that shapes understanding within the university’s communication network. This constructed interpretation influences how decision-makers perceive reality and what they consider possible, acting as a crucial input for the system’s internal processing.

Providing Responsive, Data-Driven Insights: IR exists in a dynamic relationship with information needs across the university. Through structural coupling, it provides data-driven insights (“irritations”) that support decision-making units. This isn’t direct control, but a responsive provision of stimuli that these operationally closed units can process according to their own logic.

Communicating Effectively Across Boundaries: IR reports and presentations are formal communications – syntheses of information, utterance, and (hopefully) understanding. These acts of communication are the vehicles for structural coupling. Because different audiences (departments, administrators, external bodies) operate with their own codes and logic, IR must tailor its communication (“utterance”) to effectively bridge these internal and external boundaries and achieve understanding.

Informing Individual Decision-Making: IR operates at the intersection where institutional data meets individual consciousness (another operationally closed system). For data to influence decisions, it must become relevant within an individual’s internal processing. IR acts as a structural coupling point, translating system-level data into potential “irritations” for individual sense-making.

Accounting for Information Processing: Effective communication requires acknowledging that individuals (psychic systems) process information based on their own internal structures, biases, and attention. IR must consider these factors when presenting data to increase the likelihood of uptake and influence, recognizing the operational closure of the receiving consciousness.

Analyzing Patterns Over Time: The university system exists and evolves in time. IR inherently deals with this temporal dimension, analyzing historical data, current states, and future projections. This allows the system to observe its own patterns and trends, a form of self-observation crucial for understanding its trajectory.

Illuminating Challenges and Tensions: Data doesn’t always paint a simple picture. IR analysis can reveal underlying contradictions, paradoxes, or tensions within the university system (e.g., between competing goals or resource constraints). Highlighting these points through data serves as an internal “irritation” that can prompt the system to address latent conflicts or necessary trade-offs.

Bridging Internal Operations and External Demands: IR sits organizationally within the university but constantly interacts with the broader societal environment. It manages the structural coupling between internal operations and external requirements like reporting, accreditation, and benchmarking, mediating the system-environment relationship.

Enabling Self-Observation and Improvement: Fundamentally, IR serves as a mechanism for the university system’s self-reference and self-observation. By collecting, analyzing, and communicating data about the university’s own operations back into the system, IR enables the university to understand itself and inform its future actions, driving organizational learning and improvement. This is the core of how an operationally closed system learns about itself.

Conclusion

Viewing IR through the lens of operational closure, structural coupling, and related systems concepts reveals that its power lies not in direct control, but in skillfully managing communication and providing essential, structured information – “irritations” – that other self-contained units within the university process according to their own logic. This perspective highlights the fundamental importance of high-quality IR data, thoughtful interpretation attuned to different system logics, clear communication across boundaries, and reliable interfaces (structural couplings). These elements are crucial for the university to effectively observe itself (first and second-order observation), make informed decisions, adapt to a changing environment, and ultimately, continue its ongoing process of self-organization (autopoiesis) and sensemaking within the complex world of higher education.

Predicting Long-Term Student Outcomes with Relative First-Year Performance

Performance in first-year courses—particularly when viewed in terms of relative standing among peers—is a strong and consistent predictor of whether a student ultimately completes their degree. Students consistently in the top performance quintile across these early courses graduate at exceptionally high rates, while those consistently in the bottom quintile are far more likely to leave without a qualification. This contrast underscores the importance of identifying relative academic risk early—especially because such risk is not always visible through conventional pass/fail rates or average grade thresholds. Relative performance measures, such as quintile standing or distance from the median, offer insights that remain hidden when relying solely on aggregate indicators. These approaches reveal how students perform in comparison to their peers, offering a more sensitive and independent signal of academic vulnerability that can trigger earlier and more tailored interventions. Institutions that incorporate these signals into predictive models and support systems can shift from reactive remediation to proactive, student-centered success strategies.

During an analytics meeting a couple of years ago, a member made an off-hand but memorable remark: “I always tell my students to not only look at their grades but also where they stand in relation to their friends.” The comment, though informal, sparked a line of thinking that reshaped how I approached academic performance metrics. It suggested that academic risk may not lie solely in failing grades or low averages, but in being consistently behind one’s peers—even when passing. This reflection led to the concept of “distance from the median”—a performance indicator that is not tied to the absolute value of the median itself, but to how far an individual deviates from the central tendency of the group. Unlike pass/fail markers or raw grade averages, this perspective offers a more context-sensitive understanding of academic performance and risk.

This insight found empirical traction in institutional research when I examined first-year performance in 1000-level courses. A clear pattern emerged: students whose grades are consistently higher than the median of their class (i.e., in the higher performance quintiles) graduate at much higher rates, while those consistently much lower than the median (e.g., in the bottom quintile) are far more likely to exit the institution either through academic exclusion or voluntary departure in good standing. These findings affirm that relative academic positioning offers a sharper, earlier, and more proactive lens for identifying risk than traditional measures alone.

Establishing these performance groupings is simple: students’ grades are sorted in descending order (ranked) and then divided into five equal segments (quintiles), each comprising 20% of the student cohort. Those in the top quintile are among the highest performers in their first-year courses, while those in the bottom quintile are the lowest. This method isolates the performance extremes, helping to highlight which students are most at risk and which patterns warrant further institutional attention.
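As a minimal sketch of this grouping, assuming pandas and entirely invented grades (the column names are illustrative, not our actual schema), quintile standing and distance from the median can be computed together:

```python
import pandas as pd

# Invented first-year grades for a ten-student cohort (illustrative only).
grades = pd.DataFrame({
    "student_id": ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8", "s9", "s10"],
    "final_grade": [82, 74, 68, 65, 61, 58, 55, 50, 44, 38],
})

# Rank grades in descending order, then cut the ranked cohort into five
# equal segments; quintile 1 holds the top performers.
grades["quintile"] = pd.qcut(
    grades["final_grade"].rank(method="first", ascending=False),
    q=5,
    labels=[1, 2, 3, 4, 5],
).astype(int)

# Distance from the median: how far each student sits from the cohort's
# central tendency, independent of the median's absolute value.
grades["dist_from_median"] = grades["final_grade"] - grades["final_grade"].median()
```

The same ranking logic extends unchanged to any first-year course, which is what makes the quintile view easy to apply campus-wide.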

Whether a student is excluded or chooses to leave, the result is an uncompleted degree. Encouragingly, the data suggest a modest upward trend in graduation rates even among those initially in the bottom quintile—perhaps an early signal that targeted academic interventions are gaining traction.

The implications of these patterns are substantial. If first-year course performance can reliably predict student trajectory, then those early signals must be treated as operational inputs into a system of proactive intervention. Predictive analytics allows universities to identify students who may be at risk within the first semester—or even the first few weeks—of enrollment. By aggregating signals from formative assessments, participation, and early course grades, institutions can construct actionable profiles for timely support.
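To sketch what such an aggregation might look like, here is a hedged illustration in pandas; the data, the course codes, and the two-course cutoff are all invented for the example, not an actual institutional rule.

```python
import pandas as pd

# Invented early-semester signals: one row per student per course, where
# "quintile" is the student's standing in that course (1 = top, 5 = bottom).
signals = pd.DataFrame({
    "student_id": ["s1", "s1", "s2", "s2", "s3", "s3", "s4", "s4"],
    "course":     ["MAM101", "STA101"] * 4,
    "quintile":   [5, 5, 2, 1, 5, 3, 4, 5],
})

# Flag students sitting in the bottom quintile in two or more courses as
# candidates for early, tailored intervention.
bottom_counts = (
    signals.assign(bottom=signals["quintile"] == 5)
           .groupby("student_id")["bottom"]
           .sum()
)
at_risk = bottom_counts[bottom_counts >= 2].index.tolist()
```

In practice the inputs would also include formative assessments and participation data, and the cutoff would be calibrated against historical outcomes rather than fixed at two.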

What emerges is not just a snapshot of student success, but a blueprint for institutional action. If the university takes these early academic signals seriously—treating them as diagnostic rather than merely descriptive—it can shift from passive observation to active intervention. In doing so, it transforms the first-year experience from a sorting mechanism into a launchpad. The first year is not simply a prerequisite for progress; it is a formative period that, if understood and acted upon, can shape the future of both individual learners and the institution itself.

Identifying ‘At-Risk’ Courses Through Campus-Wide Analysis of Grade Volatility

Analysis of longitudinal, campus-wide assessment data can be used to identify important differences between courses based on how grade volatility affects final grade distributions. The basic tenet here is that a well-organized course enrolling similarly capable cohorts of students year after year should have a relatively stable distribution of grades. Computing the Mean Absolute Deviation (MAD) of each course’s median grade over a 10-year period can quickly produce a list of potentially problematic courses whose class medians and grade spreads vary wildly. With minimal effort, and without delving into the complexities of pedagogy and academic administration, such an analysis provides an important signal that a course may be in trouble, motivating further investigation.

Strategies for student success mostly encompass some form of either (1) strengthening students through various support measures or (2) removing unreasonable barriers to their success. Academic analytics of assessment data can illustrate differences between courses and potentially reveal problematic ones that are not self-evident unless examined from a longitudinal perspective. This post is concerned with the latter: could there be courses that exhibit unreasonable variation, and if so, which are they and where are they located?

To answer this, we turn to statistical measures that can effectively quantify such variations. Mean Absolute Deviation (MAD) is particularly well-suited for this analysis, as it quantifies the average distance of data points from the median, making it a robust tool for assessing grade volatility over time. Additionally, when combined with the Coefficient of Variation (CoV), MAD enables a comprehensive evaluation of grading stability by considering both absolute median shifts and relative variability in student performance. These two measures together allow institutions to pinpoint courses with erratic grading patterns, guiding targeted academic interventions and quality assurance efforts.

This plot visualizes (for 6 different faculties) course stability by mapping the Mean Absolute Deviation (MAD) of median grades against the MAD of the Coefficient of Variation (CoV). The x-axis represents the MAD of median grades, while the y-axis represents the MAD of CoV, allowing us to observe how much variation exists both within and across years. The graph is divided into four quadrants using threshold lines at x=4 and y=4, creating a classification system for course stability. The bottom-left quadrant indicates courses with the least volatility, suggesting stable grading patterns and consistent student performance. In contrast, the top-right quadrant highlights courses with the highest volatility, signaling potential inconsistencies in assessment practices, instructional quality, or course design. Courses are plotted as individual points on this scatter plot, providing an intuitive way to identify outliers and prioritize further investigation into courses exhibiting extreme variability.
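A rough sketch of how these two MAD measures and the quadrant classification could be computed, assuming pandas and invented yearly summaries (real input would be ten years of final-grade records, and the CoV here is expressed in percentage points so the threshold of 4 applies on both axes):

```python
import pandas as pd

def mad(series: pd.Series) -> float:
    """Mean absolute deviation of the values from their median."""
    return (series - series.median()).abs().mean()

# Invented per-year summaries for two courses (illustrative only).
yearly = pd.DataFrame({
    "course":       ["STA101"] * 5 + ["PHY105"] * 5,
    "median_grade": [62, 63, 61, 62, 64,   55, 70, 48, 66, 52],
    "cov_pct":      [18, 19, 18, 17, 18,   15, 30, 12, 28, 14],
})

# One MAD per course for the yearly medians, one for the yearly CoV.
stability = yearly.groupby("course").agg(
    mad_median=("median_grade", mad),
    mad_cov=("cov_pct", mad),
)

# Top-right quadrant: high volatility on both axes (thresholds at 4).
stability["volatile"] = (stability["mad_median"] > 4) & (stability["mad_cov"] > 4)
```

Courses landing in the volatile quadrant, like the invented PHY105 here, are exactly the points the scatter plot surfaces for closer inspection.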

The broader significance of this approach lies in its ability to function as a signal. Courses that demonstrate significant grade volatility may not always be problematic, but they warrant closer scrutiny. In some cases, shifts in grading distributions may coincide with changes in faculty, curriculum reforms, or shifts in student demographics. In other cases, they may signal deeper issues—poorly designed assessments, inconsistent grading policies, or structural barriers that disproportionately impact student success.

From a systems theory perspective, analyzing final-grade distributions is a necessary function of the university as a self-referential entity, extracting signal from noise through selective processing of information. Fluctuations in grading patterns are not mere statistical anomalies but alarm bells indicating that a course may require closer scrutiny. By leveraging MAD in a data-driven approach, institutions move beyond reliance on faculty self-reporting or periodic program reviews, creating a continuous feedback loop that highlights courses needing attention. This methodology fosters institutional reflexivity, encouraging universities to investigate root causes, implement necessary interventions, and ultimately improve student outcomes while reinforcing academic integrity.

From Noise to Meaning: Sifting “Course Repeat Rates” Through Systems Theory

Institutional research extends beyond data analysis, often functioning as a systemic process of self-observation in higher education. Even a cursory understanding of Luhmann’s Social Systems Theory reveals that self-observation is the operation that makes it possible to transform raw data into actionable insights. It is precisely this process that enables universities to sift through vast amounts of information to identify, for example, key academic bottlenecks that influence student success—often without explicitly relying on theoretical frameworks. Therefore, recognizing metrics such as Course Repeat Rates (CRR) as institutional operations presents an opportunity to illustrate how data-driven decision-making aligns with social systems theory. By providing a framework for analyzing complex interdependencies and communication flows within educational institutions, Luhmann’s theory empowers institutional researchers to uncover underlying patterns and dynamics previously inaccessible through conventional IR approaches. The significance of this alignment for institutional research can hardly be overstated.

Institutional research often grapples with vast amounts of raw data, seeking to transform it into actionable insights that inform academic policy. One such dataset—Course Repeat Rates (CRR)—holds significant potential for understanding student progression and the structural barriers within degree programs. In a previous post, I examined how repeat rates function as indicators of academic bottlenecks, identifying courses that either facilitate student advancement or obstruct it. However, this exploration gains deeper analytical clarity when framed within Niklas Luhmann’s systems theory, particularly his model of how information moves from noise to signal to meaning.

Luhmann’s theories provide a robust conceptual foundation for understanding how universities, as autopoietic systems, filter, interpret, and act upon information. By situating institutional research within the broader academic discourse of systems theory, we do more than analyze data—we engage in a theoretical discussion about how knowledge is produced and operationalized within higher education.

Luhmann argues that systems exist in environments saturated with information, most of which is mere noise. Noise, in this sense, represents unprocessed data—vast amounts of student performance records, enrollment figures, and academic results that, without context, remain unintelligible. When examining course repeat rates, the initial dataset is just that: a collection of numbers indicating how many students enroll, pass, fail, or repeat specific courses. At this stage, the data is indiscriminate and without interpretive structure. It does not yet communicate anything meaningful to the institution.

The process of identifying signal occurs when the university system begins to filter through this mass of data, isolating patterns that warrant attention. Some courses emerge as outliers, with disproportionately high repeat rates. These courses potentially hinder student progression, delaying graduation and increasing dropout risks. Here, the system differentiates between random variations and persistent academic obstacles, recognizing that certain courses act as gatekeepers. The repeat rate ceases to be just a statistic; it becomes a signal—a piece of information that demands further investigation.
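To make the move from noise to signal concrete, a small hedged sketch in pandas: the repeat rates are fabricated, and the one-standard-deviation cutoff is chosen purely for illustration (a real analysis would justify its threshold against the campus-wide distribution).

```python
import pandas as pd

# Fabricated course-level repeat rates (fraction of a course's students
# who had to repeat it). Course codes are illustrative.
crr = pd.DataFrame({
    "course": ["MAM101", "CHE102", "ECO101", "PHY101", "STA101", "INF101"],
    "repeat_rate": [0.08, 0.07, 0.31, 0.28, 0.09, 0.06],
})

# Separate persistent obstacles from random variation: flag courses whose
# repeat rate stands well above the distribution of all courses.
threshold = crr["repeat_rate"].mean() + crr["repeat_rate"].std()
gatekeepers = crr[crr["repeat_rate"] > threshold]
```

Everything below the threshold remains noise for now; the flagged outliers become signals that demand further, contextual interpretation.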

Yet, a signal alone does not equate to meaning. In Luhmannian terms, meaning only emerges when signals are contextualized within the system’s self-referential operations. At the institutional level, this means interpreting course repeat rates not merely as numerical trends but as reflections of deeper structural and pedagogical issues. The university, as a system, must ask: Are these high-repeat courses designed in ways that disproportionately disadvantage students? Do they require curricular revisions? Should additional academic support structures be implemented? Through this process of self-referential engagement, the institution constructs meaning from the data and translates it into policy discussions, resource allocations, and strategic interventions.

By framing course repeat rates within Luhmann’s meaning-making, institutional research becomes more than just data analysis—it becomes a theoretical exercise in understanding how universities process, adapt, and evolve. Higher education institutions are not passive recipients of data; they are systems that continuously redefine themselves through the selective interpretation of information. In this way, the study of course repeat rates, for example, demonstrates how institutional research could be deeply embedded in systems theory, shaping academic policies through an ongoing feedback loop of observation, selection, and adaptation.

This discussion (and this blog) is an attempt to locate institutional research within the epistemological framework of systems theory. By invoking Luhmann, we recognize that data-driven decision-making in higher education is not a straightforward process of collecting numbers and drawing conclusions. It is a complex, systemic function, where institutions filter out noise, extract meaningful signals, and ultimately construct the knowledge that informs their operations. Thus, tracking course repeat rates is not just about measuring academic performance—it is about understanding how universities, as self-referential systems, generate meaning from information and use it to sustain their functions.

Analyzing Course Repeat Rates as Indicators of Academic Progression

Student progression and eventual graduation are directly bound to the successful completion of a series of mandatory courses. These courses not only form the backbone of degree programs but also serve as critical gatekeepers in a student’s academic journey. This exploration investigates the Course Repeat Rate (CRR) as a potential indicator of a course’s significance in determining academic progression and graduation outcomes. Given that students must repeat and pass required courses to advance, the frequency with which these courses are repeated by students in the same degree programs provides valuable insight into their role as pivotal checkpoints within degree pathways.

From time to time I am tasked with examining data trends that influence our academic environment. Recently, a request from one of our faculty prompted a closer investigation into the role of compulsory service courses within our university. These courses sometimes appear to be barriers, preventing students from advancing efficiently through their degree programs.

In addressing this issue, I proposed focusing on the course repeat rate as a tool for understanding these academic obstacles. At UCT, like many institutions worldwide, students’ progression and graduation depend on completing a series of mandatory courses. When students fail these required courses, they must retake and pass them to progress or graduate. This situation provides an opportunity to analyze how often these courses are repeated across various degree programs. By doing so, we can identify which courses function as significant gatekeepers in academic progression.
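As a sketch of the calculation itself, assuming pandas and invented enrolment records (the course codes and student IDs are illustrative, not UCT data), the repeat rate per course falls out of student-course attempt counts:

```python
import pandas as pd

# Invented enrolment records: one row per student-course attempt.
enrolments = pd.DataFrame({
    "student_id": ["s1", "s1", "s2", "s3", "s3", "s3", "s4", "s5", "s5", "s6", "s6"],
    "course": ["MAM101", "MAM101", "MAM101", "CHE102", "CHE102", "CHE102",
               "CHE102", "MAM101", "CHE102", "CHE102", "CHE102"],
    "year": [2021, 2022, 2021, 2020, 2021, 2022, 2021, 2021, 2021, 2021, 2022],
})

# Attempts per student per course, then the share of each course's
# students who needed more than one attempt.
attempts = enrolments.groupby(["course", "student_id"]).size()
repeaters = (attempts > 1).groupby("course").sum()
students = attempts.groupby("course").size()
crr = (repeaters / students).rename("repeat_rate")
```

Defining the rate over distinct students rather than raw enrolments keeps a single chronic repeater from inflating a course’s apparent gatekeeping role.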

The importance of identifying high repeat-rate courses lies in their dual role: they can facilitate student advancement or hinder it. By concentrating on these ‘gatekeeper’ courses, we can explore opportunities for intervention through curriculum modifications or additional support mechanisms. The goal is to ensure these courses act as facilitators rather than barriers. My proposal treats course repeat rates not just as data points but as indicators of importance within our academic structure. This approach aims to enhance educational efficacy at UCT by improving individual student outcomes and refining institutional practices.

About me and this blog

Institutional research, when viewed through the lens of systems theory, embodies the university’s capacity for self-observation and self-description—key operations that sustain and adapt complex systems. By exploring these concepts, I aim to locate institutional research within its proper theoretical context: as the mechanism by which the university reflects on itself, generates knowledge about its structures and processes, and adapts to changing conditions. This blog will serve as my laboratory for analyzing these ideas, testing their practical applications, and ultimately contributing to a richer understanding of how institutional research supports the university’s continuous evolution. Through thoughtful analysis and dialogue, I hope to bridge theory and practice, building a framework that not only enhances my professional growth but also advances the field of institutional research itself.
– KM Kefale


Welcome to “Systems Theory for Institutional Research”, a blog where I explore the intersections of social systems theory and higher education analytics. My name is Kende Kefale, and I am an information analyst with particular interest in higher education. This blog reflects my continued work in analyzing institutions as complex systems and leveraging data-driven insights to improve their operations and outcomes.

In 2013, I completed my PhD titled “The University as a Social System,” inspired by the groundbreaking work of Niklas Luhmann. Luhmann’s theory of social systems, which emphasizes the self-referential and operationally closed nature of systems, closely informs my approach to understanding universities. This lens allows me to analyze the interplay of subsystems within academic institutions and identify the feedback loops that drive their adaptation and evolution.

Over my career, I have worked closely with the University of Cape Town, contributing to institutional research, data analytics, and decision-making. My current role in the Institutional Information Unit and the Data Analytics for Student Success (DASS) team involves transforming institutional data into actionable insights that improve student outcomes and support evidence-based policies. I use tools like PowerBI, SQL, and Python to create impactful visualizations and prototypes that inform decisions across various university departments.

With my career trajectory now firmly set towards becoming an institutional researcher, I see this blog as a space to refine my ideas, share insights, and engage with the broader academic and professional community.

Thank you for visiting “Systems Theory for Institutional Research.” I hope you find the ideas shared here thought-provoking and relevant. Let’s explore how data, theory, and systems thinking can converge to shape the future of higher education.