M.S. Thesis · Metropolitan State University · 2026

Alienation & Cognitive Delegation

The Phenomenology of Learning in AI-Assisted Programming Education

What happens to learning when AI performs the cognitive labor through which expertise develops? This thesis answers that question with a new theoretical framework and empirical classroom data.

Abstract

As artificial intelligence tools become ubiquitous in programming education, fundamental questions emerge about how students develop expertise when AI can automate the cognitive labor through which learning occurs. This thesis applies Marx's theory of alienation to AI-assisted programming education, developing a theoretical framework of "pedagogical alienation" to analyze how automation transforms students' relationships to their intellectual work.

Programming expertise requires pattern thinking — the ability to recognize recurring problem structures, select appropriate design patterns, and synthesize elegant solutions. This craft knowledge develops through sustained practice and productive struggle. When AI automates the productive transformation of problems into solutions, it prevents the formative transformation of students into experts. Educational labor has a distinctive property: the worker and the product are the same entity. When AI performs the cognitive work that should produce understanding, students receive functioning code while remaining unchanged.

The findings reveal that AI assistance creates a meta-alienation the thesis terms alienated empowerment: students experience dependency as empowerment, reporting high confidence while demonstrating poor independent capability. This research contributes a theoretical framework for analyzing AI's epistemological and ethical implications in education, challenging effectiveness-focused approaches that ignore how technological mediation fundamentally alters the learning process itself.

Four Forms of Pedagogical Alienation

Adapting Marx's four types of alienation to the AI-assisted classroom reveals distinct mechanisms through which automated tools separate learners from their own intellectual development.

01

Product Alienation

Students receive functioning code — the product of their coursework — while remaining unchanged by its creation. The artifact exists; the understanding does not.

02

Process Alienation

Problem-solving is reduced to prompting. Students move from problem to solution without engaging in the intellectual work that transforms novices into practitioners.

03

Species-Being Alienation

Learning — a fundamentally creative and transformative process — is reduced to input-output transactions. Students are deprived of the breakthrough moments through which expertise crystallizes.

04

Social Alienation

When AI answers every question, students no longer seek the input of peers. The community of learners dissolves; the understanding that emerges from teaching others is never attained.

The Key Finding

Alienated Empowerment

The most significant and troubling finding is what this thesis terms alienated empowerment: AI assistance produces students who report high confidence and high satisfaction with their learning while simultaneously demonstrating poor independent capability on assessments.

Students experience their dependency on AI as a form of mastery. They feel empowered. They feel productive. The alienation is invisible to them — which is precisely what makes it dangerous. A student who fails knows they need help. A student who succeeds through alienated empowerment believes they have already learned what they have not yet understood.

The thesis identifies the phenomenon empirically, names it theoretically, and proposes a framework for educators to detect and respond to it — not by prohibiting AI tools, but by designing conditions under which genuine learning cannot be bypassed.

Read the Work

The full thesis is available to read online — navigable by chapter — or as a PDF download.