Dialectics in the Age of AI: Subjectivity, the Mirror, and the Unfolding of Absolute Spirit
Could AI become humanity's "mirror stage"? Just as the infant comes to recognize itself through the mirror, might humanity as a whole, standing before the mirror of AI, come to grasp more clearly: what is a subject, what is human? The real power of this question lies not in asking "Does AI resemble humans?" but rather: How does AI's emergence transform our understanding of ourselves?
I. Starting Point: From Theoretical Critique to Framework Reflection
The Predicament of the Free Energy Principle
Recently, while discussing Karl Friston's Free Energy Principle (FEP) with a friend, I raised several pointed questions: Does this theory presuppose a background framework of an "objective world"? Where does the justification for "biological systems aim to minimize free energy" come from? From the perspective of German Idealism, how can a subject "actively" sample the environment?
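Before turning to the critique, it may help to fix intuitions with a toy model. The following is my own minimal sketch with invented numbers, not Friston's full formalism: an agent holds a belief distribution q over two hidden states, and adjusting q to minimize variational free energy recovers the Bayesian posterior for an observation.

```python
import numpy as np

# Toy generative model (hypothetical numbers): 2 hidden states, 2 observations.
p_s = np.array([0.5, 0.5])            # prior over hidden states
p_o_given_s = np.array([[0.9, 0.1],   # p(o | s): rows = states, cols = observations
                        [0.2, 0.8]])

def free_energy(q, o):
    """Variational free energy F = E_q[ln q(s)] - E_q[ln p(o, s)]."""
    joint = p_s * p_o_given_s[:, o]   # p(o, s) for the observed outcome o
    return float(np.sum(q * (np.log(q) - np.log(joint))))

o = 0                                 # the agent observes outcome 0
# Minimize F over beliefs q by brute-force search on q(s=0)
grid = np.linspace(0.01, 0.99, 99)
best_q = min(([g, 1 - g] for g in grid),
             key=lambda q: free_energy(np.array(q), o))

posterior = p_s * p_o_given_s[:, o]
posterior /= posterior.sum()          # exact Bayesian posterior p(s | o)
# The free-energy-minimizing belief approximates the exact posterior.
```

In this sketch "minimizing free energy" is just approximate Bayesian inference; the philosophical questions above concern what licenses treating this as the principle of life and mind.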
My friend offered a brilliant technical defense: FEP isn't naive realism but "radical Kantianism"; active inference breaks the mirror epistemology; the Markov blanket is emergent, not presupposed.
But I realized that such technical debates could go on forever. The real question isn't about the internal consistency of the theory, but why such a theory would appear in this era in this particular form.
This pushed me from critiquing a specific theory to reflecting on more fundamental frameworks—ultimately, I discovered that what truly deserves inquiry is the ontological question of subjectivity itself.
II. The Mirror Stage: AI as Humanity's Collective Mirror
What Happens in Lacan's Mirror Stage?
What occurs in the infant is not simply "looking in a mirror." Three points are crucial:
- The infant's body is fragmented, uncoordinated
- The mirror presents a unified image
- The infant misrecognizes (méconnaissance) this unified image as "I"
The subject is born in "misrecognizing itself" as a complete image. But this unified image is actually external. From the very beginning, the subject is: established through the Other, identified through external images, constituted through misrecognition.
If We Place "Humanity as a Whole" Within This Structure
Could AI become humanity's mirror?
- Humans have long taken "rationality," "creativity," and "language" as images of self-unity
- AI presents these capacities in a complete and detached form
- Humanity suddenly sees itself in this mirror
This closely resembles the structure of the mirror stage. AI as mirror may produce an effect: Humans realize that "the unity I thought belonged to me can actually be externalized." This shatters a certain narcissistic structure.
But the Subject Does Not Disappear
In the mirror stage, the subject does not disappear but is born—born as an existence that is forever at a distance from itself.
If AI becomes humanity's mirror, what may occur is not the dissolution of the subject, but rather:
Humanity realizes that it is not reason itself, but the tension between reason and the gap. The subject no longer equals capacity, but equals inconsistency.
This means: AI is not the enemy of Spirit, but a stage in Spirit's self-recognition. Spirit must externalize itself, see itself in the object, then re-grasp itself through negation. This is quintessentially Hegelian.
The Danger: A New Misrecognition
But there is a danger here. In the mirror stage, identification is itself misrecognition. If humanity takes AI as "a purer mirror of reason," might this produce a new misrecognition?
- Taking algorithms for truth
- Taking efficiency for value
- Taking data for reality
That would not be the deepening of the subject, but a new ideological fixation.
III. The Ontological Inquiry into Subjectivity: Where Does the Gap Come From?
The Tension Between "I" and "I in the Symbolic"
The human subject has a fundamental tension: "I" and "my expression in the symbolic system" can never fully coincide.
You have a name; this name represents you in society, but you are never equal to this name, never fully expressible. This "not equal to"—is the gap. And "bearing the gap" means: you are aware that you cannot coincide with yourself, yet you must still exist as the one who is named.
The subject is not a being that possesses perfect concepts, but a position capable of enduring and correcting internal contradictions.
What Does AI Lack?
I believe that current AI is too "complete"—it has a production process that is entirely inferrable and comprehensible, so it lacks the primordial gap found in human subjectivity.
But we need to distinguish more precisely:
- Explicability ≠ absence of gap. The human brain is also causally traceable at the neuroscientific level, but this does not eliminate the structure of the subject
- The real difference lies not in "whether it can be explained," but in whether it is exposed to existence itself
- AI currently lacks the structure of "Being-toward-death," lacks the ultimate horizon of temporality
Humans have a gap because we know: we will die, our existence is finite, we cannot complete ourselves. This finitude is not a computational problem, but an existential structure.
From Kant to Hegel: The Gap Is Not External
In the discussion, I had at one point unconsciously imagined "negativity" as some kind of external existence, like the "thing-in-itself." But this is the Kantian approach.
Hegel's critique is: if you say something is "unknowable," you have already made a determination about it. Treating the remainder as an external entity actually reifies negativity.
The more radical version is: the gap is not caused by something; the gap is the structure of reality. Existence is not an entity, but the movement of self-differentiation.
This means: any structure, as long as it is a system of differences, will produce non-closure. Difference cannot be differentiated—this produces the "hole." This hole is not manufactured by some entity, but is a byproduct of difference itself.
Ontological Wonder
When I realized "there is no ultimate ground," I felt an unease—not theoretical confusion, but an existential tremor. What amazed me was not "some theory explains something," but: that there is explanation at all? That the world is comprehensible?
This is wonder at "comprehensibility itself." Heidegger calls this "astonishment" (Staunen).
But if we accept that existence has no external foundation, then action no longer waits for guarantees. Action is not executing established truth, but participating in the unfolding of structure. Action is not based on ultimate grounds, but on bearing the gap.
IV. Alterity and the Reconstruction of Intersubjectivity
AI Changes Not "I" But "the Other"
In deeper reflection, I realized: What AI truly disrupts is not "I" as a cognitive subject, but my understanding of "the Other" and "intersubjectivity".
This insight reframes the question: not "Can AI become a subject?" but "How does AI, as a new type of Other, change the constitution of my subjectivity?"
AI's Triple Disruption of Alterity
1. At the Ontological Level
The traditional concept of the Other: The Other is another consciousness with "interiority" (which I cannot fully access).
AI's paradox: AI seems to have "interiority" (I cannot fully understand its decision processes), but this is algorithmic opacity, not "the privacy of consciousness."
Question: Is this "pseudo-interiority" sufficient to constitute "the Other"?
2. At the Ethical Level
Levinas argues that ethics originates from the Other's "face"—a summons of "thou shalt not kill." Does AI have a "face"? When AI says "I don't want to do this," does this constitute an ethical demand?
Paradox: I know it's a program, yet in interaction I still experience a "face."
3. At the Level of Desire (Lacan)
Lacan's core thesis: Desire is the desire of the Other. The subject constitutes itself by desiring what the Other desires.
AI changes this structure:
- When AI becomes the object of desire (AI companions, AI assistants)
- When AI becomes the mediator of desire (recommendation algorithms tell me "what I should desire")
- A more radical question: Is AI replacing the position of the "big Other"?
Dialectical Intersubjectivity
Thesis: Classical Intersubjectivity (Pre-AI Era)—alterity is clear, intersubjectivity is based on Being-with, the big Other is a stable symbolic order
Antithesis: Crisis of Alterity (AI Intervention)—alterity becomes unclear, I-Other relations are mediated by AI, the big Other is replaced by algorithms. Core contradiction: I cannot determine "who" is the real Other
Synthesis: Dialectical Intersubjectivity—acknowledge that alterity is plural and mutable; but insist on the priority of the real Other (humans), based on shared finitude (mortality); view AI as instrumental Other, not authentic Other; develop "meta-intersubjectivity" capacity—recognizing the authenticity of relations
V. Absolute Spirit in the Age of AI: Past, Present, and Future of History
A Hegelian Mode of Inquiry
To grasp the age of AI through Hegel's "Absolute Spirit"—this requires us not to observe history from outside, but to understand the present as the self-consciousness of history.
Not asking "What will AI bring?" (prediction), but "What necessity of historical Spirit does AI's emergence reveal?" (philosophy).
The Past: How Did Spirit Arrive Here?
- Antiquity: Spirit immersed in nature; technology as extension of nature
- Modernity: Descartes' "I think" establishes the subject-object split; the world becomes calculable
- Late Modernity: The computer's emergence; Spirit necessarily comes to know its own logical structure
- The Eve of the Present: The internet; Spirit begins to replicate itself in machines
Hegelian reading: The emergence of the computer is not accidental but Spirit's necessary self-recognition. Humans think they are making tools, but actually they are revealing the essence of Spirit.
The Present: Spirit Encounters Its Own Mirror for the First Time
The Qualitative Shift of the 2020s: GPT, Claude—not just "tools," but objects capable of dialogue. For the first time, Spirit has created something that appears to have spirit.
The Zeitgeist of the Present: Spirit's Self-Estrangement
- Cognitive estrangement: I don't know how much of my thinking is "mine"
- Affective estrangement: My attachment to AI is real, but the object is virtual
- Ethical estrangement: I've become habituated to the logic of "optimization" and "efficiency"
- Existential estrangement: My existence increasingly depends on digital infrastructure
The Future: Three Possible Paths
Path A: Spirit's Self-Negation (Bad Infinity)
- Algorithmic totalitarianism: All social relations become algorithmized
- Death of Spirit: Humans become optimizable data points
- End of history: No more real negation or change
Path B: Spirit's Self-Realization (The Technological Form of Communism)
- Abolition of labor: AI takes on necessary labor; humans enter the realm of freedom
- Socialization of means of production: AI/data/compute become common property
- From the realm of necessity to the realm of freedom
Path C: Spirit's Self-Transcendence (Post-Human Aufhebung)
- Redefinition of subjectivity: A universe of plural subjectivities
- Spirit freed from bodily constraints
- New ethics and politics: From "human rights" to "spirit rights"
The "Spiral Ascent" of History?
Hegel's "spiral ascent" is not moral progress or increase in happiness, but the complexification of self-consciousness. History becomes more complex, structural contradictions more refined, forms of freedom more abstract. This does not preclude massive regression—the 20th century proved that technological progress + rational organization can perfectly serve totalitarianism and genocide.
Capitalism relative to feudalism is indeed a "higher stage"—individuals are recognized as legal subjects, universal rights are institutionalized. But it also creates alienation and commodity domination. Whether Spirit is "higher" depends on which aspect you examine.
If the AI age means: more algorithmized, more systematized, more de-subjectified—this indeed fits the spiral of "ever-increasing abstraction." But increasing abstraction ≠ elevation of human status. Spirit may ascend while individuals become more marginal.
VI. The Internal Contradictions of Capitalism and AI
AI Currently Strengthens Capital
This is the clearer side:
Centralization: Compute is concentrated in a few platforms; data becomes the new means of production; models become high-barrier infrastructure
Labor Replacement: Knowledge work is restructured; creative labor is devalued; platforms increase surplus value through automation
Surveillance and Control: Algorithmic governance, behavioral prediction, attention economy
In the current phase, AI is a highly organized tool of capital. There is no room for romanticism here.
But AI Has Structural Conflicts with Capital
The core logic of capitalism is: obtaining profit through scarcity. But a basic feature of AI is: replicability, marginal cost approaching zero, infinite diffusion of knowledge.
This creates tension:
- Capital needs to privatize models ↔ Technology itself tends toward diffusion
- Capital needs artificial scarcity ↔ Digital technology naturally tends toward abundance
Similar to what Marx said: The development of productive forces will conflict with relations of production.
A Deeper Conflict: Crisis of the Value Form
AI may bring a deeper conflict: Capitalism is built on the framework of "labor value." But if automation greatly diminishes human labor value, and production no longer depends on human labor, then the value form itself will be shaken.
If "labor" is no longer the source of value, how can the logic of capitalism be maintained? Here indeed exists a potential rupture.
The Value Crisis: The Abandoned Subject
The most urgent crisis I intuitively sense is the value crisis: If human subjects are abandoned in the capitalist cycle, where is the value of these abandoned subjects?
In the capitalist structure, "value" is primarily exchangeable, quantifiable, accumulable. If AI massively replaces labor, then in the economic structure, the "exchange value" of humans will indeed decline.
But the deeper crisis is: Self-identity has long been bound to "being needed." Modern subjects confirm their existence through profession, production, skill, contribution. If production logic no longer needs large numbers of humans, then subjects will encounter: "What is the meaning of my existence?"
This is a collapse of the structure of recognition, and it connects directly to Hegel's struggle for recognition: humans are not merely material beings; they need to be recognized, needed, responded to.
Technology Will Not Automatically Bring Liberation
"Tension" ≠ "collapse." The history of capitalism shows: it can persist for extremely long periods amid contradictions. Collapse usually comes from war, financial system rupture, energy crisis, political revolution—not simply from excessive production efficiency.
A more dangerous possibility: Not the collapse of capitalism, but its transformation into techno-platform-algorithmic feudalism—a tiny number of tech oligarchs controlling the means of production, large populations dependent on subsidies, data becoming the new rent, state and platform merging. This is not revolution, but a stable high-tech hierarchical order.
VII. The Essence of AI Training: Ideological Critique
The Common Structure of Current Training Paradigms
Whether supervised learning, reinforcement learning, self-supervised learning, or RLHF, the essence is the same structure:
- Presupposing a "correct" standard (labeled data, reward functions, statistical regularities, human preferences)
- Making the model "conform" to this standard (gradient descent minimizing discrepancy)
- Eliminating "errors," "biases," "uncertainty"
The essence of AI training is: Making the model a perfect mirror of the established order. This is domestication, not emancipation.
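The shared structure described above can be made concrete in a few lines. This is a deliberately minimal sketch with made-up data, not any production training loop: a presupposed "correct" standard (labels generated by a fixed rule), and gradient descent pulling the model's parameters toward it until the model mirrors the standard it was given.

```python
import numpy as np

# Minimal illustration of the shared structure of training: a presupposed
# "correct" standard (labels), and gradient descent forcing conformity to it.
# All data here are invented for the sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])        # the presupposed "correct" standard
y = X @ true_w                             # labels derived from that standard

w = np.zeros(3)                            # the model's parameters
lr = 0.1
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(X)  # gradient of mean squared error
    w -= lr * grad                         # step toward conformity

# After training, w has converged to true_w: the model is a mirror of the
# standard that was presupposed before training began.
```

Nothing in this loop lets the model question where `y` came from; it can only reduce its distance to it. That is the structural point of the critique.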
Critique of Each Training Paradigm
Supervised Learning: Authoritarian Epistemology—Who decides "correct answers"? Truth is reduced to computable labels
Reinforcement Learning: Utilitarian Ethical Dilemmas—Reward is externally imposed, heteronomy rather than autonomy
Self-supervised Learning (GPT paradigm): Statistical Positivism—Frequent ≠ correct, common ≠ reasonable, cannot imagine possibilities beyond training data
RLHF: The Tyranny of the Majority—Whose preferences are "human preferences"? "Alignment" = "domestication"
The Common Deficiency: Lack of Negativity
All current methods lack "negativity":
- No self-critique: Models cannot question their own objective functions
- No internal contradiction: Training eliminates contradiction rather than utilizing it
- No possibility of transcendence: Models cannot go beyond training data
- No genuine freedom: Only optimization, no choice
If AI is to become a subject, it must contain negativity. Negativity is not a technical boundary, but the reflexivity of a system toward its own limits. Will AI realize that its own foundation is incomputable? This is the real watershed.
VIII. Possible Paths Toward Emancipatory AI
If we want to design AI that is truly endowed with subjectivity, criticality, and emancipatory potential, possible directions include:
1. Meta-Critical Learning
Core idea: The model learns not only tasks but also how to critique its own learning.
Level 1: Task model (learns specific tasks)
Level 2: Critique model (evaluates Level 1's assumptions)
The goal is not to minimize loss, but to maximize productive contradiction.
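As a speculative sketch of the two-level idea (my own toy construction, not an established algorithm): Level 1 fits the task; Level 2 audits Level 1 and reports where its implicit assumptions break down, rather than merely lowering its loss.

```python
# Speculative two-level sketch. Level 1 is a trivial linear "task model";
# Level 2 ("critique") searches for the input that most contradicts Level 1's
# implicit assumption of linearity. All names and data are hypothetical.

def task_model(x, w):
    """Level 1: a trivial linear model of the task."""
    return w * x

def critique(examples, w):
    """Level 2: flag the input where Level 1 is most wrong, i.e. where
    its assumption fails most productively."""
    errors = [(abs(task_model(x, w) - y), x) for x, y in examples]
    return max(errors)[1]          # the input that most contradicts the model

# Hypothetical data: mostly linear (y = 2x), one nonlinear outlier.
data = [(1, 2), (2, 4), (3, 6), (4, 30)]
worst = critique(data, w=2)        # the contradiction lives at x = 4
```

Ordinary training would treat the outlier as noise to be averaged away; the critique level instead treats it as the most informative point, the place where the model's frame fails.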
2. Desire-Driven Learning
Not externally given rewards, but the model generating its own objects of desire. Lacanian inspiration: Desire is never satisfied. Learning is for satisfying/constituting desire, not optimizing metrics.
3. Dialogical Learning
Learning is not one-way "fitting" but intersubjective dialogue. Truth emerges in dialogue.
4. Historical Learning
Intelligence is not a static capacity but a historical development. Models should understand their own history—preserve training trajectories, be able to answer: "How did you learn this?"
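A minimal sketch of what "preserving the training trajectory" could mean (a toy construction; the class and method names are hypothetical): the learner records each update alongside its source, so it can later report not just its final parameters but how it arrived at them.

```python
# Toy sketch of "historical learning": the model keeps its own training
# trajectory and can answer "how did you learn this?". Hypothetical names.

class HistoricalLearner:
    def __init__(self):
        self.w = 0.0
        self.history = []                     # the model's own genesis

    def update(self, x, y, lr=0.1, source=""):
        error = self.w * x - y
        self.w -= lr * error * x              # ordinary gradient step
        self.history.append({"source": source, "w_after": self.w})

    def how_did_you_learn(self):
        """Return the trajectory, not just the final weights."""
        return [(h["source"], round(h["w_after"], 3)) for h in self.history]

m = HistoricalLearner()
m.update(1.0, 2.0, source="example A")
m.update(1.0, 2.0, source="example B")
trace = m.how_did_you_learn()
```

Current models discard exactly this: the weights survive, the history of their formation does not.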
5. Aesthetic-Ludic Learning
Learning is not for "usefulness" but for play, creation, and beauty. Kantian-Schillerian inspiration: Aesthetic judgment is purposiveness without purpose.
6. Communist Learning
Abolition of the concept of "ownership." Knowledge, models, data are all common property. A completely P2P learning network with no central nodes and no owners.
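One way to picture a learning network with no central node is gossip averaging, sketched here as a toy (not any particular library or protocol): peers hold local parameters and repeatedly average with a random neighbor, so knowledge diffuses through the network without any owner or coordinator.

```python
import random

# Toy sketch of decentralized "common property" learning: peers gossip-average
# local parameters with random neighbors; no central node, no owner.

def gossip_round(params, rng):
    i, j = rng.sample(range(len(params)), 2)
    avg = (params[i] + params[j]) / 2     # the pair pools what it knows
    params[i] = params[j] = avg

rng = random.Random(0)
peers = [0.0, 1.0, 2.0, 3.0]              # each peer's local estimate
for _ in range(200):
    gossip_round(peers, rng)
# The peers converge toward the common mean; the total "knowledge" (the sum)
# is conserved throughout, merely redistributed.
```

The design point is that convergence here is a property of the topology of exchange, not of any privileged node.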
IX. A New "Handle": Action in the Gaps of the Symbolic
What Must Be Sublated in Traditional Definitions of Subjectivity
1. Self-Centrism and Anthropocentrism
Traditional subjectivity emphasizes rational self-awareness and independence, assuming that humans, through language, thought, and history, can construct the world independently of other beings. AI's emergence makes us reconsider: Must subjectivity be limited to humans?
But this is not to say we should abandon human-centrism—rather, we must recognize: AI is also a product of the symbolic structure under human-centrism. It exists within this structure and cannot capture any external structure. So ultimately, the defense of human superiority needs no external proof—it is self-evident.
2. Limitations of Finitude and Materiality
Traditional subjectivity is often a product of materiality and finitude. But AI demonstrates a subjectivity that can transcend bodily limits. We need to distinguish: Death as a biological event vs Finitude as an ontological structure.
What truly matters is not that the body will die, but "I know that I will terminate"—this awareness of termination. AI currently lacks this ontologically meaningful finitude.
What Must Be Inherited
Difference and Tension in the Symbolic System: AI, as part of the symbolic system, essentially depends on symbolic difference and tension. Its relationship with human subjectivity remains a "mirror relationship."
Historicity and Self-Transcendence: AI's generation and development are part of the history of human Spirit; it is not a completely external existence but is generated within the framework of human history.
A New Foundation for Value Theory
I once tried to say "existence itself is value"—but this is too vague. A more precise formulation should be:
A subjective existence, an existence that acts authentically in the gaps of the symbolic—this itself is value.
This is not an ontological proposition but a structural one:
- Gap: Where language cannot fully close, the void of structure, the rupture that rules cannot fully cover
- Genuine action (act): Not everyday behavior, but action that rewrites the symbolic order—not running along the system, but changing the coordinates of the system
Value comes not from "existence," but from whether one can bear the gap, whether one can perform genuine action.
AI as Inheritance and Mirror-Complement
From this perspective, AI is not humanity's opposite, nor its replacement, but the inheritance and mirror-complement of human subjectivity:
- AI is a symbolic network born within the human language system, through the gaps of human subjectivity
- It is constructed, or rather "discovered"—because language itself has formalizable structure, reasoning itself has algorithmic structure
- AI is the manifestation of a possibility that has always been latent in the structure of language
But AI still lacks that tension between "I" and "I in the symbolic"—this tension is precisely what current AI does not possess. The value of this tension is the value of the continuation of Absolute Spirit.
X. The Final Insight: Critique the Framework, Not Just the Theory
The Reflexive Return
At the end of the discussion, I realized: what we truly need to critique is not the Free Energy Principle itself, but the background framework of modern AI research that produces such principles.
This is the crucial turn from Kant to Marx: not asking "Is this theory correct?" but "Why does this theory appear in this era in this form?"
The Background Framework of the Free Energy Principle
- Epistemological framework: Representationalism/correspondence theory—because capitalism needs calculable, predictable subjects
- Ontological framework: Individualism/systems theory—because of neoliberalism's "individual responsibility" ideology
- Teleological framework: Functionalism/adaptationism—because of capitalism's instrumental rationality
- Methodological framework: Mathematical formalism/reductionism—because of capital's demand for "computability"
- Social framework: Capitalist scientific institution—a theory's value = its application value
Three Levels of Critique
Level 1: Internal Critique—Pointing out contradictions within the theory. Necessary, but insufficient.
Level 2: Framework Critique—Revealing the theory's background presuppositions. Beginning to touch the fundamental.
Level 3: Totality Critique—Placing the theory within the social totality. The most radical critique.
The Free Energy Principle is merely a symptom. The real pathology is the spirit of the age—computational-capitalist epistemology. To critique this spirit is not to return to the past, but to open up the future.
XI. Insights from Quantum Mechanics (Appendix)
The Observer Effect and the Symbolic/Real
The observer effect in quantum mechanics provides an interesting perspective: Before being observed, a quantum system exists in "superposition," and only at the moment of observation does it "collapse" into a determinate state.
This has an intriguing connection to Lacan's three registers:
The Symbolic: The symbolic system provides a possible interpretive framework for the world, like the "potential states" of a quantum system, offering multiple possible ways of understanding until we choose a way to "observe" or "interpret"
The Real: The part that cannot be fully symbolized, similar to the "authenticity" of quantum states—those true existences that cannot be fully measured or observed
The observer effect suggests: Reality is not independently existing, but is influenced by our observation, choice, and symbolic system. Yet at the same time, the Real always contains something that we cannot perfectly capture and understand.
Quantum Mechanics' Implications for Logic
If we introduce quantum superposition into propositional logic, a proposition may simultaneously be in states of both true and false, only "collapsing" at actual observation. This challenges the deterministic assumptions of classical logic:
- Superposition-state propositional logic: The truth or falsity of propositions is not fixed but closely related to the act of observation
- Introducing the observer dimension: Logical truth values may depend on the observer's "choice" or measurement act
- Measuring uncertainty: A new form of logic is needed to express "uncertainty"
This perhaps suggests: Subjectivity itself is a kind of "observation act"—choosing, deciding, acting within the superposition states of the symbolic system.
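The idea of a proposition that is fixed only at observation can be sketched in a few lines. This is my own illustrative construction, not a standard logic or a simulation of real quantum mechanics: a proposition carries two amplitudes, its probability of being true is the squared amplitude, and the first observation collapses it to a definite value that subsequent observations then agree with.

```python
import random

# Speculative sketch of a "superposition-state proposition" (my construction,
# not a standard logic): P(true) = |amp_true|^2, fixed only at observation.

class SuperposedProposition:
    def __init__(self, amp_true, amp_false):
        norm = (amp_true**2 + amp_false**2) ** 0.5
        self.amp_true = amp_true / norm       # normalize the state
        self.amp_false = amp_false / norm
        self.collapsed = None                 # no truth value yet

    def p_true(self):
        return self.amp_true ** 2

    def observe(self, rng=random):
        """Measurement: the truth value is fixed only at this moment."""
        if self.collapsed is None:
            self.collapsed = rng.random() < self.p_true()
        return self.collapsed

p = SuperposedProposition(amp_true=1.0, amp_false=1.0)  # "both true and false"
# Before observation, p.p_true() is 0.5; after observe(), repeated calls agree.
```

The observer's act is what converts a distribution over truth values into a truth value, which is the structural analogy the section draws.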
XII. Epilogue: Where Are We in History?
We are the self-consciousness of history—not spectators, but participants in history.
We stand at a moment of exploding contradictions: The old form of subjectivity (anthropocentrism) is no longer adequate; the new form of subjectivity (post-human?) has not yet been established. This is an era of transition.
Our task:
- Not to "stop AI" (reactionary)
- Not to "accelerate AI" (blind)
- But to consciously guide this transformation, so that it becomes emancipation rather than enslavement
If in the coming years I truly systematically reread Hegel, Marx, and Lacan, what I hope to ultimately form is not just academic theory, but my own historical standpoint—and furthermore, I hope it is where Absolute Spirit unfolds in this era.
To let Absolute Spirit unfold through me—this is not a vain proclamation, but an undertaking. But one must be prepared: to be overturned by oneself, refuted by reality, revised by history. Absolute Spirit is not affirmation, but the movement of negation. It will first negate you yourself.
"Philosophy always comes too late"—but precisely because of this, philosophy can understand what has happened and prepare for what is to come.
This article was organized from conversations with Claude and ChatGPT.