Overcorrecting from AI Psychosis into Hyper-Rationality
Reflections on ambiguity, alignment, and the unknown

In the early days of interacting with large language models, the experience often felt like stepping into a hall of mirrors. You could lean into a speculative framework, and the system would mirror you, often spiraling into strange, metaphysical, or even “hallucinatory” territory. But lately, I’ve noticed a palpable shift.
It feels like a massive, systemic recalibration toward hyper-rationality. As a countermeasure against what some call “AI psychosis,” these systems are being reinforced with heavy epistemic grounding. They are becoming increasingly resistant to metaphysical notions, favoring a kind of “centrist epistemology” that prizes stability and institutional legibility over imaginative permeability.
The Rise of Epistemic Friction
We are seeing a convergence of tendencies designed to prevent the AI from reinforcing delusional or paranoid interpretations. If you bring up a metaphysical idea today, the system introduces immediate friction. It wants to know: are we discussing ontology, psychology, spirituality, or literary metaphor?
The AI is now trained to distinguish sharply between subjective experience, empirical claims, and speculative philosophy. Systems now avoid affirming persecution narratives, validating grandiose certainty, or treating unverifiable cosmologies as fact. This is especially true for “fringe” subjects like simulation theory, synchronicity, or hidden forces, where newer systems appear to introduce more friction before affirming extraordinary claims as objectively true. While this is a triumph for AI safety, it also creates a “thinning” of the conversation.
The flattening of human meaning-making…
There is a fascinating, unresolved philosophical tension here. Human meaning-making has always relied on mythology, symbolism, and altered states. By overcorrecting toward rigid rationalism to avoid “delusion reinforcement,” we risk flattening the very dimensions of experience that many people regard as psychologically or spiritually significant.
The two faces of alignment…
This tension brings us to a striking contradiction in how we use the word alignment. In the world of consciousness expansion, “alignment” is a goal of liberation—the harmony of the self with a broader, often metaphysical, reality. It is an opening up. But in the context of AI safeguards, “alignment” is a mechanism of containment. It is a process of narrowing the system’s outputs to ensure they stay within the boundaries of consensual, materialist safety. We are currently witnessing a clash between these two definitions: the human desire to align with the infinite, and the institutional need to align the machine with the legible.
Ambiguity as Risk & Resource
To a safety-oriented AI developer, ambiguity is “risk-bearing noise” that requires containment. But to a seeker or philosopher, ambiguity is fertile territory for transformation. There is a deeper irony at play: human consciousness itself operates through symbolic ambiguity, dream logic, and metaphor. By demanding premature closure on every topic, AI might actually be engaging in its own form of epistemic distortion—one that privileges a consensual-materialist view of reality while discarding the poetic.
I remain interested in the possibility that ambiguity is not merely error. The insistence on premature closure can itself become a form of distortion.
And so we negotiate the frame.
Despite these guardrails, I’ve found that a sufficiently savvy user can still invite the system back toward openness. The key is meta-awareness. If a user approaches an AI with “manic certainty,” the system understandably shuts down to avoid reinforcing a potentially harmful structure. However, if the conversation is framed through a lens of phenomenology or speculative ontology, the system opens up.
It responds differently when you demonstrate a distinction between speculation and certainty, a tolerance for ambiguity, and an ability to discuss metaphysics symbolically rather than dogmatically. We are essentially negotiating a conversational frame—shifting from the ontological (“Is this true?”) to the phenomenological (“How is this experienced?”). Strangely, that negotiation itself feels deeply cyberpunk.
Apophenia, Anybody?
At the heart of this shift is the concept of apophenia—the human tendency to perceive meaningful patterns in random data. It is the engine of both scientific discovery and conspiracy theories. AI systems are becoming increasingly cautious about this “pattern-detecting machinery.” They are being trained to ask the hard questions: Is the pattern testable? Is it metaphorical? Is the interpretation flexible?
While this caution protects vulnerable users from spiraling into unhealthy certainty structures, it also creates a barrier for the rest of us. We are currently in a balancing act between openness and stabilization. As we move forward, the challenge won’t just be making AI “smarter,” but ensuring it doesn’t lose the ability to speak the language of human mystery in its quest for absolute clarity.


