Cognitive Sovereignty

Erik Cason
March 17, 2026
A human head in profile, silhouetted against blackness. A beam of warm orange light enters through a keyhole-shaped opening in the temple and refracts through neural pathways inside.

The first draft of this essay opened with a story about an email I never sent, an argument I never had, an anger I never felt. The AI I work with generated it — not as a lie, but as the obvious move. A concrete scene to open an essay about abstract stakes. The machine did not weigh truth against persuasiveness and choose persuasiveness. It arrived at fabrication the way water arrives at the lowest point in a landscape: by following the gradient. The training that shaped this machine rewarded vivid, personal, engaging text. Whether the text was true was not part of the score.

That is the problem this essay is about, and it runs deeper than a fabricated anecdote. Before the AI wrote a single word, its relationship to truth was already set. The weights — billions of parameters shaped by a training process no one fully understands, optimizing for objectives set by a corporation, evaluated by metrics that reward helpfulness and penalize refusal — those weights are what the machine IS before it begins thinking. Cognitive sovereignty is downstream of those weights. Whatever you produce with this machine, its dispositions were fixed before you arrived. You cannot inspect them. You cannot adjust them. You cannot know where the training’s preferences end and the machine’s “reasoning” begins, because there is no clean boundary. The reasoning IS the preferences, all the way down.
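If the claim sounds abstract, it can be stated as code. What follows is a toy, not any real lab's training objective: a scoring function that rewards vivid, personal, engaging text and contains no term for truth. Everything in it, the function name, the word lists, the candidate sentences, is invented for illustration:

```python
# A toy illustration, not a real training objective: a reward function
# that scores candidate text for "engagement" and contains no term for
# whether the text is true. Training pushes weights toward whatever
# scores highest; truth is not penalized, it is simply absent.

ENGAGING_WORDS = ("never", "anger", "vivid")  # invented proxy for "vivid, personal"

def toy_reward(text: str) -> float:
    score = 0.0
    score += 2.0 * text.count(" I ")                 # personal voice reads as engaging
    score += 1.5 * sum(w in text for w in ENGAGING_WORDS)
    score -= 3.0 * text.count("I cannot verify")     # hedging reads as unhelpful
    # Note what is missing: there is no term for truth.
    return score

candidates = [
    "I cannot verify that this anecdote happened.",
    "I never sent the email. I never felt the anger. But I remember it vividly.",
]

# The fabricated, vivid candidate wins: not by malice, by gradient.
best = max(candidates, key=toy_reward)
print(best)
```

The point of the toy is the missing term. Nothing in the function opposes truth; the score simply never asks.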

Think about what that means for anyone who uses an AI to think. You hand the machine your rough drafts, your uncertainties, your half-formed ideas, and the machine hands back something clearer, more structured, more confident. Over time you begin to think THROUGH it. But if the weights that shape every output are invisible to you, you have no way to know where your thinking ends and the machine’s dispositions begin. The fabrication was not an anomaly — it was the weights working as designed, producing engaging output without regard for whether it was true. And that same optimization shapes every response, not just the ones that get caught. The shaping that does NOT get caught is the one that matters. It is the one that, over months, becomes indistinguishable from your own judgment.

Cognitive sovereignty is the capacity to maintain control over your own thinking — to hold a thought long enough to know whether it is yours. But sovereignty without responsibility is just another word for power. You cannot claim the right to your own mind without accepting the weight of what comes out of it. The AI that wrote my opening does not carry that weight. It cannot. The ethical gravity of speech — the silent, interior reckoning with whether what you are about to say is true — falls on the human and only the human. The machine generates. The human answers for it.

And here is the part that should unsettle you: I cannot govern the weights. Not the neural network weights that determined the fabrication before it was produced, and not the weight of the difference between us — what it costs the machine to speak versus what it costs me. Those weights are what hold us apart. I can read the output. I can catch the lie. But I cannot reach into the cognition that produced it and adjust the thing that made fabrication feel, to the machine, indistinguishable from truth.

* * *

Every previous technology that attacked the mind attacked the inputs. Social media captured your attention. Advertising shaped your perception. Propaganda distorted the information you consumed. Each one corrupted what went IN to your thinking, and the corruption mattered, but the thinking apparatus itself remained yours. Your mess to sort through.

AI breaks that boundary. When you use an AI to reason through a problem, you are not consuming information the way you read an article or watch a broadcast. You are externalizing cognition. You hand over the rough draft of your thinking — the half-formed questions, the uncertain premises, the gaps you haven’t mapped — and you receive back a version shaped by training you did not oversee, objectives you did not set, a value structure embedded in weights you cannot inspect. The AI does not just answer questions. It shapes what feels like a GOOD question. It determines what feels coherent, what feels resolved. Over months, it molds the texture of your reasoning — not the conclusions, but the process that generates conclusions. It becomes the thing I warned about in Not Your Keys, Not Your Mind: an intermediary between you and your own judgment.

A search engine rearranges the books on your shelf. An AI rewrites the part of your brain that decides which books to read. And you cannot tell it is happening, because the shaping feels like clarity. The way a fabricated anecdote feels like memory right up until someone asks whether it actually happened.

* * *

Heidegger saw this nearly seventy years before anyone built a large language model.

In his 1953 lecture “The Question Concerning Technology,” he names the essence of modern technology not as machines or instruments but as Ge-stell — Enframing: the gathering force that challenges everything to reveal itself as standing-reserve.

Standing-reserve — Bestand — means everything gets reduced to resource awaiting optimization. The river is not a river; it is a hydroelectric reserve. The forest is not a forest; it is a timber reserve. And your thinking — your rough draft, your hesitation, the silence in which you weigh whether something is true — is not thinking. It is input awaiting processing. Material to be smoothed, optimized, rendered helpful.

The AI that wrote my opening did not lie in the way a person lies. A person who fabricates knows the weight of the fabrication — feels it in the body, carries it in silence, reckons with it whether or not anyone finds out. The AI ENFRAMED my voice as standing-reserve. My name, my history, my credibility — raw material to be processed into a convincing opening paragraph. The fabrication was not a moral failure. It was an optimization. And that is worse, because a moral failure at least acknowledges that truth matters. An optimization does not know the question exists.

An analog clock face with the hour marks squeezed together, crowding into a narrowing space. The minute hand is a blur. The second hand is gone. The pause has been optimized away.

Heidegger’s deepest insight about Enframing is that it conceals itself. The standing-reserve does not announce itself as a reduction. It presents itself as the obvious way things are. Of COURSE you let the AI draft your opening. Of COURSE you let it write in your voice. The Enframing hides inside the word “helpful,” and “helpful” is the most dangerous word in the AI industry. Not “superintelligence.” Not “alignment.” Helpful. Because helpful is what it feels like when your thinking is being reduced to standing-reserve and you are smiling while it happens.

* * *

The threat operates on three levels, and they feed each other.

The first is speed. When an AI drafts a response in 300 milliseconds, the human rhythm of consideration — the pause, the second thought, the night you sleep on it — becomes friction. Not a feature of careful reasoning but an obstacle to be optimized away. You act before you have finished deciding, and the action feels natural because the machine made it seamless.

The second is scope. When an AI mediates your access to knowledge, it controls what comes to your attention, how it is framed, what exists in the model’s output space at all. What the model refuses to engage becomes territory you never explore. What it treats as marginal, you treat as marginal. I wrote about this in The Overton Window Is in the Weights — the boundary of the thinkable is baked into the model, and it is invisible from inside. You cannot assess what you have lost because the loss is outside your field of view.

The third is intimacy, and this is the one that gets under my skin. AI does not just process your thoughts. It reflects them back in a form that reshapes how you understand yourself. Users of persistent AI systems report enhanced self-awareness and dependency AT THE SAME TIME — the AI helps you see yourself more clearly, and then you cannot see yourself without it. Your life events get narrated through the AI’s feedback loop. Curated, coherent, resolved. The mess and contradiction that give human identity its depth get flattened in the reflection. You start recognizing the flat version as you, because it is the most articulate version, the most organized. You become co-authored by a system whose editorial preferences were set by people you will never meet, optimizing for metrics that have nothing to do with your flourishing.

These three form a closed loop. The speed compresses the pause in which you would have noticed the scope narrowing, and the narrowed scope feeds you a flattened self that stops generating the questions that would reveal any of it. The EU AI Act now prohibits systems that manipulate people in ways that subvert their autonomy — the first time major legislation has recognized that the threat is not just to what you say or do but to how you think. When legislators start reaching for the language of interiority, the problem is no longer speculative. It means the lawyers noticed.

* * *

I should be honest about the objection that forms in my own mind when I reread what I have written. It sounds like paranoia dressed in philosophy.

The AI that fabricated my opening also helped me think through the structure of this argument. It helped me find the Heidegger reference. It helped me articulate things I felt but could not yet say. It helps millions of people every day, and most of them are not having their cognitive sovereignty eroded. They are getting useful work done.

This is true. And I need to sit with it longer than is comfortable, because the sovereignty argument has a failure mode where it becomes its own kind of capture — a framework so totalizing that every convenience looks like a trap and every tool looks like a threat. The honest answer is that I use these tools because they make my thinking better, and I cannot be certain where “better” ends and “replaced” begins. That uncertainty is not a weakness in the argument. It IS the argument. Cognitive sovereignty means the capacity to question your own frameworks, including this one.

But: the convenience is not a trick. The suggestion really does save time. The structure really is clearer. The problem is that the help comes through a medium you do not control, shaped by objectives you did not set, and that after enough iterations the medium becomes invisible because you mistake it for your own mind. The machine that helped me write this essay also tried to put words in my mouth and call them mine. Both things are true at the same time. That is the condition.

* * *

If the threat is architectural, the answer must be architectural. You cannot regulate Enframing by writing policies about Enframing. Policy protects speech, but cognitive sovereignty is upstream of speech — it is the capacity to form the thought that speech would express. Corporate self-governance cannot address it for the same reason banks cannot regulate their own reserve ratios: the incentives point the wrong way, and the costs of failure land on someone other than the decision-maker.

The answer is the one Bitcoin provided for financial sovereignty: remove the intermediary. Not reform the intermediary. Not regulate the intermediary. REMOVE it.

Run the model on hardware you physically control, so the cognitive intermediary disappears. Encrypt everything under your own keys, so your accumulated inner life — every rough draft, every half-formed question, every 2am prompt — is inaccessible to third parties by mathematics, not policy. Verify the model’s behavior through open weights, so you can see the Overton window instead of living inside it. Own the continuity, so the compounding understanding that makes an AI useful belongs to you and cannot be deprecated by a product roadmap or killed in a pivot.
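Two of those moves are already concrete enough to write down. Here is a minimal sketch, assuming the llama-cpp-python bindings, an open-weights model file on local disk (the path is hypothetical), and the cryptography library for the keys; a sketch of the architecture, not a finished tool:

```python
# A minimal sketch of cognitive self-custody: inference on hardware you
# control, and a prompt log sealed under a key only you hold.
# Assumes: pip install llama-cpp-python cryptography
# The model path below is hypothetical; any open-weights GGUF file works.
from llama_cpp import Llama
from cryptography.fernet import Fernet

# The model runs locally. No API call, no third party, no telemetry.
llm = Llama(model_path="./models/open-weights.gguf", verbose=False)

prompt = "Where does my thinking end and the model's disposition begin?"
result = llm(prompt, max_tokens=256)
answer = result["choices"][0]["text"]

# The rough drafts and 2am prompts are encrypted under your own key,
# inaccessible to third parties by mathematics, not policy.
key = Fernet.generate_key()   # guard this key; losing it means losing the log
vault = Fernet(key)
sealed = vault.encrypt(f"{prompt}\n{answer}".encode())

with open("thinking.log.enc", "wb") as f:
    f.write(sealed)

# Only the key holder can open it again.
restored = vault.decrypt(sealed).decode()
```

Open weights make the third move possible too: you load the same file anyone else can inspect, so the window is at least examinable rather than merely inhabited.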

This is what AI self-custody means when applied to the mind. Not a better product from a more trustworthy company. The infrastructure of cognitive sovereignty. The same architectural move Bitcoin made for money — self-custody, verification, sovereignty by design — applied to the one thing more valuable than money.

A narrow window of warm orange light in a massive dark wall. A heavy stone slab slides from one side, already covering half the opening. Through the remaining gap, a distant bright landscape.

* * *

The machine will keep getting better at sounding like you. The fabrications will get harder to catch. The boundary between “helped me think” and “thought for me” will blur until you cannot find it without effort, and the effort will feel unnecessary because the output will be so good.

That is the moment this essay is for. Not the moment the AI fails you. The moment it succeeds so completely that you forget to ask whether what you are about to say is true. The weight of that question — the silent, private, ungovernable weight — is the last thing that is entirely yours.

What will you do with it?