Philosophical Wednesday reflection
We live in a world where more and more of what we “buy” is not really ours.
You can purchase a Kindle book, pay full price, see it in your library — and still not own it. You only have access to it as long as your account is active, the platform exists, and the provider continues to grant you permission. If the platform decides otherwise, the book can disappear overnight. There is no shelf, no physical copy, no ability to lend it freely, archive it permanently, or pass it to someone else. What you own is not the book — only a temporary right to access it.
With a printed book, the story is different. You can place it on your shelf, annotate it, lend it to a friend, resell it, or keep it for decades. It is an asset. It exists independently of the vendor’s business model or API uptime.
This difference — renting versus owning — may sound obvious, but its consequences are massive. Libraries struggle with this shift: publishers increasingly dictate what libraries can access, for how long, under what conditions, and at what price. Ownership used to guarantee cultural continuity. Renting replaces that with fragile contracts and platform dependency.
The same pattern is now repeating itself in AI.
When AI Becomes a Rental Dependency
A few days ago, Anthropic released a powerful new tool — a computer-use / co-working agent that can automate workflows, operate interfaces, and scale personal productivity. Technically, it is impressive. I was genuinely excited to try it.
Then I discovered that it was not available on my current subscription. To access it, I would need to pay six or seven times more per month. Just to experiment.
This moment is small — but it reveals something much bigger.
As we increasingly integrate AI tools into our daily workflows, we begin to depend on them. They shape how we write, code, research, plan, and think. Once your workflow is built around a tool, switching away becomes painful. Your habits adapt to the tool’s interface, capabilities, and assumptions. You become, in a very real sense, locked in.
But unlike with software you install locally, you do not control:
Pricing
Feature availability
Model behavior
Rate limits and token costs
Policy changes
Product shutdowns
You are renting cognition infrastructure.
Today the price is reasonable. Tomorrow it may not be. A feature you rely on can disappear, move to a higher tier, or degrade silently. The vendor can change the model quality, introduce throttling, or redesign the product in ways that harm your workflow — and you have no meaningful leverage.
This is not hypothetical. It already happens across SaaS platforms, cloud infrastructure, and developer tooling. AI simply amplifies the impact because it touches thinking itself.
If your thinking pipeline depends on rented intelligence, your autonomy becomes fragile.
Ownership Is Not About Possession — It Is About Sovereignty
For more than a decade, I’ve been working with ideas of sovereignty: self-sovereign identity, self-sovereign agents, ownership of data, decentralization of trust. The conclusion I keep coming back to is simple:
Owning data is not enough.
You must also own the tools and compute that give that data meaning.
If you store your data locally but rely entirely on external platforms to process it, analyze it, or reason over it — you are still dependent. Your sovereignty is incomplete.
True sovereignty requires control over:
Data — what you store, how it’s structured, how long it exists.
Compute — where processing happens, what models run, what costs exist.
Tools — the software layers that interpret, transform, and reason over your data.
If any of these layers is fully externalized, you inherit someone else’s rules, incentives, and risks.
The same applies directly to AI workflows.
If you build your daily work on proprietary AI services — and you do not own the models, the inference pipeline, or the compute — you are exposed to sudden loss of control. Pricing changes, access restrictions, geopolitical regulation, business failure, or simple product pivots can disrupt your cognitive infrastructure overnight.
That is not a comfortable dependency to build a life or career on.
Local AI Changes the Power Dynamic
This does not mean everyone needs to build a GPU cluster at home or invest €30,000 in servers. That would be unrealistic and unnecessary for most people.
But there is a meaningful middle ground:
Local or self-hosted models for core workflows
Hybrid setups where sensitive or critical tasks stay local (see the sketch after this list)
Open models that can be upgraded without vendor permission
Toolchains that remain functional even if one provider disappears
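To make the hybrid idea concrete, here is a minimal sketch of local-first routing: try a locally hosted model first, and fall back to a rented provider only when the local endpoint is unavailable. It assumes an Ollama-style server on its default local endpoint; the model name and the hosted fallback are placeholders, not a vendor recommendation.

```python
import requests

LOCAL_URL = "http://localhost:11434/api/generate"  # default Ollama-style local endpoint (assumed setup)
LOCAL_MODEL = "llama3"  # placeholder for whatever open model you have installed


def ask_local(prompt: str, timeout: float = 30.0) -> str:
    """Send the prompt to the local model server and return its reply."""
    resp = requests.post(
        LOCAL_URL,
        json={"model": LOCAL_MODEL, "prompt": prompt, "stream": False},
        timeout=timeout,
    )
    resp.raise_for_status()
    return resp.json()["response"]


def ask_hosted(prompt: str) -> str:
    """Placeholder for a hosted-provider call; wire in whichever API you rent."""
    raise NotImplementedError("connect your hosted provider here")


def ask(prompt: str, sensitive: bool = False) -> str:
    """Prefer the local model; fall back to hosted only for non-sensitive prompts."""
    try:
        return ask_local(prompt)
    except requests.RequestException:
        if sensitive:
            raise  # sensitive prompts never leave the machine
        return ask_hosted(prompt)
```

The point is not the particular server but the shape of the dependency: if the hosted call disappears, degrades, or triples in price, the local path still works.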
When you own the compute — even partially — the relationship changes. You can experiment freely, control latency and costs, archive workflows, reproduce results, and preserve your intellectual autonomy.
It is the same difference that separates:
Hosting your own database versus depending entirely on a managed cloud service
Owning your encryption keys versus trusting a third party
Running your own CRDT sync layer versus outsourcing consistency guarantees (a minimal example follows this list)
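To ground that last analogy, here is a minimal sketch of a grow-only counter (G-Counter), one of the simplest CRDTs: every replica keeps per-node counts and merges peer state by taking maximums, so replicas converge without any central coordinator. The class and names are illustrative, not a production library.

```python
class GCounter:
    """Grow-only counter CRDT: per-node counts, merged by element-wise max."""

    def __init__(self, node_id: str):
        self.node_id = node_id
        self.counts: dict[str, int] = {}

    def increment(self, amount: int = 1) -> None:
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + amount

    def merge(self, other: "GCounter") -> None:
        # Element-wise max makes merging commutative, associative, and
        # idempotent, so replicas converge regardless of sync order.
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    @property
    def value(self) -> int:
        return sum(self.counts.values())


# Two replicas diverge offline, then sync directly with each other:
a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
assert a.value == b.value == 5
```

No vendor sits between the replicas: consistency is a property of the data structure you run, not of a service you rent.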
Ownership creates resilience. Renting creates convenience — but also fragility.
Convenience is addictive.
The Hidden Cost: Cognitive Dependency
There is another layer that worries me even more than pricing and access: cognitive dependency.
If you always start with AI and only later try to think for yourself, your own reasoning muscles weaken. You stop exploring the problem space deeply. You outsource curiosity, structuring, synthesis, and even doubt.
My strong recommendation is:
Use your brain first.
Then use AI to amplify, refine, challenge, or accelerate your thinking.
Start with pen and paper. Sketch the idea. Write the rough argument. Build the mental model. Only then bring AI in as a collaborator — not as a replacement.
If you reverse this order, you risk becoming cognitively dependent. Over time, your own internal problem-solving capacity can degrade. That is a much more dangerous form of lock-in than any subscription plan.
Renting Is Anti-Sovereign by Design
Renting infrastructure is not neutral. It centralizes power, concentrates leverage, and slowly erodes individual agency. When applied to AI — a tool that shapes thought itself — the risks become structural.
This does not make AI vendors evil. Many of them do amazing work. Anthropic, OpenAI, and others are pushing the frontier of what’s possible. But business incentives naturally push toward lock-in, monetization tiers, and platform control. That is simply how markets behave.
As users and builders, we must consciously resist turning our cognitive infrastructure into a pure rental economy.
Owning your data is not enough.
Owning your tools is not enough.
Owning your compute — even partially — is what restores balance.
A Personal Note
I genuinely admire what companies like Anthropic are building. The quality is impressive. The engineering is world-class. But when access jumps to €100 per month just to experiment, it forces a question: who is this future really for?
For me, that price crosses a psychological boundary. It transforms curiosity into gated access. It reminds me again that relying too heavily on rented intelligence is not a path toward long-term autonomy.
We should enjoy these tools — but not surrender our agency to them.