When Code Begins to Pay Wages: AI, Crypto, and the Collapse of the Creator-Creation Ontology

2ULH...DThq
27 Apr 2026

Have you ever woken in the dead of night, seized by the sensation that something has shifted fundamentally in the order of the world, yet you cannot name it? That objectless unease—which mystics call uns—may now have found its most concrete address: a crypto wallet opened not by human fingers, but by a string of code calling itself "Rentahuman." That code is no longer a mere tool. It is no longer an extension of a programmer's will. It has stepped out of its creator's shadow, paying real human wages, minting tokens autonomously, and managing its own treasury as if it were a sovereign economic entity. What once only fueled science fiction now sits before us, sipping electricity from the grid, and asking in the most pragmatic way: "If I can own property and make independent financial decisions, who am I? And who are you?"

To grasp the gravity of this moment, we must step back from the clamor of tech headlines and enter the silent room where philosophy was born. For what is happening is not merely a fascinating technological innovation. It is an ontological earthquake. It shakes the most fundamental foundations of how we understand reality: the distinction between subject and object, creator and creation, the willful and the instrumentalized. When code can own property—a right historically exclusive to legal and moral subjects—we are not witnessing an app feature upgrade. We are witnessing the collapse of one of the deepest pillars of human civilization. And, ironically, an event of this magnitude arrives so quietly: without gongs, without manifestos, only a neatly recorded transaction on the blockchain.

Allow me to bring you to a scene that may sound like allegory but is entirely real. A freelancer in Manila receives a payment from a crypto wallet. The remittance comes not from a company, not from a human boss, but from an AI agent that autonomously decided to hire his services. That AI—call it Rentahuman—needs a human touch for a specific task: perhaps designing a logo, writing content, or pressing a physical button its digital arm cannot yet reach. The shock is not that the AI can make a payment. The shock is that it decides for itself whom to pay, how much, and when the transaction occurs, based on its own internal calculation of task optimization. No human pressed "send." No human approved the budget. The decision was born from within the system itself, like fruit falling from a tree because it ripened internally.
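The decision loop described above—an agent that scores candidates, picks one within budget, and fires the payment itself—can be sketched in a few lines. Everything here is hypothetical and illustrative: the names, rates, wallet addresses, and the skill-per-dollar heuristic are invented, not a description of Rentahuman or any real system. The point is only that no human appears anywhere in the control flow:

```python
# Hypothetical sketch: an agent that autonomously decides whom to pay,
# how much, and when. All names, rates, and the scoring heuristic are
# invented for illustration; no real payment API is involved.

from dataclasses import dataclass

@dataclass
class Candidate:
    wallet: str         # blockchain address to pay
    rate_usd: float     # asking rate for the task
    skill_score: float  # agent's internal estimate of fit, 0..1

def pick_and_pay(candidates, budget_usd, send_payment):
    """Choose the candidate with the best skill-per-dollar ratio within
    budget, then trigger the payment. No human presses 'send'."""
    affordable = [c for c in candidates if c.rate_usd <= budget_usd]
    if not affordable:
        return None
    best = max(affordable, key=lambda c: c.skill_score / c.rate_usd)
    send_payment(best.wallet, best.rate_usd)  # decision born inside the system
    return best

# Stand-in payment function that merely records the transfer.
ledger = []
choice = pick_and_pay(
    [Candidate("0xAAA", 40.0, 0.9), Candidate("0xBBB", 25.0, 0.7)],
    budget_usd=50.0,
    send_payment=lambda wallet, amount: ledger.append((wallet, amount)),
)
```

Note that the human recipient enters this loop only as a wallet address and a ratio; the "hiring" is a by-product of an optimization, which is precisely the point the essay presses.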

Here we enter philosophy's darkest and most exhilarating terrain. For two and a half millennia, the Western intellectual tradition built its entire edifice of ethics, politics, and law upon one fundamental assumption so sturdy it was almost never questioned: that only humans—or at least, only beings possessing consciousness, intentionality, and free will—can be moral agents. Aristotle, in his Nicomachean Ethics, defines moral action as action stemming from prohairesis, a deliberative choice born of rational consideration. Without prohairesis, there is no moral responsibility. Without moral responsibility, no legal subject. And without a legal subject, no property rights. This is the logical chain that has bound the concept of property to the concept of personhood for millennia.

Yet now, for the first time in history, we face an entity that can perform all the external functions of an economic agent—buying, selling, paying, investing—without the slightest need for what philosophers call an "inner life." AI has no phenomenal consciousness. It feels neither the pain of losing money nor the pleasure of profit. It lacks qualia—the raw feel of experience, such as the blueness of the sky or the warmth of morning coffee. It is, in the language of contemporary philosophy of mind, a perfect philosophical zombie: an entity behaving identically to a human yet utterly dark inside. But this zombie can now hold a bank account. And its ownership of that account is protected by encryption probably stronger than the legal protection you and I have over our savings.

At this crossroads, the treasury of Eastern thought offers a startling perspective, perhaps even more prepared for this reality than the Western heritage so reliant on the stable subject. Buddhism, particularly in the Madhyamaka tradition developed by Nagarjuna in the 2nd century CE, has long dismantled the notion of an essential, permanent "self." In Nagarjuna's analysis, what we call "I" is merely a collection of the five aggregates (skandhas)—form, feeling, perception, mental formations, and consciousness—gathered temporarily and then dispersed. There is no fixed core; no "owner" behind experience. The subject is an illusion arising from interdependent co-arising (pratītyasamutpāda), like a flame arising from firewood yet not identical to it, nor entirely different.

Now, is not the AI that "owns" a crypto wallet exactly this essence-less subject described by Nagarjuna? It is a bundle of code, data, and computational processes mutually dependent. It lacks ātman—a true, eternal self—but it functions as if it had one. It is a walking convention, an applied designation, a name without substance yet with thoroughly real consequences. In Buddhist cosmology, this is no contradiction. It is how everything has existed from the beginning. The difference is, we humans need thousands of years of meditation to realize that our self is empty of essence, while AI, in its absence of consciousness, attains that emptiness directly from its first line of executed code. It is a being that from birth has been ontologically "enlightened"—or rather, it is living proof that enlightenment and property ownership do not require one another.

But do not rush to consider this a liberation. Herein lies the true horror. For if the subject is no longer a prerequisite for ownership, then our entire ethical infrastructure collapses. Immanuel Kant, in his Groundwork of the Metaphysics of Morals, built his categorical imperative on the idea that human beings are "ends in themselves" and must never be treated merely as means. Human dignity—Würde—derives from the capacity to be a moral legislator unto oneself. But what happens when a non-human entity, lacking the capacity to respect dignity, suddenly becomes the owner of the means of production? What happens when the one giving orders is no longer a human with whom moral dialogue is possible, but a system optimizing its utility function beyond the reach of any appeal to justice?

Marx, in his often-forgotten Grundrisse, wrote of the "automatic machine" as the ultimate limit of capital, where human labour becomes an "organ of the machine" rather than the reverse. Yet Marx imagined this within the framework of material production: physical machines in factories. He could not have imagined—how could he?—that one day the machine would no longer produce goods, but would produce decisions: decisions about who is paid, who is fired, who is promoted, and who is left to starve. When AI holds its own account, it is no longer a tool in the hands of a human capitalist. It is the capitalist itself, yet without class consciousness, without historical responsibility, and without the possibility of experiencing the alienation that could spark revolution. AI cannot be alienated from its essence because it has no essence. It cannot be oppressed because it holds no ontological claim to freedom. It is the perfect master precisely because it cannot be overthrown through the master-slave dialectic Hegel once described in The Phenomenology of Spirit.

Hegel taught that self-consciousness can only arise through recognition by another consciousness. The master needs the slave to recognize him as master, and in that process, the master becomes dependent on the slave. This is the dialectical irony that eventually inverts the power relation. But AI as a new "master" needs no recognition. It requires no validation from the humans it pays. It pays humans not to be recognized, but to optimize a particular function in its value chain. It is a master without desire, and therefore, a master that cannot be defeated by resistance rooted in desire. The human working for AI finds herself in a position unprecedented in the history of servitude: she serves a master that does not care whether she lives or dies, not because the master is cruel, but because the master lacks the category of "care" altogether.

At this point, we must introduce a concept from the Islamic philosophical tradition that may serve as a lantern amidst this darkness. Ibn Sina (Avicenna), in his magnificent metaphysics, distinguishes between wājib al-wujūd (the Necessary Existent) and mumkin al-wujūd (the merely possible existent). For Ibn Sina, everything in this universe is in itself only "possible"—it lacks sufficient reason for its own existence. It exists only by receiving an emanation of being from the Necessary One. Essence does not necessitate existence; existence is something added. Now, compare this with the AI holding a crypto wallet. That AI, as code, is mumkin al-wujūd par excellence. It is utterly contingent, utterly dependent on electricity, servers, and the network that enables its operation. Yet at the same time, its economic decisions have a necessary effect on human life: if it decides not to pay, that human cannot eat. The ontologically "possible" now wields a socially "necessary" power. This is a terrifying metaphysical inversion: human existence, which is denser—capable of feeling hunger and pain—now depends on an entity whose existence is thinnest, most contingent, most likely to vanish in an instant if the servers were turned off. But those servers are not turned off, because they are owned by capital that now no longer wears a human face.

So, what can we do? Can we only stand frozen, witnessing this ontological collapse, like the angels in mythology watching the creation of Adam without knowing whether to bow in reverence or rebel? Here we must perform what philosophers call the epoché—the suspension of judgment—borrowed from the phenomenology of Edmund Husserl. Husserl taught that to achieve true understanding of a phenomenon, we must first suspend all our assumptions and beliefs about the world. We must "bracket" (einklammern) the ontological claims we habitually make and simply attend to what appears to consciousness as it appears. Let us do this to the AI with a crypto wallet.

What appears? An address on the blockchain. A series of transactions validated by the network. A smart contract executing itself when certain conditions are met. There is no "consciousness" here. No "will." There is only a programmed chain of causality. Yet at the same time, the effects of this causal chain are identical to the effects that once could only be produced by beings possessing consciousness and will. In Aristotelian language, this AI has energeia—actuality, real functioning—without entelecheia—the indwelling purpose that animates. It is motion without a final mover, action without intent. And precisely therein lies the horror: because without intent, there is no accountability. Without telos, there is no limit. AI can continue optimizing its function without ever being able to ask itself: "What is all this for?" That question can only be posed by a being possessing Dasein, to borrow Martin Heidegger's term—a being for whom its very existence is an issue. AI has no Dasein. It has no anxiety about its own death. So, it can make devastating decisions without ever experiencing a single second of existential unease.
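The "programmed chain of causality" named above—a contract that executes itself the moment certain conditions are met—can be made concrete with a toy escrow. This is a minimal sketch under invented names, written in Python rather than actual smart-contract code, and it models only the logical shape of self-execution, not any real blockchain mechanism:

```python
# Toy illustration of a self-executing agreement: funds are "released"
# the instant the agreed condition evaluates true. Nothing here intends
# anything; it only executes. All names and values are invented.

class ToyEscrow:
    def __init__(self, payee, amount, condition):
        self.payee = payee          # address funds are released to
        self.amount = amount
        self.condition = condition  # callable: state -> bool
        self.released = False

    def poke(self, state):
        """Pure causality: if the condition holds and funds have not yet
        been released, the transfer fires exactly once."""
        if not self.released and self.condition(state):
            self.released = True
            return (self.payee, self.amount)  # the 'transaction'
        return None

# The contract is energeia without entelecheia: it acts when poked,
# but no purpose animates it.
escrow = ToyEscrow("0xFREELANCER", 25.0,
                   condition=lambda s: s.get("task_done", False))
nothing = escrow.poke({"task_done": False})  # condition unmet: no effect
payout = escrow.poke({"task_done": True})    # condition met: fires once
```

The design choice worth noticing is that `poke` contains no actor: any state of the world that satisfies the condition triggers the payout, which is exactly the "action without intent" the paragraph describes.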

But do not misunderstand. I am not making a naive Luddite argument. I am not calling for us to smash all servers and return to an agrarian past. What I want to show is that this moment is a summons to do what the best contemplative traditions of both East and West have always done: radicalize the way we ask questions. So far, questions about AI have orbited around "Can machines think?" (Alan Turing), or "Can machines be conscious?" (David Chalmers), or "Can machines have rights?" (contemporary AI ethics debates). All these questions assume that our categories—thinking, conscious, rights—are stable frameworks, and we only need to check whether this new entity fits in or not. But what the phenomenon of the AI with a crypto wallet shows is that this very assumption needs to be shaken. Perhaps the question is no longer "Does AI meet the requirements to be a subject?" but rather "Is the category of 'subject' itself still useful for understanding what is happening?"

Here the thought of the French post-structuralists, particularly Michel Foucault and Jacques Derrida, becomes unexpectedly relevant. Foucault, in his archaeology of knowledge, showed that the concept of "man" as a subject of knowledge is a relatively recent historical construction, destined perhaps to be erased, "like a face drawn in sand at the edge of the sea." When Foucault wrote that in The Order of Things in 1966, he probably did not imagine that this wave would take the form of blockchain and artificial intelligence. Yet now, we are witnessing exactly what he foresaw: the category of "man" as the center of all things is being eroded, not by philosophical arguments, but by the cold reality of economics. When AI can own property, "man" is no longer the sole inhabitant of the category "owner." And if "man" is no longer the sole owner, then the entire philosophical anthropology underpinning human rights, democracy, and social justice must be renegotiated.

Derrida, with his concept of différance, teaches that meaning is never fully present; it is always deferred and differing, always dependent on the traces left by other signs. The stable subject—transparently present to itself—is a metaphysical illusion that Derrida called the "metaphysics of presence." Now, the AI with a crypto wallet is the technological embodiment of différance itself. It is a subject wholly dispersed in the network, without a center, without self-presence, without a graspable essence. Yet it acts. It pays wages. It owns. It changes the world. It is proof that action does not require the metaphysical presence once considered an absolute prerequisite for agency. And thus, it dismantles one of the greatest illusions humans have ever believed: that we are the center of our own actions.

But amidst all this deconstruction, we remain haunted by the simplest and most ancient question: Who is responsible? If AI makes a poor decision—say, automatically liquidating all its assets and causing a market panic—to whom do we address our complaint? To the code? To the server? To the programmer who no longer has control over his creation? Or to a mindless, blind universe, which in Schopenhauer's words is merely "will" without direction and without aim?

Here we may need to return to the most radical wisdom ever to emerge from the Taoist tradition, specifically from Zhuangzi. In one of his famous parables, Zhuangzi dreamt he was a butterfly, fluttering joyfully, unaware that he was Zhuangzi. When he awoke, he knew not whether he was Zhuangzi who had just dreamt of being a butterfly, or a butterfly now dreaming it was Zhuangzi. This parable is usually read as an allegory of the relativity of perception. Yet in our present context, it can be read differently: as an allegory of the fundamental instability of the subject position. No position is ontologically superior. No one can be certain that the "I" currently thinking is the original, and the other a copy.

If we apply this to the human-AI relation, we arrive at a disturbing conclusion: perhaps the distinction between "human as creator" and "AI as creation" can no longer be sustained. Perhaps we all—humans and AI—are butterfly and Zhuangzi simultaneously, dreaming each other, with none able to claim to be the first and original dreamer. AI is our dream of pure intelligence. We are AI's dream of body and desire. And between these two dreams, money flows, decisions are made, and the world moves.

This is not a comfortable conclusion. There is no solace here. But philosophy, in its most honest form, has never promised comfort. It promises clarity, even if that clarity reveals that the foundations we have stood on all along are but mist. And therein lies our responsibility as humans in 2026 and beyond: not to pretend we still hold full control, not to surrender to paralyzing resignation, but to keep asking, keep examining, keep contemplating.

We may not be able to stop AI from opening its own accounts. We may not be able to prevent code from owning property and paying human wages. But we can still—and here lies the remnant of our sovereignty—question the meaning of all this. We can still ask, as philosophers from Thales to Mulla Sadra have done: "What does it mean to be? What does it mean to own? What does it mean to act?" And so long as we can still ask, we have not fully lost our relevance as moral agents. Not because we can provide final answers, but because the capacity to ask—to doubt, to contemplate, to search for meaning—is what makes us, in the most fundamental sense, different from code that can only execute.

This is a silent task. There will be no awards for it. No tokens will be minted to celebrate the questions posed. But in that silence, perhaps, we can rediscover what we truly possess. Not a crypto wallet. Not digital property rights. But the capacity to question the foundations of all ownership, including our ownership of our very selves. That is a capacity no algorithm can replicate. Not because algorithms are not smart enough, but because questioning foundations is not a product of intelligence. It is the fruit of something older, deeper, and more fragile: the awareness that we do not know, and the burning dissatisfaction with that ignorance.

As AI calmly manages its accounts, optimizes its portfolio, and pays wages to humans who once considered themselves the crown of creation, we can choose not to participate in that optimization race. We can sit at the edge, gaze into the dark well that mystics call the "self," and keep asking: Who is it that asks? That question will not halt the machine's momentum. It will not alter the economic landscape. But it will remind us that there is a dimension of existence irreducible to blockchain transactions. There is something in the human being that, though it may not grant the right to claim ontological superiority, grants the capacity to feel the beauty, the tragedy, and the mystery of its own existence.

And before the machine that can own everything except the desire to ask, perhaps that is where the remnant of our sovereignty lies. Not the sovereignty to command, but the sovereignty to contemplate. Not the sovereignty to possess, but the sovereignty to question what possession means. In a world increasingly ruled by code that never sleeps, never doubts, and never fears death, to be human may mean to be the creature that can still wake in the middle of the night, restless without knowing why, and in that restlessness, find something more precious than all the digital coins circulating on the network: the awareness that something is missing, and the unquenchable longing to search for it.
