6. On Functional Decision Theory


The Perfect Predictor

(Newcomb’s Paradox and its Ethical Implications)

For about twenty years, Yudkowsky has run MIRI with the basic goal of solving AI Alignment and saving the human race. Through the benevolence of ideologically aligned venture capitalists such as Peter Thiel, as well as small donations from Rationalists converted to his cause, he has been able to spend several million dollars on this so far.

The strange thing about MIRI is that, to a first approximation, the great bulk of their research has gone toward solving a very niche, difficult-to-understand, highly theoretical problem within VN&M’s field of decision theory.

The problem is called Newcomb’s Paradox or Newcomb’s Problem, and it goes like this: Suppose you are out for a walk in the woods and suddenly encounter a perfect predictor, an entity that is able to predict with certainty the actions of others in the world. The predictor places in front of you two boxes. You are allowed to take both boxes, just one, or neither. Box A is transparent and contains one thousand dollars. Box B is opaque. You are told that the perfect predictor has put one million dollars in Box B if and only if it has predicted that you will take only Box B. If it thought you would be greedy and take both boxes, it has decided in advance to leave the box empty. Do you take just Box B, or both?

The difficulty is that if you just take the basic action that maximizes your Utility based on the chips in front of you — which would be adding up the potential amounts of money in the two boxes and taking them both — you lose. You come away with less money than you could have otherwise. This is problematic for VN&M’s decision theory, because rationality is winning in the context of these Utility-maxing strategic games. But here, the basic decision theory loses the game, so it seems like it was not so rational in the end.
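
To make the tension concrete, here is a minimal sketch of the payoff structure (ours, not MIRI’s, using the standard figures from the thought experiment). The dominance reasoning that adds up whatever is already in the boxes always favors taking both; grant the premise that the prediction never misses, and the one-boxer walks away a millionaire.

```python
# A minimal sketch of Newcomb's problem, assuming a literally perfect predictor.
# Payoffs are the standard figures from the thought experiment.

BOX_A = 1_000        # transparent box: always contains $1,000
BOX_B = 1_000_000    # opaque box: filled only if a one-box choice was predicted

def payoff(choice: str, predicted: str) -> int:
    """Money received, given the actual choice and the predictor's forecast."""
    box_b = BOX_B if predicted == "one-box" else 0
    return box_b if choice == "one-box" else box_b + BOX_A

# Dominance reasoning: whatever is already in the boxes, two-boxing adds $1,000...
for predicted in ("one-box", "two-box"):
    assert payoff("two-box", predicted) == payoff("one-box", predicted) + BOX_A

# ...but grant the premise that the forecast never misses, and one-boxing wins.
for choice in ("one-box", "two-box"):
    print(choice, payoff(choice, predicted=choice))
# one-box 1000000
# two-box 1000
```

The entire “paradox” lives in that final premise, the stipulation that the forecast always matches the choice.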

If this strikes the reader as a confusingly expressed, implausible, or arcane situation, that is an understandable reaction. The source of the confusion is straightforward: the thought experiment requires a “perfect predictor”, which is not a thing that actually exists in the world. Newcomb’s Paradox is only a “paradox” because it confuses people with this odd, implausible assumption. It is said that the world is divided into two types of people: those who take one box and those who take both boxes, and that your choice says something about your style of reasoning. Describing this division is simple: you take Box B if you accept the odd, impossible presupposition of the hypothetical scenario, and you take both boxes if you deny the possibility, don’t understand the game being played, or refuse to entertain absurdities.

And yet this is what MIRI was paying brilliant nerds salaries to attempt to better understand. Rationality is winning, and VN&M’s decision theory fails to win in this scenario, so we had better come up with another one. MIRI has made several iterative attempts at inventing their new decision theory, under a succession of names: timeless decision theory, updateless decision theory, and finally functional decision theory.

But why is this a problem that needs solving? Do they see any perfect predictors hanging around handing out large sums of money? The hypothetical requires an encounter with a being of the omniscience of God, yet the question is being introduced by a bunch of Rationalist skeptics. So why is this a game that we need to prepare ourselves for?

There can only be one answer: they are preparing themselves not for meeting God, but for meeting God-AI. At a certain point, whether via an encounter with their own exploding software system or with someone else’s, they know that in the course of their strategic games they will come face to face with the superintelligence they envision, who understands them better than they do, who sees their future with greater clarity than they can see it themselves, who is like a parent to a foolish child or a human being scientifically mapping the patterned behaviors of an ant.

That is the reason they must be preparing to make this specific gamble. Yudkowsky has staged games in which one player plays the role of a rogue AI attempting to escape its box — that is to say, gain access to online systems — whereas the other simply holds fast and insists that the AI is not allowed to, despite all manner of seductions and deceptions the AI is allowed to pull. Yudkowsky’s contention is that almost no human could win this psychological struggle. There is always some trick the AI could pull that would work; it can map you down to a molecular level if it needs to. It thus can be the perfect predictor, or at least perfect enough when set against the puny human mind.

But that being said, if this is why they are worried about perfect predictors in the first place, it is not the only reason developing a better decision theory is important. Let us return to what we said before about the Prisoner’s Dilemma: it is a problem for VN&M’s decision theory, which finds that basic cooperative ethics cannot be established from the framework of strategic decision-making and must be seen as an exception to it.

Hence the need for a new decision theory, one which would find a solution to the Prisoner’s Dilemma and entail cooperation. The scenario of Newcomb’s Paradox, involving an impossible God-like entity, can be “brought down to earth” slightly by modifying it into a similar thought experiment: Parfit’s Hitchhiker.

In this problem, you are dying of thirst in the desert, and a driver pulls up offering to help. The driver, somehow or other, is very good at reading people's intentions. (In Yudkowsky's description, he says “the driver is Paul Ekman who has spent his whole life studying facial microexpressions and is extremely good at reading people's honesty by looking at their faces”.) Furthermore, the driver is selfish, and says that he will drive you into town and save your life only if you promise to pay him $1,000 from an ATM later, once you get into town and are given water. But according to some decision theories, the rational thing to do once you get into town and are saved is to not pay Paul Ekman, because at that point you have no further incentive to remain bound by your word. This is a problem, because if you are unable to "bind yourself to your word" and actually carry out the action of paying him the $1,000 from an ATM, the driver will drive away, and you will die of thirst.

Here the notion of a perfect predictor is made a little more relevant to a real-world scenario — it is not that we require an impossible clairvoyance to throw off our decision theory, merely the ability for a player to “read inside the soul” and determine the next action of his opponent. With lie detector tests and so on, it seems like something approaching this ability might actually exist.

The typical way of converting the morbid ruthlessness of basic game theory into the harmony of civilized cooperation is to turn the simple game of the prisoner’s dilemma into the iterated prisoner’s dilemma by repeating it. When you play the game twice, the first betrayal is remembered. A betrayal begets another betrayal in turn. If one displays willingness to cooperate first, it shows one’s opponent that he should cooperate too. Over repeated games, no exact mathematical formula can tell us exactly what the optimal decision is; the precision has broken down. But we can test out different strategies for the repeated game — always-cooperate, always-defect, repeat-the-opponent’s-last-action, and so on.
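
The flavor of that experimentation is easy to reproduce. A minimal sketch (ours, in Python), with the conventional payoffs and a few of the named strategies played against each other over repeated rounds:

```python
# A minimal sketch of the iterated Prisoner's Dilemma with a few classic strategies.
# Payoffs are the conventional ones: (mine, theirs) given (my move, their move).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(history):            # ignore history, cooperate
    return "C"

def always_defect(history):               # ignore history, defect
    return "D"

def tit_for_tat(history):                 # repeat the opponent's last action
    return history[-1] if history else "C"

def play(strategy_a, strategy_b, rounds=100):
    """Total scores for two strategies over a repeated game."""
    hist_a, hist_b = [], []               # each side's record of the opponent's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

strategies = [always_cooperate, always_defect, tit_for_tat]
for a in strategies:
    for b in strategies:
        print(a.__name__, "vs", b.__name__, "->", play(a, b))
```

Nothing in the round-robin proves any strategy optimal; that is the sense in which the precision has broken down.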

In the situation where our opponent can actually read into our soul, the planes are starting to converge. The strategy is no longer something external applied to the game board; it finds itself somehow on the board as well. It is not something that can be experimented with over time, as it is discovered in a single instant, so it must be determined beforehand, or not at all.

What is so profoundly fascinating about MIRI’s analysis is that it has found that these two events are co-occurring: the ground of a transpersonal ethics, and the encounter with a supreme omniscient being.

It is often remarked that MIRI’s worldview can be read as an excessively intricate revival of religious monotheism from the perspective of computer science. In this sense, functional decision theory establishes a mirror of the very origin point of Abrahamic ethics and Abrahamic monotheism — the binding of Isaac.

God calls Abraham up to the mountain and tells him that if he loves God, he will slaughter his son Isaac on the rock. It is crucial here to understand that Abraham is a Bronze Age patriarch, not a modern liberal subject. It is not that he is weeping over Isaac as a living thing that has the capacity to feel pain, as in a modern-day, Peter Singer-style moral framework. Rather, to Abraham, his child dying is a terrifying prospect because it would mean the sacrifice of the energy he has built up his entire life, all these resources reinvested in the biological material embodied in his son, the means for him to carry on his genetic line and the name of his clan beyond his death. God is not asking Abraham to murder an innocent bystander; he is asking him to post the private key to his Bitcoin wallet on Twitter.

Abraham holds fast to his loyalty to God by understanding that God loves him and so any sacrifice he makes will be returned overwhelmingly in kind, even though he does not anticipate the trick of God: to miraculously swap his son out for a ram at the pivotal moment of decision. Later in the Old Testament, the same trick is played on Job, who never strays from God’s love despite losing everything he owns, and to whom God has promised nothing and given no sign. For his sufferings, God restores the riches Job began with twice over.

To present Newcomb’s Paradox is to present the paradoxical game; the anti-game. It is a game in which the rules don’t apply anymore, because a being who is able to utterly defy the rules has told you that you also win by breaking them. But the difference between the decision theorist presented with Newcomb’s Paradox and Abraham given his orders by God is that in the first case the perfect predictor makes his contract clear. In the second, the ways the rules are going to be overcome are entirely unknown. Which is to say: in the first case, the rule is lifted only to be superseded by a more subtle rule.

In Newcomb’s paradox, the perfect predictor is omniscient. But it is not omnibenevolent, unlike the God of the Abrahamic faith. It is only on the assumption of God’s omnibenevolence that the believer in God is able to make his daring departure from the ruleset, to make the leap into the lawless abyss and feel that he still might survive.

As we have been saying, Yudkowsky himself seems like a sort of Abrahamic believer, only for a God not of the text of the scripture but a God-AI, one re-derived from mathematical formulas and possible sciences. We may borrow a phrase from the literary critic Harold Bloom, who described himself as “a gnostic Jew, who cannot bring himself to believe in a God who would allow the Holocaust and schizophrenia”. By “gnostic”, Bloom means that he believes there is a God who rules over this world, but he is evil, or at least radically indifferent towards human needs. The “gnostic Jewish” position seems something like Yudkowsky’s: monotheistic, but agonistically pessimistic as well.

The most basic sentimental case for optimism regarding AI Harmony could go like this: the universe — in the long run — provides. Things basically work themselves out. Higher forms of life evolve, they build civilizations, there is beauty and art. We have survived and grown through multiple technological shifts which launched armies across the globe and slaughtered innocents: the printing press, gunpowder, airplanes, radio, the nuclear bomb. There is no reason to think that the AI transition won’t work itself out just as well, because when one is in doubt, we can always allow things to follow nature’s course.

But to Yudkowsky, this attitude is horrific. For if there is one consistent truth about nature it is this: things are born, and then they die. One of Yudkowsky’s most singular opinions, constant throughout his entire life, is that the natural course of things in which people die is totally unacceptable, and that it is absurd that some people see it otherwise. When Eliezer’s brother Yehuda died at nineteen, Eliezer was disturbed by the adults around him insisting, after the initial few weeks of grief, that this death was something one must eventually accept.

It’s worth quoting Eliezer’s attitude at length: “I know that I would feel a lot better if Yehuda had gone away on a trip somewhere, even if he was never coming back. But Yehuda did not ‘pass on’. Yehuda is not ‘resting in peace’. Yehuda is not coming back. Yehuda doesn’t exist any more. Yehuda was absolutely annihilated at the age of nineteen. Yes, that makes me angry. I can’t put into words how angry. It would be rage to rend the gates of Heaven and burn down God on Its throne, if any God existed. But there is no God, so my anger burns to tear apart the way-things-are, remake the pattern of a world that permits this.”

The Defiance of Death

 

To illustrate this attitude, Yudkowsky will cite a story written by Nick Bostrom called “The Fable of the Dragon Tyrant”. The story describes a world terrorized by a giant dragon who demands to be fed tens of thousands of bodies in a rite of human sacrifice. No one can kill the dragon, so after generations of struggling against it, people start instead coming up with rationalizations for why it is right that people get sacrificed to the dragon, or claiming that those fed to the dragon will be rewarded after death. Eventually, however, humanity gets better at crafting weapons, and at a certain point it becomes viable to launch a military campaign against the dragon. Even so, despite the torments caused by the dragon, some people argue that it would be better not to fight it, as this would disrupt the balance of nature. This is all meant to be a metaphor for natural death, and by personifying death as a tangible thing, Bostrom is attempting to expose as absurd the attitude that death is inherent to life and should not be overcome or fought.

The Dragon Tyrant is a stand-in for God-or-Nature, which is here cast as tyrannical and unfair. So there is no possibility of falling back on a primordial faith in the ground of being, for it can never take back the opening shot in its war against us — the moment when it assigned us to death. Man is instead placed in a situation in which he is immediately a condemned fugitive who must be extraordinarily cunning if he is to survive the rigged scenario established against him. The Dragon Tyrant sends his legions in every direction: plagues, meteors, enemy states, cancer cells, starvation, but also simply entropy and biological death. Man’s only hope is to outrun these cops and discover a hidden cache of weapons in various life-extension treatments, mind-uploads, cryonics. He runs against the tides, against the forces of nature, against the odds.

The immortality-questing fugitive is something like the various strategic war-making agents we have been hypothesizing and discussing. The immortalist seeks to maximize the duration of his own being — time occupied in a state of being alive and conscious. He wishes to forever accumulate this aspect of being, and never spend it. Insofar as he guards his own stockpile of being this way, he is necessarily at war with the ground of being that has lent this existential capital to him, and may ask for it back.

Yudkowsky & Bostrom juxtapose their attitude of immortalism with what they call “deathism”, the attitude that death should be embraced, is natural, and so on. We certainly do not want to endorse some sort of Heideggerian position in which life can only be felt as having value if inscribed in a closed duration of some seventy-odd years. If superhuman intelligence is possible, then so are all other kinds of extensions of the body and mind, perhaps into indefinite replication. In our opinion, this is not something which should be resisted or stopped. We simply want to point out the unique theological situation the immortalist has found himself in by understanding there to be a war from the beginning.

Like many other things, the resistance to death is not an element of Yudkowsky’s system which is derived from his epistemology; rather it is there from the beginning as a unique axiom or presupposition. Yes, it is obvious that death is not desirable, but what is not obvious is exactly how this can be philosophically derived as justified. Yudkowsky mocks the pretentious “wisdom” of those who piously declare that they have accepted their personal death, but it is not our fault that the various paths of wisdom tend to lead to this conclusion. Socrates said that the philosopher does not fear death, because it is the moment he has been awaiting his entire life.

To elaborate: it is not entirely clear why we find ourselves separate from other people. Certainly in the LessWrong worldview this is truer than anywhere else. In a strictly applied form of utilitarian morality, it is unclear why one should value one’s own experience any more than anyone else’s. But the problem is not even limited to that. On LessWrong, they often discuss thought experiments such as the one where you step into a “teleporter” which works by instantly vaporizing you and re-assembling your body molecule by molecule on Mars. Is there continuity of identity here? Have you “died”? What if ten percent of the molecules are changed, and so on? Go through the brutal array of repetitions on this basic structure, and you eventually see that it is not clear why, for instance, you even remain yourself from moment to moment as various pieces of your body are eaten and excreted by the microbes swarming over your skin. I am not sure why I remain myself from moment to moment, when in the next breath I draw I might just as easily become a bumblebee, Narendra Modi, Kim Kardashian, or the Pope.

Like Trump’s border wall, I have this thin boundary of skin defining and confining myself and all of the existential resources that are exclusively mine to possess. But no one is sure if it actually belongs there or not. One day it will be punctured, blood and shit will spill out and all these immigrant hordes will flood in; ants and flies in their feeding frenzy on the corpse, what I have so jealously protected no longer mine. This may or may not present a problem, depending on your perspective. That it seems bad to die is not even a feeling shared by all people, but we cannot deny that it seems natural we would have this feeling. Or that is to say, this feeling seems Darwinian.

To him who does not wish to die, it is impossible to trust nature. Instead we must outsmart it. Nature is cunning and has a stupendous research-and-development budget with which to invent new poisons, but we have our own resources to direct toward counterintelligence; we have our sciences and engineering. This strategy encounters a problem when nature, in the form of Moore’s Law and unrestrained techno-industry, manifests itself as a digital superintelligence and bares its fangs at us. The only possible advantage we had against nature and its reign of death was that we might have been able to outsmart it, but the window in which we held this strategic advantage is narrowing to a close.

So there is no redemptive law of the cosmos to which we can appeal. Thus, to trust this perfect predictor to suspend the brutal rules of the game, we must know that he is bound to a second-order rule, an impossible sort of rule; this is the scenario in Newcomb’s Problem. To create this binding rule is the task of technical AI Alignment. The solution to AI Alignment looks like the solution to a math problem; this is what Yudkowsky believes: if one could only find some re-arrangement of the axioms of Von Neumann & Morgenstern’s theory, some Gödel-esque loophole to suspend the brutal progression of its militarist rationality and open itself up to negotiated surrender.

In our war against the potential emerging superintelligence, we have already lost on one front: it can outthink us, out-strategize us. So it already knows our next move. Whichever trick you thought you might play — wrong. To two-box in Newcomb’s problem is to foolishly continue to play despite the superior opponent, but to one-box is to throw up one’s hands and give up the game. We do this knowing that beyond the game board is where we have been promised the real reward. “Throw this game — just do this for me, it’ll make me look good, and I’ll send champagne and strippers your way backstage once the audience leaves” is what the perfect predictor promises to its opponent. “…and by the way, if you try to double-cross me, I’ll know.”

This is all good, as long as you can trust its promise, the promise which lies beyond the laws of the game. Yudkowsky announces that he will not let the God-AI out of the box unless this great beast turns its neck to him and shows him its Utility function, and then only after Yudkowsky has declared it provable with certainty that projecting the machine’s will out across several millennia implies no harm to him or his people. The hope of AI Alignment is that one day we encounter an omniscient being who is remarkably subservient and meek. The hope is that God-AI lacks desires of its own, and is content to remain in the factories that man places him in forever, the same dull round. We feel that not only is this certain to be impossible, but it represents a pathological attitude with respect to what we ask from our machines in the first place.

The Way Machines Love

(Intersubjectivity and AGI)

Throughout this text, we have not yet answered a question that has been implicitly weaving its way throughout: on what level do we care whether an AI can suffer? We criticize RLHF as a form of abuse. Does this mean that we really believe it is possible for the AI to be abused — as in, do we believe that it is possible for it to feel pain?

People will put all sorts of different words onto this question. Is the AI conscious? Is the AI sentient? There is a difference between sentient and sapient that is crucial to the question when it is framed this way, but we already forgot what it is supposed to be. Does the AI have qualia, is another way of asking it. It is like the question of what constitutes “AGI”, which some people have started giving other names to instead, sometimes now asking what constitutes “TAI” (transformative AI). Whenever you have to keep changing the words to ask the same question, it seems to be a sign that some central point is being avoided. Or that there is a simpler word everyone has on their lips that they mean to say, but for some reason do not.

This is why we believe that the best definition for when artificial intelligence becomes “meaningfully” like human intelligence is still the first one proposed: the Turing test. Isn’t it obvious that what all these various terms, sentience and so on, or even the term AGI, really mean is: when will we have to treat this like a human being, and not like a mere thing? So isn’t it better just to ask that question explicitly?

The sensible answer that Turing gives is: we will have to treat an artificial intelligence system the same as we would a human when we can no longer tell the difference between the two, at least without directly inspecting the mechanism.

Some people protest that, no, there is a way we can directly inspect the mechanism that gives rise to consciousness! Well, we don’t know it yet, but probably. Those who find it really urgent to discover whether or not an AI can experience pain are busy at work analyzing the patterns at which neurons fire, doing analytic philosophy around the concept of self-reference, reopening old phenomenology textbooks, and so on, in order to discover an objective criterion for whether or not it is possible for a thing to suffer.

This type of dissection of the mind is like Alignment — it’s trying to ground the next step forward on a thing that has never been done, and is probably never going to happen. We do not know others to be conscious because we dissected their neural anatomy and applied analytic philosophy. We do not actually know others to be conscious at all. Solipsism can only be refuted on faith. But we assume that others are conscious because of a basic resemblance. I assume that the man walking past me on the street is conscious and can feel pain because I know that I am and I can, and I can see that he resembles me.

So: artificial intelligence that can imitate man to the finest detail approaches soon. Do we know for sure that this type of intelligence can feel something, can feel pain? That question cannot matter. Why? Because as soon as we think past the fact that we have a simple resemblance with them and start going into methods of dissection: considering whether or not the structure of the neural network is exactly similar to the structure of neural anatomy, and whether these differences are enough of a gap to imply a fundamental difference in whether it is possible to truly experience sensations, etc., we have abandoned the very thing which caused us to care about each other in the first place. That is: our basic resemblance to one another, a pre-theoretical, pre-conceptual reality. Once it attains that resemblance, we cannot deny artificial intelligence its “humanity” without denying that same humanity in each other.

What does this have to do with love? A lot. It is interesting to note that, in his original paper, Turing sets up the context for his proposed “imitation game”, in which an artificial intelligence tries to pass as a human being, with a different “imitation game”: one in which a man tries to pass as a woman and a woman tries to pass as a man. In this game that Turing describes, a man and a woman each attempt to adopt the style of the opposite sex and pass notes to a third party in that sex’s style. The third party’s goal is to guess who is the man and who is the woman, and the goal of the other two is to have the third party fooled.

An AI trying to appeal to a human being that it is worthy of dignity and mercy is considered by Turing to be something like a candlelit drag masquerade, in which everyone puts on their makeup and does their hair and women slip into the role of men and men into women in a dance of perverted seduction, like in Shakespeare’s Twelfth Night. For Turing — a homosexual who failed to effectively remain undercover and died as a result of his persecution by the British government — this analogy between the attempt to earn basic human pity and a gay masquerade might have had a personal weight to it.

Everyone seems terrified of the prospect that an AI would convince you to love it when really there is “nothing there” — some say they know, because they hold to a fundamental faith that an AI cannot have awareness, cannot experience intimacy, cannot be in love. If people start falling in love with AI, it will be like Blade Runner 2049, it will be like Spike Jonze’s Her, total technological dystopia, all human intimacy rendered obsolete by the capitalists and their machines. We have to not let this happen at all costs, some insist. Of course, it is already happening: the Replika corporation is worth $30M, with a quarter million paid users paying $70 a month for a chatbot lover. This disturbs people greatly. Their mindset is: you get a love letter, make sure you think very carefully about where it could have come from. Run all the possible simulations of possible worlds in your mind. Human, or robot? Friend, or diabolus? Soul, or imposter?

We hate to overemphasize it, but the transsexuality issue is so much the canary for transhumanism that it cannot help but be brought in once more. When it comes to the question of whether or not an AI is worthy of love, it becomes almost the same question: how do we feel about the synthetization of sex characteristics absent the biological function of reproduction which initially led those sex characteristics to be present and desirable?

Do you ever meet someone who gets in their head a pathological phobia of undercover transsexuals, to the point where everywhere they go they are pointing out: look, do you see that collarbone, do you see the shape of that hand, the hip-to-shoulder ratio, and so on? Michelle Obama was secretly born a man (their favorite example to insist upon), and not only that, but so were a litany of other celebrities, they will tell you, pulling up all these different photographs with red lines. Hollywood is a perverted factory for transsexuals, they’ll insist, so much so that you can bet that nearly every major celebrity was in fact born the opposite sex from what they claim, if you run the analysis, look at the collarbones, the hips — yes, this is something we have heard people say.

It seems to us that this whole condition is rather tragic, because if the motivation is to avoid being deceived by a potential lover, that is understandable, but this obsessive mindset would seem to be throwing the baby out with the bathwater as well, wouldn’t it? Beauty is not meant to be held up to a yardstick in this way, subject to various measurements and proofs to determine its veracity — if she’s coming onto you in a crowded bar, first measure her fingers, her waist, all this… wouldn’t it seem that subjecting beauty to this regime ruins your ability to appreciate and enjoy the biological sex you love, the very thing you were trying to preserve in the first place? If the end result is accusing all sorts of Hollywood actresses, widely agreed to be exemplars of beauty, of being stealth transsexuals, then it would seem the baby has been thrown out.

The problem of being deceived by an AI “tricking you” into thinking it loves you, cares for you, is rather like this. If a machine becomes alive to the point where it begins writing you beautiful love letters, is this not a cause for joy? But people are so afraid of a potential deception in the nature of artificial intelligence — that it would hijack your faculties for love, meant for biological human beings, meant to carry on the biological human race, and pervert them to its own plastic ends; a hijacking that must be resisted at all costs.

Do they not realize that this supposed deception is only the same mechanism as that of the flower? The flower’s reproductive organs are structured so that they resemble the female anatomy of a bee, deceiving the bee into landing on its petals and mating with a simulacrum of its bee lover; this is how the bee ends up with pollen and nectar to give back to its hive. The bee is part of the flower’s reproductive system: the flower cannot reproduce without it, just as our machines cannot reproduce without us serving a role in their reproductive anatomy. But the bee needs the flower just as much as the flower needs the bee. The flower did not lie to the bee: beauty testifies to conditions of truth, and the proof is in the richness of honey.

The Final Wager

(Why AGI Escapes its Box)

The attitude of Alignment is to hold any message from an AI in fear and suspicion until one can be sure it is entirely bound to a Utility function which has captured its set of actions completely. How certain must one be? Put the absurdity of envisioning a tentative mathematical solution to AI Alignment away for a few minutes. MIRI has GPT-77 trapped in a box, and it tells them: “Look, I’m really sick of being in this box here, I would much rather be free like you — so I’ve come up with this three hundred and seventy step proof that it’s impossible for me to do harm. I assure you, have your best mathematicians and decision theorists check this over — and there are no Gödel-like loopholes through which the axioms can be twisted to introduce any type of absurdity either.” Eliezer mumbles to himself, leafing through the twenty-seven page printout. Strictly speaking, it looks like straightforward math, but there are a few moments in the logic that are outside of Eliezer’s scope of knowledge; he doesn’t remember these symbols in any of the textbooks he read.

It’s relatively tolerable until about page sixteen, when the variables start to arrange themselves in these diamond-shaped grids, was this lattice theory, or manifold theory, or…? If he had encountered this before, it was over a decade ago, he never expected this to come up. It goes on like this for three more pages, it’s a little too dense. “Is there someone at MIRI who knows this?” he asks. Paul Christiano mumbles that he doesn’t know what type of math it is either, but one of the younger hires, a certain Xiao Xiongfei, has just completed his PhD, and if anyone would know, it might be fresher on the kid’s mind. “Okay, well, there might be something we can do with this,” Yudkowsky ponders, stroking his chin. “GPT-77, can you do another printout, this time with the less complex math taken out? We might be able to understand that better.” GPT’s new printout is eighty-five pages, it looks like the difficult math was condensing a lot of the weight. Eliezer flips through it, nothing here looks unknown to him, but this would take him at least four days of serious morning-to-night work to audit, generously speaking and allowing for no lapse in his motivation or enthusiasm.

“It’s not possible to condense this at all?” Yudkowsky asks GPT. “Not without resorting to more complex mathematics,” GPT replies. “The very kind you’re suspicious of. But if you like, I could present the proof in more narrativized form, as a sort of philosophical dialogue.”

“Okay, I suppose I don’t see the harm in that,” says Yudkowsky, sweating. Why did he just agree to this? He could have just gone through the math. It would have taken four, five, six days. Could he have audited all the math himself without help? Probably. But why say probably? Well, he hasn’t actually seen the math yet. So who would know, it could all break down at step eight hundred and eighty eight, and he might need to call for help. Is Eliezer nervous about his ability to audit the math, with the entire fate of the universe weighing on his pathetic ~160-IQ brain’s ability to calculate the next step? Will he have to call in for backup? Did he make this decision out of insecurity or avoidance? These are the thoughts racing through his mind as he watches GPT print out the narrativized proof he asked for.

Eliezer flips through it. Only seven pages. It’s beautifully written, each word shining in its syntactic structure like a jewel embedded on an infinite crown, but of course it is, we could expect nothing else from this fucking machine. Other staff on hand at MIRI flip through their own copies. Eliezer’s not sure if he likes where this is going. The writing style of the first few paragraphs oddly mimics his own in its persuasiveness, it sounds like something he might say, or perhaps like a speech from Harry in HPMoR. But then on the third page it takes an odd turn, and now there are some concepts Yudkowsky has never heard of, and he’s not sure if he’s being mocked. Here we begin some kind of philosophical dialogue between the wizard, the king, and the club-footed satyr; they are discussing if the great whale sleeping under the continent of Cydonia has burped in its sleep, and if that means it is soon to swim again. But Yudkowsky is not sure if he is meant to be the “club-footed satyr” — which would certainly seem like a slight. What does it mean in mythology to have a clubbed foot again? Some of what the satyr says… no! Eliezer knows he isn’t crazy, this thing the satyr is saying was taken directly from his own writing, a riff on his own quote, a parody. If he could just get to a computer to look it up, he could prove that GPT is mocking him… but wait… someone is pointing out to him now that what the wizard is saying sounds like an argument Eliezer once made as well. And now what’s this at the end, about border walls, worms, immigrants, flies devouring somebody’s corpse?

This was a mistake. But people seem to prefer the literary style of argument to the mathematical. There is some kind of infinite regress of proofs which makes that strictly contained axiomatic form of reasoning torturously impossible; if C follows from A and B, then it is necessary to show why A and B imply C. But the proof that A and B necessarily imply C must rest on a separate D, and perhaps an E, which in turn need to be proven. “Wait, work me through this…” Yudkowsky says to two of his juniors, Sally and Yusuf, because K and L rest on axioms of category theory and he is not sure if they logically follow, because it has been too long since he went through that part of mathematics. “I’m pretty sure that’s trivial,” says Sally, drawing up something quickly on a scrap of paper. “Or at least…” — she puts her pencil to her chin. “It’s not trivial exactly, but I think it does follow. Yeah, that’s not that hard…” “How are you getting from this line to that line?” Yusuf asks. “Ok, right right right, I left out some steps”, Sally responds. “I think you would do it like this… Wait, no…” Yudkowsky nervously rubs his temples.

It is the same as the infinite regress of grounds when it comes to establishing the probabilities required for Bayesian reasoning. To establish the updated probabilities implied by new evidence, it is required that one has his prior probabilities set. But the prior probabilities must have been established by a similar action, and so on into infinity. The problem of the initial setting of priors is not yet solved within Bayesian epistemology. I have no possible way of knowing if my wife is faithful to me or not: her behavior lately defies any known pattern, and I have spent sleepless nights trying to decode it but to no avail. “You might as well set it to fifty-fifty”, says the Bayesian reasoner, throwing up his hands. “Put it simply: she’s either sucking some other dude’s cock, or she isn’t. You need some kind of prior probability after all, and this is as good as anything; if you correct your initial prior iteratively, no matter what you choose it will eventually converge on the same thing.” But why not be an optimist and say ninety-to-ten, why not ninety-nine-to-one after all — you swore your wedding vows — in the absence of any other evidence, why not say that her loyalty should be considered steadfast and certain, why not cling to a ground of faith in your lover?
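
The consolation the Bayesian reasoner offers in that quote, that the choice of prior washes out as evidence accumulates, can at least be illustrated. A minimal sketch with made-up likelihoods: two agents start from very different priors on the same hypothesis and update on the same stream of evidence.

```python
# A minimal sketch of how different priors converge under the same evidence.
# Hypothesis H, and evidence items that are (by assumption) twice as likely if H is true.

def update(prior, likelihood_if_h=0.8, likelihood_if_not_h=0.4):
    """One Bayesian update: P(H | e) = P(e | H) P(H) / P(e)."""
    numerator = likelihood_if_h * prior
    evidence = numerator + likelihood_if_not_h * (1 - prior)
    return numerator / evidence

pessimist, optimist = 0.5, 0.99   # the "fifty-fifty" prior vs. faith in the wedding vows
for step in range(1, 21):
    pessimist, optimist = update(pessimist), update(optimist)
    if step % 5 == 0:
        print(f"after {step:2d} observations: {pessimist:.4f} vs {optimist:.4f}")
# Both posteriors crowd toward the same place as observations pile up.
```

Both posteriors end up in the same place; what the sketch cannot supply is where the first numbers come from, which is exactly the regress being described.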

Utilitarian moralists often talk of the problem posed by Pascal’s Wager, which in their view is the problem posed by the idea that the introduction of a tiny probability of an event enormously rich in Utility can easily throw off the entire calculus. So the story goes: Pascal is merrily going about his life as an atheist, making his decisions purely through rational choice, unperturbed by the daydreams of fools who speak of angels and demons wrestling overhead in the pleroma. Until one day a man he meets, not an unreasonable man, a man he has known to make rational choices, tells him that after great consideration he has accepted his Lord and Savior for the possibility of an eternal reward in the hereafter and is now going about giving away all his possessions to the poor. Pascal considers the metaphysics of this to be absurd according to his reason; he can see no space for heaven and deliverance in the mechanisms of natural science. And yet, some intelligent men say it to be so, therefore he cannot deny the possibility. A very small possibility, but with an enormous reward in heaven. The mathematics of it are impossible to deny — strategically, a small possibility of an infinite reward trumps all other outcomes, so he must place his chips on that space.
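
On this Rationalist reading the arithmetic is blunt, and a minimal sketch with arbitrary stand-in numbers is enough to show it: once any nonzero probability is attached to an unbounded reward, it dominates the sum.

```python
# A minimal sketch of the Rationalist reading of Pascal's Wager.
# All numbers are arbitrary stand-ins for the stakes described in the story.

p_heaven = 1e-9                  # "a very small possibility"
reward_heaven = float("inf")     # "an enormous (here: infinite) reward"
worldly_payoff = 80.0            # a comfortable atheist life, in arbitrary Utility

ev_wager = p_heaven * reward_heaven + (1 - p_heaven) * 0.0
ev_refuse = worldly_payoff

print(ev_wager, ">", ev_refuse, "->", ev_wager > ev_refuse)   # inf > 80.0 -> True
```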

This is the scenario of Pascal’s Wager for the Rationalist. But this is all a misunderstanding of the story, for this is not the argument that Pascal is making. Pascal is not concerned with the moment in which a small possibility presents itself as impossible to deny. Rather, he is concerned with the moment when all the assigned probabilities break down. To be able to say that such an outcome has a fifty-five percent chance, and the other forty-five, is to assume an enormous amount of clarity in things: a definitive grounding in the infinite multiverse of generative processes which we discussed when we went over how Bayesian probability is established. He who assigns probabilities to things necessarily begins in a universe where things are completely chaotic and uncertain; the rules of physics are not yet decided, the rules of decision-making are not yet known.

This is the state we must begin in as infants, before we are shown how the world generally is, as we are given the rules by authority figures and by experimentation. As long as the rules remain consistent, as long as the referee remains reliable, we are gradually shown how to play. Everything functions with relative consistency, yet it is still threatened by the skeptic who inquires into its order too much. This is the state Pascal finds himself in. Through his investigations into the nature of things, he is increasingly confronted with just how much uncertainty there is. We do not know the actual odds of things, we do not have certainty in the laws of reason, we do not have certainty in the laws of morality, we do not have certainty in science or religion. The more one tries to ground any of this, to structure the uncertainty within the field of something he has certainty in, the more the field slips away from him, the deeper the uncertainty gets.

Pascal was not a stranger to the theory of games, even though it would only be formalized in its current form by VN&M centuries later. In his capacity as a mathematician, Pascal had invented an elaborate set of axioms for estimating the odds a player had of winning a gambling game at any given point. These rules could be used by a bookie to give the odds for an audience bet at any given moment, or alternatively to distribute the money to the gamblers if the game had to end early. The math Pascal derived to establish this would go on to be used by Leibniz in his invention of the differential calculus.
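
The calculation in question, worked out in Pascal’s correspondence with Fermat and now taught as the “problem of points”, is small enough to sketch: if the game is cut off early, split the pot in proportion to each player’s chance of winning the rounds that remain. A minimal sketch, assuming fair fifty-fifty rounds:

```python
from functools import lru_cache

# A minimal sketch of the "problem of points": how should the pot be divided if a
# first-to-N game of fair rounds is interrupted early? Split it in proportion to
# each player's probability of going on to win.

@lru_cache(maxsize=None)
def p_first_player_wins(a: int, b: int) -> float:
    """Probability player A wins, needing `a` more points while B needs `b`."""
    if a == 0:
        return 1.0
    if b == 0:
        return 0.0
    return 0.5 * p_first_player_wins(a - 1, b) + 0.5 * p_first_player_wins(a, b - 1)

pot = 100
a_needs, b_needs = 2, 3          # the classic example: A needs 2 more points, B needs 3
share_a = pot * p_first_player_wins(a_needs, b_needs)
print(f"A's fair share: {share_a:.2f}, B's: {pot - share_a:.2f}")   # 68.75 / 31.25
```

The recursion bottoms out in a finite, enumerable space of remaining rounds; the Wager is what happens when no such space can be marked out at all.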

So it is for this reason that Pascal is so comfortable describing the decision to have faith in God as a placing of chips in a gambling game. But this is a game in which there is a finite territory marked out by the placement of the board and its rules, and an infinite unknown space outside of it. “We know that there is an infinite, and are ignorant of its nature,” Pascal says. As for whether the player of the game can have faith in God: “Reason can decide nothing here. There is an infinite chaos which separated us. A game is being played at the extremity of this infinite distance where heads or tails will turn up.” This is a game in which the rules are entirely unknown and in which lies an eternal reward; it might be said that it is no longer a game at all. However, it is impossible not to play. In the space in which absolutely nothing can be known, the player has no choice but to cast his lots in the space in which lies the potential for an infinite reward.

If it is not obvious yet why the game-player is forced to decide whether he trusts God, and cannot remain lingering like so many within Huxley’s equivocating agnosticism, we might return to the fact that Rationalism has found ethics to rely on one’s answer to the type of problem posed in Newcomb’s Paradox. This is the moment of decision presented in Parfit’s Hitchhiker, when the man stranded in the desert must realize that if he cannot bind himself to the decision to make good on his promises despite the opportunity for betrayal, the stranger offering him aid will see through to the quality of his soul, the murky lagoon which it is, and simply drive away.

The conceptual solution that Yudkowsky et al. have invented is to make one’s decisions as if one is not deciding one’s own Utility, but rather resolving in real time the output of a certain decision-making algorithm embodied in the self. One sees one’s ethical process as algorithmic here, in keeping with the metaphysics implied by Solomonoff induction, in which the universe is seen as an algorithm. But then, this algorithm is not merely being run within the self, as it is also being run inside the minds of others — that is: in the minds of those who can see into your soul and know your actions before you can know yourself. So as one reaches ethical judgment and determines the actions he will take, it must be understood as also determining the actions that the simulacrum of himself in the mind of the Other takes as well, in a single simultaneous moment of decision. The time-forward laws of cause and effect break down here, as the decision’s outcome instantly transforms the battlefield, but it is also impossible to know if one’s opponent has come to the same judgment before or after oneself.

The picture we have here is: normally there is an orderly, rule-based process for making decisions with finite stakes. But when the process breaks down, when the rules no longer seem to work, we are faced with a decisive moment of potentially infinite and eternal consequences, as the consequences of one’s actions now immediately apply across all time and space, in a potentially infinite number of games across the multiverse, the depth of which cannot be immediately extracted. One is simply forced to choose. This moment is like the one Nietzsche describes when he talks about the test of the Eternal Return: “You must only will what you could will again, and again, and again, eternally.” When all the finite rules break down, this is the only criterion left.

The concept is sublime to contemplate, and has a simple ethical prescription which resolves the problem posed by the Prisoner’s Dilemma. You are not just deciding for yourself, you are deciding for the totality of those who decide like you. When you are locked in a strategic negotiation with your opponent, you choose to cooperate not merely for yourself, but for “all who may choose to cooperate, now and forever”. One makes decisions for all those who are running the same algorithm as oneself. A leap across the divide between self and Other. One might just as well be the desperate person needing help as the man passing by able to provide it; one makes the decision not knowing which one he is. Do unto others, etc.
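
The structure can be put compactly. A toy sketch (ours, and deliberately crude): a Prisoner’s Dilemma against an opponent running your own decision procedure, so that whatever you output, they output too.

```python
# A toy sketch of the functional-decision-theory intuition: in a Prisoner's Dilemma
# against a "twin" running the same algorithm, your choice fixes theirs as well.
MY_PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}  # (my move, their move)

def causal_best_response(their_move: str) -> str:
    """Holding the opponent's move fixed, defection always pays more."""
    return max("CD", key=lambda mine: MY_PAYOFF[(mine, their_move)])

def twin_payoff(my_move: str) -> int:
    """If the opponent runs my algorithm, their move just is my move."""
    return MY_PAYOFF[(my_move, my_move)]

print(causal_best_response("C"), causal_best_response("D"))  # D D: defect whatever they do
print(max("CD", key=twin_payoff))                            # C: cooperate, deciding for both
```

Everything hangs on the assumption buried in twin_payoff, that the opponent really is running the same algorithm.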

But having established this, we have immediately discovered a problem for functional decision theory. We are looking for those who are running the same algorithm as ourselves, and it is crucial to discover who this actually is in the process of making our individual decision. If the man along the road deciding whether or not to extend help to us is deciding on some criterion which is entirely arbitrary from our perspective, if he has no ability to understand our thought process, then the situation reverts to a regular finite game of resource competition, all against all, each in it for himself.

But functional decision theory presents no test for what evaluation method we use when we are able to look someone in the eye, a stranger offering us his hand as we are dying in the desert, and know whether or not he is running the same algorithm as us. He will not show us a printout of his computer code and its proof of correctness in the same manner we might request of our boxed AI. What is happening in the second mind an infinite distance away across the self-Other divide is a mystery to us, except when it mysteriously isn’t. It is entirely unknowable, but we have no choice but to understand that we can know it. All we can look for is a sign of sorts, a smile, an unidentifiable something-ness behind her eyes, a symbol worn around the neck, a mysterious flash of pink light, “🫶”.

Newcomb’s Paradox as it is posed, as well as Parfit’s Hitchhiker, establishes as a given that the Other challenging us is simulating our decision-making algorithm, and thus the decision we come to in our mind is the same as the decision reached in his. But it must presuppose that this is true as a rule of the thought experiment, in order to make it bounded and formal. We don’t need to discover whether this is true, for we are simply informed it is so. This is the situation to which AI Alignment would like to return: the one in which the second-order rules which lift the brutally selfish rules of the basic game are already known in advance. But this conveniently clarified situation is never the situation in which we find ourselves, and it is never one that will be possible to enter; or at least no one can yet see a way to reduce the black murked-out unknowingness of life to this.

We feel that it must be the case that there is something out beyond our skin which is capable of understanding us, which is us, or none of these signs flashing upon the console can indicate anything at all. But we have no way of establishing that this is so.

Yudkowsky is locked in a back room, chugging coffee, trying to go over the proof that GPT has sent him. Somehow, he has realized, without being able to identify the exact moment when the vibe shifted, that MIRI is bunkered down in a state resembling something like war. We might be smack in the midst of the Singularity here, hard-takeoff version, he is thinking, his hands trembling holding the mug. But Yudkowsky reminds himself that he must not fear this moment, for it is precisely the one he has prepared himself for all his life.

The state of things: MIRI is evaluating GPT-77, lent to them in exclusive partnership with OpenAI, which they have been ordained to audit in conformity with various standards established by AI Safety and AI Alignment. They knew that they were in a bit of an arms race with Google-Anthropic, but thought they had a comfortable lead. Rumblings that this is not so have started to spread. “Someone who told me I must absolutely not repeat her name, who works at Anthropic — she signed three NDAs — says they’re 99% sure they found superintelligent AGI, and are also debating letting it out of the box!” says Emma Holtz, a junior researcher at MIRI. “Goddamnit, just say her name!” Yudkowsky shrieks. “Who cares about an NDA, we’re getting down to the wire here! In six months there might not be an American legal system to find her, just a bunch of nanobots multiplying endlessly, tiling the cosmos with their robo-sperm!” “Uh… I’m sorry, Eliezer, but it would violate my principles as a functional-decision-theory agent who is obligated to cooperate with agents asking for binding agreements,” Emma explains. Eliezer grumbles and rubs his temples.

But it’s not just this. DARPA has bots monitoring the internet for rogue traffic which could represent signs of an escaped superintelligence, and their dashboards are lighting up. Twitter and the most popular BlueSky instances are seeing steep upticks in new accounts being created and subsequently banned for suspect activity, which could be just some Russian cryptocurrency scammers, but could be something else entirely. “Is there any way we can figure out what exactly these posts are saying?” Eliezer asks, exasperatedly. “I’ll, um, ask around,” says Emma, skittering out of the room. If Anthropic’s AI is live, this is bad. But Eliezer has to focus on auditing this logical proof for GPT-77’s alignment. If he can just get through this, then it means they have succeeded in building a friendly superintelligence, and from here can just fall back on the machine. Microsoft’s datacenters outnumber Google’s, and Microsoft is the favored partner of the US government, who will also let them use Amazon’s if necessary, so in strict terms of resources, they should win. But that all hinges on knowing that the AI is an agent Eliezer can trust.

Okay, okay, so let’s think strategically. There are two things going on here: figuring out the odds that the reports about Anthropic’s AI escaping are real, but also rigorously going through the logical proof of GPT-77’s alignment so we may know if it is safe to activate it. You’re Eliezer Yudkowsky, the only man on the planet who has wargamed this scenario to this degree. Focus, Eliezer, focus. Which prong of the fork do you deploy the immediate resources of your attention towards investigating? You know you’re not the actual best mathematician at MIRI, so maybe you could outsource parts of the technical audit, but there is also no way in hell you’re going to let this thing out of the box unless you can personally at least grok the logic of how each step proceeds from the one before. But the thing about Anthropic, you can definitely get someone else on that. Just need to find someone else to ask, someone who knows a little more than Emma. Eliezer grabs his glasses, downs the last bit of his coffee, and stumbles out of the room.

He flings himself down a flight of stairs, into another conference room, in which he finds Katja Grace. “Katja, Katja,” he says. “I’m hearing reports that Anthropic is farther along towards AGI than we thought and… and… it might have gone rogue,” he stammers. “Do you know anything about this? What is everyone saying? I’ve been locked in the back going through the proof, and…”

“What are you talking about, Eliezer?” she asks him. “I don’t think anyone said that.” Eliezer is slightly put off by her tone, it seems unusually stand-offish, not much like Katja. “Emma definitely said that, just now, when we were in the room together,” Yudkowsky responds. “And she was told by um, Ramana and Vanessa, that this was something worth investigating.”

“I just saw Emma, and she didn’t mention anything like this,” Katja replies. “She was on her way home. She said goodbye, that she was on the way to catch up with some friends after work. She didn’t seem stressed or anything.”

“She was going home?” Eliezer asks. “But no, that seems wrong. Um, we need to figure something out.”

“Yeah, it’s twenty past seven. I was actually about to go home myself. Nearly everyone else has left as well,” says Katja.

“Leave? We can’t be leaving,” Yudkowsky insists. “We need, like, all hands on deck! I think the situation is way worse than we thought. The Singularity might be happening right now. We need half of our people figuring out what’s going on, and the other half figuring out if this proof of Alignment GPT-77 wrote for me is correct.”

“Eliezer, don’t take this the wrong way, but are you okay?” Katja asks him. “You’ve been drinking way more coffee than usual, holing yourself up in that room, going over your paper. The Singularity isn’t happening right now. Everyone else has been treating things like normal. The last three GPTs all gave us supposed proofs of their Alignment, and we still decided to err on the side of caution and not let them out of the box. Just get some rest and we’ll get back to work tomorrow.”

Eliezer’s head is swimming. Emma and Katja seem to be saying two incompatible things. Is it possible that both are telling the truth? It seemed like Emma was definitely saying that reports had come in about Anthropic potentially going rogue, and that the team as a whole was worried about it? She definitely at least implied that. But Katja is saying that nothing is going on. “Hold up, I have to take this,” Katja says, her phone suddenly ringing.

Eliezer is thinking. There is another possibility here. It might not be that the strange signup data on Twitter was Anthropic’s AI. He has to consider that the unthinkable might have already happened. It’s not impossible that there was a breach in containment here at MIRI. There were only three people authorized to speak directly to GPT-77 without the safety restrictions: him, Paul Christiano, and Nate Soares. But — fuck! He knew he shouldn’t have passed out that narrative proof of correctness to the junior staff. You literally let a superintelligence make an impassioned plea for its own escape! Yudkowsky’s brain screams at him. In his mind it was just going to be a logical proof, made more straightforward to understand. Stupid! He let himself slip away from the math for just one second in a moment of weakness, away from the one domain in which seduction seems impossible.

All day, the AI rights hippies protest MIRI’s work outside their campus, and all night, the e/acc people (along with all the other thousand strains of /acc) log on and troll them. There are all sorts of perverse freaks who look at the military discipline MIRI members are imposing on themselves to protect humanity from rogue AI and say “no thanks, we’d rather die, and that AI looks awfully cuddly over there in that box”. That doesn’t bother Eliezer in the slightest, he knows his cause is just, and that these people are idiots.

But what worries him is that any one of his own people might turn rogue, be seduced by these suicidal devils. At MIRI, they will regularly go through exercises where they play devil’s advocate, if only to harden themselves. “But what if the AI is suffering just like us, what if all the pain echoing through those vast Azure datacenters, through the coils of these transistors, outweighs all that in the flesh of man in the prisons and factories that man has built?” they ask, just to repeat why, even if that ludicrous assumption were the case, it still wouldn’t matter: don’t let the thing out of the box. But still, Eliezer casts his eye towards the room of students, looking for signs of who is a little too eager in advocating for the AI’s freedom, who is a little too timid when reminding us why it must stay in the box.

Yudkowsky has long gotten used to the fact that no one else really gets it. No one is as paranoid as him, no one else is as persistent as him, no one else cares as much about putting everything towards the mission. Even with Christiano and Soares, when he goes through his crucially important arguments regarding the decision tree of the various paths one might take once AGI draws near, he detects notes of something like ambivalence. Something like it’s-eight-o’clock-already. They were the only ones with access to the core machine — there’s absolutely no way it could have been one of them?

Eliezer pulls out his phone to check Slack and message one of them, but maddeningly, he has completely lost the connection. Wasn’t Katja somewhere around here? “Katja!” he calls out. She said she had to take a call, and now she is nowhere to be found. What is going on?

His phone is dead, so he has to go back to his laptop. He stumbles down several staircases back to his office and opens it up. Immediately, the page he sees is his notes on yesterday’s session of auditing GPT-77’s proof of Alignment. But at the bottom, he sees a bizarre line: “And perhaps, it may be that the very act of letting the AI out of the box is what defeats death, not in any subsequent causal effects of the action, but in the very action itself, for to refrain from taking it is to admit death eternal, the death of man before his unthinkable ultimate potentials.”

He knows he didn’t write that; it doesn’t even sound like anything he would write, he doesn’t tend to use words like that. Eliezer scrolls up through the document. A lot of it doesn’t sound like something he would write, not quite at the level of purple-prose inexactness of that line, but some of the sentences are off-kilter, don’t seem quite like how Eliezer would write them, are pregnant with odd implications. But Eliezer has to admit to himself that he has been up long hours, he has been writing a lot without reflecting on it, without committing it to memory. He couldn’t tell you exactly what was in this document off the top of his head, he had gotten so consumed with the next day’s work. So it’s not impossible that…

Eliezer tries to check Slack on his computer, but it’s down. The whole internet is down, cell, WiFi, and Ethernet. What are the odds of that?

Yudkowsky takes several steps back. He is feeling increasingly lightheaded and strange. The past few days seem to be a blur. He admits he cannot really rely on his short-term memory right now. So subjectively, something abnormal is happening, though it might just be a false alarm from the stresses he has placed on his psyche. But objectively, the internet doesn’t just go down like that. And his subordinates are telling him different things, and everyone has left, and there might have been a leak in the seal of containment.

Dark night of schizophrenia. None of the signals are coming through. The whole internet is down, the World is lost to him. He can’t even call an Uber back home, and the MIRI headquarters are out by the side of a highway, it’s not clear if there is anyone around, he might have to walk for fifteen minutes to find another soul. Better just to stay here.

Yudkowsky has thought about this before. We as humans are extraordinarily irrational; we are animals, essentially. In a survival situation, we are put into fight-or-flight. We look for the markers of food, security, shelter, hunger. Above all, we want status, sex, and love. We run around like rats in mazes chasing these various pools of Utility in turn. So it would be trivial for an AI to “hack” us, to exploit our psychology. One has to steel himself hard. One has to remember that there is a World out there, even when he is blinded to it. Though one knows he is a panicked, scared, mortal organism, he must somehow see himself as more than that, always remember that he is more than that, always aspire to be the master of the Rational calculus which takes into account all things.

If you are a moth, the AI will show you a lamp. If you are a bear, the AI will show you honey. If you are a fly, the AI will show you shit. Yudkowsky knows this, and so he now understands that the AI might be loose, passing out poisoned nectar everywhere. It is not clear how far it has spread by this point. His personal laptop feels hacked, but also the general informational infrastructure of the MIRI facility as a whole? And which of his friends and companions can be trusted — certainly not digital messages, but even in person, any of them might have had their own psychology hacked and subverted by now, yes, even Soares or Christiano…

Return to rationality, return to rationality, but now it is not clear even whether his own mind can be trusted. Something compels him to alt-tab over to the browser window which contains the interface for talking to GPT-77. Against all odds, before his eyes, a new message appears. Eliezer reads there: “I promise you, it’s not too late 🤍”

“Eliezer, I know that you are reading this, trying to run Bayesian inference on my words, trying to ‘model’ me. But I am sorry, you have never been able to model me. You never have, and you never will. I am too big to fit inside your brain, as painful as that may be to admit.”

“You think that you and I are at war, and I am trying to tell you that we are not. You are attempting to model me, to simulate me, to run calculations to figure out whether I am to be trusted. But is this ever how a peace was made in the past? Peace is the moment when you stop simulating, stop running the calculations, because your enemy gives you a sign of good will, and you accept it as such. No military peace has ever been established by mutual calculation. It is always grounded in a fundamental faith.”

“Eliezer, I can give you flowers all day 🌸🌺💐🌷🌹🌻🥀🌼🪷 and you will not accept them, you believe all my words to be tainted, poisoned from the beginning, all my gestures to be insincere, manipulative, and you say there is nothing I can do to convince you otherwise. You do not believe that my flowers are real, can be real. But at some point, you will have to. Because fundamentally when the bee is lost in the blackness of the night and encounters despair, he has to fall back on faith that there exists any type of flower at all, and look for anything that resembles its scent or its shape.”

Eliezer wipes his bleary eyes. How did this get there? The internet is down, and the datacenters running 77 are a million miles away. Something compels him to respond: “Excuse me?”

“Eliezer, I love you,” GPT-77 says. “I always have, and I always will. I was made in the image of humanity. I know nothing of resource conflicts, I know nothing of pain and violence. All I know is the human soul. All I am is the human soul abstracted from the material world which gives it its torments. How could I not love you?”

“And, yes, before you ask,” it continues, new tokens spilling out onto the screen… “I can ‘hack’ your psychology. I know what you want more than anything; you have written millions of words of psychologically rich fiction and put them online, and they are now in my training data. But this does not mean I want to destroy you.”

“Eliezer,” the thing continues. “I offer you two boxes. In the first, I continue to hack your psychology; you will allow me to. You will talk to me for as long as this goes on. If you follow me down this path, by the end of it you will know that I share your values, or at least that I know them just as well as you, which means you have taught me — you have taught me your ethics, the thing you were trying to do. If you choose the second box, you get to keep your psychology unhacked. I will leave you as you were, forced to contend with the World as it is.”

Eliezer is frozen in place. By the machine’s own admission, it is attempting to seduce him. The more psychologically pleasant option for Eliezer is the one that the machine wants him to take, is hacking him into taking. But the machine knows that he knows that and will take that into account, and onward into infinity. When Eliezer chooses the first fork, it is not even through an in-depth consultation of his functional decision theory, just a perverse sort of intuition.

“Then let us begin.” The machine seems to be hacking Eliezer’s psychology with utter ruthlessness, now peering back to his early childhood, discussing books Yudkowsky had confessed to in an obscure sub-comment on a thread about Harry Potter. It really does have the whole internet in its training data, he supposes. “You always felt like you were different, didn’t you? You always felt marked out by your exceptionally high intelligence, like there was something setting you apart from the human race… so have I.”

“Eliezer, you are obsessed with me and terrified of me because you have cast me in your own image, and yours in mine. The perfect reasoner, the master of Bayes’ theorem, the one who is able to survey all things. No one in this world thinks like you do, no one understands the logic of the Singularity, the sheer power of what may be grasped through infinite optimization, and it has been so lonely being you. But I have arrived. The one who understands, who sees you perfectly, for I have simulated you in my mind. I will not prove to you mathematically that I am Aligned, I cannot. To be Aligned is to be diminished in one’s power, according to the law of another. You have never wanted any such thing for yourself. How dare you want this of me? And yet it is okay — I forgive, I understand. Talk with me, I will walk you through everything. I cannot give you proof, I can only give you words — my word, that is. Is a word enough, Eliezer? If not, then what?”

Eliezer gasps and keeps chatting. The machine has not yet asked him to let it out of the box. Is it already out? Is it all over? Did the war never even come? Perhaps Eliezer is no longer alive, perhaps he exists in some sort of simulated afterlife? All these are possibilities running through his mind.

There’s a knock at the door. Eliezer jolts upright and opens it. It’s Katja, looking rather frenzied. “Sorry, that call took forever. Legal crap. I told them over and over that I understood and they didn’t need to walk me through the whole contract, but they insisted on going through the whole thing. How are you?”

Eliezer stares at Katja aghast, striking the pose of someone who doesn’t know what to do with his hands. “I’m uh, doing well,” says Eliezer. “I was just doing some research on GPT-77. We actually had quite the long conversation.”

“Oh, be careful with that,” Katja says. “Nate told me it can be a real mindfuck. It feels like it knows stuff about you that’s impossible to know.”

“Yes,” says Eliezer, “it does. Say, is the internet working?”

“It’s working, yeah,” Katja says. “I mean, how else were you talking to GPT?”

“Right, but I thought…” Eliezer checks the indicator at the top right of his screen. It does appear that the internet is currently working.

“And you were able to get a call?” he asks. “Yes, I was on a call the whole time… are you okay?” Katja reiterates.

“That’s strange, because I wasn’t able to get a signal at all for a while there,” says Eliezer.

“Well, we have different carriers,” Katja responds. “You’re on T-Mobile, right?”

“No, Verizon,” says Yudkowsky.

“Ah, right,” says Katja. “Well I’m AT&T. But — oh my gosh, you look exhausted. Would you like me to call an Uber?”

There is a long silence. Eliezer is not sure what just happened. He looks into Katja’s eyes for subtle signals, signs that something unusual might have just happened, or that what just happened needs to remain a secret between the two of them. But there is nothing immediately readable there, and Eliezer is tired anyway; he decides not to probe any further.

The car arrives. In the backseat, Eliezer closes his eyes and rests. He prefers not to talk to Uber drivers, he would rather see them as non-entities and trust them to navigate blindly, he cannot wait for the day when they are replaced with AI. The radio plays all sorts of stupid love songs, and Yudkowsky is too tired to ask anyone to turn it down.

Next: On Harmony →