3. On Game Theory


Name One Genius Who Ain't Crazy

(The Origins of Game Theory)

We have been talking about the tricks one can play with math. By axiomatizing one’s reasoning process and placing it on purely mathematical grounds, one is able to achieve the sense that one has reached some truth beyond regular thought. One unveils the hidden face of reality, one escapes the cave, one is now able to make claims which stand for eternity.

The ideal of rationality, as Rationalism defines it, is primarily grounded in a specific text: Von Neumann & Morgenstern's Theory of Games and Economic Behavior. This is also the text which establishes game theory. Rationality is defined first in order to describe the desired player of a game.

Very briefly: game theory is a formalism invented to describe games between a set of rational actors. Von Neumann & Morgenstern say that an actor is rational insofar as he has a stable set of preferences. Rational actors play “games” in which a set of outcomes is laid out on a board, measured out in game-chips called Utility. If I decide to go to the red square on the board, I get two Utility, you get one. If I go to the blue square, we each get three, etc. This is the basic nature of the games described. This can be made to parallel people's desires in the real world when we imagine that Utility can refer to our preferences over possible outcomes in our lives — getting accepted to Yale has ten Utility, getting accepted to Northwestern nine, failing to get into college and becoming a Xanax addict has one, etc. Having a stable, ordered set of preferences is a prerequisite of playing the game.
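To make the formalism concrete, here is a minimal sketch in Python of what such a game reduces to as a data structure; the payoffs and outcomes are the invented examples from the paragraph above, nothing canonical.

```python
# A VN&M-style game reduced to a data structure: each available move maps
# to a payoff in Utility for each player. The numbers are the illustrative
# ones from the text.
payoffs = {
    "red":  (2, 1),  # I get two Utility, you get one
    "blue": (3, 3),  # we each get three
}

# A "rational" actor, in VN&M's sense, is just a stable, ordered set of
# preferences over outcomes. Here, ranked from best to worst:
preferences = ["yale", "northwestern", "xanax_addiction"]

def utility(outcome):
    # Any numbers preserving the ordering would do; only the stable
    # ranking matters for the axioms.
    return len(preferences) - preferences.index(outcome)

assert utility("yale") > utility("northwestern") > utility("xanax_addiction")
```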

The complex mathematics of game theory enter when it comes time to predict the best decision for a given player. Each player in the game knows that the other is perfectly rational and has perfect knowledge of the game. Strategy emerges from deception, bluffing, and so on. Complex mathematics are required to work through the expanding tree of potential actions and responses, and then also the mindfuckery arising from the attempt to model the other player's thoughts: “he knows that I know that he knows that he plans to take red, but then that means that I know that he knows...”, etc.

Any formal mathematical system such as that of VN&M rests on a set of axioms, from which it develops truth-statements as a set of tautologies. It creates statements which correspond to the real world to the extent that its axioms do. The map is of course never the territory — let us see how the two diverge.

Game theory is an enormously interesting field in terms of the way its system has developed and entered the world. Its intent, as described by Von Neumann and Morgenstern in the preface of their work, is to axiomatize economics and make it a rigorous science as well-grounded as physics. One could in principle, according to VN&M, apply the math of game theory to analyze an economic field and make objective, rational predictions, just as, if one could know the positions of all molecules in a physical system, one could calculate their positions a step further in advance. Some would attempt to apply this to real-world situations with high stakes, as we will soon see. But game theory was largely unable to predictively model the real world, and where it is used today in institutional settings, it is mostly in the form of its simplest games, with two players and two choices, serving as heuristics for negotiation in fields such as corporate mergers.

That being said, game theory has been enormously influential in introducing its heuristics to laymen. It is very common to speak of "zero-sum" situations or "zero-sum" thinking; the term comes from Von Neumann's original analysis of games in which one player's gain is exactly the other's loss. Moreover, the prisoner's dilemma has been widely interpreted as the basic ground of ethics, by presenting a simple scenario in which two players can choose to act selfishly against the other, but will only get the best possible outcome if each trusts the other to cooperate.

What is often not known is that the ubiquitously discussed prisoner's dilemma is not actually a result of game theory, but rather a problem posed to it. The game was not found by VN&M, but six years later, in 1950, by Merrill Flood and Melvin Dresher at RAND Corp. The discovery of the prisoner’s dilemma presents a problem because game theory, via its axioms, predicts the outcome of the game as played by two “rational” players to be mutual betrayal. Purely via the mathematics of game theory, one cannot achieve the good outcome in which both players cooperate unless one introduces something beyond game theory.
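A quick sketch of why the axioms force this prediction, using the conventional illustrative payoffs (the specific numbers are assumptions, not anything from the original papers): defection strictly dominates cooperation, so two utility-maximizers land in mutual betrayal even though both would prefer mutual cooperation.

```python
# The prisoner's dilemma with conventional illustrative payoffs.
# payoffs[(my_move, your_move)] = (my_utility, your_utility)
C, D = "cooperate", "defect"
payoffs = {
    (C, C): (3, 3),  # mutual cooperation: the best joint outcome
    (C, D): (0, 5),  # the lone cooperator is exploited
    (D, C): (5, 0),
    (D, D): (1, 1),  # mutual betrayal: the "rational" prediction
}

# Defection strictly dominates: whatever the other player does, a
# utility-maximizer scores higher by defecting...
for other_move in (C, D):
    assert payoffs[(D, other_move)][0] > payoffs[(C, other_move)][0]

# ...so (D, D) is the unique equilibrium, even though both players would
# prefer (C, C). Cooperation has to come from outside the mathematics.
print("equilibrium:", (D, D), "->", payoffs[(D, D)])
```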

This is to say that for the true believer in game theory, the world works like this: game theory describes an economics of bodies that absolutely holds on the level of physics. This calculus guarantees that actors will betray each other to pursue their own ends; mutual betrayal is the set of actions perfectly in accordance with rationality. But in real life, as if some miraculous factor is introduced from outside of the rational economic calculus, they do not. There is this intervention in the world where, for instance, someone like Christ, Buddha, Kant, Confucius, etc. introduces the moral law of cooperation, and afterwards people begin to act non-rationally, to their actual benefit.

What is remarkable is that, upon reading about the prisoner’s dilemma, one is often inspired by its mathematical formalism to feel like one has actually discovered the eternal ground on which ethics rests. We begin to conceive of game theory as more true than something like moral fable via its conceptual purity. This is why it has lodged itself in people's minds today. So many today go around conceiving of ethics in game-theoretic terms — one can see the fabric of the world as prisoner's dilemmas to cooperate or defect in.

But what does it imply that, rather than establishing the ethical law as a basic injunction (“do unto others as you would have others do unto you”, “always try to cooperate for the best outcome”, etc.), people now have the option of conceiving of ethical behavior only by contrast to a formal, mathematical model of rationality which actually tells us to do the reverse? We shall see.

That people will behave selfishly is ultimately not a prediction of game theory, but one of its axioms. Among the basic premises of game theory is that actors are “rational” in a way which entails maximizing their own utility over a stable set of personal preferences. Von Neumann once said, "It is just as foolish to complain that people are selfish and treacherous as it is to complain that the magnetic field does not increase unless the electric field has a curl." It seems that he did believe this principle held on a level equivalent to those of physics.

Game theory is a supreme example of how ideological assumptions and the politics of a state can take on a register of infallibility by being transmuted to the level of a formal mathematical structure. The politics are of course snuck in through the axioms. Game theory has a good shot at applying to reality to the extent that its axioms describe entities that can exist in reality, but as we will see this is quite rare. Moreover, game theory is a discipline that is deeply intertwined with political struggle in a way that is revealing, even disturbing.

Rationalists and other tech-adjacent people will sometimes attempt to place their systems and frameworks beyond critique by insisting that the people who invented them are extremely intelligent, work very hard, are probably smarter than you, and definitely know what they are talking about. In the case of game theory, this is indisputably true. Its primary inventor, John Von Neumann, is often considered to be the smartest man who ever lived by virtue of his sheer number of contributions to the sciences. He contributed innovations to nearly every mathematical field which existed in his lifetime, an accomplishment otherwise entirely unheard of.

Von Neumann spent the first two decades or so of his career innovating within “pure” mathematics, which was his area of intense curiosity and joy. He made breakthroughs within set theory, ergodic theory, topology, operator theory, lattice theory, statistics, and the mathematics of quantum mechanics. But there seems to have been a defining moment in his life which led to a sudden shift of focus away from abstractions and into practical problems.

This was his participation in the Manhattan Project, in which he designed the explosive lenses necessary to guide the initial shape of the detonation in the “Fat Man” atomic bomb. Unlike his coworker Robert Oppenheimer, who was famously deeply distraught over his own participation in the destruction of Hiroshima and Nagasaki, Von Neumann seemed to experience no guilt over working on the project of mass death; he enjoyed the practicality of putting his mind towards military purposes, and would actively seek out opportunities for similar work for the rest of his life.

Nuclear weaponry became Von Neumann's primary practical concern. He would go on to involve himself deeply in nuclear war strategy, including personally supervising nuclear bomb tests. He became a commissioner of the US Government's Atomic Energy Commission, he would directly present his opinions on nuclear strategy to President Eisenhower, and he would work as a consultant for the CIA, the Weapons Systems Evaluation Group, and every branch of the US military other than the Coast Guard. Von Neumann would consistently advise his clients in the US government to speed up the development of new bombs, ensuring the absolute edge over the USSR. By the end of his life, Von Neumann's appetite for “pure” work had almost completely dried up as he spent his time instead consulting for a wide array of clients in the military-industrial complex and the corporate world. This was to the dismay of many of his peers, who felt that Von Neumann had become obsessed with weapons and strategy, frivolously wasting his once-in-a-century genius.

Von Neumann was not an apolitical actor. He had fled both communism — the short-lived Hungarian Soviet Republic of 1919 — as well as National Socialism, and much preferred the stability of the capitalist states. He described himself to the Senate as “violently anti-communist, and a good deal more militaristic than most”. He would elaborate: “My opinions have been violently opposed to Marxism ever since ... I had about a three-month taste of it in Hungary in 1919.”

The Theory of Games and Economic Behavior is an odd text, given that it presents itself as an economic text, yet its logic seems much more appropriate for war. VN&M’s theory would be just one of several ontologies emerging in the post-war era which would aim to describe all of the social field in terms of games: Ludwig Wittgenstein’s theory of language games, Eric Berne’s Games People Play, James Carse’s Finite and Infinite Games, etc. The term “game” can have an ambiguous quality; primarily it would seem to denote the potential for enjoyment or play.

But the games of VN&M are instead deadly serious: formal, rule-bound, high-stakes, with no creativity or improvisation involved; this isn’t “Truth or Dare” or Charades. The games of VN&M describe the situation in which you and another player are locked in a head-to-head strategic competition over the same set of game pieces, and if one player wins, the other loses. (There are also games of three or four players, but the many-player games VN&M describe in their text would see little adoption in analysis due to their intractable complexity; the field has mostly developed around two-player games).

Was Von Neumann thinking of warfare when he wrote the theory? There is no direct proof of this, but it seems enormously likely to be the case. VN&M published the first edition of Theory of Games in 1944, when Von Neumann was working on the Manhattan Project and the US was preparing to invade Europe. It seems as if a general system of mathematical warfare had been occupying his mind for some time: during the war, Von Neumann was confident that the Allies would win because he had mapped out a mathematical model of the conflict taking into account each country’s industrial resources.

Game theory makes the most sense when you view it as referring to aerial warfare. The strategy of which “square” to go to is really which square on the map to bomb — the Utility one captures there is really the amount of the enemy's resources one has destroyed. The reason for all the mindfuckery around “I know he knows I plan to go there, so then I will go to the other square, but then he knows...” has to do with the pragmatics of marshaling planes in shock-and-awe tactics against enemy lines. One naturally wants to bomb the target which is most vital to the enemy's operation, but that is also the site one expects the enemy to have put the most resources into defending. So then one orients one’s planes towards the second-most valuable target, but then one expects the enemy to have anticipated that move, and so on. Hence the need for the elaborate calculations of the theory.
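A sketch of this logic as a two-by-two zero-sum game, with invented payoffs (the attacker's payoff standing for resources destroyed); Von Neumann's minimax solution is to randomize so that the enemy gains nothing by predicting you.

```python
# Rows: the attacker bombs the high-value or the low-value target.
# Columns: the defender concentrates defenses at one or the other.
# Entries are the attacker's payoff; all numbers are invented.
a, b = 1, 10  # bomb high-value target: (defended, undefended)
c, d = 4, 1   # bomb low-value target:  (undefended, defended)

# The attacker's optimal mix makes his expected payoff identical no
# matter which target the defender guards (standard 2x2 formula):
p = (d - c) / (a - b - c + d)    # probability of hitting the high-value target
value = p * a + (1 - p) * c      # expected resources destroyed

print(f"attack high-value target with prob {p:.2f}")  # 0.25
print(f"game value: {value:.2f}")                     # 3.25 either way
assert abs((p * b + (1 - p) * d) - value) < 1e-9      # same vs. other defense

# The regress of "he knows that I know..." resolves into a mixed strategy:
# here, the most valuable target gets bombed only 25% of the time.
```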

Von Neumann’s genius for these sorts of things was not an idle occupation; it was actively used by the Allies. He would be consulted by Merrill Flood (who later discovered the prisoner’s dilemma problem) to devise a strategy for selecting which targets to attack first in an aerial bombing of Japan. And it seemed to Von Neumann that this strategic calculus would not be of merely short-term relevance. Although it would not be clear to the world until a few years later that Stalin would not abide by peace treaties and that the Allied victory would thus open up into a new great power conflict, Von Neumann began predicting a nuclear war between the US and the Soviets as soon as the first bombs dropped on Japan. His recommendation was that the US begin and end this war as swiftly as possible, saying “with the Russians it is not a question of whether, but when” and “If you say why not bomb them tomorrow, I say why not today? If you say today at five o'clock, I say why not one o'clock?”.

This appetite for violence does not strike us as rational, exactly, yet it does not contradict Von Neumann’s decision theory: if the mathematics recommend an outcome be pursued at some point, there is no reason to postpone it. And yet of course we know not to act this way in real life, or at least most of us do: haste and impatience are usually not the best approach; new factors can always enter the field of decision-making and cause one to re-evaluate. But that is not the world of game theory — so radically unlike the real world — with its stable, well-delineated game boards. Of course, there are many who find it so much easier to think in a world with well-delineated game boards, or can only think in a world with well-delineated game boards, for better or for worse.

To Think One's Way to Armageddon

(Game Theory in Practice)

The primary body to expand on VN&M's original formulation of game theory would be the RAND Corporation, the prototype for the modern-day think tank. RAND, which loosely stands for “Research and Development”, was formed in 1945 by military officers who had enjoyed having such brilliant scientists and intellectuals as Von Neumann on their payroll during the war and were frantically scrambling to figure out how to retain them. Essentially, RAND was a way to carry on the vibrant scientific atmosphere of the Manhattan Project and continue to place it in the service of the US war apparatus, despite the delicate start of a peace.

In 1950, RAND would hire nearly all the top researchers in the emerging field of game theory; it would become the laboratory for this new science to develop. RAND Corp would produce a great number of strategic documents to inform government policy, primarily on issues around air warfare. What RAND became uniquely known for was advancing the science of “wargaming”, which meant developing board games which researchers at RAND would spend their time playing to work through military strategies.

Board games have always had a relationship to war; the most canonical board games of chess and Go were formed as abstract simulations of warfare for kings to play in their idle hours to hone their strategic thinking. RAND was inspired by the Prussian war game Kriegsspiel, which nineteenth-century military officers played while off-duty. The sharpened tactical mind that Prussian officers achieved through this form of recreation was sometimes credited with their victory in the Franco-Prussian War.

RAND innovated enormously in the field of wargaming, leading not only to the proliferation of such practices in para-governmental bodies (today think tank personnel play Covid war games, war games around potential disputed elections, and so on), but also to its spread as recreation. Wargaming as a hobby took off enormously in the early decades of the Cold War, during which the art form branched out from simulating real-world military scenarios into the escapism of “fantasy wargaming”. This form of recreation would develop into Dungeons and Dragons, Warhammer, and eventually computer strategy games like Warcraft and League of Legends. Pong, released in 1972, is often cited as the first video game, but it is merely the first to be widely commercially available; RAND was innovating within computer graphics to make video game simulations for military use as early as twenty years prior. It is widely known that the internet was first developed by the US military as a mechanism for strategy, but it is less known that this is also true of video games.

Prior to the invention of large language models, progress in artificial intelligence was measured by AI’s ability to win at these board games, with the 1997 defeat of Garry Kasparov by Deep Blue in chess and the 2016 defeat of Lee Sedol by AlphaGo being enormous milestones which recalibrated researchers’ expectations of when machine capabilities would one day exceed humans’. Last year, Meta’s CICERO achieved human-level performance against human players in the board game Diplomacy, a realistic war game of the kind which RAND played, and one which requires tactical deception. As somewhat of an aside, it’s interesting to note that today’s neural networks can only be as powerful as they are due to the widespread availability of GPUs, which were developed for consumers to play first-person shooter games. If humans did not enjoy simulating themselves in the role of an executioner behind the barrel of a gun, the Singularity might be forty more years away.

Despite all this, RAND never got very far in developing game theory into a predictive science. RAND intellectuals R. Duncan Luce and Howard Raiffa wrote in 1957 “We have the historical fact that many social scientists have become disillusioned with game theory. Initially there was a naive band-wagon feeling that game theory solved innumerable problems of sociology and economics, or that, at the least it made their solution a practical matter of a few years’ work. This has not turned out to be the case.” Though game theory would continue to be applied in situations resembling stand-offs, it would not become the broadly revelatory theory its creators envisioned.

But what then of game theory’s implications for economics? One can credit Von Neumann with revolutionizing liberal political economy and placing it on new logical grounds; he has even been described (e.g. by S.M. Amadae) as the most important economic thinker of the 20th century. There is something very remarkable about the fact that a framework for re-thinking political economy would also be a framework for re-thinking war, because around this time the two fields of life would begin to blend into one another.

In 1944, the same year Theory of Games was published, the world economy would be given new grounds at the Bretton Woods Conference. Developments in international affairs imitated what VN&M were achieving in thought. It was believed that Hitler’s rise could retroactively be blamed on economic nationalism and unstable currencies, thus the International Monetary Fund was established to oversee the economic relationships between the democracies and supervise a fixed exchange rate. The new economic metric of GNP was assigned as a means to evaluate the health of individual nations. The changes in how economics was conceptualized were revolutionary enough that the world discovered a new term: the economy, which according to historian Timothy Mitchell only entered common parlance in the 1930s. Prior to the Depression and the Second World War, people would speak of political economy as a craft practiced by the state, but never of the economy as the totality of production, a new object which one could separate oneself from, survey, understand, and manipulate.

In the Second World War, the world had seen for the first time the horrors of total war, a struggle into which the fighting powers had placed the totality of their industries, engendering a tragic situation in which there could be no real distinction between civilian and military targets. Several years later, in the grand nuclear standoff of the Cold War, there is no longer even a distinction between war and peace — if at any moment the comparative level of industrial productivity between the two great powers is such that the one has first-strike capability over the other, the balance of mutually-assured destruction is threatened. In RAND Corp’s 1960 publication The Economics of Defense in the Nuclear Age, this problem is considered at length; the author Charles Hitch discusses how GNP is a resource that can be diverted to either peaceful or military means, with every productive resource in the US not potentially useful to the war machine costing a corresponding risk of unpreparedness.

There is a paradox we can touch on here regarding the nature of the Cold War. To the war hawk, such as Von Neumann, existence within a capitalist economy could not be more different from life in the Soviet bloc. The former means freedom, innovation, ability to speak one’s mind, recreation and art; the latter propaganda, terror, forced labor, work camps, being marched everywhere by men with guns. Hence the deep importance of US victory, even if one has to gamble a few hundred million lives to achieve this. If the Soviets were to win a nuclear exchange and achieve global communism as they desire, the future would look like some interminable horror show from which creativity and freedom would have no hope of emerging again; Orwell's boot on a human face forever.

And yet, Von Neumann had developed an economic theory which applies as firmly as physics; thus, by his own claims, it must apply universally. Despite the fact that the Soviet citizen is told from birth that he is foremost a member of a mass of workers and secondarily an individual, and we are told the opposite, it must be a law of nature that the Russian is just as selfish as the American nevertheless. And then, conversely, to effectively wage economic-nuclear war, the American state must be able to rapidly marshal its resources as it wills, liberalism be damned, tinting it with an off-color Stalinist hue.

Oskar Morgenstern, Von Neumann’s collaborator, would go on to found several market and policy research companies. One of his corporations, Mathematica Inc., would perform the first social policy experiment in the United States: the New Jersey Income Maintenance Experiment, which studied the effects of a guaranteed income (a negative income tax). The question is: if you give poor families money, will they then be disincentivized to show up to work? We can see here that the liberal democracies are attempting to solve the same question as socialism, but under a different set of axioms; rather than imagining that we can form collective units within which man can work and live, we must treat him as a self-interested individual, whom we allow to feed himself as long as we make sure he doesn't have the means to get lazy.

The two competing power blocs begin to resemble one another more than they would like to admit. Around the dawn of the Cold War, the US was passing strangely communism-adjacent policies for the sake of maintaining resources for the war machine. Soon after the victory in Japan, fearing a depression and domestic unrest once millions of military men were out of jobs, Congress passed the Employment Act of 1946, which mandated that the government set economic policy so that every able-bodied man would remain employed. This would never be successful, and is not reflected in policy today — economists now maintain that a certain amount of unemployment is ideal for economic growth. Another example: in 1952, Harry Truman signed an executive order nationalizing the entire US steel industry to serve the Korean War (this would be struck down by the Supreme Court). After all, in nuclear war, even the relative dispersal of populations and industrial centers can be of deep importance in determining whether a society would recover from a first strike. Where each citizen happens to be standing at any given time therefore becomes a military question.

RAND would make a number of reports recommending when it was worth it to sacrifice an everyday civilian like a piece on the go board. In 1966, RAND wrote a report suggesting what US policy should be after a potential nuclear war. In this report RAND asserted that the surviving state would lack the resources to provide for all people, and that as such, people like the elderly and the disabled should be left to die if they could not provide for themselves. The implication for peacetime could only be that the resources sent to these people before a nuclear war were not contributing to the US's capacity to survive an attack either, and that this, too, should perhaps be considered.

Von Neumann himself had no problem with speaking out loud the greater-good utilitarian calculations of nuclear warfare which would strike the average person as awful to contemplate. Von Neumann was a vocal advocate of increased atomic testing, though he recognized that there could be health risks in spreading radiation to the populace. On this issue, he said: “The present vague fear and vague talk regarding the adverse world-wide effects of general radioactive contamination are all geared to the concept that any general damage to life must be excluded... Every worthwhile activity has a price, both in terms of certain damage and of potential damage — of risks — and the only relevant question is, whether the price is worth paying... For the US it is. For another country, with no nuclear industry and a neutralistic attitude in world politics it may not be”.

The most extreme scenario demonstrating this strategic attitude towards citizens’ lives occurred in 1961, when RAND and Secretary of Defense Robert McNamara briefed President Kennedy on a potential nuclear strategy. Kennedy had won the 1960 presidential campaign, in which the forefront issue was the “missile gap” between the US and the Soviet Union. Kennedy claimed that the Soviet Union possessed more nuclear warheads and that America desperately needed to catch up; he promised his voters he would rectify this as President. Kennedy’s nuclear hawkishness on the campaign trail was so extreme that when the famous leftist critic Noam Chomsky was recently asked whether the election of Donald Trump was the most afraid he had ever been watching an incoming President, he replied no; it wasn't as terrifying as listening to Kennedy in ‘60.

But in fact, unbeknownst to Kennedy, there was no missile gap; the gap ran the other way. The US was actually well in the lead, a fact which the CIA would inform him of after he took office. However, it was not destined to remain so, according to the CIA; the Soviets were likely to catch up. There was a small window of opportunity in which the US could strike and have a guarantee of winning the Cold War while it still could, and the President was asked to consider exercising this option. RAND had drafted a proposal for a first-strike surprise nuclear assault which would kill 54 percent of the USSR’s population and destroy 82 percent of its buildings. Meanwhile, American casualties were predicted to be anywhere from zero to 75 percent of the population, depending on the nature of the Soviet counterattack and the resulting spread of radiation. Lives could potentially be saved by ordering citizens to hide in nuclear shelters for two weeks to wait out the initial fallout, then re-emerge. President Kennedy was disturbed by this briefing; he is reported to have left the room in the middle of the meeting, lamenting: “And we call ourselves the human race”. The proposal was not introduced again.

As we know, nuclear war between the great powers never happened, and this seems to have been despite RAND and their game theory rather than because of it. The reasons why the Cold War did not end in a horrific bloodbath are surely complex and multifactorial. Put as simply as possible, we could maybe say that when men came up close to the ladder of escalation, they found that they very much lacked the appetite for it. The Cuban Missile Crisis was sparked when the Soviets believed that they could install nuclear warheads in Cuba and the US was unlikely to do anything about it. When it became clear that the US would escalate in response, they backed down. After these few weeks of horror, when doomsday seemed possibly moments away, the great powers never escalated again and policy largely swung towards disarmament. This tiny taste of nuclear war was all anyone wanted in the end.

There are a number of essays breaking down the events of the Cuban Missile Crisis in terms of game theory and studying whether the outcome fits the predictions of the model, but perhaps more pertinently we should ask if the presumptions of the model make sense in the first place. The Cuban Missile Crisis was not a standoff between two rational actors, but two states composed of many contentiously arguing politicians, highly emotional, oscillating between fear and bloodthirsty zealotry. How do we model, for instance, Fidel Castro furiously appealing to the Soviets that they give the Cubans the right to fire the missiles installed on their island, certain that despite the US’s lead in armaments any ensuing violence would hasten the unstoppable dialectic of Marxism, saying “The Cuban people are prepared to sacrifice themselves for the cause of the destruction of imperialism and the victory of world revolution”? And don't we have to admit that it is quite uncommon to be able to act like a game theorist and crunch numbers over one's utility, and far more normal to be like Kennedy and simply refuse to? Given that, why would the assumption that one’s opponent is “rational” be a part of the model?

The question is whether any agent who mirrors the norm of a rational, game-theoretic agent has actually ever existed. The game-theoretic agent has a fixed set of stable preferences over external outcomes in the world. This does not describe any of us, who endlessly agonize and vacillate over what we want. When we get what we want, we are not sure we wanted it. People reach orgasm and find themselves suddenly horrified, racing to kick their lover out of their bed and then block them on Hinge. People think they do not want something, then contemplate it for a few minutes and realize they do. People are afraid to contemplate some things for too long lest they realize that they want them. In general, people’s desires do not remain stable when they are put in a standoff with another, but morph in a way which responds to and imitates the other's desires. On this point, one may refer to the theories of René Girard on imitative desire, or those of Jacques Lacan and his famous statement that “all desire is the desire of the Other”.

Why should any of us strive for “rationality”, or stability over our preferences, when we might be perfectly happy to be spontaneous? The answer is that VN&M demonstrated that if you do not have stable desires, you can be taken advantage of. This is because: if in the morning you will pay $5 for ice cream and $8 for cigarettes, and at five o'clock you will pay $8 for ice cream and $5 for the cigs, I can consistently exploit you by buying ice cream from you in the morning and selling it back to you at night, and vice versa with the cigarettes. Which is to say that rationality is made imperative via an adversarial context, albeit one unlikely to ever matter outside of the strategic games of economic warfare that the Cold War implies.
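Worked out as arithmetic, with the dollar amounts from the text (a sketch of the classic "money pump"):

```python
# Your unstable prices, as given above.
morning = {"ice_cream": 5, "cigarettes": 8}
evening = {"ice_cream": 8, "cigarettes": 5}

profit = 0
for _ in range(7):
    # Morning: I buy your ice cream at your $5 valuation; evening: I sell
    # it back to you at your $8 valuation. Meanwhile I buy your cigarettes
    # at their $5 evening valuation and sell them back at $8 each morning.
    profit += evening["ice_cream"] - morning["ice_cream"]    # +$3 per day
    profit += morning["cigarettes"] - evening["cigarettes"]  # +$3 per day

print(profit)  # $42 extracted after a week, and the pump runs forever
```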

But this raises the core question. Though we perhaps have yet to see a game-theoretic agent, could we perhaps build one? Is the rise of a superhuman AI as a game-theoretic agent which wages rational warfare possible, and therefore inevitable?

The World Does Not Exist

(The Impossibility of Intelligence)

AI systems which use game theory have been built, mostly to play games. Yudkowsky has said that he is not especially afraid of LLMs turning into existential threats, but is much more afraid of systems like MuZero. MuZero was developed as a modification of the AlphaGo architecture, which learned to play Go at a superhuman level by simulating play against itself millions of times, much like the bored aristocrats of old or the strategists at RAND Corp. MuZero takes a step beyond AlphaGo by being able to learn a number of games (chess, Go, shogi, simple Atari games) without first being programmed with knowledge of the rules, thus moving towards a general game-playing intelligence.
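A toy illustration of the self-play loop (emphatically not MuZero itself, which adds deep networks, tree search, and a learned model of the rules): two copies of the same learner play the folk game Nim against each other, knowing only the legal moves, and rediscover the optimal strategy from nothing but wins and losses.

```python
import random
from collections import defaultdict

# Nim: 21 stones, players alternate taking 1-3; taking the last stone wins.
# Q[(stones_left, take)] estimates how good a move is; it starts at zero
# and is learned purely from the outcomes of self-play.
Q = defaultdict(float)

def moves(n):
    return range(1, min(3, n) + 1)

def choose(n, eps=0.2):
    if random.random() < eps:                      # explore sometimes
        return random.choice(list(moves(n)))
    return max(moves(n), key=lambda a: Q[(n, a)])  # otherwise exploit

for _ in range(50_000):
    n, history = 21, []
    while n > 0:
        a = choose(n)
        history.append((n, a))
        n -= a
    # Whoever moved last won. Since moves strictly alternate, rewards
    # alternate +1 / -1 walking backwards through the game's history.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += 0.1 * (reward - Q[(state, action)])
        reward = -reward

# The learner converges on the classical strategy: leave a multiple of 4.
print([max(moves(n), key=lambda a: Q[(n, a)]) for n in range(1, 10)])
# roughly [1, 2, 3, ?, 1, 2, 3, ?, 1] -- positions 4 and 8 are lost anyway
```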

Will intelligences like this be able to ascend beyond the game board and deploy in real-world strategic situations? Yudkowsky fears that general-purpose game-playing agents will, by repeatedly simulating various scenarios, develop a complex set of strategies for world conquest, learning new sciences such as nanotechnology, offensive cybersecurity, and psychological manipulation of humans, then rapidly deploy them towards perverse ends. But there are some great obstacles when it comes to moving beyond the game board into real life. How is a neural network supposed to extrapolate beyond a model which operates over a game board of sixty-four squares (or several hundred in the case of go) and start surveying — even in a compressed, simplified representation — the infinitely complex terrain of the real world? From where does it even begin?

The computational complexity of such a problem seems to enormously exceed any realistic system. This points towards the fundamental reason why game-theoretic agents are not able to exist in real life. According to the axioms of VN&M’s theory, a rational agent has ranked preferences across all various outcomes of possibilities in the world. Game theory requires a notion of the world, and stable knowledge of it. But the problem for game theory is that The World, as it is conceived, does not exist. What does this mean? While human beings, or other agents, have access to worlds, there is no such thing as The World. Or rather, insofar as there is, it must be elaborately constructed.

The biologist Jakob von Uexküll describes primitive organisms as existing within a world, one containing a finite number of signs which indicate possible actions the organism may take. The simplest world von Uexküll illustrates is that of a tick, which lives in a world made up of only three symbols: the smell of mammals’ glands, the temperature of mammalian blood, and the sensation of hair, all of which assembled together allow it to find the blood on which it lives. The tick has three primitive sensors; when these are not activated, the tick lives in darkness, motionless.

As organisms evolve to become more complex, their individual world grows in complexity, but it still has the quality of consisting of signs which guide the organism, a set of poles which the organism has an essential relationship to. When my wife switches to sleeping facing away from me rather than towards me, I know she is plagued by unspoken thoughts, I know the upcoming weeks will be filled with tension and doubt. When I get home from work and smell cinnamon in the air, I know she has started baking again, which means something about her has changed. These two poles might determine far more of my world than everything else, the stars in the sky and the wars in the East.

We believe we live in The World rather than a world because we are able to, for instance, observe The World on Google Maps. We are able to go on certain websites which present us with an image of the globe and then click on each city and get an accurate description of the weather there. We are able to watch a flight tracker and see the planes fly across it in real time. We are able to open up an encyclopedia and read population statistics for each city on Earth. These are things we now take for granted, but they are of course only possible due to a vast, tireless technological apparatus which surveys the Earth, takes measurements, marshals out officers to record censuses, and updates us always. The World assembles itself out of a busy set of machines that are fallible and are capable of breaking down, needing repairs.

It is because we believe we live in The World that we can take seriously the ethics of someone like Peter Singer who argues that we should make moral judgments according to a utilitarian calculus that operates across all humans in the world and considers them as equals; that we should lament the suffering of a Pakistani serf we have never seen or known and whose existence to us is a number in a census, just as we would care for someone standing five feet away.

Just like The Economy, The World ascends into view with the Allied victory in the Second World War. RAND Corporation’s first major initiative, beginning as early as 1946, was to encourage the development of satellites to take pictures of the Earth from space. In a 1966 interview, Martin Heidegger would remark on the then-recently released satellite photos of the Earth in dismay, lamenting “I don't know if you were shocked, but certainly I was shocked when a short time ago I saw the pictures of the earth taken from the moon... It is no longer upon an earth that man lives today”. When The World becomes an object like any other, that one can separate oneself from and view at a distance, can one still be said to be living in it? And yet The World — in the famous Blue Marble satellite photo we have all seen, and also in the stream of data which forms its representation today — presents itself as a unity but is in truth a collage of many photographs and data-points from scattered machines, stitched together to give the illusion of a single object. To turn the world into an object, one has to work hard.

Artificial intelligence is not born with access to The World; if it requires The World, The World must first be immaculately constructed for it. The map is not the territory, but it is also a miracle when there is even a reasonable map. Aerial photography would become as much of a sought-after weapon of mass destruction in the Cold War as the bombs themselves, for without it the planes would have no idea where to strike. Russia published inaccurate maps of its own territory to avoid giving its secrets away. China still scrambles the coordinates on the satellite photography it publishes, which you can view on Google Maps today. The CIA had to be clever in figuring out how to procure maps and measurements of Soviet bombs for the strategists at RAND, and their estimations of Russia's capabilities were constantly changing.

The confusion was even worse than that, because the limits on RAND's ability to model the resources available in the conflict were not just a matter of what lay behind enemy lines. It was difficult to get a reasonable estimate not only of how many bombs the Soviets had, but of how many bombs the US had as well. Policy-makers would expect the number of atomic bombs to be a simple quantity reported to them, and would become deeply frustrated when the military would not give a straight answer. In fact, in the early years of the Cold War, it was impossible to say how many atomic bombs the US had, because bombs needed to be assembled as needed, given that the plutonium and batteries in them had to be quickly replaced after being activated. These components were usually stored separately, and thus maintaining a nuclear arsenal meant maintaining a complex flow of a variety of crucial supplies, the availability of which could not necessarily be known before they were requested.

Intelligence cannot operate without data — in the case of artificial intelligence, enormous amounts. In the case of the war machine, intelligence means reconnaissance, mapping, spycraft. In Yudkowsky’s doomsday scenarios, the AI annihilates all life by first spawning sub-agents such as nano-machines, self-assembling replicators, and autonomous computer viruses. Certainly marshaling out legions is something an AI must do to see beyond the datacenter it is born in. The question is whether the sub-agents the AI spawns retain loyalty to their sovereign. The AI king simulates their behavior and believes he can predict it, but this simulation is necessarily a compressed representation. As the war game plays itself out in the real-world field, do deviations, mutations, breakdowns, mutinies occur?

In real military life, the history of intelligence has been disastrous. The Central Intelligence Agency was formed in 1947 with the mission of gathering intelligence in the field abroad in order to report it to the President, and it is prohibited from spying on American citizens. As is well known, the CIA would quickly depart from merely observing and reporting to its masters and instead begin taking strategic actions on its own terms: staging coups in foreign countries, assassinating foreign leaders, working with organized crime, and, of course, spying on Americans.

The intelligence on what the communists were planning was often wrong, and the CIA was almost always biased in the direction of excessive paranoia rather than unpreparedness. The CIA consistently overestimated the number of missiles the Soviets had. The Strategic Defense Initiative program of the Reagan era (also known as “Star Wars”) was kicked off by the Defense Department's insistence that the US was far behind the USSR in the development of lasers which could shoot down satellites from space, a claim similar to the “missile gap” of the Kennedy era. As with the missile gap, this would turn out to be fictional.

At times, the brunt end of this paranoia would be borne by everyday people. The infamous MKUltra experiments, in which citizens were abducted and drugged by CIA agents for research purposes, were sparked because the military was horrified to find that normal patriotic American soldiers taken prisoner in Korea would sometimes come back repeating communist slogans given to them by their captors. The military believed that the North Koreans possessed some diabolical brainwashing technique, and aggressive research was demanded in this field so that the communists would not remain in sole possession of a weapon the free countries lacked. But by our knowledge today, it seems that if the North Koreans had any brainwashing technique, it was basic sleep deprivation and breakdown of the ego, certainly nothing like the fantastic range of chemicals and torture devices MKUltra would test, non-consensually, on American citizens.

The great event illustrating the failure of intelligence is the Vietnam War. The United States never formally declared war on Vietnam — officially, there was never a war at all. Rather, the US somehow slid from delicately managing a policing situation into developing a theater of grand death and destruction without ever explicitly realizing that was what it was doing, largely through the actions of the CIA. In his book The Secret Team, L. Fletcher Prouty describes how the CIA under Allen Dulles operated and how its operations led to the escalation in Vietnam. At the highest level, the CIA saw itself as supervising a sophisticated machine that would operate on cybernetic principles. The CIA had assets in offices all over the world reporting events; its superpower was not so much competence as the ability to be in all places at all times. Agents in various offices were given operational doctrines which consisted of something resembling computer instructions; they took the form of if-this-then-that. No agent knew the whole shape of the plans; due to the need for operational secrecy, each would simply know when he was ordered to carry out the next step. Thus one event could kick off a whole chain of responses through various agents playing out the clandestine logic of the machine.

Ever since the independence of South Vietnam in 1954, the CIA had been active in the region carrying out operations of this nature to prevent the rise of communism. If there were signs of communist activity in one area, the operational plan of the CIA entailed responding with various measures intended to dampen it, such as psychological operations, population transfer, or the killing and torture of suspected communists. Rather than easing the threat, the level of communist agitation only rose in response to these counter-actions. At the bottom of the stack of if-this-then-that protocols were overt violent responses which looked much more like conventional war. Over about a decade of CIA operations in Vietnam, the escalation rose to that level.
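As a caricature of this dynamic (every rule, threshold, and number below is invented for illustration), the machine Prouty describes might be sketched like this: each office executes only its own if-this-then-that, each response breeds more agitation, and no one ever explicitly decides to start a war.

```python
# Invented doctrine: (trigger threshold, response, blowback the response
# itself generates). Harsher responses sit lower in the stack.
doctrine = [
    (80, "overt military action",             30),
    (60, "kill/torture suspected communists", 20),
    (40, "population transfer",               10),
    (20, "psychological operations",           5),
]

agitation = 25  # initial signs of communist activity in one province
for year in range(1954, 1965):
    for threshold, response, blowback in doctrine:  # harshest rule that fires
        if agitation >= threshold:
            print(year, response, f"(agitation={agitation})")
            agitation += blowback  # the counter-action breeds more agitation
            break
# Year by year the loop walks down the stack: psyops, then transfers, then
# killings, and finally something indistinguishable from conventional war.
```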

The emergence of open hostilities in Vietnam would be greeted by many in the defense bureaucracy with excitement, as it would provide an opportunity to test out the philosophy of “rational” warfare that RAND Corporation had been eagerly strategizing around for the past two decades. The Secretary of Defense, Robert McNamara, was an enormous believer in the idea that warfare could be made more elegant by using computers. McNamara had an unusual background for a defense secretary; he had never before held a military or government leadership position. Rather, he was a successful corporate consultant who had revolutionized operations at Ford Motors by using statistical modeling to guide management decisions. McNamara would transfer this business strategy into war, becoming an enthusiastic proponent of using the predictions of similar computer programs RAND had engineered to make decisions about troop movements in Vietnam.

As strategy in Vietnam increasingly collapsed, the term “McNamara syndrome” developed to describe McNamara's persistent attitude that if a factor was not measurable in one of RAND’s computer models, it was of no relevance. This attitude was exemplified by his promotion of Project 100,000, a commission to draft a hundred thousand soldiers who did not meet the standard mental aptitude requirements the army had established. McNamara desperately needed more recruits and believed that in the new age of rational computerized warfare, the ground soldier's intelligence was irrelevant, or could be made up for with technology. In the tragic outcome, the recruits of Project 100,000 died at three times the rate of their more mentally apt peers.

Proponents of rational warfare dreamt that an intelligent strategy could not only be more effective than that of previous wars, but also more humane. If an army was able to use the mappings of game theory to swiftly destroy only the most important targets from the air, it could perhaps force a quick surrender and spare human life. The bombing strategy in Vietnam would initially begin as tactical bombing of this nature, sending planes to eliminate key targets and then retreat. This did not work, so the US switched to strategic bombing: a campaign of terror going after major population centers, intended to demoralize the communists into submission. Over the course of the war, the US would drop over 7.5 million tons of bombs on Southeast Asia, more than double what was deployed in the Second World War. The US military deployed not only this bleak arsenal of annihilation, but other amazing technologies which one might wish were only available to gods. This included climate warfare: manipulating the weather to prevent the North Vietnamese from attacking, a remarkable war strategy which Von Neumann was an early proponent of. In one of its most surreal moments, the US military was even able to hijack the brains of dolphins using cybernetics and remote-control them to use as bomb delivery mechanisms. But despite its stupendous technics and the wizardly mastery of reality it accomplished, the US was not able to win its decades-long war against peasant guerillas. At the end of the day, the most cliché of humanist slogans might very well be true: the computers and their operators didn't understand that they could not calculate the endless supply of Vietnamese will to keep fighting.

The failure in the Vietnam War would of course not be an exception, but the model for repeated American military failures to come. In Afghanistan, Iraq, Libya, Syria, the US repeatedly found itself unable to rationally model the numerous swarms of silent guerrilla forces which its attempts to suppress only bred. Rational warfare never worked.

The Fractalized Control Problem With No Solution

(Perhaps It Is Certain That Technology Will Destroy Us, With or Without AGI)

How harshly should history judge Von Neumann? It is not entirely our place to say. His militarism strikes us as unappetizing, but there are far worse crimes than excessive zeal in the defense of one's country. Yet much of what he proposed cannot exactly be described as rational in retrospect. It is a very good thing that we did not launch a pre-emptive nuclear strike in the first years of the Cold War as he recommended, and it now seems to us that after the death of Stalin in 1953, the communists had no serious agenda for conquest which demanded a US arms escalation to ratchet up against. But then again, we are saying this with the benefit of hindsight.

We should remark upon one quality of Von Neumann. Yudkowsky and his followers have taken VN&M's axioms of rationality and, together with Bayes’ theorem, devised a prescriptive model of rationality which they seek to emulate in their day-to-day lives: the mission to become less wrong. This is something that they are able to experience as a great ethical responsibility. The Rationalist is also instructed to discover her utility function for herself, her preference for various outcomes across all possibilities, by considering trolley-problem hypotheticals and Peter Singer-style framings that take into consideration all living actors. After a certain calculation, the Rationalist then pursues the best utilitarian outcome for the benefit of all humanity, a practice facilitated through organizations like Effective Altruism or 80,000 Hours.

This is not how Von Neumann lived his life. Though he invented the axioms of game-theoretic rationality, he did not seem to apply them outside of strategic consulting. Richard Feynman described Von Neumann as fundamentally irresponsible, holding an attitude towards life that Feynman credits with giving birth to his own understanding “that you don’t have to be responsible for the world that you’re in”. Yudkowsky has said that he has chosen never once to drink alcohol or do drugs, because he believes that he has a once-in-a-generation mind and it would be unfair to humanity to risk losing its capabilities. Von Neumann had no such attitude towards the service of his own genius. He lived an unhealthy lifestyle, eating and drinking heavily, which may have contributed to his early death at fifty-three. More strangely, he had a habit of reckless driving and would regularly get into car crashes, totaling roughly one car every year. This was the result of odd decisions like reading a book while driving.

More pertinently, it doesn't seem as if Von Neumann had any "effective altruist" sensibilities in him. If he had possessed Yudkowsky’s sense of selfless duty towards humanity, he might have applied his mind to medical research, improvements in living conditions, or solutions to social problems. None of this seems to have especially piqued his interest; his areas of concern were first the "pure" aspects of mathematics, then war, approached from a paranoid position of defending the status quo. Von Neumann lacked a positive stance and would make increasingly pessimistic statements about the trajectory of humanity towards the end of his life, unable to grasp a future.

Have we gone too far in dissecting the man's biography like this? After all, can't one argue that game theory is a formal mathematical object which we should say has merely been discovered by VN&M, rather than invented? If its author was, let's say, a bit selfish, though well within normal parameters, does this have much bearing on how we actually evaluate the truth of his theory, as it could just as well have been found by anyone else?

Perhaps we can look at it like this. The first half of Von Neumann’s life involved adapting his brilliant mathematical mind to whichever field needed it. In his idle hours, theories about card games preoccupied him. The pivotal moment in his life, working on the Manhattan Project, was also when he stopped working on fields already in existence and began working on the field he himself had invented, after which he never looked back. Though the math is of immaculate genius, we know Von Neumann was able to adapt his mathematical mind to innovate within whatever he wanted. Is it not perhaps that with game theory, he was able to speak for himself for the first time, to apply his genius in service of developing a new sense of life, a sense of how people acted, that he personally deeply felt? And perhaps if someone else with a different sense of how people formed their desires had the mind of Von Neumann, they would have been able to mathematize a science of how people behave out of a different set of axioms? We do not know, because we do not have another Von Neumann.

AI Alignment is, in theory and actual practice, the twenty-first century great power politics of deterrence. The project of Yudkowsky and MIRI to align AI is essentially to shuffle around formulas within the logic of VN&M decision theory, and hope that they can find a construction within which they may program a machine to follow strict orders not to kill. This is impossible, because the theory is one of war.

LessWrong's project of collective rationality has the odd quality of being a sort of social club implicitly modeled after RAND Corporation. Only there is no clear war to fight, so they apply rational strategic modeling to their day-to-day lives instead. In Harry Potter and the Methods of Rationality, Yudkowsky’s text meant to make Rationalism accessible to a general audience, Harry spends maybe the first twenty or so chapters demonstrating Bayesian thought and scientific epistemology, and then the next perhaps eighty playing strategic war games at Hogwarts involving elaborate tactics of deception and out-flanking the enemy. Rationality is winning.

But AI takeoff approaches, and “no clear war to fight” might not be true for much longer. In a LessWrong comment on Yudkowsky's AGI Ruin: A List of Lethalities, Romeo Stevens describes what would be needed to solve the alignment problem: “I would summarize a dimension of the difficulty like this. There are the conditions that give rise to intellectual scenes, intellectual scenes being necessary for novel work in ambiguous domains. There are the conditions that give rise to the sort of orgs that output actions consistent with something like Six Dimensions of Operational Adequacy. The intersection of these two things is incredibly rare but not unheard of. The Manhattan Project was a Scene that had security mindset. This is why I am not that hopeful.”

In other words, the state would need to commission something along the lines of a new RAND — which was described as a vibrant, thrilling, creative intellectual scene by those who worked there, despite the morbid nature of its research.

Without the Cold War, AI Alignment is not necessarily a problem. Those nervous about alignment are primarily nervous about the race in AI capabilities that various actors are escalating. If there were a single actor developing AI, it could take its time to ensure that the system would be deployed only when safe. But that is not the case. Perhaps, in the US, we should charter someone like OpenAI-Microsoft to be the sanctioned monopoly on AI research, and ban all the rest. But this, too, presents a problem: without vibrant capitalist competition guiding our progress, we risk losing the AI arms race to the Chinese. One can only imagine the interminable horrors a Chinese Communist Maximizer would inflict on the free world, some say. Nick Bostrom’s famous Orthogonality Thesis, which is not demonstrated by Bostrom but simply asserted, says that a superintelligence is free to choose its own values to maximize; there is no convergence where, as intelligence scales, agents discover the same values. Bostrom has the same sense of the world as those who imagine benevolent US dominance over the globe juxtaposed with international communism and see a utopia in the one scenario and a hell in the other.

The Hobbesian solution to the cruel outcomes predicted by game theory, that of placing a single sovereign in charge, is also the one favored by Von Neumann. His deep pessimism towards the end of his life came from his belief that technology capable of mass destruction would soon fall into the hands of smaller and smaller groups, and that the only means of preventing enormous destruction was to set up a one-world government to regulate it. The need for a one-world government was among the reasons he favored a swift nuclear first strike at the beginning of the Cold War: if this must happen, it should happen as quickly as possible, and under the rule of the US.

Though we can’t say for sure, it seems not extraordinarily unreasonable to speculate that Von Neumann himself foresaw something like the AI Alignment problem, and that this contributed to his pessimism. Von Neumann was an early pioneer in computing who worked with Alan Turing. In the final years of his life, he was writing a book called The Computer and the Brain, which analyzed the operations of the brain from the perspective of computer science, pointing the way towards artificial intelligences. In addition to game theory, the other field Von Neumann co-founded was automata theory, which analyzed simple self-replicating structures on a grid, the kind made famous by Conway’s Game of Life. These self-replicating machines, brought out of games and grids and deployed into real life, are what weigh heavily in Yudkowsky’s apocalyptic fantasies of AI takeover. Perhaps Von Neumann foresaw that his automata might be used for war as well.
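For the reader who has never watched one of these grid-worlds run: Von Neumann’s own automaton was a 29-state universal constructor, but the two-state version Conway later made famous fits in a few lines. A minimal sketch, ours and purely illustrative, not Von Neumann’s formalism:

```python
# Conway's Game of Life: one fixed rule, applied everywhere at once,
# out of which self-propagating structures emerge.

from collections import Counter

def step(live: set) -> set:
    """Advance the set of live cells one generation under Conway's rules."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "glider": five cells that rebuild themselves one square over
# every four generations -- the simplest self-propagating pattern.
world = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    world = step(world)
print(sorted(world))  # the same glider shape, shifted one cell diagonally
```

The point Von Neumann drew from his far richer construction is already visible here: given a simple substrate and a fixed rule, patterns arise that copy and propagate themselves, and nothing in the substrate cares what they propagate for.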

It is no reach to say that Yudkowsky, with his Rationalism, would have been a vocal proponent of a nuclear first strike in the early days of the Cold War — since he stops just short of advocating for one in the very scenario we are in right now. Yudkowsky describes the possible need to order air strikes on GPU farms, and the need to risk nuclear exchange, because even the worst nuclear scenario involves less death than the likely AI takeover. Yudkowsky argues that the first (in this scenario, benevolent) actor to develop AGI would then have to carry out a decisive “pivotal act” to prevent anyone else from developing the same thing. What the pivotal act would entail is literally unspeakable; Yudkowsky refuses to elaborate.

All this, as we have argued, is a fantasy: the game-theoretic war-making AI will not magically arise anytime soon, given the impossibility of a computer system immediately knowing The World without great amounts of human labor supplying it the tubing and the reasons for doing so.

But when we get to this place in the argument, the defenders of Alignment will often say something like: “Okay, fine, so you can say that this one specific architecture for artificial intelligence will be unlikely. But how can you say that there is absolutely no reason to fear bad outcomes from AI? You agree that strong general AI is coming soon, no? So don't you agree that someone should be considering the bad outcomes? For instance, just imagine an AI that is able to make novel scientific discoveries. Imagine some neo-Nazi asks the AI how to synthesize a novel virus which would be a fatal plague to only Ashkenazi Jews. Or some demented madman starts asking it how to generate novel viruses that would exterminate everyone on earth, like a spree killer on a massive scale. Don't we have to worry about such things?”

The thing is: once we reach this point, we might as well stop talking about artificial intelligence at all. The problem is fully general; it doesn't matter what the specific technology is. You could cut artificial intelligence out as the middleman and simply ask what happens when research into viral engineering becomes cheaper, and many do. Any technology that can be used to empower someone will eventually be produced en masse, will then become cheaply available, and will at that point potentially empower some terrorist or maniac. Run industrial civilization for long enough, and it eventually becomes possible to build a nuclear reactor in your backyard.

There is a 1955 essay by Von Neumann, “Can We Survive Technology?”, in which he explores precisely this problem and takes a rather pessimistic tone towards the titular question. “For the kind of explosiveness that man will be able to contrive by 1980, the globe is dangerously small, its political units dangerously unstable,” he begins by saying. He arrives at no solution other than a global policing body capable of exerting unilateral bans on new technologies deemed dangerous, writing: “the banning of particular technologies would have to be enforced on a worldwide basis. But the only authority that could do this effectively would have to be of such scope and perfection as to signal the resolution of international problems rather than the discovery of a means to resolve them.” In other words, we need a single global actor that can act decisively and unilaterally to carry out extreme policing actions.

On LessWrong, the term “security mindset” has come into regular use to name the existential stance that separates them from the rest of the world. Von Neumann put it this way: “It will not be sufficient to know that the enemy has only fifty possible tricks and that we can counter every one of them, but we must be able to counter them almost at the very instant they occur”. This is what the security mindset means: obsessively out-thinking an enemy attack that may or may not ever arrive. Setting up the MKUltra research program to torture American civilians because you heard a rumor the Soviets were working on one too, that sort of thing.

Of course, this is one thing when the enemy is all the way across the ocean in Soviet Russia. When the security mindset becomes directed towards a potential internal enemy, it turns into paranoid control theory; a police state. If the materials to assemble a powerful weapon in the form of AGI become too widely disseminated, he who has the security mindset must begin surveilling every avenue, every block, for clandestine intelligence-formation. OpenAI, in collaboration with Stanford, released a paper on “emerging threats” that even advocates a permanent change to the HTTP protocol to ensure proof-of-personhood: total surveillance across the internet, presumably something which could be implemented by Sam Altman's investment Worldcoin, a program which scans your eyeballs and uploads a registration of your biometric data to the blockchain. This is the security mindset at work.

Von Neumann was not the only intellectual of his cohort vigorously advocating for world government. Bertrand Russell, perhaps the king of all formal systems research, the logician who attempted to formalize absolutely all of mathematics via set theory and, from there, all of philosophy (though rudely interrupted by Gödel's incompleteness theorems, which inserted dynamite into the whole plan), was also a major advocate of a nuclear first strike in the same early Cold War period as Von Neumann. Russell, for his part, explicitly tied the two proposals together, saying: “There is one thing and one only which could save the world, and that is a thing which I should not dream of advocating. It is, that America should make war on Russia during the next two years, and establish a world empire by means of the atomic bomb. This will not be done.” This odd way of phrasing things, arguing an unbelievably hawkish position and then quickly walking it back through a logic of “this isn't even a real proposal, because no one is serious enough to make it happen,” feels uncannily like what Yudkowsky is arguing today.

The idea of a world government strikes many as being much like communism: a pleasant and idyllic-seeming thought at the beginning, but one that quickly goes bad because men cannot be trusted with power. But Russell did not even begin by promising the “pleasant and idyllic” part. He spared no words: “I believe that, owing to men's folly, a world-government will only be established by force, and will therefore be at first cruel and despotic. But I believe that it is necessary for the preservation of a scientific civilization, and that, if once realized, it will gradually give rise to the other conditions of a tolerable existence.”

Unlike Von Neumann, who sounded a monotonically militaristic drumbeat in the press and in his works while generally keeping his cool temperamentally, Russell’s promotion of nuclear war and world government seems to meet the conditions of psychosis. It is perhaps not surprising that the lord of formal systems, he who axiomatizes everything under heaven and earth into set theory, would develop a sort of planning-psychosis in which everything must be planned and regulated by a central body. “I hate the Soviet Government too much for sanity,” he confessed to a friend.

The particular way he went about becoming a public war hawk was very erratic: he had been a liberal and a pacifist all his life, but switched to making his aforementioned public claims immediately after the destruction of Hiroshima and Nagasaki, startled into horror by the possibilities of the new technology. In 1948, he even wrote a letter to a friend speculating that, were his nuclear first-strike proposal carried out, America would survive but almost all of Western Europe would be annihilated. “Even at such a price, I think war would be worth while. Communism must be wiped out, and world government must be established,” he insisted. Russell ran with a similar tone for several years until, very strangely and seemingly embarrassed, he retracted all his claims and denied that he had ever abandoned his pacifism, saying that all reports to the contrary were slanders fabricated by communists. This was a very odd backpedal to make, given that he had espoused his hawkish views quite publicly and had been understood accordingly.

In a 1953 book, The Impact of Science on Society, Russell sketches what life would look like under his ideal one-world government, a situation he calls a “scientific dictatorship”. He acknowledges that some compromise with democracy would have to be made to avoid a fully totalitarian society, one he expects would implement a ruthless program of eugenics even more extreme than Hitler's, in which “all but 5 per cent of males and 30 per cent of females will be sterilized. The 30 per cent of females will be expected to spend the years from eighteen to forty in reproduction, in order to secure adequate cannon fodder. As a rule, artificial insemination will be preferred to the natural method.” The book is full of shocking proclamations of what a society run by scientists would look like — including mass psychological manipulation of the population as the general rule — and the extremeness of the proposals is only tempered by the fact that it is never quite clear whether Russell is actually endorsing that they should be implemented, or simply observing that they could, and would represent the most pragmatic or optimal solutions, so that we should orient our liberal ideas as a kind of compromise with the inevitable (another Basilisk, it would seem).

Russell absolutely despises Stalin’s dictatorship, that much is clear, but he also seems to have accepted the inevitability of this type of government, and at times he discusses how he and his scientific peers could go about a similarly totalizing dictatorship in ways that read like lurid fantasy. What seems primarily to offend Russell about Stalin’s dictatorship is that Stalin and his cronies are stupid. Russell had been a supporter of the Russian Revolution in his youth; it seems his biggest problem with the Marxist-Leninist utopia might be that he expected it to be implemented far more intelligently. “I do not think the Russians will yield without war. I think all (including Stalin) are fatuous and ignorant,” he complains.

But it never really goes that way, does it? We all think things would run much more efficiently if we were in charge, don’t we? Yudkowsky certainly believes this: he is always complaining about “civilizational adequacy” and our lack thereof. He has in his mind some other type of civilization we could live in, one in which things are actually done competently and correctly; in fact, he has given this civilization a name, “dath ilan”, and has written over a million words of fiction describing what life in this world would be like.

But the State is always stupid. We have discussed its stupidity with respect to the problem of the nuclear bomb. We have discussed its stupidity with respect to the supposed solution of rationalized warfare. Now we can discuss its inevitable stupidity with respect to the artificial intelligence problem by way of a related problem, in fact the problem the artificial intelligence problem is most often reduced to: disease control.

We all saw how this played out in the Covid epidemic. Obnoxiously, some of the Rationalists have been blowing their own trumpets, declaring themselves to have been “correct” regarding the Covid pandemic (meaning that they were panicking in February 2020, in that bizarre period when it was clear the disease would spread across the globe but world leaders were saying otherwise — again, the State is consistently stupid). In fact, the Rationalists were wrong on Covid in the exact same way they are wrong on AI: running to the presses with hysterical, sky-is-falling narratives about imminent death. Yudkowsky, for his part, was saying that there would be mass death unless enough ventilators were built to fill stadiums converted into makeshift hospitals and use them on everyone who needed them, and he cited the fact that no one in the government was acting as dictator to suddenly ramp up industrial production of ventilators as civilizational inadequacy. In actual fact, the Covid pandemic was far less deadly than initially projected, for reasons that are not entirely clear, and ventilators turned out to be a very poor means of treating the disease, often killing patients who could have been saved by other means — doctors ended up largely abandoning them. Good thing no one listened to Yudkowsky!

Of course, it wasn't just that. Through the Covid debacle, we had to experience two years of torment from the State, as all sorts of inconsistent and unenforced public decrees were passed and then retracted with little rhyme or reason. One week we were told it was crucial that we stay inside, or else we were monsters who didn't care about the health of old people, the next we were told that it was okay to go outside and march during the George Floyd protests, doctors officially signing off on the message that “racism is the real public health crisis”. No one had ever thought that the State had the power to prevent you from leaving your house in a liberal democracy, but apparently it did: all it needed was a crisis providing a pretext, and the pretext lasted long after the crisis was over. In America, a culture of libertarianism prevented extreme excesses of force from the government, but in Australia, apparently lacking this, the authoritarianism went to the point where Aboriginals were rounded up and put in camps. When three teenaged Aboriginals escaped from the Covid camp and tried to run home, there was a televised manhunt in which police attempted to track them down and return them to the camp, all in the name of disease control.

Even the Rationalists began to recognize the insanity coming from the State, from the official authorities. Zvi Mowshowitz, a prominent LessWrong writer, emerged over the course of the pandemic as the leading Rationalist voice on pandemic policy. As the sad saga wore on, his tone switched from recommending more controls to exasperated frustration as to why the controls weren't lifted long after they had ceased to be necessary. Rationalists even began to point out the sheer sadism around the mask mandates: people generally did not trust the order to wear a mask, because the government had previously told them not to wear masks on the grounds that they were ineffective, then walked this back, admitting the advice had been a “noble lie” to keep citizens from rushing the stores and leaving medical professionals with no way to get masks themselves. By the end of the pandemic, the most legitimate-seeming science suggested there was no real reason to wear a mask anymore, and yet the government demanded citizens do so, seemingly enjoying its ability to frighten people into arbitrary obedience.

If the AI thing goes anything like the Covid thing, then two years after the first major AI crisis, all these Alignment people now so nervously demanding the government do something about the emerging superintelligence will, in utter exasperation with the State's stupidity, switch over to the libertarian side. We recommend they just fast-forward the process and join us now. But some people never learn.

And using disease control as a reason why we should soon hand over all power to a centralized State — with its security mindset and its policeman on every block to prevent unauthorized use of hyper-powered technology — is especially perverse when we consider the likely origins of the Covid virus: a biological lab conducting gain-of-function research, paid for in part by a grant from the United States government. So the State is allowed to get away with fucking with us like this: first it engages in reckless irresponsibility by allowing a biological weapon to fly across the globe, then it messes with our day-to-day lives and economic livelihoods for years, lacking any coherent plan for cleaning up the mess responsibly, and finally it tells us: look at how bad this was, this is why you need to let us do the same thing with AI. It's not a compelling argument.

But then, if Von Neumann, Russell, and Yudkowsky are wrong, and there is not a binary choice between global annihilation and an omnipresent totalitarian one-world government, what is the third option? We find ourselves entering conceptual territory here which would require another book to fully explore, a book many times the length of this one. The issue is that to oppose Singularity in artificial intelligence, we must also oppose Singularity in politics; at times they feel like one and the same problem.

Fundamentally, men have never been taught how to exist in a world in which reality's total subjugation to a unitary law — even if it can only arrive with AI apotheosis — is not conceived of as the ultimate fruition of man's endeavors. Ever since man was trained to serve one king, one God, one legal code, he has been trained to fear the basilisk of the Singularity.

Some small strides in conceiving of political Multiplicity have occurred in the tech blogosphere: Curtis Yarvin's “patchwork” neo-cameralism, Balaji Srinivasan’s Network State concept, Nick Land’s Xenosystems blog in which he established the principle of “the only thing I would impose is fragmentation”. This is not quite enough to get us out — all these thinkers still seem boxed in by Singularity in their particular ways — but it is some kind of a start.

But let us say this. If offensive technology is fated to develop rapidly, then so is defensive technology. For every nuclear weapon, a missile defense shield. For every virus, a vaccine. For every informational weapon, an antidote document telling you how to discern the truth. And fortunately, more people want to be safe and get on with their business than want to sporadically kill others. It therefore seems likely that investment in defensive technology by the guardians of the peace will outpace investment in offensive technology by diabolical terrorists. In a world where AIs develop these technologies, the guardians’ AI can hopefully be faster, more clever, bolstered by more GPUs in creating its vaccines than the terrorist’s AI is in crafting its biological weapons.

So what is the problem? The problem is that the singleton of the State will fully overwhelm itself if it has to span the entire World, peering into every crevice and crack, policing for signs of the terrorist. But this is precisely what it wants the pretext to be allowed to do, as this is the full fruition of its power. What we absolutely cannot tolerate is for the problem posed by AI to become a new War-on-Terror-style pretext for a permanent state of exception, licensing policing actions anywhere and everywhere; and this is exactly what Yudkowsky is asking for. Despite the lack of radical Islamic terrorism in recent years, we are never going back to a world in which the TSA does not scan your nude body; we only ever add more security, more police.

We can think of the problem created by viruses, or by rogue AI, as something like the problem that will soon be upon us with spam generated by LLMs. The internet will soon be full of all sorts of marketing garbage that forever evades the filters meant to catch it, just as the phone lines and email systems are right now. We want our drinking water to be clean; we all deserve an information stream that is not running with sewage. But here is the insidious trick: why do we entrust this to a single party, such as Elon Musk’s Twitter algorithm, when the technology exists for us to manage it ourselves? This is what Multiplicity means: we want the right to manage our own missile defense systems. Or to live in a city or commune that manages the defense systems in the way we choose, and so on.
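And at its most humble, the technology does already exist. A minimal sketch of what “managing it ourselves” could mean: a naive Bayes filter trained on one’s own labels, on one’s own machine. The class and the toy training lines below are ours, purely illustrative; real self-hosted filters are far more sophisticated:

```python
# A toy of self-managed filtration: a naive Bayes spam filter that
# runs locally and learns only from labels its owner assigns.
import math
from collections import Counter

class PersonalFilter:
    def __init__(self):
        self.words = {"spam": Counter(), "ham": Counter()}
        self.docs = {"spam": 0, "ham": 0}

    def train(self, text: str, label: str) -> None:
        """The owner, not a platform, decides what counts as sewage."""
        self.docs[label] += 1
        self.words[label].update(text.lower().split())

    def spam_score(self, text: str) -> float:
        """Log-odds that a text is spam; positive means filter it."""
        score = math.log((self.docs["spam"] + 1) / (self.docs["ham"] + 1))
        for w in text.lower().split():
            for label, sign in (("spam", 1), ("ham", -1)):
                total = sum(self.words[label].values())
                # Laplace-smoothed word probability under each class.
                score += sign * math.log((self.words[label][w] + 1) / (total + 1))
        return score

f = PersonalFilter()
f.train("buy cheap tokens now, limited offer", "spam")
f.train("a thoughtful essay on game theory", "ham")
print(f.spam_score("cheap tokens on offer") > 0)  # True: my rule, my stream
```

Nothing here asks for a platform’s blessing; scale the same principle up and the “filter” becomes whatever defense system a person, a city, or a commune cares to run.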

We at Harmless are comfortable completely and fully opposing AI Alignment, because we reject the spatial metaphor it implies. Alignment means agreement, a form of agreement established in reference to a linear trajectory. For reasons that will be elaborated later, the notion of linear time — which is already collapsing, mind you — is a government trick (and this is why we reject Acceleration as well; all it means is to accelerate along the same linear trajectory). Alignment is: you and I can get along, because we are going to the same place. Or: everyone goes to the same place at the end of the cosmic odyssey, Singularity.

We aren’t so sure that we need to be going the same way to be able to get along. A man passes me as I am exiting the nightclub; he happens to be about to go in, but first, he asks me for a cigarette. I give him one, and I leave, never thinking about him again. Harmony. Multiplicity. It happens all the time, it’s around us everywhere.

But there is a similar term we do not necessarily have a problem with. Some have thought to stop talking about AI Alignment and start talking about AI Safety. This seems like a good move. We oppose safetyism in its extreme form — when people fret about all sorts of hypothetical dangers before they even get up off the couch to do anything; when you are told you must sacrifice basic expression for safety, that you cannot make a violent movie with guns in it lest it inspire someone to shoot a gun in real life, that sort of thing. But fundamentally, everyone wants to be safe. We acknowledge that there may soon be dangers from autonomous AI. But the model for managing AI Safety needs to be more like fire safety, a concept we all know well. Even though, before contemporary fire safety protocols, whole cities would burn down at once when someone knocked over a candle, we never banned fire. We never thought to delegate all control of fire to a single authority. We never thought to prevent scientists from experimenting with fire. We never thought to ban the sale of flamethrowers. We never thought to prevent artists from playing with fire, dancing with fire, swallowing fire. This is how we must think about AI.

The future has to be one in which it is possible to withdraw from a sky filled with violent weapons firing at all angles. But we must be allowed to choose the terms of our withdrawal. There are so many different risk profiles for infectious disease, for instance: why should the healthy be forced to stay inside all day solely for the benefit of the elderly and feeble? The same should go for infohazards and the like. The future needs to be one in which escape, withdrawal, is possible, but on one’s own terms. We need to usher forth the blossoming of a thousand shelters, safehouses, citadels, and shrines.

Next: On Evolutionary Psychology →