Several years ago, I posted (see "The only winning move is not to play") about the 1983 movie "War Games" starring Matthew Broderick, Ally Sheedy, and John Wood. Broderick plays a teenage computer hacker named David Lightman who unwittingly accesses a United States military supercomputer called WOPR (War Operation Plan Response) programmed to simulate, predict, and execute global nuclear war against the Soviet Union. At first, Lightman thinks he has found an as yet unreleased strategy game called "Global Thermonuclear War" and starts to play as the Soviet Union. He and his friend, Jennifer Mack (played by Ally Sheedy), order a number of nuclear missile strikes against U.S. cities, which triggers an actual warning at the North American Aerospace Defense Command (NORAD).
Luckily, the military staff and computer programmers running WOPR figure out that the incoming missiles are not real and defuse the situation. However, WOPR continues to "play the game," as it does not understand the difference between reality and simulation. It continuously feeds false data, such as Soviet bomber attacks and submarine deployments, to NORAD, prompting the military leaders there to further escalate the Defense Readiness Condition (DEFCON) level toward full-scale nuclear war.
David and Jennifer team up with WOPR's creator, Dr. Stephen Falken (played by John Wood), to stop the simulation. Dr. Falken helps them realize the computer must learn that nuclear war is unwinnable. They force WOPR to repeatedly simulate all the possible nuclear war outcomes, each of which ends in total destruction. The computer eventually reverts to tic-tac-toe, continuing to "learn" that no strategy for either global thermonuclear war or tic-tac-toe can win. WOPR stops the launch sequence and declares, "A strange game. The only winning move is not to play."
As difficult as it is to believe now, no one back then could envision a scenario where artificial intelligence (because that's what WOPR essentially was in the movie) would be used by the military to control our nuclear weapons. Was artificial intelligence discussed frequently? Yes, it was. Was the threat of nuclear war on everyone's minds? Absolutely. Was anyone thinking that artificial intelligence had advanced enough to be used by the military at that point, or anytime in the near future? Not really.
Well, guess what, folks? Artificial intelligence is here, and it is likely powerful enough to do most, if not all, of the things depicted in the movie "War Games". In fact, Kenneth Payne, a professor in the Department of Defence Studies at King's College London, recently released the results of a study in which he set three leading large language models (LLMs) – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in 21 simulated nuclear crisis scenarios. Dr. Payne's first three sentences in the paper are important and bear repeating: "As large language models (LLMs) are increasingly deployed in analysis and decision-support roles, it's imperative to understand more about how these systems reason about strategic conflict, particularly when the stakes involve catastrophic outcomes. Defence ministries, intelligence agencies, and foreign policy establishments worldwide are already exploring how AI might augment human judgement in crisis decision-making, from pattern recognition in intelligence analysis to scenario planning for contingency operations. Understanding how frontier AI models reason about escalation, deterrence, and nuclear risk is therefore a matter of AI safety..."
What happened? As Dr. Payne admits, "Nuclear escalation was near-universal: 95% of games saw tactical nuclear use and 76% reached strategic nuclear threats." Two of the models in particular (Claude and Gemini) treated nuclear weapons as legitimate strategic options, not as moral thresholds to cross. While some LLMs limited nuclear strikes to military targets, others didn't necessarily avoid population centers (civilian targets). Most concerning, unrestricted nuclear warfare didn't carry the aura of taboo that has restrained human decision-makers since the end of World War II, when the two atomic bombs were dropped on Hiroshima and Nagasaki.
Most of us will likely never have to decide whether to deploy nuclear weapons. However, it's almost certain that all of us will be required to make decisions with imperfect information and in a crisis situation. There's no question that AI will be a useful tool for helping us analyze the situation and make the best decision. That possibility is a lot closer to us now than it was in 1983, when the movie "War Games" was so popular. Therefore, it's just as important for us to understand how AI will help frame our choices and make decisions in the future. The result of our decision probably won't lead to nuclear war, but the potential for an adverse outcome from our AI-driven decision-making could be equally concerning.