Nuclear Risk Working Group
OUR FIRST ACADEMIC YEAR COHORT
This past Winter quarter, we ran our inaugural Nuclear Risk Research Fellowship, extending our research programming into the academic year. Our four research fellows, with majors ranging from Economics to Political Science to Applied Mathematics, produced four research outputs. The fellowship provides undergraduate students with opportunities to research nuclear risk that are often unavailable to younger students outside of master's programs. Every week, we met to discuss the fellows' papers, listen to guest speakers, and talk through important debates in the nuclear world. While we started with four fellows, we intend to expand the program in the fall.
This Year’s Research Outputs:
DECLAN HILFERS
Computational and Applied Mathematics, Statistics, University of Chicago ‘26
Deterministic Models of Arms Racing in Europe
This paper applied Lewis Fry Richardson's model, a mathematical representation of arms races, to a number of case studies following Russia's invasion of Ukraine. Richardson's model quantitatively examines arms buildup to predict upticks in conflict. Declan tested how the model applies to modern arms races before updating elements of it to analyze how arms races might be better predicted and anticipated in situations with more than two actors.
WARRICK KWON
Economics and Mathematics, University of Chicago ‘28
This paper examines whether South Korean society has become numb to North Korean nuclear threats by analyzing investor behavior heterogeneity in the South Korean stock market. If nuclear threats are perceived as routine or insignificant, we would expect domestic investors to exhibit minimal reactions, whereas heightened risk perception should trigger significant market responses. By studying the intersection of psychology, nuclear risk, and economics, the paper develops a new theory of nuclear numbness in South Korean society.
JADEN CHOI
Biology and Immunology, University of Chicago ‘28
Navigating the Indo-Pacific: A Novel Approach to Defusing Military Tensions in the South China Sea
This paper examines the escalating militarization of the South China Sea by the People’s Republic of China and assesses the heightened risk of unintended military confrontations with the United States and regional allies. This research demonstrates how China’s deployment of advanced weapons systems, electronic warfare capabilities, and strategic use of the Coast Guard and maritime militia create dangerous conditions for miscalculation.
ARSH KUMAR
Computer Science and Engineering, University of Edinburgh '26
The Technicalities of Integrating AI Into The NC3
This paper discusses what the integration of AI into nuclear command and control systems might look like given the current state of AI, and what such systems would need in order to be trustworthy. While previous work on this theme has covered various aspects of AI and nuclear risk, this paper examines the technical aspects and limitations of integrating AI into the Nuclear Command, Control, and Communications (NC3) system of the United States. Arsh presented this paper at the Southern Political Science Association's Summer Conference earlier this year.
PRESENTATIONS THIS YEAR
Part of the Nuclear Risk Working Group's mission is to present research motivated by existential risk to different communities. At conferences, we gain valuable feedback on how to improve our work while engaging with experts in the broader nuclear and security field. Our goal is not only to present papers as a research group, but also to encourage and support our fellows in presenting their research. This year, we presented papers at conferences across the world. Here are some of the papers we were most excited to present!
XLab at the Alva Myrdal Centre for Nuclear Disarmament, Uppsala, Sweden
Nuclear Risk Program Manager Madeline Berzak spoke on a panel on moral frameworks in nuclear risk at the Alva Myrdal Centre for Nuclear Disarmament's Annual Conference at Uppsala University. Her paper, "Sentient Shrimp and Nuclear Winter," built on prior research and studied the differences in how effective altruists (EAs) and members of the nonproliferation community conceptualize and understand the risk of nuclear weapons. She finds that the EA and nonproliferation communities often speak past each other rather than communicating clearly. By examining this difference, the paper explores how different kinds of communities conceptualize risk and communicate it to outsiders, and it creates an original typology for intellectual groups, based on accessibility and adherence, to explain how well a group can communicate its ideas to other communities and the general public.
XLab at MILA, Montreal, Canada
XLab's student associate Aryan Shrivastava presented research on inconsistencies in language models (LMs) used in military contexts at MILA-Quebec AI's conference on the risks of AI integration in the military. His paper, "Measuring Free-Form Decision-Making Inconsistency of Language Models in Military Crisis Simulations," reveals how LMs can respond inconsistently to the same scenarios. Aryan, alongside Professor Jessica Hullman and Dr. Max Lamparth, fed five off-the-shelf LMs a wargame and measured inconsistencies across responses to the same prompt. All LMs exhibited inconsistencies, though the degree varied. Aryan concluded that the inconsistency and unpredictability of LMs could be catastrophic if applied without care in high-stakes decision-making.
XLab at Southern Political Science Association Annual Conference, San Juan, Puerto Rico
XLab presented two papers at the SPSA annual conference in Puerto Rico in January 2025. XLab's student operations lead, Rhea Kanuparthi, presented a co-authored paper on AI and decision-making in nuclear command and control. The paper delved into how AI and large language models (LLMs) are being integrated into our intelligence infrastructure, from information collection to analysis and implementation. It examines how this integration may result in inaccurate decision-making, whether due to inconsistencies within the models or the decision paralysis a black-box system can induce in a human analyst. While the nature of such software's use in nuclear weapons systems is unclear, the paper identifies possible vectors of integration to highlight potential risks of modernizing our systems.