AI Risk

The Risk From Artificial Intelligence

While artificial intelligence continues to improve rapidly, many of the scientists who developed the technology are now raising concerns that it could pose an existential threat to humanity. Echoing J. Robert Oppenheimer’s warnings about nuclear weapons, leading AI scientists such as Geoffrey Hinton, Ilya Sutskever, and Yoshua Bengio have suggested that their inventions now rank among humanity’s greatest dangers.

Artificial intelligence may pose an existential risk by empowering bad actors to develop dangerous capabilities or by taking actions that its designers didn’t intend. The first risk, called misuse, could threaten humanity’s survival if future systems enable bad actors to create biological or nuclear weapons, or other capabilities that endanger human civilization. The second risk, called misalignment, could pose an existential threat if a sufficiently intelligent system develops goals that conflict with human values and takes actions indifferent to human welfare.

OUR STRATEGY

Our group runs programs that provide value both to UChicago students and to the wider community concerned about AI risk. For UChicago students, we run an AI Safety Fundamentals course, host social events, provide research opportunities, and more.

While we pay particular attention to providing opportunities and other value to UChicago students, the ultimate goal of XLab is to benefit all of humanity. We therefore make our courses, research, and other resources publicly available to students at other schools and to the wider research community.

CHICAGO SYMPOSIUM ON TRANSFORMATIVE AI

The Chicago Symposium on Transformative AI was an intensive two-day event that brought together 30-40 promising undergraduate students to rigorously examine the implications of transformative artificial intelligence. Held at the University of Chicago’s David Rubenstein Forum, the symposium offered an intellectually vibrant environment where participants heard from leading speakers in technical and policy AI research. The program emphasized independent thinking and challenging assumptions through sessions including scenario forecasting and interactive simulations of possible futures.

AI POLICY RESEARCH

Over the course of two quarters, we have funded seven research projects related to AI policy. While some deal directly with developing better legislation, others focus on technical research that we believe is essential for informing and enforcing AI regulations. Selected projects were presented at the Chicago Symposium on Transformative AI.

AI RISK FUNDAMENTALS COURSE

XLab’s AI Safety Fundamentals course is a seven-week reading group that brings students together each quarter for dinner and structured discussion of the challenges facing the safe development of artificial intelligence. Students examine both technical safety challenges and broader policy considerations such as AI governance frameworks and regulatory approaches. The program incorporates diverse perspectives by dedicating specific sessions to criticisms and counter-arguments within the field, giving students a nuanced understanding of ongoing debates.

OUR PEOPLE

Jo Jiao

AI Risk Program Student Lead

Zephaniah Roe

AI Risk Program Student Lead

Julian Huang

AI Risk Program Student Lead