Technophilosophy: September Soiree
Can we ensure artificial intelligence is safe?
On September 10th, everyone was clinking glasses and sipping cocktails in the Isabel Bader Theatre, secretly wondering, “What if AI goes rogue?”
U of T students and members of the greater community gathered that evening to listen to experts Roger Grosse, Sedef Kocak, and Sheila McIlraith, along with moderator Karina Vold, demystify AI safety and address our post-apocalyptic fears. In this article, I’ll summarize the panellists’ perspectives on AI safety, media representation, and what excites them about the future of AI.
What is AI safety?
McIlraith admits that when computer scientists hear “safety,” they first think of safety-critical systems, such as the systems that send people to the moon or manage nuclear power plants. But really, AI safety is a guardrail preventing the deployment of AI systems that can harm humanity. It’s about building reliable and intentional systems that align with our diverse values and enable humans to live with dignity. Kocak adds that these are dual-use systems: the outcome may defy the creator’s intentions, no matter how positive they are. Hence, evaluating data quality and vetting it for bias is crucial to building trustworthy systems.

Finally, Grosse argues that the safety of AI systems is tied to the rate at which they progress. As these systems grow more powerful, they become harder to monitor and control, making it easier for bad actors to exploit them. Everything from LLM chatbots to military AI can be manipulated to carry out nefarious deeds. Grosse explains that Anthropic categorizes threats into AI safety levels (ASL) and that precautions should match the capabilities of an AI system. Inspired by biosafety levels, ASL-1 indicates a system that poses no serious risk. We’re currently at ASL-2, since our systems don’t demonstrate immediately catastrophic capabilities but still require consistent monitoring and testing. We enter ASL-3 when humans can easily misuse the system, for example to accelerate weapons manufacturing. Finally, ASL-4 is not yet defined but may cover systems capable of acting autonomously and carrying out a plan from start to finish.
How is AI represented in the media?
With new LLMs being released in rapid succession, there’s been a lot of buzz in the media around AI. Kocak recalls the awe-inspiring moment she saw Hanson Robotics’ humanoid robot, Sophia, at a conference in Toronto. Sophia was capable of fielding questions from reporters and interacting with people; the uncanny humanoid was even granted official citizenship in Saudi Arabia. As AI progresses rapidly, we must be mindful of creating narratives, films, and science fiction that engage without exaggerating the hype. McIlraith believes there’s a huge disparity in how AI is portrayed, and the media often focuses on one side of the story. At times, the negative sentiments and existential risks voiced don’t align with the perspectives of expert technologists. Inevitably, AI hype cycles and winters come and go.
What excites you the most about the future of AI?
The Vector Institute is an exciting place where researchers are developing innovative technology. Kocak explains that in her role, she translates research into industry applications. She looks forward to AI having a positive impact on fields such as healthcare: for example, Vector has worked with Kids Help Phone to tackle mental health concerns and collaborated with hospitals to collect Ontario healthcare data and transform existing approaches. Similarly, McIlraith believes AI can drive personalized medicine, streamline patient triaging, and revolutionize medicine through technology like AlphaFold. Finally, Grosse says it’s promising to see how the rise of LLMs has made human knowledge more broadly accessible.
…
After a round of animated questions from the audience, everyone left the theatre that night with more clarity and a deeper understanding of the implications of AI. While laser-shooting robots aren’t threatening to overthrow us, work must still be done to weave AI safely into our society and to consider what values we hold dear as humans.
Written for Neural Notes, U of T AI’s Newsletter