
Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)

In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast.

Watch Here


Listen Here


Episode Description

Nicholas Carlini is a security researcher at Google DeepMind who has published extensively on adversarial machine learning and cybersecurity. In this conversation, he discusses his pioneering work on adversarial attacks against image classifiers and the challenge of making neural networks robust. He also examines why such attacks are so difficult to defend against, the role of human intuition in his approach, the safety of open-source AI, and the potential for scaling up AI security research.

00:00 Nicholas Carlini's contributions to cybersecurity

08:19 Understanding attack strategies 

29:39 High-dimensional spaces and attack intuitions 

51:00 Challenges in open-source model safety 

01:00:11 Unlearning and fact editing in models 

01:10:55 Adversarial examples and human robustness 

01:37:03 Cryptography and AI robustness 

01:55:51 Scaling AI security research
