
Darren McKee on Uncontrollable Superintelligence
Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment.
Mark Brakel (Director of Policy at the Future of Life Institute) joins the podcast to discuss the AI Safety Summit in Bletchley Park, objections to AI policy, AI regulation in the EU and US, global institutions for safe AI, and autonomy in weapon systems.

Dan Hendrycks joins the podcast again to discuss X.ai, how AI risk thinking has evolved, malicious use of AI, AI race dynamics between companies and between militaries, making AI organizations safer, and how representation engineering could help us understand AI traits like deception.

Samuel Hammond joins the podcast to discuss how AGI will transform economies, governments, institutions, and other power structures.

Steve Omohundro joins the podcast to discuss Provably Safe Systems, a paper he co-authored with FLI President Max Tegmark.

Johannes Ackva joins the podcast to discuss the main drivers of climate change and our best technological and governmental options for managing it.

Tom Davidson joins the podcast to discuss how AI could quickly automate most cognitive tasks, including AI research, and why this would be risky.

Robert Trager joins the podcast to discuss AI governance, the incentives of governments and companies, the track record of international regulation, the security dilemma in AI, cybersecurity at AI companies, and skepticism about AI governance.

Jason Crawford joins the podcast to discuss the history of progress, the future of economic growth, and the relationship between progress and risks from AI.

On this special episode of the podcast, Jaan Tallinn talks with Nathan Labenz about Jaan's model of AI risk, the future of AI development, and pausing giant AI experiments.

Joe Carlsmith joins the podcast to discuss how we change our minds about AI risk, gut feelings versus abstract models, and what to do if transformative AI is coming soon.

Dan Hendrycks joins the podcast to discuss evolutionary dynamics in AI development and how we could develop AI safely.
View episodeNo matter your level of experience or seniority, there is something you can do to help us ensure the future of life is positive.