About the Episode 🎙️
Lex Fridman interviews Sam Altman, CEO of OpenAI, for the second time. They discuss the OpenAI board saga, the future of AI including GPT-5 and Sora, the lawsuit from Elon Musk, AI safety, and the potential for AGI. Altman shares his personal reflections on the challenges and opportunities ahead.
Key Takeaways 💡
- (07:51) Altman describes the OpenAI board saga as the most painful professional experience of his life, characterized by chaos and shame, but also highlighting the outpouring of support he received. He acknowledges the experience helped build resilience within the company for future challenges on the road to AGI.
- (14:45) Altman believes the previous board lacked sufficient experience and that the new board members bring valuable expertise in governance, law, and technology. He emphasizes the importance of the board answering to the world, not just to itself, and of including technical experts and people with different perspectives.
- (20:01) Altman admits to initially accepting his removal from OpenAI, even becoming excited about focusing on AGI research, but was persuaded to return by the executive team. He describes the period as a battle fought in public and extremely exhausting.
- (25:43) Altman expresses his love and respect for Ilya Sutskever, emphasizing Ilya's deep concern for AI safety and his long-term thinking about technology. He clarifies that Ilya has not seen AGI, and calls him a credit to humanity for how deeply he thinks and worries about making sure we get this right.
- (35:36) Altman states that OpenAI's mission includes putting powerful technology in the hands of people for free as a public good. He believes that providing free or low-cost AI tools is crucial for fulfilling this mission and enabling people to build an incredible future.
- (37:43) Altman views the lawsuit from Elon Musk as unbecoming of a builder and expresses sadness that Musk, whom he respects, is attacking OpenAI. He suggests that Musk should focus on competing and building better AI rather than pursuing legal action.
- (46:04) Altman acknowledges the potential dangers of Sora, including deepfakes and misinformation, and emphasizes the need for thoughtful consideration before releasing the system. He also believes that creators of valuable data deserve compensation for its use in AI training.
- (48:59) Altman believes that AI will impact the tasks people perform, allowing them to operate at a higher level of abstraction, rather than simply replacing jobs. He envisions AI as a tool that will enhance human capabilities and creativity.
- (51:51) Altman states that GPT-4 is amazing, but also sucks relative to where AI needs to be and where it will be in the future. He hopes that GPT-5 will be as big a leap over GPT-4 as GPT-4 was over GPT-3.
- (57:02) Altman envisions a future where AI models have extensive context windows, potentially incorporating a user's entire history and knowledge base. He emphasizes the importance of user choice and transparency regarding data privacy in such systems.
- (01:00:08) Altman acknowledges the risk of AI models hallucinating and emphasizes the importance of fact-checking, especially in critical applications. He hopes for a future where society incentivizes in-depth, balanced journalism that celebrates achievements while also providing constructive criticism.
- (01:16:18) Altman believes that compute will be the currency of the future and advocates for heavy investment in increasing compute capacity. He identifies nuclear fusion as a potential solution to the energy challenges associated with large-scale compute.
- (01:19:56) Altman expresses concern about the potential politicization of AI and the risk of theatrical events shaping public perception. He emphasizes the importance of truth and balanced understanding of the risks and benefits of AI.
- (01:30:09) Altman states that OpenAI is working hard to avoid bias in its models and suggests publishing the desired behavior of models to get public input. He believes that this would help clarify whether unexpected behavior is a bug or an intended policy.
- (01:33:48) Altman anticipates that safety will become the primary focus of OpenAI, with the entire company involved in addressing various aspects of AI safety. This includes technical alignment, societal impacts, economic impacts, and security threats.
- (01:35:28) Altman is excited about GPT-5 being smarter across the board, not just in specific areas. He looks forward to AI systems that can better understand and connect with human intentions and intuitions.
- (01:36:53) Altman believes that AI will lead to a shift in programming, with more people programming in natural language. He also anticipates a return to robotics at OpenAI, envisioning humanoid robots or other physical embodiments of AI.
- (01:39:17) Altman expects that by the end of the decade, there will be AI systems that are quite capable and remarkable, but he believes that AGI is a mile marker, not an ending. He defines AGI as a transition point that fundamentally changes the world, similar to the internet or Google Search.
- (01:44:47) Altman believes that no one person should have total control over OpenAI or AGI and advocates for robust governance systems. He expresses willingness to be misunderstood in his support for government regulation of AI, believing it is necessary for ensuring responsible development.
- (01:49:47) Altman clarifies that his lack of capitalization on Twitter is not a deliberate statement or philosophical choice, but simply a habit stemming from his early days on the internet. He states that he does not think about it and is willing to capitalize if it is seen as a sign of respect.
- (01:53:25) Altman acknowledges that Sora increases his belief that we may live in a simulation, but it is not the strongest piece of evidence. He believes that AI will serve as a gateway to new ways of seeing reality.
- (01:57:46) Altman wants to believe that there are other intelligent alien civilizations out there, but finds the Fermi paradox puzzling and scary. He hopes that AI will help us understand the nature of intelligence and recognize forms beyond our current understanding.
- (01:58:56) Altman finds hope in the trajectory of human civilization, despite its flaws and challenges. He believes that the collective effort of humanity, building upon the knowledge and achievements of previous generations, is inspiring and gives him hope for the future.