Ilya Sutskever, a prominent name in the AI world and co-founder of OpenAI, is embarking on a new venture.
After leaving OpenAI in May, Sutskever announced Safe Superintelligence Inc. (SSI), a company with a single aim: building a powerful yet safe AI system. Joining him are Daniel Gross, a former AI lead at Apple and ex-Y Combinator partner, and Daniel Levy, an ex-OpenAI engineer known for his work on large AI models.
Sutskever’s departure from OpenAI wasn’t without drama. Internal disagreements over prioritizing AI safety versus rapid product development had been simmering for a while. Things reached a boiling point last year when Sutskever pushed for the ousting of OpenAI CEO Sam Altman.
Following his exit, other key figures like AI researcher Jan Leike and policy researcher Gretchen Krueger also left OpenAI, citing concerns that safety processes were being overshadowed by commercial goals.
SSI is determined to change this narrative by focusing exclusively on creating a superintelligent AI that is safe for humanity.
As Sutskever put it, “Our business model means safety, security, and progress are all insulated from short-term commercial pressures.” This approach allows SSI to “scale in peace,” ensuring that safety measures evolve alongside AI capabilities.
Unlike OpenAI, which has diversified its projects and partnerships with companies like Apple and Microsoft, SSI is zeroing in on its mission of safe superintelligence. Sutskever envisions an AI system that not only avoids harm but also upholds key values like liberty, democracy, and freedom.
This philosophy is embedded in SSI’s mission and roadmap: safety and advanced capabilities are to be pursued in tandem, as technical problems to be solved through engineering and scientific breakthroughs.
Gross echoed this sentiment, noting that SSI’s mission aligns with its investors and business model, insulating the company from distractions and enabling it to advance its AI technology responsibly.
He also mentioned to Bloomberg that raising capital isn’t expected to be a hurdle, given the interest in AI and the team’s impressive credentials.
For all its philosophical framing, SSI’s focus is on practical, technical solutions. Sutskever and his team are developing methods to keep their AI systems aligned with human values and interests.
While they’re keeping specifics under wraps for now, the goal is clear: create an AI system that operates autonomously yet safely, benefiting humanity. Sutskever highlighted the ambition to move beyond today’s AI systems, which are largely limited to conversational interactions, toward a more general-purpose AI capable of expansive, autonomous technological development.
SSI is already making strides towards its ambitious goals with offices in Palo Alto and Tel Aviv. The company is actively recruiting technical talent to join its mission of creating safe superintelligence.