OpenAI co-founder and former Chief Scientist Ilya Sutskever has started his own AI firm, Safe Superintelligence Inc., with two other former OpenAI employees, Daniel Levy and Daniel Gross. As its name suggests, the startup is solely focused on building a powerful AI in a safe manner. “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the company writes in a post Wednesday. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.”

Sutskever was reportedly involved in the events and discussions leading up to OpenAI CEO Sam Altman’s brief ouster from the company’s top job and board seat (Altman was later reinstated as CEO and board member). Last year, Sutskever opined on the concept of “superintelligence” in a blog post with then-OpenAI executive Jan Leike, who is now at rival firm Anthropic. They said that while superintelligence “seems far off now, we believe it could arrive this decade.”

Now, Sutskever is laser-focused on delivering on that hypothesis. “This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever told Bloomberg of his new startup. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”
The startup’s website is a simple white page with a generic font and little to no formatting. It bears the same statement shared on Twitter. The company notes it’s currently hiring and plans to assemble a “lean, cracked team” of engineers.
Levy expressed his enthusiasm for the new company online as well. “I can’t imagine working on anything else at this point in human history,” he wrote Wednesday, claiming that Safe Superintelligence will be a “high-trust team that will produce miracles.” But safe AI can mean different things to different people. For Sutskever, the stakes are of a much greater magnitude. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” the founder said.