Analysis: Sam Altman’s firing at OpenAI reflects a schism over the future of AI development

SAN FRANCISCO: The rift that cost young artificial intelligence whiz Sam Altman his CEO job at OpenAI reflects a fundamental difference of opinion over safety, broadly, between two camps developing the world-altering software and pondering its societal impact.

On one side are those, like Altman, who see rapid development and, in particular, public deployment of AI as essential to testing and perfecting the technology. On the other side are those who argue that the safest path is to fully develop and test AI in the laboratory first, to ensure it is, so to speak, safe for human consumption.

Altman, 38, was fired Friday from the company that created the popular chatbot ChatGPT. To many, he was the human face of generative AI.

Some warn that hyper-intelligent software could spiral out of control, leading to catastrophe, a concern among tech workers who follow a social movement called “effective altruism,” which holds that advances in AI should benefit humanity. Among those who share these fears is OpenAI’s Ilya Sutskever, the chief scientist and board member who approved Altman’s ouster.

A similar divide has emerged among developers of self-driving cars, also controlled by AI. Some say the vehicles must be let loose on dense urban streets to fully understand their capabilities and weaknesses, while others call for restraint, concerned that the technology presents unknowable risks.

These concerns about generative AI came to a head with the surprise ouster of Altman, who co-founded OpenAI. Generative AI is the term for software that can generate coherent content, such as essays, computer code and photo-like images, in response to simple prompts. The popularity of OpenAI’s ChatGPT over the past year has accelerated the debate over how best to regulate and develop the software.

“The question is whether it is just another product, like social media or cryptocurrency, or whether it is a technology capable of surpassing humans and spiraling out of control,” said Connor Leahy, CEO of ConjectureAI and a safety advocate. “Does the future then belong to machines?”

Sutskever reportedly felt that Altman was putting OpenAI’s software into users’ hands too quickly, potentially compromising safety.

“We don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” he and a colleague wrote in a blog post published in July. “Humans won’t be able to reliably supervise AI systems much smarter than we are.”

At its developer event earlier this month, OpenAI announced a series of new commercially available products, including a version of its GPT-4 software and so-called agents that function as virtual assistants.

Sutskever did not respond to a request for comment.

The fate of OpenAI is seen by many technologists as critical to the development of AI. Weekend talks over Altman’s reinstatement failed, dashing the hopes of the former CEO’s acolytes.

The release of ChatGPT last November sparked an investment frenzy in AI companies, including $10 billion from Microsoft in OpenAI and billions more for other startups, including from Alphabet and Amazon.com.

This may help explain the explosion of new AI products as companies like Anthropic and ScaleAI rush to show investors their progress. Regulators, meanwhile, are trying to keep up with AI development, including guidelines from the Biden administration and a push for “mandatory self-regulation” by some countries, while the European Union works to implement sweeping oversight of the software.

While most people use generative AI software, such as ChatGPT, to supplement their work, like writing quick summaries of long documents, observers are wary of versions that may emerge, known as “artificial general intelligence” or AGI, which could perform increasingly complex tasks without any prompting. This has raised concerns that the software could, on its own, take control of defense systems, create political propaganda or produce weapons.

OpenAI was founded as a nonprofit eight years ago, in part to ensure that its products were not driven by a profit motive that could lead it down a slippery slope toward dangerous AGI, which the company’s charter describes as anything that could “harm humanity or unduly concentrate power.” But since then, Altman has helped create a for-profit entity within the company for fundraising and other purposes.

On Sunday evening, OpenAI named Emmett Shear, former head of the streaming platform Twitch, as interim CEO. In September he argued on social media for a “slowdown” in the development of AI. “If we are currently at a speed of 10, a pause is 0. I think we should aim for a 1-2 instead,” he wrote.

The precise reasons for Altman’s ouster remained unclear Monday. But it is safe to conclude that OpenAI will face big challenges in the future.
