"I could be wrong. But I think it could be a near-term thing."

Eco System

As AI seems to grow more powerful every day, one of the people most invested in making it smarter says it may soon become self-sustaining and self-replicating.

In a podcast interview with the New York Times' Ezra Klein, Anthropic CEO Dario Amodei discussed "responsible scaling" of the technology — and how without governance, it may start to, well, breed.

As Amodei explained to Klein, Anthropic uses virology lab biosafety levels as an analogy for AI. Currently, he says, the world is at ASL 2 — and ASL 4, which would include "autonomy" and "persuasion," may be just around the corner.

"ASL 4 is going to be more about, on the misuse side, enabling state-level actors to greatly increase their capability, which is much harder than enabling random people," Amodei said. "So where we would worry that North Korea or China or Russia could greatly enhance their offensive capabilities in various military areas with AI in a way that would give them a substantial advantage at the geopolitical level."

Autonomous AI

When it comes to the "autonomy side" of things, however, his predictions get even wilder.

"Various measures of these models," he continued, "are pretty close to being able to replicate and survive in the wild."

When Klein asked how long it would take to get to these various threat levels, Amodei — who said he's wont to think "in exponentials" — said he thinks the "replicate and survive in the wild" level could be reached "anywhere from 2025 to 2028."

"I’m truly talking about the near future here. I’m not talking about 50 years away," the Anthropic CEO said. "God grant me chastity, but not now. But 'not now' doesn’t mean when I’m old and gray. I think it could be near-term."

Amodei is a serious figure in the space. He and his sister Daniela left OpenAI in late 2020 over directional differences following the creation of GPT-3, which Dario helped build, and the company's partnership with Microsoft. In 2021, the siblings founded Anthropic along with other OpenAI alumni to continue their responsible scaling efforts.

"I don’t know," he continued during the Klein interview. "I could be wrong. But I think it could be a near-term thing."

While AI doomsday talk is pretty par for the course these days, Amodei's insider perspective adds a lot of weight to his argument — and makes Anthropic's mission "to ensure transformative AI helps people and society flourish" seem all the more worthy.

More on AI presaging: Mistral CEO Says AI Companies Are Trying to Build God

