What if Dario Amodei Is Right About A.I.?

Published 2024-04-12
Back in 2018, Dario Amodei worked at OpenAI. And looking at one of its first A.I. models, he wondered: What would happen as you fed an artificial intelligence more and more data?

He and his colleagues decided to study it, and they found that the A.I. didn’t just get better with more data; it got better exponentially. The curve of the A.I.’s capabilities rose slowly at first and then shot up like a hockey stick.

Amodei is now the chief executive of his own A.I. company, Anthropic, which recently released Claude 3 — considered by many to be the strongest A.I. model available. And he still believes A.I. is on an exponential growth curve, following principles known as scaling laws. And he thinks we’re on the steep part of the climb right now.
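(A brief note for the curious: in the research literature, these scaling laws are usually written as power laws rather than literal exponentials. A minimal sketch of the form reported by Kaplan et al. in 2020, where the fitted constants are the paper's, not figures discussed in this episode:)

```latex
% Neural scaling law in its usual published form (Kaplan et al., 2020):
% test loss L falls as a power law in the number of model parameters N.
% N_c and \alpha_N are constants fitted to the paper's experiments,
% not numbers from this episode.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076, \quad N_c \approx 8.8 \times 10^{13}
```

On a log-log plot this is a straight line; the "hockey stick" shape appears when a capability that depends on that steadily falling loss crosses a threshold of practical usefulness.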

When I’ve talked to people who are building A.I., scenarios that feel like far-off science fiction end up on the horizon of about the next two years. So I asked Amodei on the show to share what he sees in the near future. What breakthroughs are around the corner? What worries him the most? And how are societies that struggle to adapt to change, and governments that are slow to react to it, supposed to prepare for the pace of change he predicts? What does that line on his graph mean for the rest of us?

This episode contains strong language.


Mentioned:

- Sam Altman on The Ezra Klein Show (www.nytimes.com/2021/06/11/opinion/ezra-klein-podc…)
- Demis Hassabis on The Ezra Klein Show (www.nytimes.com/2023/07/11/opinion/ezra-klein-podc…)
- On Bullshit (press.princeton.edu/books/hardcover/9780691122946/…) by Harry G. Frankfurt
- “Measuring the Persuasiveness of Language Models” (www.anthropic.com/research/measuring-model-persuas…) by Anthropic

Book Recommendations:
- The Making of the Atomic Bomb (www.simonandschuster.com/books/The-Making-of-the-A…) by Richard Rhodes
- The Expanse (www.hachettebookgroup.com/series/the-expanse/) (series) by James S.A. Corey
- The Guns of August (www.penguinrandomhouse.com/books/180851/the-guns-o…) by Barbara W. Tuchman

Thoughts? Guest suggestions? Email us at [email protected].


You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at www.nytimes.com/article/ezra-klein-show-book-recs.


This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Kristin Lin and Aman Sahota. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

Comments
  • @cmw3737
    The note about Claude knowing internally that it is lying, or at least being uncertain, needs to be made accessible. Getting agents to ask questions themselves can be a big improvement for zero-shot tasks. Writing a prompt with enough detail to guide the model toward a correct solution can be tedious. Instead of the agentic flow of correcting its first answer (“that’s not quite right,” then explaining what is wrong), it can be better to tell it to ask questions whenever anything is ambiguous, unclear, or missing before it gives an answer it has high confidence in. To do that, it needs access to its own level of certainty. That way you don’t have to think of every detail yourself; you let it build a model of the task and ask you (or a collaborative agent with a fuller picture) to fill in details as needed until it reaches a confidence threshold, rather than making stuff up to give whatever best zero-shot answer it can come up with.
  • @dianes6245
    " They found that the A.I. didn’t just get better with more data; it got better exponentially. The curve of the A.I.’s capabilities rose slowly at first and then shot up like a hockey stick." I read that sci paper. The called it emergent. but later, another paper contradicted it. The second paper said that small increases were not noticed. So there was no hocky stick. Actually... increases are log - linear. It take logarithmicly more compute to get a linear increase in ability. But the trends go all over the charts - so its hard to make sense of this. Sometime a U curve is noticed. High error rate followed by low then high again. Be careful about cherry picking.
  • @kyneticist
    So, just to clarify: academics and researchers have figured out the most likely risks, the scale, and the general scenarios that AI development will make real in the short term. They also reason with confidence that once those risks materialise as actual catastrophes, nobody will do anything about them because there's too much money at stake... and nobody sees a problem with this.
  • You have to admit that Dario’s transparency and openness are remarkable, courageous and very valuable. In contrast, think of the type of conversations you see from CEOs in other organizations (across every industry) who hide behind business speak and never talk (or even hint) about risks, threats, concerns, etc. I think what we are seeing from CEOs and founders like Dario Amodei, Sam Altman, Mustafa Suleyman, etc. is drastically different from what we see from 99.9% of all other CEOs in “power” today. Also, Ezra is one amazing interviewer.
  • Ezra, your questions and your guidance of this conversation were masterful. You took a topic that is complex and jargonistic and brought it to a level of easy consumption while still allowing your guest to explain the topic at a good depth.
  • @BrianMosleyUK
    This is such an entertaining and informative discussion. Well done and thank you.
  • @rmutter
    I feel fortunate to have been able to listen in on this outstanding discussion. I really enjoyed their bantering and wordplay. I find myself in awe of the intellectual power that has been harnessed in the creation of AI. Now, if we humans can find a means to adapt to the exponentially growing intellectual power of maturing AI systems, we may actually benefit from using them, instead of them using us.
  • @somnambuIa
    1:02:15
    EZRA KLEIN: When you imagine how many years away, just roughly, A.S.L. 3 is and how many years away A.S.L. 4 is, right, you’ve thought a lot about this exponential scaling curve. If you just had to guess, what are we talking about?
    DARIO AMODEI: Yeah, I think A.S.L. 3 could easily happen this year or next year. I think A.S.L. 4 —
    EZRA KLEIN: Oh, Jesus Christ.
    DARIO AMODEI: No, no, I told you. I’m a believer in exponentials. I think A.S.L. 4 could happen anywhere from 2025 to 2028.
  • @penguinista
    I am sure the people with access to the godlike AIs will be eager to hand off that power and privilege “when it gets to a certain point.” Like the old saying: “Power causes prosocial motivation; ultimate power causes ultimate prosocial motivation.”
  • @glasperlinspiel
    This is why anyone making decisions about the near future must read Amaranthine: how to create a regenerative civilization using artificial intelligence. It’s the difference between SkyNet and Iain Banks’ “Culture” and “Minds.”
  • I like when he says that even though AI compute uses a lot of energy, we have to consider the energy it takes to produce the food a worker eats.
  • @grumio3863
    Thank you for calling that out. "Lord grant me chastity but not right now" I'd love to hear an actual game plan for actual democratization, instead of empty virtue signaling
  • @AB-wf8ek
    47:43 Listen, if we're going to figure out how to make these dinosaur parks safe, we have to make the dinosaurs
  • @geaca3222
    Great very informative conversation, thank you
  • @mikedodger7898
    34:08 This is an especially relevant section. Thank you! "Are you familiar with the philosopher Harry Frankfurt's book on bullshit?"
  • I live alone and am sliding gracefully into old age, so the idea of an interesting, dynamic AI assistant is exciting, up to a point. One that can organise life's essentials and also hold an interesting conversation would be great. However, the thought that its higher-functioning "parent" AI has no real conception of human alignment is terrifying!!
  • These are the same kind of scaling “laws” that enabled Moore’s “Law,” but we are investing even more.
  • @cynicalfairy
    "Your scientists were so preoccupied with whether or not they could they didn't stop to think if they should."
  • @831Miranda
    Excellent interview, thank you to both of you! Amodei is one of the better 'builders of psychopaths' (aka builders of AI tech) we have in the world today.