Six months ago, in April 2025, Dwarkesh announced the AI 2027 project on his podcast, interviewing authors Daniel Kokotajlo and Scott Alexander. Now Karpathy has justified his much longer timelines to Dwarkesh, explaining what is holding back coding agents – the first step in the AI 2027 timeline:

Andrej Karpathy: “There’s some over-prediction going on in the industry…”

Dwarkesh Patel: “What do you think will take a decade to accomplish? What are the bottlenecks?”

Andrej Karpathy: “Actually making it work.”

—Dwarkesh Podcast, Oct 17, 2025

I (Dan Schwarz) personally really appreciate Karpathy’s not-doomer, not-skeptic middle-ground take. For AI experts, Karpathy’s view is a better counterargument to short timelines than ours. But for non-AI-experts, we think the practical considerations we raised are worth reflecting on with 6 more months of evidence. As forecasters, this is more of an “outside view” – regardless of how exactly AI improves, what problems might slow down an R&D-based takeoff scenario?

One key point of our critique was “Commercial Success May Trump the Race to AGI”. We wrote:

“This piece requires at least one frontier lab to dedicate the majority of their resources towards building AI for their own internal use. We have reason to doubt that many of them will.”

In the scenario, an AI takeoff as soon as 2027 depends on a stupendous capital investment in running a vast number of expensive AI agents to do AI research inside the companies. So are the labs actually preparing for this, and trying it?

Most importantly: if Anthropic is in fact the one frontier lab focused heavily on internal R&D speedup, then the fact that they are also the most safety-conscious – with their Responsible Scaling Policy, and a stated willingness to intervene or slow down at certain risk levels – to me significantly reduces the chance of an AI 2027-like scenario with them as the “OpenBrain”.

And if Google turns out to be “OpenBrain”, they are so large, so slow-moving, and so regulated that it seems unlikely they could drive anything like the AI 2027 scenario.

So we take this as (light) evidence in favor of our original view that an R&D-based AI takeoff will take much longer than the AI 2027 scenario depicts.

Here is how we interpret the AI Futures authors’ updated timelines.

Daniel Kokotajlo – Since AI 2027: Median +1 year, to 2029

“When AI 2027 was published my median was 2028, now it’s slipped to 2029 as a result of improved timelines models & slightly slower than expected progress in general” —Twitter/X, August 22, 2025

Eli Lifland – Since AI 2027: Median ~2032 (giving 15-20% to AGI by 2027)

“My median is roughly 2032, but with AGI by 2027 as a serious possibility (~15-20%).” —AI Futures Blog, June 28, 2025

Nikola Jurkovic – Since AI 2027: Updated from ~3-year to ~4-year median

“This has been one of the most important results for my personal timelines to date. It was a big part of the reason why I recently updated from ~3 year median to ~4 year median to AI that can automate >95% of remote jobs from 2022” —AI Alignment Forum, July 2025

We also take this as (light) evidence that the differences in our forecasts in April 2025 were directionally correct. Of course, if a world-transforming AI takeoff happens in 2029 or 2032 as these forecasters think, they were on net closer to the truth than we were. The beauty of public forecasting is tracking the change over time, not so much the pure accuracy of any one forecast.

Personally, I think I’m closer to the SF house party timeline than Karpathy (and than the FutureSearch median forecast). I suppose we’ll check in once more, 6 months from now, and see!
