Even in the fast-moving world of AI, patience is still a virtue, according to Andrej Karpathy.
The OpenAI cofounder, who coined the term "vibe coding," appeared on the Dwarkesh Podcast last week to talk about how far we are from developing functional AI agents.
TL;DR — he’s not that impressed.
“They just don’t work. They don’t have enough intelligence, they’re not multimodal enough, they can’t do computer use and all this stuff,” he said. “They don’t have continual learning. You can’t just tell them something and they’ll remember it. They’re cognitively lacking and it’s just not working.”
“It will take about a decade to work through all of those issues,” he added.
Agents are among the most talked-about innovations in AI, with many investors dubbing 2025 "the year of the agent." While definitions vary, agents are virtual assistants capable of completing tasks autonomously — breaking down problems, outlining plans, and taking action without step-by-step human direction.
Karpathy is a famously fast talker, so he wrote a follow-up post on X for listeners who couldn't quite parse everything he said. On the topic of agents, he reiterated his frustrations.
“My critique of the industry is more in overshooting the tooling w.r.t. present capability,” he wrote. “The industry lives in a future where fully autonomous entities collaborate in parallel to write all the code and humans are useless.”
He doesn’t want to live there.
In Karpathy’s ideal future, humans and AI collaborate to code and execute tasks.
“I want it to pull the API docs and show me that it used things correctly. I want it to make fewer assumptions and ask/collaborate with me when not sure about something. I want to learn along the way and become better as a programmer, not just get served mountains of code that I’m told works,” he wrote.
The trouble with building agents designed to render humans useless, he said, is exactly that: humans become useless, and AI "slop," the low-quality content AI generates, becomes ubiquitous.
Karpathy isn’t the only one to raise concerns about the functionality of AI agents.
In a post on LinkedIn last year, Scale AI growth lead Quintin Au explained how the errors agents make compound with every additional action they take.
"Currently, every time an AI performs an action, there's roughly a 20% chance of error (this is how LLMs work, we can't expect 100% accuracy)," he wrote. "If an agent needs to complete 5 actions to finish a task, there's only a 32% chance it gets every step right."
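Au's figure follows from treating each action as an independent trial: with an 80% per-action success rate, the chance an n-step task succeeds end to end is 0.8^n. Here is a minimal sketch of that arithmetic; the function name and step counts are illustrative, and the independence assumption is Au's simplification, not a property of any particular model:

```python
# Illustrative only: a back-of-the-envelope model of Au's compounding-error
# claim, treating each agent action as an independent trial.

def task_success_probability(per_action_success: float, n_actions: int) -> float:
    """Probability that every one of n independent actions succeeds."""
    return per_action_success ** n_actions

p = 0.8  # Au's estimate: roughly a 20% chance of error per action
for n in (1, 5, 10, 20):
    print(f"{n:>2} actions -> {task_success_probability(p, n):.1%} chance of full success")

# Output:
#  1 actions -> 80.0% chance of full success
#  5 actions -> 32.8% chance of full success
# 10 actions -> 10.7% chance of full success
# 20 actions -> 1.2% chance of full success
```

Note that 0.8^5 ≈ 0.33, the "roughly 32%" in Au's post, and the success rate collapses quickly as tasks get longer.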
While skeptical of the current state of AI agents, Karpathy said he isn’t an AI skeptic.
"My AI timelines are about 5-10X pessimistic w.r.t. what you'll find in your neighborhood SF AI house party or on your twitter timeline, but still quite optimistic w.r.t. a rising tide of AI deniers and skeptics," he wrote.