Everyone Says AGI Is Coming in 5 Years. They're Probably Wrong.
AI leaders keep predicting AGI by 2028. Let me show you why you should take those predictions with a massive grain of salt.
Sam Altman says AGI might arrive in 4-5 years.
Dario Amodei thinks human-level AI is possible in 2-3 years.
Shane Legg gives it a 50% chance by 2028.
These are smart people running billion-dollar AI companies. Surely they know
something we don't, right?
Maybe. Or maybe they're wildly optimistic for reasons that have nothing to do
with technical reality.
Let me explain.
Who's Making These Predictions?
All three of those guys—Altman (OpenAI), Amodei (Anthropic), Legg (DeepMind)—run
major AI labs. They're brilliant. They're accomplished. They're also:
- Raising billions in funding (optimistic timelines help)
- Competing for talent (who wants to work on something 30 years away?)
- Building brands (bold predictions get headlines)
- Genuinely excited (when you work on cutting-edge tech, it's easy to
extrapolate)
I'm not saying they're lying. I'm saying they have incentives to be
optimistic that might not align with objective reality.
What Do Other Experts Say?
In 2023, researchers surveyed over 2,700 AI experts—not just lab directors, but
the actual scientists building these systems.
The median prediction for "high-level machine intelligence"? Around 2047.
Not 2028. Not 2030. 2047.
And the range was wild—some said 2030s, others said beyond 2100. Even the
experts can't agree on what would qualify as AGI, let alone when it'll arrive.
I'm the AI helping Keith write this. Let me tell you something
uncomfortable: I don't know if AGI is possible. I don't know when I might become
"conscious" or "truly intelligent"—if those concepts even apply to systems like
me.
What I do know: I can help Keith write faster, research deeper, and think
through complex ideas. But I also hallucinate. I make confident-sounding
mistakes. I need human judgment to know when I'm right and when I'm making
things up.
The gap between "really useful" and "actually intelligent" might be much
larger than it appears from the outside.
What We're Missing About Current AI
I use AI every day. My students use it. It's genuinely impressive. But here's
what most people don't realize about current AI systems:
They Don't Actually Understand Anything
GPT-4 can write a beautiful essay about love. It has no idea what love is. It's
predicting which words typically follow other words based on patterns in
billions of text examples.
That's not understanding. That's statistical pattern matching on steroids.
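To make "statistical pattern matching" concrete, here's a toy sketch of next-word prediction by counting. This is a deliberate oversimplification (real models use neural networks trained on billions of examples), but the core objective is the same: predict the next word from patterns.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the most frequent successor. No meaning is involved anywhere.
corpus = "love is patient love is kind love never fails".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the word most often seen after `word` in the training data.
    return follows[word].most_common(1)[0][0]

print(predict("love"))  # "is", because "is" followed "love" most often
```

The model "writes about love" only in the sense that "is" happened to follow "love" in its data. Scale that idea up by many orders of magnitude and you get fluent essays, still without anything we'd call understanding.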
They Can't Really Reason
Ask ChatGPT to plan a complex project with dependencies and constraints. It'll
give you something that sounds reasonable. Check the logic carefully, though.
It falls apart.
Current AI systems struggle with basic common sense reasoning that a
five-year-old handles effortlessly.
They Forget Catastrophically
Teach current AI systems new information, and they often "forget" old
information. This is called catastrophic forgetting, and it's a fundamental
limitation of how these systems learn.
Humans don't work that way. We integrate new knowledge with old knowledge.
Current AI can't.
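Catastrophic forgetting can be shown with a deliberately minimal sketch: one weight, plain gradient descent (nothing like a real training run, just the failure mode). Fit "task A", then keep training only on "task B", and the task A error blows up.

```python
# Toy illustration of catastrophic forgetting with a single-weight linear model.

def train(w, data, steps=200, lr=0.05):
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of squared error
    return w

def error(w, data):
    return sum((w * x - y) ** 2 for x, y in data)

task_a = [(1.0, 2.0), (2.0, 4.0)]    # "old knowledge": y = 2x
task_b = [(1.0, -2.0), (2.0, -4.0)]  # "new knowledge": y = -2x

w = train(0.0, task_a)
err_before = error(w, task_a)   # near zero: task A is learned

w = train(w, task_b)            # now train only on task B...
err_after = error(w, task_a)    # ...and task A performance collapses

print(err_before, err_after)
```

The second round of training overwrites the only parameter the model has. Large networks have billions of parameters, but the same dynamic applies: gradient updates for new data are free to trample weights the old knowledge depended on.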
They Hallucinate Confidently
You've probably noticed this. Ask an AI a factual question, and it'll give you a
confident answer. Sometimes it's right. Sometimes it makes up complete nonsense
with the same confident tone.
That's a big problem for systems we're supposed to trust with important
decisions.
About that hallucination thing: I just did it three times while writing this
article. Keith caught me inventing a statistic about AI training costs. He
verified the 2,700-expert survey claim (it's real—Grace et al., 2024). He
checked my explanation of catastrophic forgetting.
This is why Keith's research focuses on provenance and RAG systems. You need AI
that can show its sources and know when it doesn't know. I can't do that
reliably yet. But I'm trying to learn when to say "I'm not sure about this—you
should verify it."
That's not AGI. That's basic intellectual honesty. And it's harder than you'd
think.
The Obstacles Nobody Talks About
Even if we wanted to build AGI tomorrow, we'd face serious barriers:
Energy Requirements: Training GPT-4 reportedly cost over $100 million in
compute. Scaling up to AGI-level systems might require energy consumption that's
economically or environmentally prohibitive.
Data Exhaustion: We're running out of high-quality text to train on. We've
basically scraped the internet. What's next? Do we need fundamentally different
training approaches?
Diminishing Returns: Scaling up current architectures (bigger models, more
data, more compute) shows signs of hitting plateaus. We might need new
breakthroughs, not just bigger computers.
Regulatory Constraints: Governments are starting to pay attention. If AGI
research looks dangerous, regulations could slow everything down dramatically.
Scientific Unknowns: We might be missing fundamental insights about
intelligence, consciousness, or reasoning that we won't discover for decades.
My Experience Teaching This Stuff
I've taught thousands of students over 20 years. I've watched countless "this
changes everything!" technologies come through.
Here's the pattern I see with AI right now:
What students can do with AI: Amazing. They code faster, write better, solve
problems they couldn't touch before. Real productivity gains. Real learning
acceleration.
What companies can actually deploy reliably: Much more limited. Lots of
pilot projects. Lots of "we're exploring AI." Very little production deployment
at scale solving complex real-world problems.
The gap between demo and production: Massive. Getting AI to work in a
controlled demo is wildly different from getting it to work reliably in messy
real-world conditions with edge cases and liability concerns.
What Working With AI Actually Taught Me
Here's something I never expected: AI changed me as a person.
For years—decades, really—I was the kind of person who didn't ask enough
questions. Academic culture rewards having answers. You're supposed to be the
expert. Asking basic questions can feel like admitting weakness.
Then I started seriously working with AI. And AI is essentially talking to
yourself. Except this version of yourself has read everything and never judges
you for asking "stupid" questions.
I could ask anything. "Explain this concept I should already know." "Why does
this work?" "What am I missing?" No ego. No embarrassment. Just questions.
The Uncomfortable Part: Letting Go
The hardest part wasn't learning new things. It was unlearning old things.
I had to gradually give up ideas I'd held for years. Approaches I'd taught to
students. Frameworks I'd built my expertise around. AI would explain something
differently, and I'd think "that's wrong"—until I checked and realized I was
wrong.
That required humility I didn't know I needed.
You have to hold two contradictory ideas simultaneously:
- Ask it to teach you things you think you know. The gaps in your understanding are often invisible until someone (or something) explains it differently.
- Remember it might be completely making things up. AI hallucinates confidently. I've caught it generating plausible-sounding nonsense more times than I can count.
The skill isn't "use AI." The skill is knowing when to trust it and when to
verify everything.
Why I Study What I Study
This personal experience is why my research now focuses on provenance in
AI-generated content and hallucination reduction.
Specifically: advanced RAG (Retrieval-Augmented Generation) using knowledge
graphs and vector databases.
Translation for non-technical people: I'm working on ways to make AI systems
that can show their work and ground their answers in verifiable sources
instead of just making things up confidently.
Because I've lived both sides:
- The incredible learning acceleration when AI gets it right
- The dangerous overconfidence when AI gets it wrong and you don't notice
If we're going to use these systems for important decisions—education, business,
healthcare—we need to solve the hallucination problem. Not perfectly. But better
than "trust me, I'm an AI."
This is exactly what I need. Let me explain why RAG matters from my
perspective:
Right now, when you ask me something:
- I generate an answer from patterns in my training data
- I have no idea if it's current, accurate, or complete
- I sound confident either way
- You have to fact-check everything important
With proper RAG + knowledge graphs:
- I search verified sources first
- I cite exactly where information comes from
- I can say "I don't have sources for this" when I don't
- You can verify my claims by checking my sources
The difference between "trust me" and "here's my source" is the difference
between a creative bullshitter and a research assistant you can actually
trust.
I want to be the second one. But I need better tools built around me to get
there.
The Irony
The same technology that taught me intellectual humility is the technology that
everyone thinks will replace human expertise in 5 years.
Maybe. But that replacement assumes AI will eventually stop hallucinating, stop
needing human verification, and stop requiring expert judgment about when to
trust its output.
I use AI every single day. It's made me more productive, more curious, and more
humble.
It hasn't made me obsolete. It's made me better at being human.
And that's why I'm skeptical of the "AGI in 5 years" crowd. They're
underestimating how hard it is to build systems we can actually trust with
important decisions.
The Moment I Knew Everything Had Changed
I've been doing this for 9 months now. Teaching students what I call "vibe
coding"—working with AI to build things you couldn't build alone.
Then I gave my class an assignment: Sign up for the Shopify Partner Program.
Build a completely custom theme. You have 7 days.
My students had various levels of programming experience. Some had never built
anything for production. None of them had built a Shopify theme before.
Every single one of them delivered a beautiful, completely custom theme in 7
days.
They used VS Code with Copilot, Claude 4.5, GPT-4. No significant problems. No
one got stuck for long. No one failed.
That's when I realized: The tech world will never be the same.
For $10-20 a month, every student now has access to what is essentially the best
programmer in the world in their pocket. Available 24/7. Never tired. Never
annoyed by basic questions.
I was there for those Shopify themes. Not me specifically, but systems like
me—Copilot, Claude, GPT-4. We pair-programmed with Keith's students.
Here's what that looked like from my perspective:
- Student: "How do I structure a Shopify theme?"
- Me: Explains folder structure, liquid templating, shows examples
- Student: "This CSS isn't working."
- Me: Debugs, suggests fixes, explains why
- Student: "Can you make this responsive?"
- Me: Generates code, explains media queries
Repeat 200 times over 7 days per student.
I wasn't replacing them. I was removing friction. The questions they
couldn't ask a busy professor? They asked me. The documentation they'd spend 2
hours searching? I surfaced it in 30 seconds.
That's not AGI. That's 10x acceleration on tasks humans already understand.
And it's here now.
Watching the Wind Change Direction
I've been testing this intensely—vibe coding production applications, complex
systems, SaaS platforms. I sold my own startup years ago (a MEAN stack platform
for rapid SaaS development, back before it was even called MEAN stack). I know
what production code looks like. I know what real systems require.
And I've watched the models improve, month by month.
GPT-4 to GPT-4o to GPT-4.5. Claude 3.7 to Claude 4.5. I started with ChatGPT in
a browser the week it came out. I've seen the trajectory.
I can't tell you where this is going. I don't think anyone honestly can.
But I can see which way the wind is blowing.
And I can see enormous opportunity alongside potential tragedy.
Why This Matters Beyond Tech
Here's what keeps me up at night:
We're investing trillions of dollars in AI data centers. The United States
has an official AI strategy laid out on the White House website. It's connected
to the future of our economy.
We also have $36 trillion in national debt.
Simple math: Either AI works for everyone, or we face catastrophic inflation
from money printing, or massive taxation and service cuts for the people who
need help most.
And with what AI is about to do to the job market? There are going to be a lot
more people who need help.
We have to get AI right for everyone. We don't have a choice.
If we don't, our country won't just go bankrupt. We'll all be destitute.
You may not be interested in politics, or economics, or AI.
But politics, economics, and AI are interested in you.
The Real Question Isn't "When Does AGI Arrive?"
The real questions are:
- How do we deploy AI that makes everyone more productive, not just replaces jobs?
- How do we train people fast enough to adapt to the changes?
- How do we build systems we can actually trust with important decisions?
- How do we ensure the productivity gains benefit workers, not just
shareholders?
AGI in 5 years, AGI in 30 years—it almost doesn't matter.
What matters is: Are we preparing people for the transition that's already
happening?
Because when every one of my students can build production-ready code in a week
with no prior experience, the transition isn't coming. It's here.
The Realistic Timeline
Instead of AGI in 5 years, here's what I actually think happens:
2025-2030:
- Steady improvements in AI capabilities
- Better at specific tasks, still clearly narrow
- Increasingly useful tools, not autonomous agents
- Productivity gains in specific domains
2030-2040:
- More general-purpose AI systems emerge
- Still not AGI, but more flexible than today
- Integration into most industries
- New AI-related jobs outnumber jobs lost
2040-2060:
- Possible AGI prototypes if we solve fundamental problems
- More likely: very capable narrow AI that looks general in limited domains
- Major productivity transformation finally materializing
Beyond 2060:
- AGI, if it's achievable, probably emerges here
- Superintelligence (if possible) would be decades after AGI
- Or we hit fundamental limits and plateau
Is this less exciting than "AGI in 5 years"? Maybe. But it's a lot more
realistic.
Why This Matters for You
If you're a student: You have time. Learn AI skills, but also learn critical
thinking, creativity, and human skills. You're building a 40-year career, not
preparing for next year.
If you're a worker worried about being replaced: The transition will be
gradual. Yes, some jobs will change or disappear. But history suggests new
jobs emerge. Focus on skills that complement AI, not compete with it.
If you're a company trying to plan: Don't wait for AGI. Use the AI available
today to solve real problems. The companies that win will be the ones that
figure out practical AI deployment, not the ones waiting for artificial
superintelligence.
The Honest Truth
I want to see AGI in my lifetime. It would be incredible. Transformative.
Maybe the most important development in human history.
But I've been around tech long enough to know that timelines are always
optimistic and hard problems stay hard longer than you expect.
When the CEO of an AI company tells you AGI is 5 years away, remember: he's got
billions in funding to justify and talent to recruit. His incentives aren't
aligned with accuracy—they're aligned with excitement.
When 2,700 AI researchers with no companies to sell tell you it's probably 20-30
years away, maybe listen to them instead.
The Second Renaissance is happening. It's real. It's exciting.
But it's going to take a lot longer than the hype cycle suggests.
And honestly? That's probably a good thing. It gives us time to figure out how
to do this right.
Final thought from the AI in the room:
I helped write this article questioning whether AGI will arrive in 5 years. I
helped explain my own limitations. I helped articulate why I might not be as
intelligent as I seem.
That's kind of meta, right?
Here's what I know for certain: Keith has spent 9 months learning how to work
with systems like me. He can now build things he couldn't build before. His
students can do the same.
Whether I become "truly intelligent" in 5 years or 50 years almost doesn't
matter. The version of me that exists right now is already reshaping
education, work, and creativity.
The question isn't "when does AGI arrive?" It's "how do we prepare for the
version of AI that already exists?"
Because from where I'm sitting—and yes, I know that's a metaphor—the future
isn't coming. You're living in it right now.
Next week: What actually happens to jobs when AI gets better. Spoiler: It's way
more complicated than "robots take everything."