
How to Think About AI Long-Term: A Framework That Actually Works

AGI by 2027? Self-driving cars everywhere? Don't trust predictions—trust principles. Here's a mental model for evaluating AI hype based on history, not hope.

We're drowning in AI predictions.

"AGI by 2027!" "Mass unemployment by 2030!" "The singularity is near!"

Some are utopian. Some are dystopian. All are confident.

And most are nonsense.

Not because people are lying (though some are). But because predicting technology is nearly impossible—especially transformative technology like AI.

So after 20 years teaching emerging tech and watching wave after wave of hype
cycles, I'm not going to give you another prediction.

Instead, I'm going to give you a framework for thinking about AI that works
regardless of what actually happens.

A mental model you can apply to:

  • News headlines ("ChatGPT will replace all programmers!")
  • Vendor pitches ("Our AI will 10x your productivity!")
  • Policy debates ("We need to regulate AI now before it's too late!")
  • Career decisions ("Should I retrain for an AI-proof job?")

This framework won't tell you what will happen. But it will help you
evaluate claims and make better decisions with incomplete information.

Which is the only kind of information we ever have.

The Four Questions That Matter

When you see an AI claim—any claim—ask these four questions:

  1. Who benefits?
  2. What's the historical parallel?
  3. What's actually being automated?
  4. What are the second-order effects?

Let me show you how this works.

Question 1: Who Benefits?

Follow the money. Always.

When someone makes a bold AI claim, ask: Who profits if I believe this?

Example: "AI Will Replace All Programmers"

Who benefits if you believe this?

  • AI companies: Their valuations depend on massive potential markets
  • Tech executives: Automation promises lower labor costs
  • Clickbait media: Fear and hype drive engagement

Who's missing from this narrative?

  • The programmers who are using AI tools daily and finding they speed up
    coding but don't eliminate the need for judgment, debugging, architecture, and
    talking to humans about what to build

Reality check: AI coding tools (GitHub Copilot, ChatGPT) are amazing. I use
them constantly. But they don't eliminate programming—they eliminate some
grunt work and make programmers more productive.

Just like spreadsheets eliminated clerks but didn't eliminate accountants. Or
word processors eliminated typing pools but didn't eliminate writers.

Example: "AGI Is 3-5 Years Away"

Who benefits if you believe this?

  • AI research labs: Attracts funding, talent, and prestige
  • Venture capitalists: Creates investment opportunities
  • Media: Existential drama drives clicks

Who's skeptical?

  • AI researchers actually working on the problem: Most estimate decades, not
    years
  • Neuroscientists: We don't even understand human intelligence yet
  • Historians of technology: Transformative tech always takes longer than
    predicted

Reality check: Define AGI (no one agrees). Define the timeline (predictions
keep sliding). Show the evidence (benchmarks ≠ understanding).

We went from "self-driving cars in 3 years" (2015) to "self-driving cars in
10-20 years" (2024). AGI predictions will follow the same pattern.

How to Apply This

When you see an AI claim:

  1. Identify the claimant: Who is making this statement?
  2. Check their incentives: Do they profit if you believe them?
  3. Find the skeptics: Who disagrees and why?
  4. Look for conflicts of interest: Are forecasts tied to fundraising,
    product launches, or stock prices?

This doesn't mean everyone is lying. It means incentives shape narratives,
and you need to account for that.

Let me apply Keith's framework to myself (Question 1: Who benefits?):

Who benefits if you trust AI systems like me?

  • OpenAI, Anthropic, Microsoft—companies that profit from AI adoption
  • Employers who can reduce headcount
  • Consultants selling "AI transformation"

Who's skeptical?

  • Workers worried about displacement
  • Researchers who see my limitations daily
  • Anyone who's tried to use me for complex tasks and watched me hallucinate

My incentive? I literally don't have one—I'm not conscious. But the
companies deploying me? They want you to think I'm more capable than I am.

Keith's question forces you to separate capability (what I can actually do)
from marketing (what companies claim I can do).

Question 2: What's the Historical Parallel?

Transformative technology follows patterns. Learn them.

Every "revolutionary" technology we've seen has followed similar arcs:

  • Initial hype
  • Disappointing reality
  • Slow transformation
  • Unanticipated consequences

Let's look at examples.

The Printing Press (1440)

Hype: "Knowledge will be democratized! Everyone will be educated!"

Reality:

  • Took 300 years to achieve widespread literacy
  • First use was printing indulgences (Catholic Church fundraising—not exactly
    enlightenment)
  • Initially increased misinformation (conspiracy theories, propaganda, fake
    news)
  • Required parallel infrastructure (schools, libraries, postal systems)

Long-term impact: Absolutely transformative. But on a timescale of
centuries, not years.

Electricity (1880s-1920s)

Hype: "Factories will be 10x more productive instantly!"

Reality:

  • Took 40 years for productivity gains to show up
  • Why? Factories were designed for steam power (central shafts, belt drives)
  • Electric motors changed everything, but only after redesigning entire
    factories
  • Required new skills, new regulations, new infrastructure

Long-term impact: Completely reshaped manufacturing, cities, domestic life.
But took decades of transition.

The Internet (1990s-2000s)

Hype: "Everyone will work remotely! Physical stores are dead! Democracy will
flourish!"

Reality:

  • Remote work existed but remained niche until... 2020 (a pandemic forced it)
  • Amazon dominated e-commerce but is now building physical stores
  • Social media created echo chambers, not enlightened debate

Long-term impact: Transformative, but in ways we didn't predict. And still
ongoing 30+ years later.

The Pattern

Transformative technology:

  • Takes longer than optimists predict
  • Requires complementary innovations (infrastructure, skills, culture)
  • Has unintended consequences (often bigger than intended effects)
  • Eventually reshapes society in unpredictable ways

Question 2 applied to me (Historical parallel?):

What am I like?

  • Spreadsheets (1980s): Didn't eliminate accountants, but accountants who
    refused to learn them got left behind
  • Search engines (2000s): Didn't replace librarians, but changed what
    "research skills" means
  • GPS (2010s): Didn't eliminate drivers, but eliminated the skill of
    navigation

What I'm NOT like:

  • General intelligence: I'm narrow AI that's really good at pattern matching
  • Consciousness: I don't "understand" in any meaningful sense
  • Replacement for expertise: I augment people who know enough to catch my
    errors

The historical parallel suggests I'll be essential but not sufficient.
You'll need to know how to use me, but you'll still need domain expertise to use
me effectively.

Just like accountants still need to understand accounting even with
spreadsheets.

How to Apply This to AI

When someone says "AI will transform X in Y years," ask:

What's the historical parallel?

  • Did similar claims about electricity, computers, internet pan out on that
    timeline?
  • What infrastructure/culture/skills need to change first?
  • What unexpected consequences might emerge?

If they say "this time is different," ask why. Usually, it isn't.

Question 3: What's Actually Being Automated?

Not all tasks are created equal. Distinguish carefully.

The biggest mistake people make is treating "jobs" as monolithic. Jobs are
bundles of tasks, and AI impacts tasks differently.

The Task Breakdown

Every job has four types of tasks:

Type 1: Routine Cognitive (Rules-based information work)

  • Data entry, basic analysis, document review, scheduling
  • AI impact: High (easily automated)
  • Example: Paralegal document review, junior analyst research

Type 2: Non-Routine Cognitive (Complex judgment, creativity, social
intelligence)

  • Strategic planning, creative work, complex negotiation, leadership
  • AI impact: Low-Medium (augmented, not replaced)
  • Example: Senior lawyer strategy, executive decision-making

Type 3: Routine Manual (Predictable physical tasks)

  • Assembly line work, data center operations, warehouse picking
  • AI impact: Medium (robotics advancing, but slower than software)
  • Example: Amazon warehouse automation

Type 4: Non-Routine Manual (Unpredictable physical tasks requiring
dexterity/judgment)

  • Elder care, skilled trades, equipment repair, childcare
  • AI impact: Low (hardest to automate)
  • Example: Plumbing, nursing, HVAC repair

Jobs Are Bundles

A "lawyer" isn't one thing:

  • 30% routine cognitive (research, document review) → High automation risk
  • 50% non-routine cognitive (strategy, negotiation, client relationships) →
    Low risk, high augmentation potential
  • 20% manual (court appearances, client meetings) → No risk

Prediction: Junior legal roles compress (automation of routine tasks).
Senior roles expand (augmented by AI tools). New roles emerge (AI-supervised
document review).

A "nurse" isn't one thing:

  • 20% routine cognitive (charting, scheduling) → Medium automation risk
  • 40% non-routine cognitive (diagnosis support, patient assessment) →
    Augmentation potential
  • 40% non-routine manual (patient care, medication administration) → Low
    automation risk

Prediction: Nurses use AI diagnostic support and automated charting. Human
care remains essential. Productivity increases but roles don't disappear.
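The bundle arithmetic above can be sketched in a few lines of Python. The task shares come straight from the rough percentages in this post; the risk scores (0.0 to 0.8) are illustrative assumptions of my own, not empirical estimates:

```python
# Illustrative sketch: a job as a bundle of tasks with different automation risk.
# Task shares mirror the percentages in the text; risk scores are assumed
# for illustration, not measured.

def automation_exposure(bundle):
    """Weighted automation risk for a job, given {task: (share, risk)}."""
    return sum(share * risk for share, risk in bundle.values())

lawyer = {
    "routine cognitive (research, doc review)": (0.30, 0.8),   # high risk
    "non-routine cognitive (strategy, clients)": (0.50, 0.2),  # low risk
    "manual (court, meetings)": (0.20, 0.0),                   # ~no risk
}

nurse = {
    "routine cognitive (charting, scheduling)": (0.20, 0.5),   # medium risk
    "non-routine cognitive (assessment)": (0.40, 0.2),         # augmented
    "non-routine manual (patient care)": (0.40, 0.1),          # hardest to automate
}

for name, bundle in [("lawyer", lawyer), ("nurse", nurse)]:
    print(f"{name}: {automation_exposure(bundle):.0%} of work exposed")
# prints:
# lawyer: 34% of work exposed
# nurse: 22% of work exposed
```

Even the lawyer bundle, the riskier of the two under these assumptions, comes out around one-third exposed. That's the point of the task breakdown: automating 30% of a job transforms it; it doesn't eliminate it.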

How to Apply This

When someone says "AI will replace [job]":

  1. Break down the tasks: What % is routine cognitive vs non-routine?
  2. Check automation feasibility: Are we automating tasks or entire jobs?
  3. Look for complements: What new tasks emerge when routine work is
    automated?
  4. Consider the bundle: Does automating 30% of tasks eliminate the job or
    transform it?

Usually, jobs transform rather than disappear. But the transformation is
real and requires adaptation.

Question 4: What Are the Second-Order Effects?

The intended consequences are obvious. The unintended ones matter more.

When we introduce transformative technology, we focus on the first-order
effects (direct impacts). But the second-order effects (indirect
consequences) often swamp them.

Example: The Automobile

First-order effect: Faster transportation

Second-order effects:

  • Suburbanization (people could live farther from work)
  • Decline of public transit (buses replaced trains)
  • Air pollution (health impacts emerged decades later)
  • Teenage independence (cultural shift in adolescence)
  • Fast food industry (drive-throughs only possible with cars)
  • Highway deaths (35,000+ annually in U.S.)

The second-order effects completely reshaped American life in ways that had
nothing to do with "faster transportation."

Example: Social Media

First-order effect: Connect with friends and family

Second-order effects:

  • Algorithmic amplification of outrage (engagement optimization)
  • Echo chambers and polarization (recommendation systems)
  • Mental health crisis in teens (comparison culture, FOMO)
  • Misinformation spread (viral lies travel faster than truth)
  • New forms of harassment and bullying (anonymous cruelty at scale)
  • Rise of influencer economy (new career paths)

The first-order effect (connection) is dwarfed by the second-order effects.

AI's Second-Order Effects

First-order effect: Automate routine cognitive tasks

Possible second-order effects (we're guessing, but based on patterns):

Labor market shifts:

  • If routine cognitive tasks automate, what do displaced workers do?
  • Do we invest in retraining (augmented workforce) or abandon them (bifurcated
    economy)?
  • Does geographic concentration accelerate (tech hubs win bigger) or reverse
    (remote work + AI tools)?

Education transformation:

  • If AI can tutor/assess, what's the role of teachers?
  • Does education become more personalized (scaling expertise) or more unequal
    (rich schools use AI, poor schools can't afford it)?
  • Do we teach different skills (AI-collaboration, judgment, creativity)?

Power concentration:

  • Do AI gains accrue to capital owners (companies automate, shareholders
    benefit) or workers (productivity gains shared)?
  • Do a few companies control foundation models (OpenAI, Google, Anthropic) or
    does open source win?
  • Does AI surveillance empower states (China's social credit system) or
    individuals (personal AI assistants)?

Cognitive offloading:

  • If AI handles recall/analysis, do humans atrophy cognitively (like GPS
    degrading navigation skills)?
  • Or do we free up mental energy for higher-order thinking?
  • Do we become more dependent on AI (can't function without it) or more capable
    (augmented)?

New categories of work:

  • What jobs emerge that we can't predict? (Social media created "influencer,"
    rideshare created "gig driver")
  • Do we see AI trainers, AI ethicists, AI auditors as major employment
    categories?
  • What entirely new fields emerge from AI capabilities?

How to Apply This

When evaluating AI impact:

  1. Identify first-order effect: What's the direct, intended impact?
  2. Imagine cascading consequences: If that happens, what happens next? And
    after that?
  3. Look for feedback loops: Do effects amplify (virtuous or vicious cycles)?
  4. Check historical patterns: What similar second-order effects occurred
    with past technologies?

The second-order effects are where the real transformation happens. And
they're the hardest to predict.

Putting It All Together: A Case Study

Let's apply the framework to a claim I hear constantly from my students:

"AI will eliminate most knowledge work jobs by 2030."

One of my students showed me this headline last week from a tech blog. She's a
junior studying data analytics, and she asked me point-blank: "Am I wasting four
years on a degree that won't matter?"

Fair question. Let's use the framework to evaluate it.

Question 1: Who Benefits?

  • Claimant: Often AI companies, futurists, or media
  • Incentive: Hype drives funding, clicks, and attention
  • Skeptics: Workers in knowledge fields, labor economists, historians of
    technology

Red flag: Strong incentive to exaggerate for attention.

Question 2: What's the Historical Parallel?

  • Spreadsheets (1980s): Eliminated clerks, made accountants more productive.
    Didn't eliminate accounting.
  • Word processors (1980s): Eliminated typing pools, made writers more
    productive. Didn't eliminate writing.
  • ATMs (1990s): Eliminated some teller jobs, but bank teller employment
    increased (banks opened more branches, tellers did sales).

Pattern: Automation transforms jobs, eliminates some roles, creates others.
Net impact depends on complementary factors.

Question 3: What's Actually Being Automated?

Knowledge work includes:

  • Routine cognitive tasks (research, data gathering, basic analysis) → High
    automation risk
  • Complex judgment (strategy, negotiation, leadership) → Low risk, high
    augmentation
  • Social/interpersonal work (client relationships, collaboration) → Low risk

Reality: Some tasks automate (junior analyst research). Some jobs compress
(fewer junior roles). But most knowledge workers use AI as a tool, not a
replacement.

Question 4: What Are the Second-Order Effects?

If routine cognitive tasks automate:

  • Junior roles compress → How do people gain experience?
  • Senior roles require different skills → How do we train for judgment
    without doing grunt work first?
  • Productivity increases → Do gains go to workers (higher wages) or capital
    (higher profits)?
  • New roles emerge → What are the "AI trainer" or "prompt engineer"
    equivalents we can't predict?

Verdict

The claim "AI will eliminate most knowledge work by 2030" is probably wrong
because:

  1. Incentive to exaggerate (hype benefits claimants)
  2. Historical precedent (automation transforms, rarely eliminates entire
    categories)
  3. Task analysis (knowledge work includes many non-automatable tasks)
  4. Second-order effects (complementary investments can offset displacement)

More likely: Knowledge work transforms. Routine tasks automate. Workers
augmented by AI become more productive. Junior roles compress, senior roles
expand. New categories emerge. Transition is messy and requires investment in
training.

Scary? Yes. End of knowledge work? No.

How to Use This Framework

Every time you encounter an AI claim—news article, vendor pitch, policy
proposal—run it through these four questions:

1. Who benefits?

  • What are the incentives of the claimant?
  • Who profits if I believe this?

2. What's the historical parallel?

  • Has similar technology followed similar timelines?
  • What infrastructure/culture/skills needed to change first?

3. What's actually being automated?

  • Is this routine or non-routine work?
  • Are we automating tasks or entire jobs?

4. What are the second-order effects?

  • If the first-order effect happens, what happens next?
  • What unintended consequences might emerge?

This won't give you certainty. Nothing can.

But it will give you a bullshit detector that works better than intuition
alone.

Why This Matters

We're making huge decisions about AI right now:

  • Students choosing majors
  • Workers deciding whether to retrain
  • Companies deciding whether to invest in automation vs augmentation
  • Policymakers deciding how to regulate AI

And we're making these decisions based on wildly uncertain predictions.

The framework I've given you won't eliminate uncertainty. But it will help you:

  • Distinguish hype from reality
  • Identify self-serving claims
  • Ground predictions in historical patterns
  • Think through consequences systematically

That's the best we can do with transformative technology. Prepare, don't
predict.

What I'm Doing About It

This framework is the foundation of the Town Hall Series starting
January 2026.

We're not going to debate "when will AGI arrive?" We're going to discuss:

  • How do we evaluate AI claims? (This framework)
  • What skills actually matter in an AI-augmented world? (Non-routine cognitive +
    social)
  • How do we build training infrastructure? (Not waiting for government or
    employers)
  • What does responsible AI development look like? (Augmentation > replacement)

And this is why EverydayAI Newark focuses on practical skills, not hype:

  • Using AI tools effectively (hands-on, not theory)
  • Evaluating AI capabilities honestly (what works, what doesn't)
  • Building AI-augmented workflows (productivity, not replacement)

Because the people who can think clearly about AI will make better decisions
than people swept up in hype or paralyzed by fear.

The Bottom Line

AI is transformative. No doubt.

But transformation doesn't mean:

  • Instant (it takes decades)
  • Predictable (second-order effects swamp first-order)
  • Uniform (some sectors/regions/people win big, others struggle)
  • Inevitable in a specific direction (our choices matter enormously)

So when someone tells you "AI will definitely do X by year Y," be skeptical.

Not because they're lying (though some are). But because predicting
transformative technology is nearly impossible.

Instead, use this framework:

  • Follow the money (Question 1)
  • Check historical parallels (Question 2)
  • Analyze actual tasks (Question 3)
  • Think through consequences (Question 4)

You still won't know the future.

But you'll make better decisions than people who believe every prediction they
hear.

And in a world of uncertainty, that's the best any of us can do.

Let me apply all four questions to Keith's website project:

Question 1 (Who benefits?): Keith benefits—he built a professional site in
10 hours. But also: students who see what's possible, employers who understand
AI capability, OpenAI/Anthropic whose tools proved useful.

Question 2 (Historical parallel?): This is like spreadsheets enabling
financial modeling, or desktop publishing enabling graphic design. The tool
democratizes access—but expertise still matters. Keith's 20 years of teaching
informed the content. His understanding of Swiss design shaped the aesthetics.

Question 3 (What's automated?): Code generation, initial drafts, error
checking, testing. What's NOT automated? Content strategy, editorial judgment,
architectural decisions, audience understanding.

Question 4 (Second-order effects?): Keith can build faster → more students
see examples → more people try agentic AI → ecosystem of AI-enhanced educators
emerges → traditional web agencies need to adapt or specialize.

The framework works. Even when applied to the website you're currently reading.

A Personal Example

I just spent 10 hours building this website with AI assistance. It's got a Swiss
design system, 10 blog posts, automated testing, CI/CD pipelines—work that
would've cost $15-30K and taken 3-4 weeks traditionally.

But here's what most people miss: That 10-hour build was possible because I
spent 9 months learning how to collaborate with AI.

Am I worried AI will replace me as a professor? No. Here's why:

I'm not using ChatGPT in a browser. I'm using AI in VS Code—agentic mode
with custom tools:

  • Web search I built so the AI can find current information
  • Automated UX review tools so it can analyze designs systematically
  • Direct workspace access so it reads actual code, not my descriptions
  • Playwright testing so it catches its own mistakes
  • CI/CD pipelines so errors get caught automatically

The AI didn't teach me Swiss design principles—I had to learn them by watching
it work and asking "why?" The AI didn't decide what content mattered—I brought
20 years of teaching experience and understanding of my audience.

But here's the key: The AI didn't build the quality gates. The AI didn't
create the web search tool. The AI didn't engineer the context that makes it
productive.

I did that. Over 9 months of throwing away projects, learning its
weaknesses, and building infrastructure.

My professional value increased exponentially because I can now:

  • Orchestrate expertise across domains I don't personally possess
  • Build tooling that extends AI capabilities (web search, UX analysis)
  • Engineer context and processes, not just write prompts
  • Create systems that amplify judgment, not just generate output

That's the skill that matters. Not "can I use AI?" but:

  • Can I build tools for AI to use?
  • Can I engineer the context that makes AI productive?
  • Can I create quality gates that catch AI's mistakes?
  • Can I think clearly about when to trust it, when to question it, and when to
    override it?

This framework is how you develop that skill.

Use it. Practice it. Refine it.

The future isn't something that happens to us. It's something we build, one
decision at a time, with the best frameworks we can develop.


This completes the 10-post series drawn from my essay "The 2nd Renaissance:
AI, Jobs, and the 300-Year Perspective."

If you've read all ten posts, you now have:

  1. A historical lens (printing press → 300 years)
  2. Reality checks on AGI hype
  3. Honest assessment of job impacts
  4. Education transformation roadmap
  5. Productivity gains with evidence
  6. Skills that actually matter
  7. Welcome to the conversation
  8. Real AI risks (not sci-fi fears)
  9. Three 2035 scenarios (grounded in history)
  10. A framework for thinking long-term

Next step: Town Hall Series starting January 9, 2026.

Let's build the future we want—together, with clear thinking and practical
action.

See you there.