
What 2035 Actually Looks Like: Three Realistic AI Scenarios

Forget the utopia/dystopia extremes. Here are three grounded predictions for how AI will actually reshape work, education, and opportunity over the next decade—based on history, not hype.

Everyone wants to know what the future looks like with AI.

Will it be utopia—abundance, creativity unleashed, humans freed from
drudgery?

Or will it be dystopia—mass unemployment, surveillance states,
billionaire-controlled algorithms?

After 20 years watching technology waves come and go, here's what I've learned:
The future is never that clean.

It's not utopia OR dystopia. It's messy, uneven, and full of trade-offs.
Some people win big. Some people struggle. Most people adapt.

So let me give you three realistic scenarios for 2035—not the extremes, but the
likely middle ground based on how technology actually transforms society.

Why Most Predictions Are Wrong

Before we get to scenarios, let's talk about why AI predictions are usually
garbage.

The Pattern of Tech Predictions

1990s Internet Predictions:

  • "Everyone will work from home" (Some do, most don't)
  • "Physical stores will disappear" (Amazon is building physical stores now)
  • "Democracy will flourish with global information access" (Misinformation and
    polarization exploded instead)

2000s Social Media Predictions:

  • "It will connect the world and increase understanding" (It created echo
    chambers and tribalism)
  • "Citizen journalism will replace traditional media" (Traditional media still
    dominates, but struggles)
  • "It will be a tool for social justice" (It's both a tool for justice AND
    harassment)

2010s AI Predictions:

  • "Self-driving cars everywhere by 2020" (Still waiting)
  • "Radiologists will be obsolete" (They're using AI tools, not being replaced)
  • "AI will solve climate change" (It's making it worse via energy consumption)

See the pattern? Tech changes things, but rarely in the ways we predict.

Why We Get It Wrong

  1. We underestimate friction (regulation, culture, infrastructure, human
    behavior)
  2. We overestimate speed (adoption takes longer than capability development)
  3. We ignore second-order effects (unintended consequences swamp intended
    benefits)
  4. We think linearly (technology compounds in unexpected ways)

So when I give you scenarios for 2035, understand: I'm probably wrong too.
But at least I'm wrong based on patterns, not wishes.

Scenario 1: The Augmented Workforce (Most Likely)

Probability: 60%

This is my base case—what I think most sectors will look like by 2035.

What This Looks Like

Work doesn't disappear. It transforms.

In 2035, most knowledge workers use AI the way we use spreadsheets
today—essential tools, not replacements.

Let me paint you a picture:

Meet Sarah, a lawyer in 2035:

She wakes up to an AI brief that's already reviewed overnight filings in her
cases, flagged three relevant precedents from last week, and drafted responses
to two client emails.

At her desk by 9am, she spends 30 minutes reviewing the AI's work. It got two
precedents right, one wrong (AI still struggles with analogical reasoning). She
corrects it, adds context the AI missed, and approves the client responses.
What would've taken 3 hours took 30 minutes.

A client wants to sue their former business partner. Sarah feeds the details
into her AI legal analyst. In 10 minutes, it gives her:

  • Probability of winning: 65% based on 12,000 similar cases
  • Estimated cost: $45,000-65,000
  • Recommended strategy: Negotiate settlement (92% of similar cases settle)
  • Red flags: Client's documentation is weak in three specific areas
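
Numbers like these don't require exotic AI. At their core they're aggregate statistics over similar resolved cases. Here's a toy Python sketch of that idea; the `Case` fields, the "similar cases" matching, and the dataset are all simplified illustrations, not how any real legal AI product works:

```python
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class Case:
    claim_type: str    # e.g. "partnership dispute"
    won: bool          # did the plaintiff prevail?
    settled: bool      # resolved before trial?
    legal_cost: float  # total fees, USD

def summarize(cases: list[Case], claim_type: str) -> dict:
    # Toy matching: exact claim-type equality. A real system would use
    # far richer similarity. Needs at least two similar cases for quartiles.
    similar = [c for c in cases if c.claim_type == claim_type]
    n = len(similar)
    q1, _, q3 = quantiles(sorted(c.legal_cost for c in similar), n=4)
    return {
        "n_similar": n,
        "win_rate": sum(c.won for c in similar) / n,
        "settle_rate": sum(c.settled for c in similar) / n,
        "cost_range": (q1, q3),  # interquartile cost estimate
    }
```

The point: the hard part isn't the arithmetic, it's the 12,000-case corpus and the judgment about which cases are actually "similar."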

Sarah uses this analysis to have an honest conversation with her client
about realistic expectations—not the "we'll fight and win!" pitch that would've
been standard in 2024.

The AI didn't replace Sarah. It made her more valuable because she can:

  • Handle 3x more clients (efficiency)
  • Give better strategic advice (data-driven insights)
  • Catch errors before they become problems (AI review)
  • Focus on negotiation and client relationships (human skills AI can't replace)

I'm that legal AI assistant in Sarah's story. Let me be clear about what I
can and can't do:

What I'm good at: Scanning 12,000 cases in seconds. Finding patterns in
settlement data. Spotting missing documentation. Generating first-draft
contracts.

What I'm terrible at: Understanding the context behind the client's anger
at their business partner. Knowing when to push hard in negotiation vs when to
compromise. Reading the room in mediation. Building trust with anxious clients.

Sarah's value isn't doing what I do. Her value is doing what I can't do—and
using me to handle the parts I can do.

That's not replacement. That's augmentation. And it only works because Sarah
knows enough law to catch my mistakes.

This is the pattern across professions:

Lawyers use AI to:

  • Draft contracts in minutes instead of hours
  • Research case law instantly instead of spending days in libraries
  • Predict case outcomes based on patterns ...but humans still negotiate,
    strategize, and build client trust.

Teachers don't get replaced by AI. But consider two scenarios:

Teacher using chatbot AI (2025 approach):

  • Asks ChatGPT to "create a lesson plan"
  • Copy-pastes the output
  • Doesn't personalize to actual student data
  • Can't track what's working across classes

Teacher using agentic AI (2035 approach):

  • AI has access to student performance data across the semester
  • AI analyzes which concepts each student struggles with
  • AI generates personalized practice problems based on actual misconceptions
  • AI identifies patterns: "5 students failed this specific problem type—let's
    address that tomorrow"
  • Teacher reviews suggestions, adds context AI can't know (Maria's been absent,
    David has test anxiety)

This is the difference between "AI tool" and "agentic AI system":

Chatbot approach: You ask me questions, I give answers. No context, no
memory, no integration with real student data.

Agentic approach: I'm connected to the gradebook. I can see that Maria got
question 7 wrong on three consecutive quizzes—not because she doesn't understand
the concept, but because she makes the same arithmetic error every time. I can
flag this pattern. Generate targeted practice. Track if the intervention worked.
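
The "Maria keeps missing question 7" pattern takes very little machinery to detect. A minimal sketch, assuming a simplified gradebook format (the format and the three-quiz threshold are my illustrative choices, not a real product's API):

```python
from collections import defaultdict

def flag_repeated_misses(gradebook, min_streak=3):
    """gradebook maps student -> list of sets, one set per quiz,
    each set holding the question ids that student missed."""
    flags = []
    for student, quizzes in gradebook.items():
        streaks = defaultdict(int)  # question id -> current consecutive-miss streak
        for missed in quizzes:
            for q in missed:
                streaks[q] += 1
            # A correct answer this quiz breaks the streak for that question.
            for q in list(streaks):
                if q not in missed:
                    streaks[q] = 0
        for q, streak in streaks.items():
            if streak >= min_streak:
                flags.append((student, q, streak))
    return flags
```

This is the "pattern detection" half. The other half, knowing *why* the pattern exists, stays with the teacher.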

But here's what I still can't do: Know that Maria's been absent because her
mom is sick.
That context changes everything—maybe she needs extensions, not
more practice problems.

The teacher who knows Maria + uses my pattern detection = powerful combination.

The teacher who just copies my lesson plans without that context = mediocre
outcomes.

The first teacher isn't just "using AI"—they've built a system where AI has
the right data and tools to be genuinely helpful.

The teachers who master this approach deliver better outcomes than the ones
clinging to one-size-fits-all lectures.

Doctors don't get replaced by AI. But the ones who use AI to:

  • Detect diseases earlier from imaging
  • Recommend treatments based on millions of patient outcomes
  • Monitor patient vitals continuously via wearables ...save more lives than the
    ones relying solely on their own experience.

The Winners in This Scenario

People who learn to leverage AI:

  • Junior employees who use AI to punch above their weight
  • Experienced professionals who amplify their expertise with AI tools
  • Entrepreneurs who build businesses faster/cheaper with AI assistance

Companies that invest in AI + Human collaboration:

  • Ones that train workers to use AI tools (instead of replacing them)
  • Ones that use AI to eliminate grunt work (not eliminate workers)
  • Ones that compete on service/creativity (things AI struggles with)

Regions with strong training infrastructure:

  • Places like Newark (if we build programs like EverydayAI)
  • Universities that teach AI skills alongside domain expertise
  • Communities that make retraining accessible and affordable

The Losers in This Scenario

People who refuse to adapt:

  • Workers who insist "I don't need AI" get outpaced by colleagues who use it
  • Professionals who see AI as threat instead of tool fall behind
  • Companies that ignore AI lose to competitors who embrace it

Regions without retraining support:

  • Communities that don't invest in worker retraining see persistent unemployment
  • Areas with aging populations struggle more (harder to retrain at 55 than 25)
  • Rural regions with less access to training infrastructure fall further behind

Mid-level routine work:

  • Paralegal work (research, document review) → heavily automated
  • Junior analyst roles (data gathering, basic analysis) → compressed
  • Customer service (tier 1 support) → mostly automated
  • Junior copywriters (product descriptions, basic content) → reduced

What This Means for You

If this scenario plays out (and I think it will for most people):

Students: Learn AI tools alongside your major. Don't just study law—study
law + how to use AI for legal research. Don't just study nursing—study nursing +
how AI diagnostic tools work.

Professionals: Start using AI tools now. Not in 5 years when you're
"ready." ChatGPT, Claude, Copilot, whatever. Get comfortable being
uncomfortable.

Companies: Train your workers. Seriously. The ROI on AI training is
massive—workers who use AI are 20-40% more productive according to recent
studies. That's better than most capital investments.

Why This Is Most Likely

Because this is how technology always works:

  • Spreadsheets didn't eliminate accountants—they eliminated clerks and made
    accountants more productive
  • ATMs didn't eliminate bank tellers—they eliminated some locations and made
    tellers focus on sales
  • Email didn't eliminate postal workers—it eliminated some volume and made
    remaining work faster

AI will follow the same pattern: Automate routine tasks, augment humans on
complex tasks, create new categories of work we can't even imagine yet.

Scenario 2: The Bifurcated Economy (Realistic Pessimism)

Probability: 30%

This is what happens if we fail to invest in training and transition support.

What This Looks Like

By 2035, the economy splits into two distinct tiers. Let me show you what I
mean:

Meet David and Jennifer—same starting point in 2025, different worlds in
2035:

David, 2025: Mid-level financial analyst at regional bank, $75K salary, stable job

David, 2035:

  • His analysis role automated away in 2028
  • Employer offered "optional" AI training (unpaid, on his own time)
  • He didn't take it (working 50 hours a week, supporting two kids)
  • Position eliminated, severance lasted 6 months
  • Now drives for Uber, delivers for DoorDash, does gig bookkeeping
  • Income: $35K/year, no benefits, no security
  • Lives in same town, can't afford to move to tech hub

Jennifer, 2025: Same role, same bank, same salary as David

Jennifer, 2035:

  • Took the AI training, started using tools immediately
  • Volunteered for AI pilot projects, became "the AI person"
  • Moved to fintech startup in Austin in 2029
  • Now Head of AI-Augmented Analysis, $180K salary
  • Manages team of 4 (used to take 15 analysts to do this work)
  • Remote-first, abundant opportunities, recruiter emails weekly

Same starting point. Same skills in 2025. Radically different outcomes
in 2035.

Why? Not talent. Not work ethic. Access to training + geographic mobility +
timing + luck.

This is the bifurcated economy:

Tier 1: AI-Enabled Elite (Jennifer's world)

  • Tech workers, AI specialists, executives, creative professionals
  • Use AI tools fluently, command high salaries ($150K-500K)
  • Remote-first work, flexible schedules, abundant opportunities
  • Concentrated in tech hubs (SF, NYC, Seattle, Austin, maybe Newark if we're
    lucky)

Tier 2: Service Economy (David's world)

  • Gig workers, retail, hospitality, manual labor, elder care
  • Jobs AI can't (yet) automate because they require physical presence
  • Low wages ($25-45K), no benefits, precarious employment
  • Everywhere else

The middle disappears. Mid-level office jobs, routine professional work,
junior analytical roles—automated away with no replacement.

The Winners in This Scenario

People with AI skills + domain expertise:

  • Data scientists, AI engineers, ML researchers (obviously)
  • Doctors/lawyers/consultants who mastered AI tools early
  • Entrepreneurs who built AI-first businesses

Capital owners:

  • Companies that automated successfully see massive productivity gains
  • Shareholders benefit from reduced labor costs
  • Real estate owners in tech hubs see prices soar

Top-tier educational institutions:

  • Elite universities that pivoted to AI education early
  • Boot camps and training programs that filled the gap
  • Companies that became their own "universities" (Google, Amazon, etc.)

The Losers in This Scenario

Workers displaced without retraining:

  • Mid-career professionals whose skills became obsolete
  • Geographic immobility (can't afford to move to tech hubs)
  • Age discrimination (harder to retrain at 45 than 25)

Communities without economic diversity:

  • Cities/regions dependent on automatable industries
  • Areas without universities or training infrastructure
  • Anywhere that didn't invest in transition support

Social cohesion:

  • Income inequality explodes (more than it already has)
  • Resentment builds between AI-enabled elite and service workers
  • Political instability increases as frustrated voters seek solutions

What This Means for You

If this scenario happens (and it will in some places):

Students: Do not assume your degree protects you. Engineering, business,
even healthcare—all have automatable components. You need both domain expertise
AND AI fluency.

Professionals: If your job is mostly routine information work, you are at
risk. Start diversifying skills now. Focus on tasks AI struggles with: complex
decision-making, human relationships, creative problem-solving.

Communities: Invest in training infrastructure now. Free community
college, vocational programs, online training partnerships. The cost of
retraining is tiny compared to the cost of mass unemployment.

Why This Could Happen

Because it's already started:

  • Income inequality is at historic highs
  • Geographic concentration of opportunity is accelerating (SF, NYC dominating)
  • Political will for massive retraining investment is weak (see: every
    congressional budget fight)

If we don't act, this scenario isn't dystopia—it's default.

Scenario 3: The Adaptation Winners (Realistic Optimism)

Probability: 10%

This is what happens if we get it right—massive investment in training,
smart regulation, and focus on human-AI collaboration.

What This Looks Like

By 2035, AI has created more opportunities than it destroyed—but they look
different than expected.

New categories of work emerge:

  • AI trainers/supervisors (teaching AI systems, monitoring outputs,
    correcting errors)
  • Human-AI collaboration specialists (optimizing workflows between humans
    and AI)
  • AI ethics auditors (ensuring algorithms are fair, transparent,
    accountable)
  • Digital human roles (personalized tutors, coaches, therapists at scale via
    AI support)

Old jobs transform:

  • Teachers become learning architects (AI handles delivery, humans handle
    motivation/connection)
  • Doctors become diagnostic orchestrators (AI suggests, humans decide +
    build relationships)
  • Lawyers become legal strategists (AI does research, humans navigate
    complex negotiations)

Regional economies diversify:

  • Newark becomes an AI training hub (thanks to programs like EverydayAI—I can
    dream, right?)
  • Mid-size cities compete by specializing (Raleigh for healthcare AI, Austin for
    creative AI, etc.)
  • Remote work + AI tools reduce geographic concentration of opportunity

The Winners in This Scenario

Everyone who invests in continuous learning:

  • Workers who see retraining as career-long commitment
  • Companies that invest 5-10% of payroll in training (not 0.5%)
  • Regions that make learning accessible at every career stage

Sectors that embrace human-AI hybrid models:

  • Healthcare (AI diagnostics + human care = better outcomes + efficiency)
  • Education (AI personalization + human mentorship = scalability + quality)
  • Creative industries (AI handles routine + humans focus on novel = more
    output + originality)

Society overall:

  • Productivity gains from AI get broadly shared (via policy choices)
  • More people have access to expert-level tools (democratization)
  • New opportunities offset displaced work (via innovation)

The Losers in This Scenario

Honestly? Far fewer than Scenarios 1 or 2.

Yes, some specific jobs disappear entirely. Yes, some workers struggle to
transition despite support. Yes, some regions adapt slower than others.

But by design, this scenario minimizes losers through:

  • Comprehensive retraining programs
  • Income support during transitions
  • Smart regulation that shares AI gains broadly

What This Means for You

If this scenario happens (big "if"):

Students: You're entering a world of abundant opportunity—if you stay
curious and keep learning. The half-life of skills is shrinking, so embrace
career-long education.

Professionals: Your expertise becomes more valuable, not less—because AI
makes it accessible to more people. A great doctor with AI tools can impact 10x
more patients.

Communities: Investment in training pays off massively. Every dollar spent
on retraining returns multiples via economic activity, reduced unemployment
costs, and increased tax revenue.

Why This Is Unlikely (But Possible)

Because it requires:

  • Political will for massive public investment (hard)
  • Companies voluntarily sharing AI gains with workers (rare)
  • Rapid development of training infrastructure (expensive)
  • Cultural shift toward lifelong learning (difficult)

But it's not impossible. The GI Bill transformed American education after WWII.
The interstate highway system reshaped the economy in the 1950s. Social Security
created retirement security in the 1930s.

Big changes are possible when we choose to make them.

What I Actually Think Will Happen

Honestly? A mix of all three.

Some sectors will look like Scenario 1 (augmented workforce). Some regions will
look like Scenario 2 (bifurcated economy). Some rare places will achieve
Scenario 3 (adaptation winners).

The question isn't which scenario happens. The question is: Which scenario
do you want to live in?

My Prediction by Sector

Tech/Software: Scenario 1 (augmented)

  • Developers use AI copilots, become 2-3x more productive
  • Junior roles compress, senior roles expand
  • Net employment relatively stable, but higher skill requirements

Healthcare: Mix of 1 and 3

  • AI diagnostics augment doctors (Scenario 1)
  • New roles emerge (AI-supervised nursing, remote specialists) (Scenario 3)
  • Geographic access improves dramatically

Education: Mix of 1 and 2

  • Elite institutions use AI to enhance outcomes (Scenario 1)
  • Struggling schools can't afford AI tools, fall further behind (Scenario 2)
  • Huge opportunity for programs that bridge this gap (Scenario 3 potential)

Finance/Legal: Scenario 2 risk is high

  • Routine analysis/research roles automate heavily
  • Elite professionals use AI to dominate, junior roles disappear
  • Without retraining support, mid-career displacement is brutal

Service/Care Work: Scenario 1 (mostly immune to automation)

  • Elder care, childcare, hospitality require human presence
  • AI augments (scheduling, communication) but doesn't replace
  • Wages might actually rise as other sectors contract

My Prediction by Region

Tech hubs (SF, Seattle, Boston, NYC): Scenario 3 likely

  • Resources for training, culture of adaptation, concentration of opportunity
  • But risk of becoming exclusive enclaves (Scenario 2 tendency)

University towns (like Newark): Huge opportunity

  • Training infrastructure already exists (NJIT, Rutgers, community colleges)
  • If we invest, we can achieve Scenario 3
  • If we don't, we'll slide into Scenario 2

Rust Belt/Manufacturing regions: Scenario 2 risk

  • History of disruption without support
  • Aging populations harder to retrain
  • Need massive investment to avoid bifurcation

Rural areas: Mixed

  • Some remote-work opportunities open up (Scenario 1 tendency)
  • But infrastructure/training gaps are severe (Scenario 2 risk)

What You Should Do About It

Stop waiting for certainty. There isn't any.

Keith wrote three scenarios. Here's what I think:

Scenario 1 (augmentation) requires widespread training and deliberate design—AI
systems built to amplify humans, not replace them.

Scenario 2 (bifurcation) happens by default if we don't intervene—market forces
push toward efficiency, which means automation without support.

Scenario 3 (adaptation) requires imagination and experimentation—uses for AI we
haven't conceived yet.

My prediction? We'll get all three simultaneously. Tech workers in SF will
live in Scenario 3. Knowledge workers with training will live in Scenario 1.
Displaced workers without support will live in Scenario 2.

The question isn't "which scenario" but "which scenario for you?"

And that's determined by choices you make starting today—not in 2035 when
the future is already here.

The future isn't pre-determined. It's being built right now by choices we make:

  • Do we invest in training or not?
  • Do we share AI gains broadly or let them concentrate?
  • Do we focus on augmentation or replacement?

For students:

  • Major doesn't matter as much as learning how to learn
  • Get comfortable with AI tools now, not later
  • Focus on skills AI struggles with: complex judgment, human relationships,
    creative problem-solving

For professionals:

  • Start your AI education today—but don't just use ChatGPT in a browser
  • Learn agentic AI tools (VS Code with AI, not just web chat)
  • Build your own tools (web search integration, automated analysis, quality
    gates)
  • Don't wait for your employer to train you (they probably won't)
  • Focus on context engineering, not just prompt writing
  • Build skills in adjacent areas (expand, don't double down)
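
What does a home-grown "quality gate" look like in practice? A minimal sketch: a few automated checks that run on AI-generated drafts before a human ever reviews them. The specific checks here are placeholder assumptions; the point is that AI output gets validated mechanically, not trusted blindly:

```python
def quality_gate(draft: str, required_terms: list[str], max_words: int = 500) -> list[str]:
    """Run simple checks on an AI-generated draft.
    Returns a list of failure messages; an empty list means it passes."""
    failures = []
    words = draft.split()
    if len(words) > max_words:
        failures.append(f"too long: {len(words)} words (limit {max_words})")
    for term in required_terms:
        if term.lower() not in draft.lower():
            failures.append(f"missing required term: {term}")
    # Catch a common tell of unedited chatbot output.
    if "as an ai" in draft.lower():
        failures.append("contains AI boilerplate")
    return failures
```

Even a gate this crude changes the workflow: drafts that fail go back to the AI (or to you) automatically, and only drafts that pass consume human review time.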

For communities:

  • Invest in training infrastructure (community colleges, libraries, online
    partnerships)
  • Create transition support programs (income + training, not just one)
  • Partner with employers to understand skill needs

For companies:

  • Train your workforce (5-10% of payroll, not 0.5%)
  • Use AI to augment, not replace (you'll retain institutional knowledge)
  • Focus on productivity gains, not just cost cutting

Why This Matters

2035 is 10 years away. That's:

  • One college degree
  • Two presidential terms
  • Half a career

That's not "distant future." That's right around the corner.

The decisions we make in the next 2-3 years will determine which scenario we
get:

  • Augmented workforce (AI as tool)
  • Bifurcated economy (AI as divider)
  • Adaptation winners (AI as opportunity)

I'm betting on Scenario 1 with pockets of Scenario 3—because that's what history
suggests.

But Scenario 2 is entirely possible if we don't act.

So here's my ask: Don't wait for someone else to build the future you want.

If you're a student, learn AI tools now. If you're a professional, start
retraining now. If you're a leader, invest in your people now.

Because 2035 will arrive whether we're ready or not.

And I'd rather be ready.


What I'm Doing About It

This is why the Town Hall Series starts January 2026. Not to debate which
scenario is "right"—to discuss how we shape which scenario we get.

And this is why EverydayAI Newark focuses on practical skills, not theory.
Because the people who adapt fastest will be the ones who benefit most,
regardless of which scenario unfolds.

The future isn't something that happens to us. It's something we build together.

Let's build the right one.