Short version

Software shock now. Labor stress next. Physical disruption later. Beyond that is scenario territory.

AGI as overlapping waves, not a single threshold. Wave one is already visible in software. Wave two is a late-2020s labor transition. Wave three is early physical-economy disruption. After that: scenarios.

Waves overlap and amplify. They don't wait for the previous one to finish.

4 waves, 5 signals, 7 theses
March 6, 2026
Curated update

Now through 2027

Software Disruption Now

Software disruption is already underway across writing, coding, support, research, and operations.

The practical question is not whether a clean AGI day arrives, but how much useful work shifts before the label catches up.

This wave starts first, but labor and physical-economy effects begin before it is finished.

2026 Signal

Agent benchmarks are improving faster than reliability in adversarial or long-horizon work.

Software systems are already taking on more writing, coding, and support tasks, but they still fail in operationally important ways.

Why it matters: Expect partial automation and labor reshaping before dependable autonomy.

Caveat: Benchmark gains are not the same thing as trustworthy end-to-end job replacement.

2026 deployment Signal

Cheaper frontier-style models widen the set of routine software tasks firms can automate.

Faster, cheaper models matter because they make automation practical in everyday workflows, not just demos.

Why it matters: The near-term spread of AI will be driven by deployment economics as much as by raw capability.

Caveat: Cheap inference expands usage, but it does not erase supervision, quality control, or integration costs.

2026-2027 framing Thesis

The right frame for the current moment is utility and labor effect, not waiting for a ceremonial AGI day.

Software shock is the part moving fastest right now.

Why it matters: The timeline should foreground work displacement and productivity shifts instead of theatrical AGI countdowns.

Caveat: Real utility can be economically disruptive even when systems are still uneven.


2027 through 2031

Broad Labor Stress

Labor-market stress becomes broad enough that it is hard to dismiss as isolated sector churn.

This is a transition period, not automatic collapse: painful reallocation, tighter management, and messy bargaining over where automation actually sticks.

Software disruption keeps spreading while labor stress rises unevenly across occupations, firms, and regions.

Late 2020s transition Thesis

The late 2020s are the first plausible window for broad labor stress from cumulative software automation.

Broad labor stress can arrive without a net economic collapse.

Why it matters: The transition may feel painful because firms can reallocate labor faster than workers can retrain or move.

Caveat: A turbulent reallocation period is not the same as instant permanent unemployment.

2026 warning Signal

Research attention is shifting toward whether agent performance maps to real work rather than benchmark abstractions alone.

The important question is increasingly whether AI systems can do real work that organizations will trust.

Why it matters: Labor stress becomes more plausible once the conversation moves from demos to workflow fit.

Caveat: Real-work evidence still has to survive integration, compliance, and management friction.

Policy response window Thesis

Management tightening, retraining efforts, and policy fights arrive before any clean long-run equilibrium.

Firms and governments will likely improvise through the labor shock rather than meet it with one coherent plan.

Why it matters: Expect a stretch of uneven rules, retraining pushes, and disputes over where automation is allowed to land.

Caveat: Policy can slow deployment in some sectors while accelerating it elsewhere.


2030 through 2035

Physical-Economy Disruption

The first plausible window for broad physical-economy disruption is early in the 2030s, through robots, autonomous logistics, and tightly managed deployment.

Industrial robot momentum is real, but it does not prove that general-purpose humanoids flood the economy tomorrow.

Physical deployment stacks on top of software and labor shocks rather than waiting for them to conclude.

2026 robotics Signal

Embodied and humanoid benchmarks are improving, but mostly inside constrained tasks and controlled environments.

Robot progress is worth taking seriously, but today it looks more like narrow industrial momentum than universal physical autonomy.

Why it matters: The physical-economy shock should be modeled as staged deployment in high-ROI settings first.

Caveat: A better benchmark or demo is not proof of cheap, safe, mass deployment.

Early 2030s window Thesis

The first plausible broad physical-economy disruption window opens in the early 2030s, after software disruption is already established.

Physical disruption likely comes later than software disruption.

Why it matters: The early 2030s are the first plausible period for broad logistics and industrial effects.

Caveat: Industrial robot progress should not be overstated into a claim that humanoids will flood the whole economy tomorrow.

Infrastructure gating Thesis

Energy, maintenance, supply chains, and safety cases dominate the pace of physical deployment.

The physical-economy wave will be paced by energy, parts, maintenance, and regulation.

Why it matters: Even strong robot capability does not translate into instant economy-wide saturation.

Caveat: The bottleneck is industrial capacity and operational reliability, not only smarter models.


After 2035

Scenario Territory

Beyond 2035 the right frame is scenarios, not forecasts.

Energy, supply chains, regulation, and real-world reliability shape the outer boundary more than abstract capability curves do.

Long-run outcomes remain path-dependent on how the first three waves interact with power, politics, and industrial capacity.

2026 policy Signal

Policy discussion is already expanding toward energy, environmental cost, and governance constraints around advanced AI.

Long-run AI outcomes will be constrained by power, regulation, and environmental cost.

Why it matters: Past a certain point, governance and infrastructure matter as much as capability progress.

Caveat: Long-run scenarios can diverge widely because these constraints are political and industrial, not only technical.

Post-2035 scenarios Thesis

After 2035 the right framing is branching scenarios, not a single forecast line.

The farther out the timeline goes, the more humility matters.

Why it matters: Beyond 2035 the honest move is to compare scenarios, not to promise a single date.

Caveat: False precision is especially misleading once energy, supply chains, and regulation start to dominate the path.

Long-run branch point Thesis

Long-run divergence depends on the interaction of grid power, chip supply, regulation, and real-world reliability.

Very long-run outcomes hinge on industrial and political capacity, not just capability curves.

Why it matters: The same technical frontier can yield very different futures under different energy and governance conditions.

Caveat: Reliability failures or infrastructure scarcity can cap deployment long before abstract capability ceilings are reached.
