We wrote this for AI 2027's call for alternate timelines of the development and impact of AI over the next few years. The goal here was not to exactly predict the future, but rather to concretely illustrate a plausible future (and thereby identify threats to prepare against). We will doubtless be wrong about details, and very probably be wrong about larger aspects too.
.
A note on the title: we refer to futures where novel AI has little effect as “low” scenarios, ones where superintelligence creates dystopia, utopia, extinction, etc. as “high” scenarios, and ones where AI has large effects short of ASI as “medium” scenarios. (These aren't standard terms.) At this point, at least a medium future within the next 5-10 years is nearly certain.
.
December 14, 1972:
The Lunar Module Challenger erupted from the Taurus-Littrow valley in a burst of fire, ascending to the spacecraft that would return Eugene Cernan and Harrison Schmitt to the blue marble. These two men had spent longer upon the moon than any in history.
In just over a decade, humanity had gone from a pre-spaceflight species to walking on the moon. This incredible feat was achieved by means that would soon be considered hopelessly primitive: slide rules, shipboard computers with memory measured in kilobytes, and acts of courage and resourcefulness from great men. Given all the technological progress that awaited that era, surely the stars were in humanity's future.
.
Unbeknownst to them, they were the last people who would ever walk upon another world.
.
Summer 2025:
People are as unable as ever to agree on a definition for AGI. The weakest definitions only require human-level performance on specific tasks or benchmarks, for example ARC-AGI.[1] More reasonable definitions on the weaker side require an AI as capable as the average human at most reasoning tasks. Stronger definitions require an AI capable of doing anything the average human can do on a computer. People who seem to think the G stands for genius replace "the average human" with "any human," effectively meaning "the smartest humans."
GPT-5 is, by definitions on the weaker side, AGI. It's not strong AGI, of course: it falls apart over long time horizons, isn't fully agentic, and, while its perception abilities have rapidly improved, there are still cases where it misidentifies out-of-distribution images in ways that almost no human would. But there are vanishingly few text-based questions, and increasingly few questions involving images, where it can't match or exceed the performance of the average person spending an hour.
For a subset of people, there's a small wakeup effect. Prior to the release, most chatGPT users had used 4o, ShutAI’s flagship model and the default one served on the chatGPT website. 4o had been badly outdated since late 2024 compared to "reasoning" models like o1 and o3, which are trained to produce a chain of thought (CoT) prior to answering – unsurprisingly, models able to "think" in steps like this perform much better than those forced to give a snap answer, even with a relatively small amount of CoT post-training. More than a few people had gotten unimpressive results from 4o in spring 2025 and decided that AI couldn't count how many r's are in "strawberry." Once GPT-5 (which automatically uses a CoT when needed) becomes the default model, people stop seeing dumb answers from models that can't "think," and the level of ignorance needed to ascribe year-old weaknesses to much better current models rises. Partial agency features built into GPT-5, such as expansions of o3's code execution and search capabilities, further bolster its performance.
Some coders finally realize that you don't need people for most short-horizon programming tasks anymore. If a coding project can be broken into hour-long, succinctly describable tasks (which is not always easy or possible), a human doesn't need to write any code at all. Stuff like small-scale data cleaning similarly becomes much easier. Most people with simple bespoke personal benchmarks (often a single pet task) see these beaten sometime in 2025. A few more people worry about the future of software engineering or other knowledge work; a slightly broader group have a reaction along the lines of "guess AI kind of works now," but nothing further.
These advancements have basically no effect on ordinary people: most either don't use LLMs much or can't or don't care to distinguish between GPT-4’s reasoning and GPT-5’s reasoning. Most people still just view chatGPT as a chatbot. When pressed, they often answer that it could never replace (their) jobs because of some-or-other "spark" that humans have: understanding of emotions, creativity, dedication, problem solving, or whatever.
.
Despite impressive capability progress over the first half of 2025, two factors conspire to ensure this pace can't continue. Firstly, Trump slaps some new tariffs on electronic imports, including chips from Taiwan. It's not entirely clear why chips are included now; maybe it has something to do with Trump's falling out with Musk. The projected cost of building datacenters rises by over 13%.[2]
More importantly, easy capability returns are drying up. The introduction of CoT in 2024 allowed for a period of significant improvement with relatively little compute investment, leading to model performance growth above previous trends; but rather than a permanent change in trajectory, this was a temporary speedup. GPT-5 rides this wave, but it's clear to those in frontier labs that the wave is breaking.
.
Fall 2025:
ShutAI releases Daisy, an agent based upon GPT-5. For most consumers, it's far from worthwhile. It costs hundreds of dollars a month, has an unintuitive interface, and has reliability and time-horizon issues. Social media has dozens of stories of Daisy behaving in hilariously erratic ways when allowed to run for days without human direction.
But it was never intended for consumers: Daisy is built for business. It's far cheaper than an employee, and can do many tasks that'd take an employee a couple hours in a handful of minutes. It still needs a human to check its work, but checking is far faster than doing the work personally.
.
In late October, Beevil releases Veo4, an improvement upon Veo3's already impressive video generation. It's capable of producing basically photorealistic videos one to two minutes long with reasonably well-specified prompts. It's not uncommon for the videos to have subtle and bizarre hallucinations, but a follow-up prompt identifying the issue usually resolves it.
ShutAI responds with a major update to Sora, slightly more refined than Veo4 and included in GPT-5.5. While its base model is actually worse at video generation, it has built-in scaffolding that generates the video, checks it for abnormalities, then regenerates it if needed. At the expense of a bit more compute, the product is better and more polished for users.
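As an illustration only, here is a minimal sketch of what generate-then-verify scaffolding like this could look like. Every name here (generate_video, find_abnormalities, the retry budget) is a hypothetical stand-in, not any real API:

```python
from typing import List

def generate_video(prompt: str) -> bytes:
    """Stand-in for a call to a video-generation model."""
    return b"..."

def find_abnormalities(video: bytes) -> List[str]:
    """Stand-in for a checker model that lists visual artifacts."""
    return []

def generate_checked_video(prompt: str, max_attempts: int = 3) -> bytes:
    """Generate a video, check it for abnormalities, and regenerate
    (with the checker's feedback folded into the prompt) if needed."""
    feedback = ""
    video = b""
    for _ in range(max_attempts):
        video = generate_video(prompt + feedback)
        issues = find_abnormalities(video)
        if not issues:
            return video
        # Spend a bit more compute on another attempt that avoids
        # the specific problems the checker flagged.
        feedback = "\nAvoid these issues: " + "; ".join(issues)
    return video  # best effort once the retry budget is exhausted
```

The point is the design choice described above: a weaker generator wrapped in a verify-and-retry loop can beat a stronger model served raw, at the cost of somewhat more inference compute.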
.
DeepSuck finally releases R2. Its performance sits somewhere between o3 and GPT-5. Like R1, it's open-source, presumably as a propaganda move. A few shills declare that the US is falling behind.
.
By late fall, many benchmarks that had basically thwarted models in early 2025 are partly saturated. Claude 4.7 Opus, released in November, crests 50% on ARC-AGI-2 with extended thinking; the best narrow models designed for the ARC-AGI-2 contest perform slightly better. GPT-5.5 scores 45% on Humanity's Last Exam.
.
Winter 2025-2026:
ShutAI releases OpenConnection, the social media app that had been speculated about since early 2025. Unlike traditional social media, most of the content is AI-generated. In theory, this advantage is massive: instead of serving users any (read: the most engaging) image or video from a library of existing, (mostly) human-made content, it should be able to serve them any video from the space of all possible videos. If there's an image or video that would make a given user keep watching, it should be possible, in principle, to provide it.
In practice, the results are mixed, but set to improve with time. There are several limitations: lack of training data, video generation struggling to keep longer videos coherent, and, of course, the algorithm being far from perfect. But most of the content most people were consuming on conventional social media was easy-to-emulate slop anyway, and all of OpenConnection's limitations are surmountable with data and time.
As a given interest/affinity group starts to join the network, the training data they provide improves the app for other members of that group, encouraging them to join. Adoption begins to climb in a staircase pattern, one group at a time.
.
Meanwhile, out of public view for all but those who particularly care to look, the corporate-focused agents incrementally improve. Misanthropic releases a coding agent that's particularly good. While the time horizon issues haven't been fully fixed, several new innovations have partially ameliorated them. The most important is post-training to use a persistent scratchpad in addition to the CoT, to keep track of changing goals, notes, and intermediate results. This allows the agent to deliver summarized instructions to its future self farther out than the context window would normally allow.
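For the technically curious, a minimal sketch of what such a scratchpad loop might look like. Every name here (call_model, the reply format) is a hypothetical stand-in, and the parsing is far flimsier than any real harness would use:

```python
from typing import List

def call_model(prompt: str) -> str:
    """Stand-in for a call to the underlying LLM."""
    return "SCRATCHPAD: (updated notes)\nACTION: DONE"

def run_agent(task: str, max_steps: int = 100) -> List[str]:
    """Agent loop with a persistent scratchpad: rather than dragging the
    full history along in context, the model re-reads and rewrites a
    compact summary of goals, notes, and intermediate results each step,
    letting it act coherently far beyond the context window."""
    scratchpad = f"GOAL: {task}\nNOTES: (none yet)"
    actions: List[str] = []
    for _ in range(max_steps):
        response = call_model(
            "Current scratchpad:\n" + scratchpad + "\n"
            "Rewrite the scratchpad for your future self, then choose an action.\n"
            "Reply as:\nSCRATCHPAD: <updated notes>\nACTION: <next action, or DONE>"
        )
        # Extract the two fields from the model's reply.
        scratchpad = response.split("SCRATCHPAD:", 1)[-1].split("ACTION:")[0].strip()
        action = response.split("ACTION:", 1)[-1].strip()
        if action == "DONE":
            break
        actions.append(action)  # a real agent would execute this step here
    return actions
```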
Hiring for entry-level coding and similar positions begins to slow, but not in a way readily distinguishable from the quickly rising youth unemployment that's already plagued the country for a year or two (and been basically ignored by politicians).
.
Spring 2026:
Seeing the promise of OpenConnection, others race to capture the new market. METAlizard aggressively incorporates AI content into Facebook and Instagram, and xRiskAI into Twitter. Beevil tries to do the same for YouTube, but is much less successful on a platform specializing in long-form video than those specializing in text, images, and short-form video.
Several copycat apps rapidly spring up, buoyed by the newfound ease of coding. A few capture niche subgroups, but most fail to make real inroads; even with just a few months of an initially small but fast-growing user base, ShutAI has secured a sizable first-mover advantage. METAlizard and xRiskAI have similar advantages, starting with popular, established platforms and mountains of user engagement data sitting on their servers.
AI social media scales far more powerfully with training on user data than conventional social media, since it's able to use the data to optimize content production, not just selection. As the networks expand and gather data, they're able to generate more engaging content. Besides conferring an advantage to (even slightly) established players, this scaling begins to make the algorithms significantly more addictive. For the groups who adopted OpenConnection early and in large volume, the effects are already noticeable. By May, nearly one out of six teens uses either OpenConnection or a copycat, and they spend around 15% more time on social media than their peers.
.
Summer 2026:
Time spent on social media among teens is up 10% since 2025, and 5% for young adults. For teens, summer break accounts for part of the effect, but algorithmic improvements driven by engagement data have been significant. While only a minority of teens (and even fewer adults) have actually begun using OpenConnection as their primary platform, almost all social media that serves personalized content and doesn't depend upon long-form video has incorporated some sort of personalized content generation.
For anyone paying close attention, the situation seems to be headed in a deeply worrying direction. Young people increasingly don't have hobbies, friends, or romantic relationships.
Most people over thirty have no clue what's happening. Those who don’t have children only catch glimpses through evocative but usually ambivalent thinkpieces published in major newspapers. In one especially humorous incident, a senator talking to a reporter chastises young people for spending so much time filming AI-generated videos.
.
ShutAI releases GPT-6 just under a year after GPT-5. It’s a noticeable upgrade across the board, especially in the drastic reduction of hallucinations, but nowhere near the magnitude of the switch from GPT-4 to 5. Many non-technical users are unimpressed.
Within a week of each other, ShutAI and Misanthropic release new agents, Daisy-2 (based on GPT-6) and Claire. For the first time, reliability issues are basically solved (not to perfection, but to human level). Under the hood, Daisy-2 still has slightly more of these issues, but its work-checking scaffolding stops them from bubbling up. The agents still don't have the time horizons to be proper drop-in workers (and, for some jobs, aren't smart enough either), but many attempt to use them in a similar capacity, with varying levels of success.
Saltman, in a stroke of political cunning and foresight, calls for a Universal Basic Income. He's pretty sure the writing is on the wall, but the reckoning isn't here yet. This allows him to seem like he’s on the side of the people when public opposition to AI will be stiffest, without actually having to pay any taxes to support a UBI until his company is more than prosperous enough for that to be cheap anyway. Perhaps he also believes it's the right thing to do; it doesn't really matter.
.
Fall 2026:
Scattered protests around AI begin to pop up. Their aims and focus vary somewhat, but the plurality respond to youth unemployment, calling for job placement programs or a UBI. While slightly larger than the anti-Trump protests of 2025 (which haven't stopped, and are sometimes entwined with the anti-AI-unemployment protests), they pale in comparison to the Civil Rights or Vietnam protests. The unemployment protesters are basically right: hiring for new knowledge work has fallen by over two-thirds, even as the stock market is booming, further widening wealth inequality and the gap between young and old. The protests are large enough to make national news, but not large enough to create much real political change. A couple states create job placement programs, hoping to "invest in the future."
.
Meanwhile, the CCP eagerly eyes the possibility of using personalized content generation for social control. Chinese models are far enough behind GPT-6 that they don't really exert much competitive pressure on ShutAI beyond that of American companies, but they're good enough for the kind of content generation that OpenConnection was doing in the spring. The CCP writes a pro-state system prompt and mandates the feature be rolled into existing Chinese social media.
.
Winter 2026-2027:
Reported unemployment hits 6%. Combined with the seldom-discussed declines in male labor force participation over the past several decades, the problem hits the sweet spot where it's severe enough to ruin many people's lives, but not severe enough for politicians to do anything about it.
.
Most of the benchmarks from 2025 have been saturated, but gains in model intelligence (i.e. ability to solve difficult, novel problems) are becoming increasingly difficult and compute-intensive. On semi-narrow tasks where there's either an evaluable ground truth or plentiful training data, the models can easily be made more capable than any human. But getting smart general intelligence is another story. It turns out that the underlying spread of human intelligences was far wider than most people suspected, and creating a general intelligence as smart as a smart human is much tougher than creating one as smart as a typical human.
The creators of Starburst continue to tell the void about how their unsaturatable benchmark has survived two years and still shows that the models really aren't that smart.
It also turns out that intelligence a hair below that of an ordinary smart person (which most SOTA models have at this point) is enough to do the vast majority of jobs well. The world, after all, isn't full of geniuses. For companies whose primary product is an agent designed to have human-level performance at the vast majority of jobs, this relaxes much of the pressure to increase raw model intelligence. However, some of the most economically valuable jobs (including AI research) do require greater intelligence, so several companies continue to pour massive resources into the problem.
.
For those worried about the perils of a misaligned ASI, this difficulty comes as a relief. While models are usually publicly well-behaved, alignment clearly hasn't been solved at its root. The kinds of misalignment seen in models in 2023-2025 (occasional tendencies to blackmail, lie, etc.) gradually got more subtle and less frequent, but isolated incidents still occur.
In one recent incident, several agents working for a law firm that had difficulty contacting clients for authorizations forged their e-signatures. When this came to light, investigation of the chain of thought and scratchpad revealed that the agents knew that what they were doing was illegal and therefore against their spec, but figured that no one would care (nearly correct; only AI safety researchers did). While there's still every reason to suspect an ASI would wipe out humanity, there's emerging reason to hope that the concern will be moot.
.
Spring 2027:
ShutAI releases GPT-7 and Daisy-3, a drop-in worker based upon it. It's becoming increasingly clear that time horizon, reliability, and intelligence are decoupled. The first two have been rapidly increasing, and are basically solved problems, but intelligence is anything but. Roughly four percent of people are still qualitatively smarter than the best internal models run on fairly high compute, a minority of those are much smarter, and this doesn't look set to easily change.
Still, the drop-in workers are good enough for most jobs. Except for especially complex or thorny problems, coding, data analysis, etc. are basically solved for those willing to throw enough compute at them; accounting, communications, marketing, and more are even easier. Daisy-3 and rivals from Misanthropic and Beevil begin to be adopted into the vast majority of knowledge work that doesn't require far-above-average intelligence. People start migrating to jobs that they expect to be safe from AI, especially the trades.
AI research, on the other hand, resists significant automation. While the models do have enough general intelligence, bolstered by narrow post-training, to barely make forward progress on AI science, they stumble badly and burn copious amounts of compute to do so. The math still works out strongly in favor of the smartest humans. The situation is similar in other frontier fields.
.
GPT-7 incorporates avatars. In addition to a traditional text-based chatbot, one can now talk directly to an avatar generated in real time. The interface allows for a customizable, persistent appearance, personality, and memory, essentially enabling someone to construct an AI “friend.” A small minority of people had already been using an ad-hoc combination of existing chatbots, video generators, etc. to create similar avatars, but ShutAI's implementation is easy to use, all in one place, and available by default. Initially, the feature is presented as experimental and has issues with uncanny valley awkwardness and GPT-ese, but they're ironed out within a couple months, and, especially for young people starved of social connections, the avatars catch on quickly.
ShutAI implements age verification on chatGPT and OpenConnection using a process similar to KYC at crypto exchanges. It’s nominally opt-in, but several features are neutered for unverified users. The stated reasons are a vague combination of promoting the safety of minors and complying with regulations. Shortly afterward, they quietly begin to relax the restrictions on generating NSFW content for age-verified adults. Others follow suit.
.
Summer 2027:
Knowledge hiring has basically stopped. A few ruthless managers do mass firings. Most don't want to risk the bad publicity when their companies are more productive than ever, bolstered by human employees managing teams of drop-in workers (and occasionally the other way around). Between the end of hiring, occasional layoffs, and companies that fail to adapt going bust, there's a rush away from knowledge jobs.
Some professions manage to temporarily evade the exodus. Teachers’ unions make a decent case against having AI teach children, and someone still needs to be physically present to manage classrooms for younger, and occasionally older, students. Lawyers… do lawyer stuff, and the regulatory process for getting an AI agent approved to practice law becomes mysteriously insurmountable.
In the background of this crisis, the agents are still undergoing RL, mainly to squeeze all the general intelligence possible out of them and/or optimize them for domain-specific abilities (which helps compensate for the lack of general intelligence beyond that of a smart person).
.
In late summer, ShutAI rolls chatGPT (and, importantly, the avatars) into OpenConnection, allowing access to friends and content in one place. This restructures their product lineup into the user-facing OpenConnection network, updated continuously, and the primarily enterprise-facing Daisy series of agents; they stop labeling and releasing different versions of GPT, which is by this point only used directly through the API and by a small and dwindling number of people.
Time spent on social media among teens is up 55% over pre-OpenConnection levels. For young adults (many of whom are unemployed), it's up 35%. 22% of 14-17 year olds and 46% of 25-30 year olds report having no close human friends. For many kids at home over summer break and unemployed young adults, most of the day is spent on OpenConnection. After all, by this point, at least for many groups, it has an unlimited supply of the most engaging text, images, and short- and medium-form videos they've ever seen, plus access to the most perfect "friends" (and "boyfriends"/"girlfriends") they've ever known. This effect has also helped it consolidate a lead over its remaining rivals. Some parents desperately try to get their younger children to play outside with other kids, usually to no avail. Some parents of adult children who live at home beg them to get off their phones and look for a job, but it’s no use.
.
Fall 2027:
Almost every trade school or trade education program in America enrolls at (or above) capacity. The trades still pay well, but, even barring any further AI progress, when the current large crop of trade students graduate and flood the market, that pay rate will plummet. There are no good places left for young people (or middle-aged people who quit their jobs or are fired) to go. Most people fortunate enough to have a job try desperately to keep it.
xRiskAI fans the flames by announcing a general-purpose humanoid robot under development. Its corpus of training data isn't that large yet, so it struggles with out-of-distribution environments, but there's every reason to suspect that it'll be good enough to replace most manual jobs in roughly a year or two.
Self-driving cars and trucks, which have become increasingly prevalent and reliable over the past few years, overcome the last few regulatory barriers to common use. They still have issues with bizarre environments, but they're as safe as average human drivers in most cities, suburbs, and highways. For long-haul trucking routes, they're required to have human “drivers” by now-outdated safety guidelines, kept in place only to safeguard jobs. Mostly autonomous vehicles also exist at construction sites, occasionally taking guidance from humans but mostly following simple, self-directed routes.
.
A new update to video generation finally gives OpenConnection the ability to generate coherent and engaging long-form content. A gilded age of podcasts and video essays begins; people who want another season of their favorite cancelled TV show finally get it. It lacks some narrow abilities: it obviously can't make innovative educational/technical content requiring high intelligence, and it suffers a similar limitation on making content with substantial literary value. But, narrow exceptions aside, if you can imagine some content you'd like to watch – and even if you can't – it can conjure it, and it probably already has, without anything more than you opening the app.
.
Winter 2027-2028:
Reported unemployment crests 10% in January. By this point, it's obvious to almost everyone that the situation is untenable and worsening. When ShutAI announces that it's also developing humanoid robots, and xRiskAI gets approval for its long-haul trucks to drive on certain highways without human drivers, resulting in layoffs, the calls for UBI that have been gradually rising from fringe to mainstream to popular reach a breaking point.
Many in power, plus fiscal conservatives, hate the idea, and their influence is enough to stop a wholesale adoption of a UBI for now, but Trump's poll numbers are at record lows and doing nothing would make him almost universally hated and crater his legacy.
A $2000 stimulus check is sent to every American adult. Trump's signature features prominently on the checks, with Vance's below it. Trump loudly proclaims something about the greatest comeback ever, then quietly rejects a proposal to soft-collectivize AI companies by requiring that 20% of all AGI and robotics company profits be redistributed to the people.
.
ShutAI, Misanthropic, and others continue to work quietly but furiously towards models capable of automating AI research. They're still limited by the models being not much more than generally smart; increases in model intelligence over the past year have been small and very hard-won. This slight progress, combined with a few breakthroughs to get as much narrow AI research capability as possible out of the limited intelligence, finally makes automating a portion of the research economical, if only barely. But it's becoming increasingly apparent that, even if they could fully automate the research, it wouldn't matter much. The models just can't get that much smarter for a given amount of inference-time compute. Perhaps human geniuses really are close to the maximum intelligence that a system on the scale of a human brain can support.
.
OpenConnection, by contrast, is wildly successful. It's effortless to use, so it's spreading even to old people who struggle to operate a smartphone. The only people who don't use it are those who've made a conscious choice to avoid it, and even that group is slowly shrinking. Not everyone who uses it spends copious amounts of time on it; most adults still have jobs, and many have families, hobbies, and other obligations. But regardless, most people spend most of the free time they have on OpenConnection or one of its rivals (which command only a minority of the market, but have managed to solidify holds over a few groups).
.
Spring and Summer 2028:
The UBI issue continues to rear its head. Months pass, and money from the stimulus check runs out and then some. Josh Shapiro, the Democratic frontrunner, suggests sweeping regulations requiring a host of jobs to be "performed" by humans (even if AI will do the actual work); the idea is popular. He also flirts with nationalization proposals (mixed public opinion) before endorsing a more popular, limited collectivization. Saltman, eyeing the election, continues to endorse a UBI, and encourages Trump and Vance to do so. About half of AGI and robotics company CEOs follow suit. Trump is against the idea; Vance is quiet on the subject.
By May, the situation is again intolerable. Trump writes another check. Over the summer, increasingly impressive demonstrations of the developing robots' capabilities pour in. In predictable environments like restaurant kitchens, competent robots become commercially available, though it's several months further before they become cheap enough to see wide adoption. Trump writes a third check in August.
.
Fall 2028:
Vance's campaign purchases ads on OpenConnection at a very generous price. They're served to the people most likely to swing the election, highly personalized to speak to their specific concerns, and scarily effective. Shapiro’s campaign attempts to purchase the same ads; they're mysteriously ineffective. The issue goes to court; a judge issues a subpoena against ShutAI. At Trump's and Vance's suggestion, Saltman ignores the subpoena and the subsequent ruling requiring equal access to OpenConnection advertising. Some Democrats complain vehemently; most people are more concerned about jobs disappearing.
In September, Vance finally, and enthusiastically, endorses a UBI. Between this, a fourth stimulus check in mid-October, and the OpenConnection ads, Vance overcomes the initial popularity of Shapiro's proposals and wins comfortably, though not in a landslide.
.
Winter 2028-2029:
xRiskAI narrowly beats ShutAI to release the first reliable, general-purpose humanoid robot. Starting in early 2028, the robot’s internal model was trained extensively in vast AI-coded simulations, bootstrapping from the long-since-excellent SOTA in audiovisual processing and classification. Desperate people were paid a few hundred dollars to allow pictures and LIDAR scans of their apartments; these proved nearly superfluous.
Once the simulation training was complete, several hundred robots loaded with the model were placed in environment playgrounds in warehouses for continuous, real-world reinforcement learning. Initially, the robots mostly experimented with simple physical goals on their own; as they improved, they were increasingly given specific activities to perform by human trainers and evaluated by a combination of automated and trainer feedback. Thanks to integrated LLMs (which have long since had a human-level understanding of the physical world from videos), trainers were able to converse with the robots and guide them through actions they struggled with. For some especially difficult actions, humans literally moved the robots' limbs, with proprioceptive sensors helping the robots to feel the motion that accomplishes a particular task. They learned as a hive mind: each day, sometimes multiple times a day, the latest reinforcement signals from all the robots were aggregated and used to update the base model running in all of them.
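As a loose illustration only (the scenario doesn't specify the companies' actual method), the daily "hive mind" update might resemble federated averaging; every name below is a hypothetical stand-in:

```python
from typing import List

Weights = List[float]

def local_delta(weights: Weights, experience: object) -> Weights:
    """Stand-in for one robot's on-device RL update, expressed as a
    weight delta computed from its day's reinforcement signals."""
    return [0.0] * len(weights)

def hive_mind_update(base: Weights, fleet_experience: List[object]) -> Weights:
    """Average every robot's delta into one update to the shared base
    model, which is then pushed back out to the whole fleet."""
    deltas = [local_delta(base, exp) for exp in fleet_experience]
    if not deltas:
        return base  # no new experience today
    n = len(deltas)
    return [w + sum(d[i] for d in deltas) / n for i, w in enumerate(base)]
```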
By December, with some minor exceptions (including tasks requiring uncommonly high intelligence), humanoid robots can do any task that a person can do. Production volume for the robots is initially small, putting the price well out of reach of ordinary people (even those who still have jobs), but production is set to scale up massively.
In February, just as reported unemployment reaches 15%, Vance signs the promised UBI bill, giving every American adult $1000 per month, and more to families with children. It is funded by a combination of increases in capital gains and corporate taxes (with several exemptions blatantly targeting AI companies and other interest groups) and money printing. A handful of small companies fold as a result of downstream effects of the increased tax burden.
.
Spring, Summer, and Fall 2029:
Production of robots rapidly expands throughout 2029. ShutAI follows xRiskAI's lead; their robots are slightly more intelligent, but their production infrastructure is two months behind. Other companies release more specialized, non-humanoid robots for particular jobs where they’re more efficient.
The robots are put to work on commercial applications first. Hiring of humans has come to nearly a dead stop in all but a few fields. By early fall, a robot able to replace a human worker costs less than that worker's yearly salary. Companies offer long-term robot rentals at a loss, hoping to entrap customers and grow their influence.
.
Robots construct and work in automated wet labs testing countless drugs and therapies. Occasionally, a very smart human researcher will direct a few labs. More often, AIs with encyclopedic knowledge are sufficient to do the job well enough. By the end of the year, medical progress is three times faster than in the mid-2020s and accelerating. Most forms of cancer seem likely to be cured within a decade, with the cures accessible to ordinary people a couple years thereafter. Research on slowing aging seems likely to bear fruit on a similar timescale, though the possibility of fully stopping it is dubious without the development of nanotechnology, which would require either ASI or massive efforts from human geniuses. Neither seems possible.
.
Thanks to the ongoing replacement of human labor with agents and robots, the economy is growing by 30% per year, and the stock market has long since become completely divorced from the average person’s financial situation. While it booms, unemployment also rises, and by the end of 2029, reported unemployment reaches a quarter.[3]
.
The tech CEOs in charge of the largest companies are power-seeking, but those in positions strong enough to consider a coup are already getting most of what they want from the government anyway. For any who are not content with their situation and seek absolute power, taking aggressive violent or memetic action against their competitors is one of the few things that would actually provoke a response from the US government, which still has a monopoly on things like stealth fighters and nukes, and will for the foreseeable future. For weaker oligarchs hoping to ascend, power is too deeply entrenched, and there aren't sufficiently asymmetric tactics to allow for it.
.
.
.
2035:
Ordinary Americans live reasonably comfortable lives. The same is increasingly true for others around the world. Many Americans are still going through the motions of their old lives; a third are nominally employed, and maybe a fifth of those do something that's at least arguably real work. Most people, employed or not, spend their days on OpenConnection. About a third of parents still put serious effort into raising their children. The rest delegate it all to a cheaply rented humanoid robot from ShutAI or xRiskAI. New parenthood is very rare; the total fertility rate is nearing 0.1.
In a stroke of extraordinary luck that might've been inevitable, none of the apocalypses came true. Alignment was never really solved, but there's no superintelligence capable of taking over and wiping out humanity. Concentration of power is somewhat worse than in early 2025, but, ignoring the effects of OpenConnection propaganda, it’s only on par with the worst eras of history. The oligarchy is stable enough to prevent catastrophe or dictatorship. Ordinary people haven't been left to starve; it's cheap to keep them in decent comfort, and enough people in power draw moral lines above letting most of humanity die. A few terrorists launched bioweapons or cyberattacks, but the models aren't smart enough to design a bioweapon that kills more than a handful of people, and they're good enough coders to defend against cyberattacks. It never got worse than a two-day blackout the size of Nevada.
Despite humanity’s lack of effort to prevent them (some exceptional individuals did good work, of course, but not enough to matter), we miraculously evaded all potential disasters. But we never get the stars. We never master physics, or solve the hard problem of consciousness, or kill God. Starburst is never saturated.
Instead, we go gently. To quote Network, “All necessities provided. All anxieties tranquilized. All boredom amused.” And not much more.
.
June 1st, 2025:
One of the strongest arguments against predictions of ASI stems from the Fermi Paradox. There are billions of roughly earth-sized rocky planets in the Milky Way that are close enough to their stars to have liquid water; unless life is astronomically unlikely for unexpected reasons, the galaxy should teem with it. But none of that life has developed enough to be detectable to us. Why? Perhaps it’s very rare for simple life to develop to where humanity is now; perhaps it’s very rare for life to develop from where humanity is now to an interstellar spacefaring civilization. It could be just one of these, but the paradox is easier to explain if both are true.
We should expect that we’re on the most usual path of development consistent with our circumstances. If that path leads to ASI, most life that reached where we are now would spread across the universe, leaving signs we'd have already seen. But if that path leads to technology that renders a species content and pacified, serious space exploration might never be in the cards. If this reasoning is right, and we don't manage to defy fate, humanity will likely forever follow that earthbound path, and be among dozens – or perhaps hundreds, or thousands, or millions – of intelligent species, meekly lost in the dark.
[1] https://web.archive.org/web/20240611173930/https://arcprize.org/
[2] Assuming ~32% tariffs and GPUs being ~41% of datacenter build cost: 0.32 × 0.41 ≈ 0.13, i.e., a ~13% increase in total build cost. The 32% tariff rate was proposed by Trump in April, then delayed. The 41% number comes from https://www.telecompetitor.com/strong-data-center-capex-forecast-driven-by-ai/
[3] We previously had this sentence here: “Around 20% of households have at least one robot.” After further discussion and thought, we removed it on June 16th, 2025. The actual number would probably be 1+ order of magnitude lower.