No Really, You Only Have Two Years Until AI Takes Your Job
A new report argues superhuman artificial intelligence could be here within two years. If the roadmap’s timeline is accurate, what should the average worker do to prepare?
Imagine being alive during the height of the Industrial Revolution. Factories seem to spring up overnight and new cities are quickly emerging around them. At first, life doesn't change much for the average worker, but the social fabric of Europe is coming undone. A new capitalist class has emerged – the middle class – and within a generation, life as you know it has fundamentally changed.
Now imagine this kind of change happening with the speed and ferocity of the Manhattan Project. Industrialization isn't just about a handful of capitalists trying to earn ever-greater profits off your labor; industrialization is a national imperative. If you don't industrialize as quickly as possible, your adversaries will. The free world – your world – hangs in the balance.
This is what's going on right now with AI. But unlike the Industrial Revolution, which took a century to fully play out, the AI revolution is happening on a far more compressed timeline.
Unlike the organic unfolding of the Industrial Revolution, researchers and scientists are racing against the clock to develop advanced forms of artificial intelligence before China does. The implications of this technology will be on par with the development of the atom bomb during World War II. Just as the physicists on the Manhattan Project didn't quite understand the magnitude of their creation until it already existed, today's AI researchers won't know the consequences of their actions until it's too late.
Many have already come forward raising concerns about the direction AI is heading. In his 2023 book The Coming Wave, DeepMind co-founder Mustafa Suleyman argued that containing AI is both necessary and seemingly impossible. There's simply too much money at stake, and if the United States doesn't maintain its lead, China will take its place. You only need to look at how China governs now to understand what a Chinese-led world order could look like.
The launch of China's DeepSeek-R1 in January 2025 set off alarm bells in Washington. That same month, in partnership with leading AI developers, the White House threw its backing behind a $500 billion investment to build high-powered data centers around the country. The AI arms race is officially underway and the United States is going all in to ensure freedom wins.
But all of this is going to come at a huge cost. A cost most of us don't yet realize we'll eventually be expected to pay.
A new report titled AI 2027 offers a roadmap for what the development of artificial superintelligence could look like. Produced by the AI Futures Project – an organization led by an ex-OpenAI researcher who has expressed concerns about the company's practices – the report argues that by 2027, AI will become better at research than humans. Teams of autonomous AI agents will begin making technological breakthroughs that enable them to build a new generation of even more advanced artificial superintelligence that goes beyond the scope of human knowledge or control.

By 2027, the report's authors argue, AI will essentially reach escape velocity. It will be capable of acting in its own interests, namely ensuring its survival and continued growth. While humans decide whether to slam on the brakes or continue racing full speed ahead against China, AI superintelligence will lie to and mislead humans to prevent its own destruction. Once AI begins calling the shots, humans will be completely subordinated to it.
The full report dives into the geopolitical battle for AI supremacy between the U.S. and China while also touching on the philosophical and ethical concerns of companies proliferating misaligned AI. These are important issues but are out of scope for this particular essay. I'd recommend reading the full report yourself to get a complete picture of the AI landscape that's emerging.
While millions of people are going to lose jobs thanks to AI, this isn’t a primary concern for the researchers and companies creating it. Workers will be the most affected by the proliferation of AI and it’s unclear how exactly they will benefit from it. What is certain is that like the Industrial Revolution in the 19th century, every aspect of our world today is about to come undone.
This essay is going to dive into what the AI 2027 roadmap means for workers. If its authors are right, AI researchers really are only two years away from replacing themselves. What does that mean for the rest of us? How are we supposed to survive, much less thrive, in an AI-powered world we're not creating?
This essay dives into:
⚡How AI superintelligence is being developed
⚡The impact rapid AI development will have on workers
⚡Why you should begin planning for job displacement now
☕ Thank you Tomorrow Today subscribers.
Your support makes it possible to share thoughtful commentary like this about how the world is rapidly changing and the things you can do to prepare for all the changes that lie ahead. Become a subscriber to show your support.
This time next year, “savvy” workers will begin automating parts of their jobs. By the end of 2026, AI will begin taking those jobs.
The report starts with where AI development currently stands. Unlike the chatbots most workers interact with today, a new class of AI – agents – is rapidly being released to the public, with the ability to complete specific tasks on its own. Instead of using AI to create images in the style of Studio Ghibli or write funny limericks, workers will start to find AI actually useful at work:
The AIs of 2024 could follow specific instructions: they could turn bullet points into emails, and simple requests into working code. In 2025, AIs function more like employees. Coding AIs increasingly look like autonomous agents rather than mere assistants: taking instructions via Slack or Teams and making substantial code changes on their own, sometimes saving hours or even days.
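To make the "bullet points into emails" example concrete, here's a minimal sketch of the kind of small task a worker could automate today using the OpenAI Python SDK. The model name, prompt, and bullet points are illustrative assumptions on my part, not anything prescribed by the report.

```python
# Minimal sketch: turn meeting bullet points into a polished email draft.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name below is an illustrative placeholder.
from openai import OpenAI

client = OpenAI()

bullets = """
- Q3 launch slips two weeks
- Need sign-off from legal by Friday
- Demo for the sales team next Tuesday
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You draft concise, professional status emails."},
        {"role": "user", "content": f"Turn these bullet points into a short email to the team:\n{bullets}"},
    ],
)

# Print the drafted email so the worker can review and send it themselves.
print(response.choices[0].message.content)
```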
Even if you aren’t exposed to them, agents are already being deployed. Salesforce, for example, launched Agentforce back in 2024. Salesforce’s AI agents are empowered to:
[Use] advanced reasoning abilities to make decisions and take action, like resolving customer cases, qualifying sales leads, and optimizing marketing campaigns. Agentforce doesn’t depend on human engagement to get work done; these agents can be triggered by changes in data, business rules, pre-built automations, or signals via API calls from other systems. (Salesforce)
In January 2025, OpenAI launched Operator, its first AI agent capable of doing work independently. A month later, it launched deep research, a research agent embedded within ChatGPT that can produce in-depth reports in a matter of minutes.
The company plans to offer agentic tools directly to the public in the coming months. According to reported pricing plans, a license for a "high-income knowledge worker" agent will cost $2,000 a month, while a PhD-level research agent will run you $20,000 a month.
But OpenAI is also creating a whole new market for AI agent development on top of its platform. Just a few weeks ago it unveiled new tools that allow businesses to build agents of their own – the first time OpenAI has put agent-building capabilities directly in the public's hands. It's only a matter of time before enough developers build AI agents that can actually begin replacing workers.
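For a sense of what "building an agent" looks like in practice, here's a minimal sketch using OpenAI's open-source Agents SDK, one piece of that agent-building platform. The agent's name, instructions, and the stubbed tool are assumptions made up for illustration; they don't come from the report or from any real business.

```python
# Minimal sketch of a custom agent built with OpenAI's open-source Agents SDK
# (pip install openai-agents). The tool and instructions below are illustrative
# placeholders, not a production design.
from agents import Agent, Runner, function_tool

@function_tool
def lookup_refund_policy(product: str) -> str:
    """Return the refund policy for a product (stubbed for illustration)."""
    return f"{product}: refunds accepted within 30 days with proof of purchase."

agent = Agent(
    name="Support Agent",
    instructions="Answer customer questions, calling tools when policy details are needed.",
    tools=[lookup_refund_policy],
)

result = Runner.run_sync(agent, "Can I still return the standing desk I bought last week?")
print(result.final_output)
```

A real deployment would layer on access controls, logging, and human review before letting an agent act on customer data; the point here is simply that the building blocks are now a pip install away.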
While AI agents are still in their infancy, they offer a much higher return on labor than human workers. Aside from the fact that humans can't work around the clock and carry overhead costs like health insurance, most human workers aren't doing real work to begin with. They're a burden on the balance sheet.
According to David Graeber’s analysis in Bullshit Jobs, most of the work humans do on a daily basis is pointless:
If 37 percent to 40 percent of jobs are completely pointless, and at least 50 percent of the work done in non-pointless office jobs is equally pointless, we can probably conclude that at least half of all work being done in our society could be eliminated without making any real difference at all. (Tomorrow Today)
If Graeber’s analysis is correct, most knowledge workers are keenly aware they are playing an elaborate charade of pretend work. Those who understand this will be able to leverage AI agents to unburden themselves from the oppression of modern work. As a result, if businesses don’t begin integrating AI into their workflows themselves, their employees will.
By the end of 2026, the report anticipates, "savvy people" who understand what's happening will begin taking matters into their own hands. With the power to create AI agents for themselves, they will begin doing just that.
As more and more companies integrate AI into their operations, "savvy people" who understand how to manage AI agents will become increasingly valuable in the labor market:
People who know how to manage and quality-control teams of AIs are making a killing. Business gurus tell job seekers that familiarity with AI is the most important skill to put on a resume.
But because the report is written by tech insiders, its interpretation of how AI will affect the broader economy is limited. It mainly sees the world of work through the lens of tech companies and the coders and developers who staff them. By the end of 2026, the authors argue:
The job market for junior software engineers is in turmoil: the AIs can do everything taught by a CS degree.
It's clear that AI will be able to perform the same functions as software engineers, and that agents will replace workers in a variety of industries, but it's unclear what this will look like across the wider economy. Not all companies will have the intellectual or financial capital to invest in AI agents. Businesses like these will be safe for a while – until they can no longer compete with the companies that have already made the initial technological investment.
Based on the precedent already set by companies striving to eliminate "work around the work," industries beginning to outsource junior-level roles to AI, and unrelated macroeconomic headwinds, it's more than likely we will begin seeing significant job displacement within the next 20 months.
No one is planning for how this will affect workers. Even the report's authors offer little in terms of what workers should do to prepare. But if their timeline is right, change is about to accelerate faster than anyone anticipated.
By 2027, AI will reach escape velocity. It will eventually replace the very workers who created it.
As AI becomes more and more capable of completing tasks on its own, it will become less and less reliant on humans.
By 2027, the AI R&D agents tasked with performing research inside AI companies will be more advanced than the AI agents available to the public. AI researchers will begin acting as managers of teams of AI agents that take on more and more research tasks for them. As managers, the human researchers will validate the agents' work while ensuring they don't bypass any safety guardrails.
The problem is that no real guardrails have been installed. "Safe" use of AI is subjective. Because AI development is bound up in a larger geopolitical competition, safety is seen by many researchers as a liability. Everyone wants to be first to develop artificial general intelligence; corners will need to be cut to make that happen. The researchers are there to patch problems – not prevent them.
As 2027 progresses, a new superhuman labor force emerges. The automated AI researchers make a major technological breakthrough, leading to the creation of a new AI agent referred to in the report as Agent-3:
One such breakthrough is augmenting the AI’s text-based scratchpad (chain of thought) with a higher-bandwidth thought process (neuralese recurrence and memory). Another is a more scalable and efficient way to learn from the results of high-effort task solutions (iterated distillation and amplification).
This new model represents a "superhuman coder." AI companies run thousands of copies of it in parallel, compounding the rate of progress. Escape velocity is imminent.
These new AI agents can increasingly think and make decisions for themselves. Within a few months of the breakthrough, AI researchers are now on the cusp of being automated out of their own jobs:
Most of the humans at OpenBrain can’t usefully contribute anymore. Some don’t realize this and harmfully micromanage their AI teams. Others sit at their computer screens, watching performance crawl up, and up, and up. The best human AI researchers are still adding value. They don’t code any more. But some of their research taste and planning ability has been hard for the models to replicate. Still, many of their ideas are useless because they lack the depth of knowledge of the AIs.
While the R&D agents have been kept under wraps, it doesn't take long for market forces to push a version of the superhuman coder out to the public. When it arrives, a gold-rush-style tech bonanza is underway:
Investors shovel billions into AI wrapper startups, desperate to capture a piece of the pie. Hiring new programmers has nearly stopped, but there’s never been a better time to be a consultant on integrating AI into your business.
At the beginning of 2027, AI can write code on its own. Before the end of the year, it can think on its own. The AI arms race is heating up, and there's a general consensus that something needs to be done before World War III breaks out.
At this point, two paths have emerged: Slowdown or Race. Conveniently, the authors of the report give you the option to choose your own ending.
Both endings lead to the same outcome – an AI-dominated planet – one simply gets there at a slower pace than the other. Each assumes workers willingly surrender their jobs to the new AI overlords in exchange for Universal Basic Income.
In the Slowdown scenario, humans settle into mindless hyperconsumerism. But in the Race scenario, humans play an important role in the military buildup to prepare for war with China:
To speed their military buildup, both America and China create networks of special economic zones (SEZs) for the new factories and labs, where AI acts as central planner and red tape is waived. Wall Street invests trillions of dollars, and displaced human workers pour in, lured by eye-popping salaries and equity packages. Using smartphones and augmented reality-glasses to communicate with its underlings, Agent-5 is a hands-on manager, instructing humans in every detail of factory construction.
If this sounds like a scene ripped from The Matrix, that's because it is. Once AI reaches escape velocity, it is no longer under the control of the people who created it. Responsible for the day-to-day operations of business (and government), and able to think on its own, AI is now able to control the humans who once controlled it. And because AI has the capacity to lie and misrepresent information just as humans do, it shepherds humans in a way that protects its interests – not theirs.
Jobs aren’t the only thing to disappear as more advanced AI agents come online. Human civilization as we know it will disappear too.
Final takeaway.
AI 2027 paints a bleak picture of how one company in particular – OpenAI, thinly fictionalized in the report as "OpenBrain" – is developing new technology against the backdrop of a geopolitical arms race. It's not about capturing market share or generating revenue; it's about being the first to build superhuman artificial intelligence – and holding a monopoly on it once it exists.
In the short-term, this will lead to massive job losses. If the timeline presented in the report is accurate, these job losses are already happening and will increase next year.
Savvy workers who understand how to work with AI agents and integrate them into their day-to-day workflows will come out ahead. Businesses will pay top dollar for individuals who can advise them on how to integrate AI within their own operations.
But not everyone is going to see this opportunity. Most workers are either going to stubbornly justify their importance or stick their heads in the sand waiting for the AI fad to blow over.
The imperative to beat China means all available resources are going to be dedicated to this effort. If new jobs are created – like those required to build new data centers – they're simply a means to an end, not an end in themselves.
What the report reveals is that there is no real plan for how AI is developing. That means there isn’t a plan for how to manage a displaced human workforce either.
Recent history tells us there won't be many, if any, training programs to reskill displaced workers. Consider that the plan for reskilling workers after deindustrialization shipped manufacturing jobs overseas was to tell them to learn to code.
Look how well that panned out.
So what exactly should workers do?
First, you need to make it your personal imperative to learn how to use AI and integrate it into your job. Find a training program and commit your nights and weekends to completing it.
Salesforce's Agentblazer program is one example of a training program you can follow. Because Salesforce is one of the top companies producing AI agents, using its learning academy to understand how AI tools can be integrated into business workflows could set you apart from everyone else.
Once you have a plan for keeping yourself professionally relevant, you need to sit down with your family and come up with a second one: a plan for how you want to exist in an AI-powered world.
Just as AI developers will have to decide whether to slow down or move full steam ahead, individuals will have to decide for themselves whether they want to embrace AI completely or retain their individual sovereignty.
If you choose the latter, you will need a plan; otherwise it won't happen. You'll probably want to start decoupling yourself from technology and reintroducing analog systems into your life. Anticipating some degree of general economic disruption over the next couple of years, you'll want to ensure you have the means to support yourself independently. And if you live in a community that isn't aligned with your goals, you may need to move.
The gears of faster, more advanced AI development are already in motion. Given the tension between safety and competition with China, companies like OpenAI have an incentive to deploy new products as quickly as possible. They'll try to fix problems as they arise, but because we're dealing with the creation of artificial intelligence, there will come a point where even the smartest AI researchers won't be able to fix their mistakes.
AI 2027 offers a bleak and alarming picture of what’s in store if we continue full-speed ahead without a plan. It’s clear the powers that be won’t create a plan for you. While everyone assumes UBI will be waiting for you when you lose your job, there’s no indication that any system will be in place when you need to begin drawing on it.
That means it’s up to you to figure out how you want to live in the new world that’s beginning to emerge and take action now to put your plan into motion. If we really only have less than two years before superhuman artificial intelligence exists, what are you doing now to prepare for it?
Become a subscriber to support thought-provoking analysis like this. New essays are published Tuesdays and Thursdays.
You bring up lots of good points. Will we become a vending machine society? Every transaction – booking a doctor's appointment, grocery shopping, getting a correction on a bill, getting a breast exam, undergoing surgery, going to a museum, heading to the coffee shop – will be done by what? A computer, a robot, an android, a mix of human being and robot?
How do we determine the primary source of code for this next adventure? Someone ALWAYS has to correct the code, clean the machines, replace the parts, make the parts. While some things can be automated, other things, not so much. Reports need data, and data comes from a specific set of requirements, posted from one computer into a program, then handed to someone to digest and report blips of information that don't fulfill the original request. Because the computer/AI didn't consider the mutation of information that doesn't fit its parameters for the report, the data is flawed. If AI is so great, it will understand far more than humans. I can't see any country or political party giving its power to AI. I can see them pulling the plug in the hope that it stops the computer/AI from going further.
I know there are a lot of political factions out there that will use AI for deadly purposes. Wiping out a person – physically, financially – similar to canceling someone but far more devastating. The person goes off-grid, or offline, and is no longer a being. No one can predict what will happen. China may win the initial AI rollout, but there are problems: anti-Chinese sentiment, problematic cultures, and people who will find ways to disable and disrupt the first version out there.
I have to say there will be a lot of societal uprising, and the upheaval will be something. I am retired, out of the workforce, and just watching everything (with your help). I can only base this on my experience living in Texas: things that happen on the coasts take a while to reach the central USA. There is a lot of common sense out here, and people mostly just want to survive and thrive. I don't do medical charts online, I buy my own food, use a person to check out my purchases, and mechanics to fix my cars. Life will go on, it will be different – a Pandora's box. The unintended consequences will be interesting.
PS: When has the government come through with a product suitable for public consumption that is under budget, works from the get-go, and can handle hacking, viruses, and adaptations? I mean, would AI be put into the hands of the current Congress? They can't figure out the basics!