The Coming Wave: Is It Possible to Contain AI? This DeepMind Founder Doesn't Think So
The Coming Wave by Mustafa Suleyman is a cautionary tale about the rapid proliferation of AI. Without proper guardrails in place, it will be impossible to contain AI.
The proliferation of massively disruptive technologies isn’t happening in the not-so-distant future. It’s already here.
At this year’s World Economic Forum in Davos, artificial intelligence took center stage. World leaders discussed whether or not it should be regulated and what regulation would even look like.
The short answer is yes, AI needs to be regulated. But the reality is that regulation alone is insufficient. With so many different stakeholders vying for power, it’s impossible to find common ground to successfully enforce guardrails that enable the safe proliferation of AI.
The problem is that the emergence of new technologies like artificial intelligence has created a fundamental paradox that regulation simply cannot resolve. With the spread of these new technologies, a new form of government will have to emerge. That government will establish new institutions that are not bound by our current social contract. New norms and values will have to be created as well, some of which we don’t even have the capacity to envision yet.
In his book The Coming Wave, DeepMind co-founder Mustafa Suleyman paints a stark portrait of how the paradoxes inherent in the proliferation of these massively disruptive technologies make the future impossible to plan for. He argues that these new technologies must be contained, but at the same time, containment is not possible. The future of mankind lies in solving what appears to be an unsolvable paradox.
This article will summarize some of the key points in the book and analyze what this means for humanity moving forward. If The Coming Wave isn’t on your reading list now, once you finish this essay, it certainly will be.
According to Suleyman, the most important technological revolutions are happening in artificial intelligence and synthetic biology.
The book focuses on two separate but complementary technological revolutions that are happening at the same time: one in the advancement of cognitive capabilities and the other in biology.
The first is artificial intelligence. This isn’t just about the emergence of large language models (LLMs) that more or less act as smarter algorithms. This is fundamentally about the creation of a new cognitive ability to solve complex problems, competing with human existence in a way no other technology has ever done before.
Suleyman writes:
“A big part of what makes humans intelligent is that we look at the past to predict what might happen in the future. In this sense intelligence can be understood as the ability to generate a range of plausible scenarios about how the world around you may unfold and then base sensible actions on those predictions.” (62)
True artificial intelligence will have the ability to analyze patterns and learn from them on par with humans, eventually eclipsing us. The current public discourse focuses on artificial intelligence as a job replacer. While that is certainly an outcome, it isn’t the core problem. Artificial intelligence is a human replacer. Anything we’ve been able to do up until now can be done faster and better by artificial intelligence.
The second concurrent technological revolution is happening in the life sciences. Suleyman refers to this as synthetic biology. What it boils down to is the ability to manipulate the building blocks of life — DNA — and eventually give humans the ability to create entirely new species from it.
At face value, synthetic biology is tremendously beneficial. The ability to edit DNA — combined with the intelligence of AI — means we might be on the cusp of curing cancer or on the verge of developing new drugs that could solve a host of medical maladies.
But this is a double-edged sword. The same technology that can be used to cure cancer can also be used to create a deadly pathogen in an unregulated space — like a home garage — and release it into the world.
Both artificial intelligence and synthetic biology are fundamentally about how we store and process data. This is arguably why data has been considered the new oil of the 21st century. Those who own the data will be the ones who are able to harness the power of these new technologies and profit from it. It’s why Elon Musk bought Twitter, after all.
Combined, these two technologies won’t just disrupt life as we know it. They’ll create it. We’ll enter an entirely new way of existence. Humans will coexist alongside a new class of superhumans who will become demigods in their own right.
The coming wave of new disruptive technologies needs to be contained, but containment isn’t possible.
While there are tremendous benefits that will come out of the coming wave, it isn’t without risk. Without putting guardrails in place, those risks may wind up doing more harm than good. That’s why Suleyman argues the proliferation of these new technologies must be contained. There’s just one problem: they can’t be.
The emergence of these new technologies creates an irreconcilable paradox. Proliferation is essential if we want to perpetuate our way of life. For better or for worse, modern civilization rests on the continuation of economic growth. The problem is that the only way to sustain the level of growth we need to survive is to diffuse these new technologies.
This is essential because we face multiple existential crises all at once. We’re depleting natural resources faster than we’re replenishing them. At the same time, climate change is ravaging the delicate ecosystems that allow our planet to support human life. Either we’ll run out of resources to sustain ourselves (e.g., food), or we’ll run out of workers to keep the economic engine going. Both are likely to happen by the end of this century.
To maintain current living standards, we must introduce more technology into our economy. We need to solve climate change, continue producing food, and prepare for population collapse. Because of these existential threats, we have to proliferate new disruptive technologies while simultaneously containing them.
That can’t be done because the creators of these new technologies are incentivized by the large pot of gold that lies at the finish line. They are willing to continue kicking the can down the road, putting the next generation in peril, rather than taking the financial hit up front to ensure the longevity of human existence.
To understand the game that’s now being played in Silicon Valley, just look at the meteoric rise of OpenAI. Within five days of launching, ChatGPT had 1 million users. Two months later, it reached 100 million. This pace of user adoption is unheard of. For reference, it took Facebook 10 months to reach its first 1 million users when it launched back in 2004.
User acquisition is the game that’s being played in Silicon Valley. The more users you have, the more economically viable your platform is. When ChatGPT launched in 2022, OpenAI projected $1 billion in revenue by 2024. It’s now 2024 and OpenAI is reporting $1.3 billion. OpenAI has shown the world what is possible and now every tech entrepreneur is off to the races to make a profit from the windfall that will inevitably come from advances in artificial intelligence.
The problem with this is that they don’t care about the consequences of the technologies they’re creating. Driven by profit, tech entrepreneurs and investors are primarily motivated by the short-term financial gain of the technological windfall at hand. As we’ve seen from the fallout of Facebook, tech executives leading the vanguard don’t care because they don’t have to. They exist above regulatory oversight and are accountable to no one except their shareholders.
To make matters worse, the containment problem isn’t just about the two technologies at the heart of The Coming Wave: artificial intelligence and synthetic biology. These technologies are merely anchors that are ushering in a panoply of equally disruptive technologies. Quantum computing and robotics are just two examples. It’s not sufficient to contain just one of these technologies. You have to contain all of them. If that sounds like a Herculean task, that’s because it is.
The financial problem is only one dimension of the containment paradox. These new technologies are inherently omni-use. Those who are creating them may be out to do good, but that won’t stop someone else from using the same technologies to cause harm at scale.
Suleyman looks at this from the perspective of synthetic biology. The same tools that will inevitably be used to cure cancer, create life-saving drugs, or even prolong life itself can also be used to destroy life. It only takes one reckless garage experiment to unleash a deadly pathogen into the wild. We saw how ill-equipped world governments were to handle COVID. Imagine what would happen if a far more dangerous pathogen were unleashed.
You can’t have your cake and eat it too. AI safety is essential to mitigate some of these risks, but despite the best efforts to do so, safety can’t be guaranteed. That’s because we don’t even know what to begin planning for. The proliferation of these new technologies has pushed us to the very limits of human cognitive capability.
The book’s author, Mustafa Suleyman, is one of the foremost experts in the field of artificial intelligence. If the creators of AI can’t fathom the various scenarios that might play out, what does that say for the rest of us? How can we even begin planning to mitigate risks if we don’t even know what to look for?
The coming wave is simply uncontainable. We can’t plan for, much less mitigate, all of the different ways these technologies might be used to cause harm. At the same time, there’s no incentive to slow down. Maximizing the return on investor capital is guiding the ship. That imperative is antithetical to ensuring public safety, thus making it impossible to adequately mitigate future risks.
The inability to contain the coming wave will have profound consequences for society (aka you and me).
One of the prevailing narratives around artificial intelligence is that it isn’t as disruptive as some people make it out to be. We’ve survived the proliferation of cars and the emergence of email, so why should we think of AI any differently?
Instead of thinking about AI from the context of technological advancements over the last century, it’s better to think about it across a longer time horizon. AI is our version of the Gutenberg Press. That single invention dismantled the power structure of Europe — one ruled by the Pope — and supplanted feudalism with capitalism. Because of the Gutenberg Press, new ideas became easier to disseminate. This made the Age of Exploration, the Enlightenment, and numerous scientific discoveries possible.
AI will be equally disruptive. Our current geopolitical order, model of governance, and the structure of our economy will change as AI continues to proliferate.
The nation-state as we know it is dead.
We live in a state-based world order. While each country may have its own preference for how to govern its citizens, the nation-state exists as a framework above a government. Governments come and go, but the state itself largely stays intact.
The idea of states is a relatively new one in world history. They emerged out of the Peace of Westphalia in 1648. Nearly three centuries later, statehood was formally defined as a territory with a permanent population and a government capable of entering into relations with other states. After World War II, the entities that met these criteria were granted membership in the United Nations, creating the state-based system we know today.
Thanks to AI, nation-states are no longer a viable way to organize the world. There are several reasons for this.
Labor-displacing technology and the proliferation of decentralized military power degrade the social contract that gives governments the consent to operate. You and I pay taxes in exchange for the expectation that the government will provide basic public services and protect us. If the state can no longer guarantee these things, it diminishes the return on investment of our tax dollars. What’s the point of continuing to pay taxes if you get nothing out of it?
This is the question many wealthy elites have been asking themselves for years now, and it’s a key tenet of libertarian economic ideology. Those with capital believe the state is ill-equipped to spend it in a way that provides commensurate value. Therefore, the wealthy have done everything in their power to shield their wealth from the government.
Soon, this will trickle down as more and more citizens begin reconsidering the value proposition of the state. This will happen as wages fall, thanks to labor displacement. Without incomes to tax, the government won’t be able to raise enough revenue to support the needs of its population.
Over time, the quality of life will decline, making the state an ungovernable place. As a result, social strife will ensue. Populations have revolted throughout history; this is nothing new. The American, French, and Russian revolutions are all good examples of this. But what makes this time different is that individuals will have the ability to challenge the state militarily. Weaponry will be so advanced and economically accessible that anyone who wants it can get it.
Whatever revolution comes, it will be a multi-front conflict across physical, digital, financial, and possibly even biological domains. One or two individuals will have the capacity to eviscerate whatever state institutions remain.
The new world order that emerges will likely be a multipolar world of sovereign individuals and corporate feudal lords. Those with the means will build private militaries and self-regulated economies. Suleyman posits that corporations like Amazon and Google may even replace states altogether and change the nature of international relations itself.
Democracy cannot exist as the primary form of government in a world where surveillance is required to contain the proliferation of new technologies.
The end of the state-based world order puts democracy on notice. It’s impossible for democracy to exist when entities like corporations operate above the state or flat-out replace it. What’s the point of continuing to elect representatives when they’re no longer running the show?
A new form of government will likely emerge. Suleyman refers to this as “techno authoritarianism.” Rather than governance by consent, new techno-authoritarian regimes will govern by surveillance. This is inevitable as increased surveillance is required to check the proliferation of these new technologies. While regulation is essential, surveillance is an unintended byproduct of it.
To understand what this type of government will look like, turn your eyes toward Beijing. China has already implemented a surveillance state to keep tabs on its citizens. Its social credit score punishes any form of resistance. China wants to be the global leader in AI by 2030. If it succeeds, it will supplant the United States, becoming the new global hegemon. Under this scenario, techno-authoritarianism is all but inevitable.
The final nail in the coffin is the ability for anyone to distribute misinformation at scale. Thanks to the role Cambridge Analytica played in the 2016 presidential election, we’ve already seen the dress rehearsal of what it looks like to undermine the veracity of democratic elections.
This year, the United States is heading into the most significant presidential election in its history. But because a significant minority of the population believes the 2020 election was stolen, the groundwork has already been laid to discredit the results of what happens in November. In the eyes of that minority, either Donald Trump will be reinstated or the election will have been fraudulent. A peaceful transition of power is desirable but far from guaranteed.
The proliferation of misinformation will further degrade trust in democratic institutions. This will create a power vacuum for other entities to step in. We’re already seeing the decentralization of power to wealthy individuals. These people — like Elon Musk — aren’t elected, but they control important media outlets and independently move markets. Whether you realize it or not, they exist above democracy and, thus, the rules and norms that the rest of us abide by.
Democracy and the new technologies that are emerging cannot coexist. A new form of government — one that is inherently more authoritarian — is likely to emerge as democracy’s replacement.
Job losses will be profound. This will increase the widening wealth gap to a point that is no longer tolerable or sustainable.
The key question that needs to be answered is what will become of individuals in a “post-sovereign” world?
For those without wealth — the modern peasantry if you will — technological unemployment will be the new norm. This shouldn’t come as a surprise. New technology has been phasing out human labor for quite some time now. Today’s bloated white-collar class is a byproduct of offshoring and automation. The offspring of displaced workers were told to go to college and get an education. They did, and now their jobs are at risk as AI competes for dominance in the cognitive labor market.
There’s a misplaced belief that the proliferation of advanced technologies will lead to the emergence of new jobs. While this might be the case, as Suleyman notes in The Coming Wave:
“New demand will create work but that doesn’t mean it’ll all get done by human beings.” (180)
Much of the conversation surrounding labor displacement is about how workers will subsist without gainful employment. Whether it’s funded by the government or corporations, some sort of universal basic income is likely to emerge. But that’s not the most important question.
Our society is largely structured around the dignity that work provides. Values like respect and honor are derived through work. How will we peacefully coexist with one another when we are deprived of the dignity and honor that comes with the ability to provide for ourselves and our families?
This is a fault line that is missing from the conversation. Suleyman alludes to it in the book and acknowledges the social strife that will emerge from the widening wealth gap. Remember, the French didn’t behead the monarchy because the monarchs were wealthy. They beheaded them because working people could no longer afford a meager ration of bread. When humans are deprived of their dignity, they’ll do just about anything to restore it.
Labor disruption is here, and it will likely be what Suleyman calls a fragility amplifier. It will highlight the weaknesses of democracy and the state, accelerating their demise. This is one of the most profound unintended consequences of the proliferation of new technologies, one that tech entrepreneurs and investors are all too eager to ignore. We will reach a tipping point. The question isn’t if but when.
The author offers a number of proposals for how to address the containment paradox. They are well-meaning but woefully inadequate.
The Coming Wave is as much a warning as it is a manifesto of how to put guardrails in place before we get to a point of no return. While Suleyman’s recommendations are practical, they aren’t necessarily feasible.
One example is creating an AI safety program that’s comparable to something like the Apollo program during the Cold War. While, in theory, that seems like a good idea, it’s not possible. AI isn’t being created by public institutions. There is no AI equivalent to NASA. Instead, AI is in the hands of ungoverned corporations that exist above the law.
Even if such a program were feasible, funding would be impossible. Suleyman suggests that “frontier corporations” — those building emerging technologies — should be investing about 20% of their income in safety. This is noble but not pragmatic. Companies operate at the behest of their shareholders, not the public good. Just look at OpenAI’s attempted ouster of Sam Altman as evidence of this. Even if safety is critical, it’s unlikely that profit-seeking shareholders will approve of adding a safety line to the corporate budget.
Another practical albeit unrealistic recommendation that Suleyman offers is the creation of standards to facilitate audits. The logic behind an audit is that transparency engenders trust. Again, this is a noble venture, but it raises two important questions: who will be responsible for creating the standards, and how will audits be executed?
The government, of course, is the default candidate to mandate tech-related audits. That expectation, however, gives the government too much credit. The Department of Defense — America’s largest bureaucratic institution — hasn’t passed an audit in the last six years. That isn’t a strong vote of confidence for the government to be the one responsible for overseeing the safe emergence of new technologies. What good is an audit if nothing comes of it?
The most promising of Suleyman’s recommendations is to leverage choke points to buy time. Fortunately, a number of choke points already exist. More than 90% of the advanced microchips needed to power AI are manufactured by TSMC in Taiwan. While some sort of military showdown over the future of Taiwan is likely, it’s unlikely that other countries or rogue nonstate actors will be able to build their own microchip facilities. For now, supply is highly concentrated and extremely limited.
The other choke point is the shortage of skills needed to bring these emerging technologies online. Very few professionals understand how AI actually works, much less how to leverage it in the workforce. Reskilling is essential, but it won’t happen overnight. The shortage of skilled workers will slow the pace of proliferation, at least in the short term.
While Suleyman adequately describes the significance of the containment paradox, the unfortunate reality is that we’re heading into uncharted territory. There are simply too many stakeholders with competing interests to make any sort of headway in putting guardrails in place. All it takes is one non-compliant partner — China, for example — or an individual rogue actor to undermine whatever safety regime is put in place.
Final takeaway.
In college, I studied political science, history, international relations, and Middle Eastern studies. I researched heavy topics like suicide terrorism, traveled to active conflict zones to study sovereignty, and wrote more research papers than I can count on the rapidly changing security environment. This is the first book I’ve read that I had to constantly put down because of the weight of its message.
We are not equipped to handle the challenges that lie ahead. Period.
As Suleyman notes in the book, “head-in-the-sand” is the default ideology governing the regulation of emerging technologies. Everyone from entry-level white-collar workers to sitting heads of state would rather stick their heads in the sand, hoping all of this will just go away.
One of the biggest challenges with containment is that we need to prepare for what could go wrong without knowing the different possible scenarios that could unfold. That’s why acknowledging the coming wave and talking about it is critically important. You can’t prepare for something if you continue to ignore its existence.
The other challenge is that these new technologies are essential. Even though there will be catastrophic risks by proliferating these technologies, we have to let them advance. Our existence depends on it.
Counterbalancing the risks are a number of solutions we need to pursue. Yes, synthetic biology can create a deadly pathogen that might make the Black Death look insignificant. But that same technology could eliminate cancer or other life-threatening diseases while improving our quality of life.
What’s important to recognize is that applications of these new technologies don’t spontaneously come into existence. Technologies are fundamentally ideas. That’s why containing these technologies is paradoxical in nature.
To successfully contain a technology means to contain the ideas that bring technologies to life. The only way to contain ideas is to limit the freedoms of the people generating them. Whether we are ready to accept this premise or not, a tech-dominated future is going to be inherently undemocratic in nature. The deeper down the rabbit hole we go, the less free we will become. There is no way around it.
Suleyman admits the path forward is narrow, but it is a path we must walk. We need more oversight and regulation without going down the slippery slope that ushers in a Chinese-style surveillance state. At the same time, we need safety, but not at the expense of economic progress.
The question is how much are we willing to give in order to preserve the thing that matters most — our humanity.
The good news is that we aren’t completely powerless. We, the consumers, dictate how fast these new technologies proliferate or whether they proliferate at all. Tech entrepreneurs and investors can create applications for these technologies to their heart’s content. But if there isn’t consumer demand for these technologies, they’ll eventually wither.
There is power in collective action. It is up to all of us to establish the boundaries dictating how technology will coexist with our lives moving forward.
The book is The Coming Wave by Mustafa Suleyman. It is essential reading for every citizen who wants to preserve democracy, freedom, and human dignity. As more and more new technologies come online, it is your individual consent and acquiescence that will determine how fast these technologies proliferate and the direction in which they do.