Tomorrow Today
Will AI Kill Us?

The ‘godfather of AI’ thinks it’s possible. Here’s what you should know.

Amanda Claypool
Jun 19, 2025

[Screenshot: Diary of a CEO]

One of the most important debates happening in AI right now isn’t what’s going to happen to you when AI takes your job.

It’s how to build AI safely so it doesn’t wipe out humanity.

This is a concern for Geoffrey Hinton, the “godfather of AI.” A professor of computer science and a Nobel Prize winner, Hinton is a pioneer of artificial intelligence. His work on artificial neural networks – essentially programs that simulate the function of brain cells on a computer – led to the creation of the AI chatbots we use today.

Hinton was recently interviewed on Diary of a CEO where he expressed his concerns about the development of AI and the road we’re heading down. You can watch the full episode here:

A decade ago, Hinton co-founded a company with former student and OpenAI cofounder Ilya Sutskever. That company was acquired by Google where Hinton worked until 2023. Shortly after ChatGPT was released and the dangers of AI began to manifest, Hinton left Google to warn the public about the risk artificial general intelligence could pose if it wasn’t created responsibly.

AI safety is a core part of building AI. It’s arguably as important as the capabilities themselves. But there’s disagreement over how AI safety should be executed, and even over what AI safety means in the first place.

In one camp sit experts like Geoffrey Hinton, Mo Gawdat, Mustafa Suleyman, and Ilya Sutskever. They have warned against building AI too quickly without putting guardrails in place to mitigate catastrophic risks, up to and including a potential mass extinction event.

The AI safety camp is pitted against the likes of Sam Altman and Marc Andreessen, who are optimistic about the AI-powered future and believe we should build it as quickly as possible.

(It’s worth noting that the ones suggesting we should slow down are computer scientists who have each played an instrumental role in the development of AI while the ones suggesting we should push forward are founders and investors with a significant financial stake in AI advancement.)

Safety doesn’t seem to be compatible with the profit-driven business models that dominate our economy. Either you sacrifice short-term profit to fund safety research and put guardrails in place to mitigate future risks, or you don’t. And if you don’t, you deal with the consequences later. Or better yet: make them someone else’s problem.

The debate over AI safety reveals we are at a critical juncture in Western civilization. For the last three centuries, the profit motive has shaped how we govern ourselves and the way we organize society. The ability to own capital, build a company around it, and generate a profit from it is what makes the Western world distinct from every civilization that preceded it.

But it’s clear the profit motive that dominates shareholder capitalism is incompatible with the new AI-powered technological revolution we’re entering. Conflicting interests between safety and profit, combined with the urgency – but improbability – of greater government regulation, mean AI businesses cannot operate under the existing paradigm without imperiling life as we know it.

A new system will have to emerge. And this is where things get real juicy. AI CEOs aren’t just drunk on the prospect of cashing in on one of the greatest financial windfalls in human history. They’re eager to establish their legacy in the history books. They want to create the new system that emerges and the power that comes from it.

Rather than slowing down the development of AI, countries and the companies within them are moving full-steam ahead. They know that whoever controls AI will control the future. Sovereign leaders and corporate executives alike want to be the first ones to reach the finish line before their competitors.

That’s why individuals like Geoffrey Hinton have been going on the podcast circuit for the last two years. They know what’s at stake and they’re trying to warn the public before it’s too late.

Based on current estimates, we are in what I would call an 18-month Goldilocks window. AI has only just started to become disruptive. While the window is shrinking, there is still time to act. But once AI agents come fully online and begin replacing workers at scale, there won’t be anything people – or their governments – can do to stop the wheels of progress from turning.

This essay will dive into some of the key threats Geoffrey Hinton raises in his appearance on Diary of a CEO. It will draw attention to some of the big, existential crises that Western civilization will have to grapple with as AI continues to be developed. It will conclude by recommending action steps you can take to prepare for the AI-powered future that lies ahead.

© 2025 Amanda Claypool