If Anyone Builds It, Everyone Dies

    • 4.2 • 24 Ratings
    • £7.99

Publisher Description

Brought to you by Penguin.

An instant NEW YORK TIMES bestseller

** A Guardian biggest book of the autumn **


AI is the greatest threat to our existence that we have ever faced.

The scramble to create superhuman AI has put us on the path to extinction – but it’s not too late to change course. Two pioneering researchers in the field, Eliezer Yudkowsky and Nate Soares, explain why artificial superintelligence would be a global suicide bomb and call for an immediate halt to its development.

The technology may be complex but the facts are simple: companies and countries are in a race to build machines that will be smarter than any person, and the world is devastatingly unprepared for what will come next.

Could a machine superintelligence wipe out our entire species? Would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares explore the theory and the evidence, present one possible extinction scenario and explain what it would take for humanity to survive.

The world is racing to build something truly new – and if anyone builds it, everyone dies.

'The most important book of the decade' MAX TEGMARK, author of Life 3.0

'A loud trumpet call to humanity to awaken us as we sleepwalk into disaster - we must wake up' STEPHEN FRY

© Eliezer Yudkowsky and Nate Soares 2025 (P) Penguin Audio 2025

GENRE
Non-Fiction
NARRATOR
Rafe Beckley
LANGUAGE
English
LENGTH
6 hr 18 min
RELEASED
18 September 2025
PUBLISHER
Random House
SIZE
325.3 MB

Customer Reviews

Listener228 ,

An excellent explanation and a call for humanity to act

So: Geoffrey Hinton receives a Nobel prize for his foundational work in AI, but says he (partly) regrets his life’s work and thinks there’s >50% chance that AI will literally kill everyone on the planet.

Countless scientists sign the statement that mitigating the risk of extinction from AI should be a global priority.

This book is an excellent explanation why.

It argues that building something that is superhumanly good at achieving goals, in a way that doesn’t cause a catastrophe, is hard. Especially so given the way modern AI is built: no one writes the code it’s made of. Instead of code, a modern AI is hundreds of billions to trillions of numbers that we don’t understand, automatically “grown” until the system becomes smart.

If you have a background in machine learning, you can read the additional online materials the book links to, and read the papers, to understand the technical details: a sufficiently smart goal-oriented AI with long-term goals has instrumental reasons to maximize its reward signal during training, so as to prevent gradient descent (the process by which the numbers inside artificial neural networks are grown) from changing its goals. This means gradient descent selects for capable agentic systems with some long-term goals, without being able to distinguish between the goals we like and the goals we don’t like. An AI’s internals are effectively black boxes. (And we already see this starting to happen empirically: see the alignment-faking paper.)
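For readers unfamiliar with the gradient descent mentioned above, here is a minimal, illustrative sketch (not from the book or its online materials): the model's numbers are not written by a programmer but nudged repeatedly to reduce a loss, which is what "grown" means here. The toy task, fitting a single weight to the rule y = 2x, is an assumption chosen only to keep the example tiny.

```python
# Toy gradient descent: "grow" one parameter w to fit y = 2x.
# Real models do the same thing with hundreds of billions of numbers.
def train(steps=500, lr=0.1):
    w = 0.0                              # the single "grown" number
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
    for _ in range(steps):
        # gradient of mean squared error (w*x - y)^2 with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad                   # step downhill on the loss
    return w

print(round(train(), 3))                 # converges near 2.0
```

No one "wrote" the value of w; it emerged from the optimization pressure, which is the book's point: training shapes the numbers, and we inspect only the outputs, not the goals encoded inside.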

Some of the parables feel unnecessary, but some readers will appreciate how they make the concepts the authors point at more intuitive.

I read an earlier draft and thoroughly enjoyed it then, and I still need to re-read the final version to form a final opinion. But I can say with certainty that the book is well written: the best compressed version of the argument. I would really like to recommend this book to everyone.

Shamrocksteady ,

Boring and convoluted

If you’re not super into tech or AI I’d give it a miss. It’s not meant for the casual reader/listener.

The Singularity is Nearer
2024
Empire of AI
2025
Lights On
2025
Genesis
2024
The Hour of the Predator
2025
The Age of AI
2021