How to Stay Smart in a Smart World
Why Human Intelligence Still Beats Algorithms
Publisher Description
STAYING IN CHARGE: How do we navigate a world populated by algorithms that beat us in chess, find us romantic partners, and tell us to “turn right in 500 yards”?
“Anyone worried about the age of AI will sleep better after reading this intelligent account” about the limits and dangers of technology (Publishers Weekly).
Doomsday prophets of technology predict that robots will take over the world, leaving humans behind in the dust. Tech industry boosters think replacing people with software might make the world a better place—while tech industry critics warn darkly about surveillance capitalism. Despite their differing views of the future, they all seem to agree: machines will soon do everything better than humans. In How to Stay Smart in a Smart World, Gerd Gigerenzer shows why that’s not true, and tells us how we can stay in charge in a world populated by algorithms.
Machines powered by artificial intelligence are good at some things (playing chess), but not others (life-and-death decisions, or anything involving uncertainty). Gigerenzer explains why algorithms often fail at finding us romantic partners (love is not chess), why self-driving cars fall prey to the Russian Tank Fallacy, and how judges and police rely increasingly on nontransparent “black box” algorithms to predict whether a criminal defendant will reoffend or show up in court. He invokes Black Mirror, considers the privacy paradox (people want privacy but give their data away), and explains that social media get us hooked by programming intermittent reinforcement in the form of the “like” button. We shouldn’t trust smart technology unconditionally, Gigerenzer tells us, but we shouldn’t fear it unthinkingly, either.
PUBLISHERS WEEKLY
Gigerenzer (Risk Savvy), director emeritus at the Max Planck Institute for Human Development, offers plausible reassurance for those who fear artificial intelligence is poised to take over human decision-making. The things AI does well, Gigerenzer explains, such as playing chess, occur in strict rules-based environments where there is little or no chance of something unpredictable happening. The AI Watson's vaunted Jeopardy! victory over human champions Ken Jennings and Brad Rutter, for example, was less impressive than it appears, Gigerenzer writes: the game was altered to exclude certain kinds of questions because organizers anticipated the AI would not be able to answer them accurately. Gigerenzer also covers more pressing issues, among them self-driving cars that cannot accurately assess dangers to pedestrians, tech and ads designed to demand attention and distract users, and the large-scale voluntary abandonment of privacy. It amounts to a solid case against "unconditional trust in complex algorithms," arguing that "more computing power and bigger data" won't bridge the gap between machine and mind, because most problems humans face involve "situations in which uncertainty abounds." Anyone worried about the age of AI will sleep better after reading this intelligent account.