Meltdown
What Plane Crashes, Oil Spills, and Dumb Business Decisions Can Teach Us About How to Succeed at Work and at Home
- Publisher
- Penguin Group Canada
- Initial publish date
- Mar 2019
- Category
- Organizational Behavior, Disasters & Disaster Relief, Infrastructure
Paperback / softback
- ISBN
- 9780735233348
- Publish Date
- Mar 2019
- List Price
- $24.00
Description
Winner of the 2019 National Business Book Award
A groundbreaking take on how complexity causes failure in all kinds of modern systems—from social media to air travel—this practical and entertaining book reveals how we can prevent meltdowns in business and life.
A crash on the Washington, D.C. metro system. An accidental overdose in a state-of-the-art hospital. An overcooked holiday meal. At first glance, these disasters seem to have little in common. But surprising new research shows that all these events—and the myriad failures that dominate headlines every day—share similar causes. By understanding what lies behind these failures, we can design better systems, make our teams more productive, and transform how we make decisions at work and at home.
Weaving together cutting-edge social science with riveting stories that take us from the frontlines of the Volkswagen scandal to backstage at the Oscars, and from deep beneath the Gulf of Mexico to the top of Mount Everest, Chris Clearfield and András Tilcsik explain how the increasing complexity of our systems creates conditions ripe for failure and why our brains and teams can't keep up. They highlight the paradox of progress: Though modern systems have given us new capabilities, they've become vulnerable to surprising meltdowns—and even to corruption and misconduct.
But Meltdown isn't just about failure; it's about solutions—whether you're managing a team or the chaos of your family's morning routine. It reveals why ugly designs make us safer, how a five-minute exercise can prevent billion-dollar catastrophes, why teams with fewer experts are better at managing risk, and why diversity is one of our best safeguards against failure. The result is an eye-opening, empowering, and entirely original book—one that will change the way you see our complex world and your own place in it.
About the authors
Awards
- Winner, National Business Book Award
Contributor Notes
CHRISTOPHER CLEARFIELD is a former derivatives trader who worked in New York, Hong Kong, and Tokyo. He is a licensed commercial pilot and a graduate of Harvard University, where he studied physics and biology. Chris has written about complexity and failure for The Guardian, Forbes, and the Harvard Kennedy School Review. He lives in Seattle.
ANDRÁS TILCSIK holds the Canada Research Chair in Strategy, Organizations, and Society at the University of Toronto's Rotman School of Management. He has been recognized as one of the world's top forty business professors under forty and as one of thirty management thinkers most likely to shape the future of organizations. The United Nations named his course on organizational failure as the best course on disaster risk management in a business school. He lives in Toronto.
Excerpt: Meltdown: What Plane Crashes, Oil Spills, and Dumb Business Decisions Can Teach Us About How to Succeed at Work and at Home (by (author) Chris Clearfield & András Tilcsik)
I.
It was a warm Monday in late June, just before rush hour. Ann and David Wherley boarded the first car of Metro Train 112, bound for Washington, DC, on their way home from an orientation for hospital volunteers. A young woman gave up her seat near the front of the car, and the Wherleys sat together, inseparable as they had been since high school. David, sixty-two, had retired recently, and the couple was looking forward to their fortieth wedding anniversary and a trip to Europe.
David had been a decorated fighter pilot and Air Force officer. In fact, during the 9/11 attacks, he was the general who scrambled fighter jets over Washington and ordered pilots to use their discretion to shoot down any passenger plane that threatened the city. But even as a commanding general, he refused to be chauffeured around. He loved taking the Metro.
At 4:58 p.m., a screech interrupted the rhythmic click-clack of the wheels as the driver slammed on the emergency brake. Then came a cacophony of broken glass, bending metal, and screams as Train 112 slammed into something: a train inexplicably stopped on the tracks. The impact drove a thirteen-foot-thick wall of debris—a mass of crushed seats, ceiling panels, and metal posts—into Train 112 and killed David, Ann, and seven others.
Such a collision should have been impossible. The entire Washington Metro system, made up of over one hundred miles of track, was wired to detect and control trains. When trains got too close to each other, they would automatically slow down. But that day, as Train 112 rounded a curve, another train sat stopped on the tracks ahead—present in the real world, but somehow invisible to the track sensors. Train 112 automatically accelerated; after all, the sensors showed that the track was clear. By the time the driver saw the stopped train and hit the emergency brake, the collision was inevitable.
As rescue workers pulled injured riders from the wreckage, Metro engineers got to work. They needed to make sure that other passengers weren’t at risk. And to do that, they had to solve a mystery: How does a train twice the length of a football field just disappear?
II.
Alarming failures like the crash of Train 112 happen all the time.
Take a look at this list of headlines, all from a single week:
CATASTROPHIC MINING DISASTER IN BRAZIL
ANOTHER DAY, ANOTHER HACK: CREDIT CARD-STEALING MALWARE HITS HOTEL CHAIN
HYUNDAI CARS ARE RECALLED OVER FAULTY BRAKE SWITCH
STORY OF FLINT WATER CRISIS, “FAILURE OF GOVERNMENT,” UNFOLDS IN WASHINGTON
“MASSIVE INTELLIGENCE FAILURE” LED TO THE PARIS TERROR ATTACKS
VANCOUVER SETTLES LAWSUIT WITH MAN WRONGFULLY IMPRISONED FOR NEARLY THREE DECADES
EBOLA RESPONSE: SCIENTISTS BLAST “DANGEROUSLY FRAGILE GLOBAL SYSTEM”
INQUEST INTO MURDER OF SEVEN-YEAR-OLD HAS BECOME SAGA OF THE SYSTEM’S FAILURE TO PROTECT HER
FIRES TO CLEAR LAND SPARK VAST WILDFIRES AND CAUSE ECOLOGICAL DISASTER IN INDONESIA
FDA INVESTIGATES E. COLI OUTBREAK AT CHIPOTLE RESTAURANTS IN WASHINGTON AND OREGON
It might sound like an exceptionally bad week, but there was nothing special about it. Hardly a week goes by without a handful of meltdowns. One week it’s an industrial accident, another it’s a bankruptcy, and another it’s an awful medical error. Even small issues can wreak great havoc. In recent years, for example, several airlines have grounded their entire fleets of planes because of glitches in their technology systems, stranding passengers for days. These problems may make us angry, but they don’t surprise us anymore. To be alive in the twenty-first century is to rely on countless complex systems that profoundly affect our lives—from the electrical grid and water treatment plants to transportation systems and communication networks to healthcare and the law. But sometimes our systems fail us.
These failures—and even large-scale meltdowns like BP’s oil spill in the Gulf of Mexico, the Fukushima nuclear disaster, and the global financial crisis—seem to stem from very different problems. But their underlying causes turn out to be surprisingly similar. These events have a shared DNA, one that researchers are just beginning to understand. That shared DNA means that failures in one industry can provide lessons for people in other fields: dentists can learn from pilots, and marketing teams from SWAT teams. Understanding the deep causes of failure in high-stakes, exotic domains like deepwater drilling and high-altitude mountaineering can teach us lessons about failure in our more ordinary systems, too. It turns out that everyday meltdowns—failed projects, bad hiring decisions, and even disastrous dinner parties—have a lot in common with oil spills and mountaineering accidents. Fortunately, over the past few decades, researchers around the world have found solutions that can transform how we make decisions, build our teams, design our systems, and prevent the kinds of meltdowns that have become all too common.
This book has two parts. The first explores why our systems fail. It reveals that the same reasons lie behind what appear to be very different events: a social media disaster at Starbucks, the Three Mile Island nuclear accident, a meltdown on Wall Street, and a strange scandal in small-town post offices in the United Kingdom. Part One also explores the paradox of progress: as our systems have become more capable, they have also become more complex and less forgiving, creating an environment where small mistakes can turn into massive failures. Systems that were once innocuous can now accidentally kill people, bankrupt companies, and jail the innocent. And Part One shows that the changes that made our systems vulnerable to accidental failures also provide fertile ground for intentional wrongdoing, like hacking and fraud.
The second part—the bulk of the book—looks at solutions that we can all use. It shows how people can learn from small errors to find out where bigger threats are brewing, how a receptionist saved a life by speaking up to her boss, and how a training program that pilots initially dismissed as “charm school” became one of the reasons flying is safer than ever. It examines why diversity helps us avoid big mistakes and what Everest climbers and Boeing engineers can teach us about the power of simplicity. We’ll learn how film crews and ER teams manage surprises—and how their approach could have saved the mismanaged Facebook IPO and Target’s failed Canadian expansion. And we’ll revisit the puzzle of the disappearing Metro train and see how close engineers were to averting that tragedy.
Editorial Reviews
Praise for Meltdown:
"Endlessly fascinating, brimming with insight, and more fun than a book about failure has any right to be, Meltdown will transform how you think about the systems that govern our lives. This is a wonderful book."
—Charles Duhigg, author of The Power of Habit and Smarter Faster Better
“It is rare to have the pleasure of reading a book that tackles a complex issue and provides a new way of thinking that is both rigorous and practical. Meltdown is such a book. I not only enjoyed it but also learned a lot about the world—most of it utterly counterintuitive—and even something important about myself. A valuable read for anyone who would rather shape their world than just let it happen to them.”
—Roger Martin, author of The Design of Business
“As technology advances, it brings an explosion of complexity and interdependence that can threaten our most critical systems and organizations in unforeseen ways. Meltdown is essential reading for anyone who seeks to understand these dangers and what can be done to address them.”
—Martin Ford, author of Rise of the Robots
“Too often, we blame failures on bad apples when the real culprits are bad barrels. This engaging, evidence-based book sheds light on why blunders and bankruptcies happen—and how you can get better at designing systems to prevent them.”
—Adam Grant, author of Originals and co-author of Option B
“Meltdown is essential reading for any leader. We are all human. We all make mistakes. But in complex, whirlwind environments, those mistakes can spiral quickly out of control. This book can help.”
—Anne-Marie Slaughter, author of Unfinished Business
“Meltdown is not for the faint of heart. In crisp, compelling prose, Chris Clearfield and András Tilcsik explain why failures occur so often in today’s unfathomably complex systems. Their insights and takeaways offer crucial guidance for avoiding your own disasters.”
—Daniel H. Pink, author of To Sell Is Human and Drive