Wharton Digital Press


Kartik Hosanagar  |  March 21, 2019

The First Self-Driving Fatality — And Why The Public Viewed It As an Omen



Editor’s note: Kartik Hosanagar’s new book, A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control, explores how algorithms affect the lives of millions of people. In the following excerpt from the book, Hosanagar examines the first known fatality in a self-driving car and how it affected public perception of — and trust in — algorithms.

On a beautiful Saturday afternoon in May 2016, on a sunny stretch of road in northern Florida, Joshua Brown, a forty-year-old entrepreneur and technology enthusiast from northeastern Ohio, was sitting behind the wheel of his Tesla Model S sedan. He’d just spent a week with family at Disney World, they had all said goodbye that morning, and he was now driving to a business meeting for a company he had started five years earlier, to help bring internet access to rural areas.

At about twenty minutes before 5:00 p.m., Brown’s car was zipping along U.S. Highway 27A when a semi carrying blueberries in the opposite direction pulled into a left-turn lane and then crossed the road ahead of him. Reports suggest that the truck driver ought to have waited for Brown to pass, but there was still sufficient time for Brown to slow down.

The Tesla, which Brown had put into self-driving Autopilot mode, failed to register the white truck against the bright sky. Brown himself failed to take control and engage the brakes. The car crashed into the side of the truck-trailer at seventy-four miles an hour, then continued under it until hitting a utility pole, spinning, and finally coming to rest. Brown, investigators believe, was killed almost instantly upon the Tesla’s impact with the truck.

Brown’s death is the first known fatality in a car operating in self-driving mode, and it got a lot of attention in the technology and automobile worlds. Some media commentators and industry analysts had already faulted Tesla for including Autopilot in its cars because the technology was still in beta mode. Others had criticized the company for not doing more to ensure that drivers remained actively in control of their vehicles while Autopilot was engaged. Less than a month before the accident, Elon Musk, Tesla’s founder, had promoted a video Brown himself had made of a different Tesla Autopilot experience, wherein the car successfully noted and avoided a truck pulling ahead of it. Now, after the fatal accident, Musk found himself defending Autopilot as a lifesaving technology that, when used correctly, would reduce the overall number of vehicle fatalities.

Most experts agree. In more than 90 percent of conventional crashes, human error is to blame. According to some estimates, self-driving cars could save up to 1.5 million lives just in the United States and close to 50 million lives globally in the next fifty years. Yet in an April 2018 poll, 50 percent of the respondents said they believed autonomous cars were less safe than human drivers. After the Tesla crash, consumers were apoplectic. “This dangerous technology should be banned. Get it off the road. The public streets are not a place to experiment with unproven self-driving systems,” wrote a San Franciscan commenting in a discussion forum. Clearly, people viewed Brown’s death as less an anomaly than a harbinger of things to come. If robots were going to take over our roads, they deserved serious scrutiny first. The National Transportation Safety Board, which is responsible for investigating airplane and train accidents, among other things, launched an inquiry.


The NTSB published its report in June 2017. Among the findings were that Brown had used the car’s Autopilot mode on an inappropriate road; Tesla’s manuals had instructed that it be used only on highways “where access is limited by entry and exit ramps,” not where a truck might make a left turn across two lanes of oncoming traffic. Moreover, where Tesla stated that a “fully attentive driver” should oversee the car’s actions even in Autopilot, Brown had been inattentive for at least thirty seconds before the crash. He may have grown inattentive because he had successfully used Autopilot many times in the past and had become increasingly comfortable with the feature. The report’s authors included advice to car manufacturers: “Until automated vehicle systems mature, driver engagement remains integral to the automated driving system.” It is the carmakers’ responsibility, they said, to build systems that ensure drivers remain engaged.

From A HUMAN’S GUIDE TO MACHINE INTELLIGENCE, by Kartik Hosanagar, published on March 12, 2019, by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2019 by Kartik Hosanagar.