Driverless cars and trucks—or autonomous vehicles (AVs)—offer a tantalizing promise of safer and unclogged roadways. In 2017, 37,150 people died in accidents on America’s roads, reports the National Highway Traffic Safety Administration, up sharply from 32,479 in 2011, and far worse per capita than anywhere else in the Western world. And the United States has ten of the 25 most congested cities globally, according to the Inrix transportation-intelligence group. Cars that drive themselves could reduce crashes to a small fraction of today’s totals while moving people about more efficiently, in larger groups and at faster speeds.

For now, though, these positive outcomes remain speculative. Even as companies start deploying driverless cars on America’s streets, no data exist yet on whether the vehicles are consistently safer than those with human drivers and, if so, under what circumstances. The safety of driverless cars will depend in part on policies adopted by federal, state, and local officials—just as speed limits help keep human drivers from inflicting carnage.

Autonomous vehicles pose a particular challenge for dense cities like New York, which have always had an uneasy relationship with the automobile. But if cities handle the introduction of this new technology right, the payoff won’t just be safer streets; it will be a better quality of life for everyone, whether they travel by car, on foot, or by bike.

As it has done with many recent technological advances, America’s military ignited the autonomous-vehicle revolution. Back in 2000, Congress directed the Defense Department to set a goal that “by 2015, one-third of the operational ground-combat vehicles” would be unmanned. Following the directive, the Pentagon’s Defense Advanced Research Projects Agency (DARPA) began holding contests for driverless vehicles, which their private-sector and academic sponsors raced across the Nevada desert for prize money.

The technology advanced so quickly that, in 2007, DARPA “made it an urban challenge,” Ryan Chin, CEO of Optimus Ride, a Cambridge, Massachusetts–based software company, recently told an Urban Land Institute New York conference. The military had AV teams compete in a mocked-up suburban environment, awarding points for their vehicles’ ability to follow California traffic rules. “Most self-driving vehicle companies around today can be traced back to the teams involved in this challenge,” Chin observed.

Yet confusion remains over exactly what AV tech can do today. At a think-tank gathering held before the Washington (D.C.) Auto Show in January, Talal Al Kaissi, a representative of the United Arab Emirates, got car wonks buzzing when he announced (perhaps jokingly) that he had set his Tesla to autopilot and let the car drive him to the conference while he wrote his presentation. Bryan Reimer, associate director of the New England University Transportation Center at MIT, is more circumspect. Autonomous vehicles won’t be street-ready in “the next 12 months,” he says, but it won’t take “a thousand years, either.” Standard & Poor’s predicts that driverless cars will make up a 2 percent to 30 percent share of vehicle sales by 2030.

An autonomous vehicle relies on external sensors—camera, radar, and laser-based lidar—to “see” what’s around it. Digital maps guide it. Massive processing power enables the car to “decide” instantly what to do with the millions of data inputs—how it should respond, that is, to what’s going on around it. The technology is fast-evolving. Each vehicle “learns” as it drives; as research and development accelerate, the learning process does, too. Waymo, the driverless-car arm of Alphabet, took six years to complete its first million driverless miles. In early 2018, it needed just three months to add another million, reaching 5 million; as of mid-2018, the company has logged 6 million.
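
In code, the sense, fuse, and decide loop described above might look something like the following minimal Python sketch. It is purely illustrative: the Detection class, the thresholds, and the plan_action rules are invented simplifications, not any manufacturer’s actual software.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    """One object reported by a sensor (camera, radar, or lidar)."""
    kind: str                  # e.g., "pedestrian", "vehicle", "barrier"
    distance_m: float
    closing_speed_mps: float   # positive means the gap is shrinking

def fuse(camera: List[Detection], radar: List[Detection],
         lidar: List[Detection]) -> List[Detection]:
    """Naive sensor fusion: pool every detection from all three sensors.
    Real systems weigh, deduplicate, and track objects over time;
    digital-map lookups and "learning" are omitted here for brevity."""
    return camera + radar + lidar

def plan_action(detections: List[Detection], speed_limit_mps: float) -> str:
    """Decide what to do this cycle, using crude hand-tuned thresholds."""
    for obj in detections:
        # Time-to-collision if both parties keep doing what they're doing.
        if obj.closing_speed_mps > 0 and obj.distance_m / obj.closing_speed_mps < 2.0:
            return "emergency_brake"
        if obj.kind == "pedestrian" and obj.distance_m < 15.0:
            return "slow_and_yield"
    return f"cruise_at_{speed_limit_mps:.0f}_mps"

# One pass through the loop with made-up sensor readings.
camera = [Detection("pedestrian", 12.0, 1.0)]
radar = [Detection("vehicle", 40.0, -2.0)]    # pulling away
lidar = [Detection("barrier", 80.0, 0.0)]
print(plan_action(fuse(camera, radar, lidar), speed_limit_mps=13.0))  # slow_and_yield
```

A production system runs a loop like this many times per second, which is why the article stresses processing power: every cycle must fuse fresh sensor data and re-plan before the situation changes.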

The industry and its regulators classify a vehicle’s AV capability on a six-level scale designed by the Society of Automotive Engineers. A Level 0 car is an old-fashioned car, without automation. In a Level 1 car, technology takes over specific, well-defined functions that aren’t critical to life and limb, such as parallel parking. In a Level 2 car, partial automation lets a human operator relinquish more important functions, like steering and braking, but the driver must keep monitoring the environment. A Level 3 car has “conditional automation,” meaning that the driver doesn’t have to monitor the environment but must take over quickly if the car asks him to. A “high automation” Level 4 vehicle can do all driving tasks in “certain circumstances,” with no need for the driver to pay attention. Finally, a “full automation” Level 5 car can do everything, anytime, anywhere.
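
For readers who think in code, the scale amounts to a small lookup table. The Python sketch below simply restates the six levels; the labels track SAE’s terminology, but the one-line summaries paraphrase the descriptions above, not SAE’s official definitions.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE driving-automation levels, paraphrased."""
    NO_AUTOMATION = 0           # old-fashioned car; the human does everything
    DRIVER_ASSISTANCE = 1       # tech handles narrow tasks, e.g., parallel parking
    PARTIAL_AUTOMATION = 2      # car steers and brakes; driver must monitor
    CONDITIONAL_AUTOMATION = 3  # driver may disengage but must retake control on request
    HIGH_AUTOMATION = 4         # no driver attention needed, in certain circumstances
    FULL_AUTOMATION = 5         # everything, anytime, anywhere

def driver_must_monitor(level: SAELevel) -> bool:
    """At Levels 0-2, the human is still responsible for watching the road."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_monitor(SAELevel.PARTIAL_AUTOMATION))      # True
print(driver_must_monitor(SAELevel.CONDITIONAL_AUTOMATION))  # False
```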

From Massachusetts to California, Americans have gradually become familiar with cars in each of these categories. Most people likely associate AV technology with a handful of firms. Starting in 2009, Waymo was the first to experiment with strange-looking cars, outfitted with arrays of external sensors, that drove themselves around the American West. Earlier this year, Waymo deployed driverless Level 4 taxis on select routes in Arizona, producing videos of passengers cheerfully taking selfies in the backseat as the steering wheel up front moves by itself (a remote operator monitors activity and can take over).

For nearly two years, until a crash in Arizona suspended the experiment, Uber was running an AV taxi service in Pittsburgh that qualified as Level 2, with human operators ready to take the wheel. Tesla’s Model S and Model 3 sedans and Model X SUV offer Level 2 capability: the vehicles can do much of the work on many stretches of road, but human drivers must keep their hands at the wheel. Cadillac’s “Super Cruise” feature, sold in the CT6 sedan since last September, makes that car the first commercially available vehicle that lets drivers take their feet off the pedals and hands off the wheel along stretches of divided highway; the car stays in its lane and maintains distance from other vehicles automatically. Cadillac says that it will implement the technology across all its models, as well as on other GM brands, starting in 2020. Beyond these leading companies, virtually every global auto firm is involved in autonomous development, often partnering with startups that provide mapping software, radar hardware, and other support services.

“Even the most sophisticated AV technology can operate only in certain well-defined areas.”

But even Waymo’s Level 4 car isn’t autonomous everywhere, and even the most sophisticated AV technology can operate only in certain well-defined areas—such as the places in Arizona where Waymo has made extensive maps. Uber’s Pittsburgh shuttle confined itself to small geographic areas. And the highest-grade tech is not close to being commercially available. Waymo, Uber, and competitors devote extensive capital investment, maintenance, and care to each autonomous vehicle and cannot mass-produce them at present. Commercially available technology is installed only on select luxury cars; it performs best in navigating on divided highways with clearly painted lanes, with few (if any) stoplights or intersections and with limited or no access for pedestrians and cyclists.

The self-driving Uber shuttle in Pittsburgh underscores the difficulties posed by the greater complexity of urban environments. Things that wouldn’t faze most human drivers confused the shuttle, the Detroit News’s Henry Payne noted last fall—a construction barrier obstructing part of the car’s travel lane, say, or a four-way stop sign. The shuttle’s robotic reactions weren’t always encouraging. While it might have “seen” a worker unloading a truck close to a travel lane, it wouldn’t give that person as wide a berth as a conscientious human would, instead coming too close for the worker’s comfort. “I was surprised how frequently the driver took control of the robotic car,” Payne wrote.

Recent crashes demonstrate the current fallibility of AV technology. In March 2018, a Tesla Model X in “autopilot” mode—in which the car can maintain its lane, change lanes, speed up or slow down, and brake—veered at high speed into a barrier on a Mountain View, California, highway, killing the driver, an Apple engineer. In May 2016, a Tesla Model S in autopilot drove into a large truck crossing a northern Florida highway, also killing the driver. And in March of this year, an Uber prototype SUV in autonomous mode, with a human operator in the front seat, hit a pedestrian in Tempe, Arizona, killing her (and leading Uber to suspend testing around the country).

The need for humans to take over a car within milliseconds after they’ve sat disengaged from the road for a long period may be one of the biggest perils of intermediate AV technology. In all three crashes, the human operator was supposed to be paying attention. But in at least two cases—the Florida Tesla crash and the Arizona Uber crash—the driver stopped doing so, perhaps because the technology encourages complacency. (The Apple engineer’s death remains under investigation.)

Cadillac, with a much longer track record in traditional automotive research and development, is attempting to navigate these difficulties. Its Super Cruise technology is “not something we describe as autonomous,” says Donny Nordlicht, product and tech communications manager for the brand. The company describes the car’s capacity to operate hands- and pedal-free as “driver-assist,” intended to reduce fatigue. The technology is enabled only when the car is on a divided highway that the firm has previously mapped to roughly six times the accuracy of a cell phone’s maps. Cadillac has incorporated other safety features, too. An internal camera monitors the driver’s face and head position to ensure that he is engaged. “If I’m staring at my phone, it will start to alert me,” says Nordlicht, and, eventually, after multiple escalating warnings go unheeded, “come to a complete stop with the hazard lights on.” (Tesla, which didn’t discuss its crashes, also has warning systems to grab zoned-out drivers’ attention; one mystery is why some drivers seem to ignore them.)

There’s no point in prognosticating about when most Americans will be driving—or riding in—Level 4 or 5 cars; it will happen when it happens, if it does. The good news is that we don’t have to wait for the full leap from cars operated at the whims of human drivers to cars that drive themselves before benefiting from tech-enabled safety advances. The most obvious gain: braking. Over the past decade, automakers have added collision warnings and automatic emergency braking to higher-end vehicles. The Insurance Institute for Highway Safety (IIHS) and the Highway Loss Data Institute, both insurance-funded nonprofits, have found that cars equipped with both technologies cut rear-end crashes by 50 percent and rear-end crashes with injuries by 56 percent.

Such improvements may not have pushed overall traffic deaths down because the technology is only beginning to be installed. As of 2016, only 1 percent of registered vehicles on American roads had automatic-braking technology, says IIHS spokesperson Russ Rader. As people replace their cars and as the hardware and software get cheaper, the feature will become common. Last year, more than half of new Toyotas came with automatic emergency braking. Twenty U.S. automakers—including the foreign firms that sell cars here—have agreed to make the technology standard by 2022. As Meera Joshi, New York’s taxi and limousine commissioner, observes, the Toyota Camry, now equipped with standard automatic braking, is a popular model for New York City’s for-hire fleet. Because owners must replace heavily used taxis and other for-hire cars more frequently than average household cars, New York should see a faster improvement in safety.

As AV tech advances, new public issues will arise. One will be determining legal responsibility for crashes. “The biggest change in liability,” predicts Randy Maniloff, an attorney at White & Williams in Philadelphia, will be a shift to blaming the product, at least in part, for accidents. “Right now, when there’s a car accident,” he says, “one person sues the other person.” In the future, though, “manufacturers of [AV] cars are going to be named as defendants.” The legal questions won’t be as straightforward as figuring out whether a part was defective; they could also involve how human drivers interacted with the hardware and software. Automakers already worry that they may incur legal risk if they let drivers override safety features, such as a speed cap imposed when mapping technology recognizes the legal limit on a stretch of road.

A view from inside an autonomous vehicle (NIPIPHON NA CHIANGMAI / ALAMY STOCK PHOTO)

For now, the more pressing need is establishing the right kind of safety regulations for autonomous vehicles. It’s almost taken for granted in tech circles that autonomous vehicles will be safer than human-driven vehicles. Elon Musk, Tesla’s chief executive, has accused AV skeptics of “killing people.” Yet there’s reason to be skeptical in the absence of data. The government measures vehicle deaths per 100 million vehicle-miles traveled. Waymo, the AV sector’s most advanced participant, has logged 6 million miles—too few to know whether its technology performs better or worse than human drivers. Another metric tracks deaths per 100,000 residents, but with early market leader Tesla having sold about 250,000 autopilot-equipped cars to a select group of affluent Americans, the data here, too, are still sparse.
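
A back-of-envelope calculation shows why 6 million miles proves little. Assuming roughly 3.2 trillion vehicle-miles traveled nationwide in 2017 (an approximate federal figure, used here only for scale), the human-driver baseline works out to a bit over one death per 100 million miles; the short sketch below does the arithmetic.

```python
# Rough arithmetic on the fatality-rate yardstick discussed above.
# The 2017 death toll comes from the article; total vehicle-miles traveled
# (~3.2 trillion) is an approximate figure, used here only for scale.
US_DEATHS_2017 = 37_150
US_VMT_2017 = 3.2e12          # vehicle-miles traveled, approximate
WAYMO_MILES = 6e6             # Waymo's cumulative driverless miles as of mid-2018

human_rate = US_DEATHS_2017 / (US_VMT_2017 / 1e8)
print(f"Human drivers: about {human_rate:.2f} deaths per 100 million miles")

# Expected deaths over Waymo's mileage if it were merely as safe as humans:
expected = human_rate * WAYMO_MILES / 1e8
print(f"Expected deaths over {WAYMO_MILES:,.0f} miles at the human rate: {expected:.2f}")
# ~0.07 -- far less than one, so even a spotless record says little either way.
```

At the human rate, a fleet of Waymo’s size would be expected to suffer well under one-tenth of a fatality over 6 million miles, so even a perfect record cannot yet demonstrate that the technology is safer.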

Differences in state approaches to regulation highlight the wisdom of a federalist system, in which states can experiment with how best to handle this disruptive industry. Massachusetts and California, America’s centers of academic and private-sector tech creativity, have each adopted a safety-first stance as driverless-car companies have left simulated environments for real-world tests. In 2014, after months of public hearings and workshops, the Golden State’s department of motor vehicles issued the state’s first AV regulations. Before they could put an autonomous vehicle on public roadways, firms had to certify that they had extensively tested their vehicles under real-life conditions at research sites. A trained operator would have to sit in the driver’s seat of any car, ready to take over at any time. The application process, while stringent, hasn’t deterred innovation: more than 50 companies, from AutoX to Zoox, now hold California permits.

Massachusetts has taken an even more conservative approach, leaving important details up to individual cities. Boston says that it’s adopting a “very graduated” philosophy. Since last year, it has approved three firms, including Chin’s Optimus Ride, for testing in limited areas in cars with a safety driver on board. Boston requires companies to master simple driving conditions before okaying them to move on to more difficult ones, such as bad weather or more complex roads. Nearby Cambridge requires two test drivers per vehicle. Like Boston, the city requires firms to prove a “clear progression of increasingly difficult situations,” city council member Quinton Zondervan told the Harvard Crimson. (New York State’s rules for future tests in lower Manhattan, the first in the city, are just as strict: companies must have a trained driver and a second observer up front, and must notify the state police of each route beforehand. GM, the holder of the first permit, has quietly postponed its tests, originally slated for early 2018.)

Until recently, Arizona had taken a much looser approach. In December 2016, contrasting itself with California, Arizona’s department of transportation issued a statement saying that “part of what makes Arizona an ideal place for Uber . . . to test autonomous vehicle technology is that there are no special permits or licensing required.” (Uber had refused to apply for a California permit that same month.) The potential problem with Arizona’s stance became apparent earlier this year, when the Uber car inflicted the first-ever pedestrian death by an autonomous vehicle; since then, the state has tightened its laws. Arizona has a much higher tolerance for auto deaths in general, with a rate of 13.9 per 100,000 residents, compared with 5.7 in Massachusetts, 5.2 in New York, and just under 9.2 in California. Nor did Uber benefit from Arizona’s lax framework: the pedestrian death has set its AV research and development back by months, if not longer.

That different state laws have already coincided with different safety outcomes, even at such an early stage, suggests how important regulatory arrangements will be. As the cars reach the commercial stage, the jockeying over who is in charge will doubtless grow fiercer, but local and state governments should have the ultimate say over their streets.

One regulatory dispute has already flared. The federal government regulates automobiles’ physical safety design, but state governments, through their DMVs, regulate who can drive cars. In cars with no drivers—or with drivers some of the time in some circumstances—this division creates an obvious jurisdictional overlap. A bill languishing in Congress, called AV Start, would resolve this issue in Washington’s favor.

Car design alone doesn’t determine safety: road design and traffic law—responsibilities left mostly to state and local governments—are also key. Consider the March Uber crash in Tempe. Under the general rules of almost any road, the Uber vehicle was at fault. Video from the moments before the crash shows the human operator looking away from the road, distracted. She never tried to take over, even as the vehicle bore down on the pedestrian, 49-year-old Elaine Herzberg, who was wheeling a bicycle across the road. Yet the area’s poor road design and high speed limits were also factors. The road, like many in busy suburbs, is a high-speed arterial, with a narrow sidewalk, few official crossings, and no bike lanes. Better AV technology might have averted this crash; Waymo executives have said that their driverless prototype would have seen Herzberg. But road infrastructure and speed limits designed for pedestrian safety would have reduced the risk, too.

Driverless cars may be safer—but for whom? Two years ago, a Mercedes-Benz executive noted that, when an autonomous car has a “choice” between saving its driver or, say, a pedestrian or another driver, it may be best for it to choose its own driver. “If you know you can save at least one person, at least save that one. Save the one in the car.” The comment sparked a backlash, but existing practice isn’t completely at odds with it. After all, heavier SUVs are meant to protect their occupants in crashes, yet they have contributed to a 46 percent increase in pedestrian deaths since 2009, according to the IIHS. Still, a distracted driver who slams into a bicyclist on a road’s shoulder isn’t acting consciously, or even subconsciously, to sacrifice the cyclist rather than herself. Reducing such crashes means fixing road conditions, installing bike lanes and more pedestrian crossings, and cutting speed limits.

These are questions for local and state government officials, not for driverless-software designers. States already take vastly different approaches to road design and governance, with commensurately different results. In Rhode Island, the crash-death rate per 100,000 residents is 4.8; in Mississippi, it is 23.1. In recent years, New York City, in particular, has defied the national trend of higher auto deaths, largely by redesigning its streets to make more room for pedestrians and cyclists and by discouraging high speeds.

The introduction of autonomous vehicles shouldn’t change a state’s or city’s responsibility—and right—to design and govern streets to keep citizens safe. But bad regulations could make it easy for autonomous vehicles to degrade urbanism. If a fully autonomous car can’t consistently “see” a cyclist crossing an intersection in a bike lane, or if it gets “confused” by a mass of pedestrians, it’s sensible for city officials to ban that car from operating on their streets. If Congress prohibits that solution, however, another tempting but damaging way to deal with the problem would be to restrict access for pedestrians and cyclists. Cities could “end up with pedestrians and cyclists effectively not being able to use urban areas,” warns Oliver Carsten, a transportation-safety professor at Britain’s University of Leeds.

In the mid-twentieth century, cities changed their infrastructure to fit cars, often for the worse. Government rammed big highways through cities so that they could compete with the fast-growing, traffic-friendly suburbs; city neighborhoods emptied out even faster as a result. Government ripped out streetcar lines and neglected mass transit, only to reverse those decisions decades later. In the future, cities may have to resist pressure to build more limited-access highways to avoid “confusing” driverless cars, or to set traffic lights to favor caravans of fast-moving driverless cars over pedestrians and cyclists. “Didn’t we learn in the twentieth century about how many mistakes can be made with the automobile?” asks Carsten.

Tellingly, few AV experts seem interested in how pedestrians react to driverless cars. Wendy Ju, assistant professor in Cornell Tech’s information-science program on Manhattan’s Roosevelt Island, is one exception. Without research, she says, “we’re all guinea pigs.” Ju and her former colleagues at Stanford, where she previously taught, conducted a decidedly low-tech experiment on Stanford’s campus: concealing a driver in a regular car (in a costume that blended in with the front seat) so that the car appeared to be operating autonomously, and seeing how pedestrians reacted. “People did not seem shy to walk in front of the car” at crosswalks, Ju and her colleagues concluded. Of 67 observed pedestrians, “only two people clearly tried to avoid getting in front of the car by walking around it.” Some people explained that their actions were motivated by trust in government and corporations: “a lot of people thought that the car would definitely not be allowed on the road if it couldn’t see them,” she found.

As Ju explains, “the way that cars and pedestrians interact . . . is not completely cooperative.” New Yorkers, in particular, “are very good at this game,” she says. Employing a conservative AV safety model, where a car automatically stops at any hint of pedestrian encroachment, rather than “understanding” that the pedestrian has already correctly judged the spacing and the risk, could create more gridlock—and another temptation to reduce pedestrian access to streets. Though Ju’s research is at an early stage, she notes that “when we’re designing the interactions we have to do a lot more to model people’s behavior.” That behavior, in turn, differs from place to place; in New York, pedestrians expect cars to defer to them; in much of the rest of the country, even when pedestrians have the right of way, such as at a crosswalk, they defer to cars.

“With regard to congestion, just as with safety, what cities do with the new technology will matter profoundly.”

Congestion is indeed a concern, and not just because of autonomous cars stopping too often. If cheap driverless cars proliferate, they could overwhelm cities as well as suburbs with more traffic. New York has already experienced an early version of this, with the explosion in the number of Uber and Lyft cars contributing to record-low traffic speeds in Manhattan over the last five years. “If you make autonomous vehicles convenient . . . this induced demand is going to be a problem,” says Optimus Ride’s Chin. In one possible AV future, then, pedestrians and cyclists would have to contend with a new crush of traffic. They would wait longer at intersections for the crossing light, as cities reengineer streets to favor long caravans of self-driving vehicles. There would be fewer cyclists and pedestrians anyway, as many would succumb to the temptation of the ever-available, ever-cheaper autonomous car. A dwindling number of riders would use an increasingly unreliable subway, and poor, mentally ill, and eccentric people would make up a growing share of the pedestrian population. Safety could suffer for people inside cars, too, at least marginally. “Even if driverless cars are 90 percent safer” than driving oneself in a car, says Sam Schwartz, former city traffic commissioner and the author of an upcoming book on the topic, “transit remains 95 percent safer” than driving oneself. “So if we take people out of transit . . . and put them into robo-cars, on a per-mile basis, they will be less safe.”

With regard to congestion, just as with safety, what cities do with the new technology will matter profoundly. In dense urban environments such as Manhattan, a congestion fee could deter some people from taking driverless cars instead of a train. Dedicated traffic lanes for driverless vans and buses would favor carpooling over individual hailing. This more efficient use of road space could mean less need for traffic lanes, and more room for pedestrians, cyclists, and even parks.

Outside of urban areas, too, the social impact of driverless technology will depend on public and private choices. Efficient public transportation has never taken hold in American suburbs because it doesn’t suit the way they are built, with schools, after-school activities, stores, restaurants, dentists, and doctors often placed miles away, in different directions. Density, not technology, encourages carpooling and public transportation because it increases the chances that many people will want to go to and from the same points simultaneously. Without more urban-style density in suburbs, driverless cars won’t reduce congestion; they may add to it.

Which future we pick won’t depend on the technology, but on us.

Top Photo: A row of Google’s self-driving cars (ERIC RISBERG/AP PHOTO)
