Warning: Driverless Cars Are Farther Than They Appear



An Uber driverless Ford Fusion drives down Smallman Street this month in Pittsburgh. (Jeff Swensen/Getty Images)

September saw big developments on the road to robotic vehicles. Uber inaugurated public test rides of its automated fleet in Pittsburgh; Elon Musk rolled out improvements to Tesla’s Autopilot driving mode; and the federal government released much-needed guidelines for deployment of the technology.

These steps came after months of global partnerships and investments related to autonomous cars. Scores of startups and auto and tech companies, including Google, Drive.ai, Ford, General Motors, Toyota, nuTonomy, Baidu and Delphi, are cumulatively logging millions of test miles in automated fleets. The cars are loaded with state-of-the-art LiDAR, radar and camera-based sensor systems from suppliers such as Velodyne and Mobileye, and packed with massive data-processing capabilities courtesy of Nvidia and Intel.

You’d be forgiven for thinking our robotic car future is just around the corner, maybe months away. It’s not.

The new federal framework sets useful guidelines for states and companies seeking to test advanced vehicles on public roads, but at this stage they are recommendations, not regulations. Technological shortcomings, ethical concerns and even cybersecurity fears must be resolved before cars and trucks under the control of artificial intelligence can outperform fallible humans.

“There’s reason to believe they are safer than human drivers and offer a host of other advantages, but this leaves government agencies in a difficult situation because belief that the technology is better is not going to cut it in the eyes of many consumers,” said Nidhi Kalra, senior information scientist at RAND Corp. and co-director of its Center for Decision Making Under Uncertainty. “One of the big challenges is if you can’t quantify the risk or risk-reduction of autonomous vehicles, how can you manage it?”


Gill Pratt, Toyota’s executive technical adviser and CEO of its new Toyota Research Institute in Palo Alto, California, sees a simple, albeit time-consuming, path to validating the technology: More practice.

“Up to now, our industry has measured on-road reliability of autonomous vehicles in the millions of miles, which is impressive,” Pratt said at the Consumer Electronics Show in Las Vegas last January. “To achieve full autonomy we actually need reliability that’s a million times better. We need trillion-mile reliability.”

By “full autonomy,” Pratt is referring to classifications created by the Society of Automotive Engineers to define stages of autonomous driving, ranging from none to full robotic driving at Level 5.

“Level 5 means the car will autonomously take you from anywhere to anywhere at any time under any circumstances,” Pratt said in a recent interview at TRI headquarters. Current vehicles from automakers including Mercedes-Benz and Tesla, with its semi-automated Autopilot feature, to prototype robotic cars from Google, Uber and Volvo are still at Level 2, with varying levels of advanced driver-assist functions, he said.

At Level 2, drivers must remain watchful and ready to take control when circumstances grow too challenging for the car. Adaptive cruise control, lane-keep assist, automated braking, rear- and forward-facing cameras, and sensors that monitor other vehicles and objects on the road are helpful driver-assist technologies, and increasingly standard on new vehicles. But none of them can be relied on in all circumstances. And while Tesla's Autopilot allows automated lane changes and now draws more detailed information about a vehicle's surroundings from its radar sensors, it remains a Level 2 system.

When do we get to Level 5, the Holy Grail of robotic car technology?

"We don’t know," said Pratt, previously a program manager at the U.S. Defense Advanced Research Projects Agency, which kicked off the self-driving vehicle push with its DARPA Grand Challenge robotic car races. "Here’s what we can say: Level 5 is extremely hard."

Toyota has said it will demonstrate an autonomous vehicle, presumably with at least Level 4 capability, by 2020, in time for the Tokyo Olympics. Franco-Japanese alliance partners Renault-Nissan aim to introduce vehicles with similar autonomous capability around the same time. Ford plans a Level 4 autonomous vehicle for a commercial rideshare program by 2021, and numerous automakers also aim to have competing products on the market by the early 2020s.

On SAE's scale, Level 4 vehicles drive themselves almost all the time, excluding extreme weather, bad road conditions or when mapping data is insufficient. To get to Level 4 or 5 capability, engineers working on self-driving systems are trying to create sensors that are both more powerful and cheaper than current versions to cut the cost of hardware needed for a vehicle to see its surroundings from tens of thousands of dollars to just a few hundred.

Using deep learning algorithms, automotive AI systems are being trained in labs around the world on countless images of road and traffic conditions, pedestrian behavior, traffic signs and signals. Yet designing AI that can assess conditions it has not seen before, uncommon "edge" cases that require human-like reasoning, remains a significant hurdle, said Andrew Moore, dean of Carnegie Mellon University's School of Computer Science.

"It's that last 10%, and then that last 1% of circumstances that gets very, very difficult to solve," Moore said in a recent interview. "Autonomous in-town driving, I'm still putting it at 2028."

Where humans can use common sense reasoning when engaged in complex tasks, AI still struggles, he said.

"There's a kid playing on the sidewalk who suddenly decides to run out in front of the car. We can react to that," Moore said. "We can just about start to have algorithms that can do that, but I want to see at least five or six years of intense experimentation, maybe with robot children, to ensure we've got it right."

A mattress unexpectedly falling from a truck on the highway, a deer jumping into a vehicle's path on an icy road or an unfamiliar road surface all pose challenges for automotive AI. How wide a berth does an autonomous vehicle give a bicyclist on a narrow two-lane street? Does it briefly move into oncoming traffic to maneuver around the rider? Kalra wonders.

Combating cybersecurity risks is also a major concern: the wireless connections driverless cars need to access cloud-based data and mapping networks open the door to hackers.

“Cybersecurity is becoming a key technology for the automotive industry as connected cars grow,” Egil Juliussen, research director for IHS Markit, said in a statement. “It is especially important for self-driving and driverless cars where it will be required.”

IHS this week issued a report estimating that a new market for software designed to limit cybersecurity threats will grow to $759 million by 2023 as automated vehicle technology proliferates.

"Makers of this technology need to achieve a level of performance we wouldn’t expect from a human driver," Kalra said. "What kind of risk are we willing to accept to gain some of the benefits of the technology? Being better than the average human is not good enough, even though that might save lives."

Post by: Alan Ohnsman ,  Forbes Staff