Autonomous Driving ‘Sees’ the Future

Autonomous driving has started to move from hype to reality, but it might be some time before autonomous vehicles (AVs) can safely carry their passengers to a destination while they watch a movie or admire the scenery. In this still-blurry picture, camera, radar and LiDAR units are the “eyes” of the vehicles, mapping the road to full autonomy.

To gain an objective view of the present situation and prospects, EE Times consulted Pierrick Boulay, senior technology and market analyst in the Photonics and Sensing Division at Yole Intelligence, part of Yole Group. Pierre Cambou, principal analyst in the Photonics and Sensing Division at Yole Intelligence, also contributed to the analysis.

Yole Intelligence’s Pierrick Boulay

“It is clear that the automotive industry has underestimated how difficult it would be to develop autonomous driving features,” Boulay said. “Ten years ago, the industry expected that autonomous driving would be more common. It was one of Tesla’s promises, and if we look at where we are today, Tesla has still not achieved full autonomous driving features.”

So far, the only truly automated driving features have come from European and Japanese OEMs, and they remain limited to highways at speeds of up to 60 km/h, Boulay said. “It is almost a useless feature and quite far from what people expected 10 years ago.”

During Tesla’s Autonomy Day in April 2019, CEO Elon Musk made a bold prediction: “By the middle of next year, we’ll have over a million Tesla cars on the road with full self-driving hardware, feature complete, at a reliability level that we would consider that no one needs to pay attention [to the road].”

A wave of euphoria swept through the automotive industry. Tesla’s stock price rocketed, and investors poured jaw-dropping amounts of money into startups as optimists claimed AVs were just around the corner.

A sense of disillusionment came when the National Highway Traffic Safety Administration announced it had received incident reports for 392 crashes related to partial self-driving and driver-assistance systems in the 10 months between July 1, 2021, and May 15, 2022. Almost 70% of them, or 273, were Tesla vehicles using Autopilot or the Full Self-Driving beta, while Honda cars were involved in 90 crashes and Subaru models in 10.

The AV era is forging ahead and continues to pledge enhanced driving safety. As of March 20, 2023, 42 companies had received permits to test with safety drivers in California, and seven companies held driverless test permits for specific areas of the state. Three companies—Cruise, Nuro and Waymo—held deployment permits.

Yet full autonomy is taking longer to deliver than promised, and the need for capital expenditure is exploding. From the perspective of perception sensors, why is autonomous driving still a distant dream?

What makes it so hard

There is no compromise and no negotiation: the safety of drivers, passengers and other road users is the No. 1 priority. OEMs must target zero fatalities when implementing automated-driving features. “When we look at the few cars enabling such features, they all have in common the use of three main sensors: camera, radar and LiDAR,” Boulay said. “There is no competition between these sensors; on the contrary, they are complementary to enable redundancy and diversity of data.”
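
To illustrate how that redundancy could be exploited in software, here is a minimal sketch assuming a simple cross-modality voting rule; it is hypothetical and not any OEM’s actual fusion stack. An obstacle is confirmed only when at least two independent sensor types report it at a consistent distance.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "camera", "radar" or "lidar"
    distance_m: float  # estimated distance to the object
    confidence: float  # detector confidence, 0..1

def confirmed_obstacle(detections, min_modalities=2, max_spread_m=2.0):
    """Confirm an obstacle only if enough independent sensor types agree.

    Hypothetical voting rule: at least `min_modalities` distinct sensor
    types must report the object, and their distance estimates must lie
    within `max_spread_m` of each other.
    """
    if not detections:
        return False
    modalities = {d.sensor for d in detections}
    if len(modalities) < min_modalities:
        return False
    distances = [d.distance_m for d in detections]
    return (max(distances) - min(distances)) <= max_spread_m

# Example: camera and LiDAR agree on an object roughly 35 m ahead.
frame = [Detection("camera", 35.2, 0.84), Detection("lidar", 34.8, 0.97)]
print(confirmed_obstacle(frame))  # True: two modalities agree within 2 m
```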

The first hurdle is developing LiDAR at an affordable price. While cameras and radars have an average selling price (ASP) of between $30 and $50, LiDAR ASPs sit between $500 and $1,000. The investment is so high for OEMs that this sensor combination can be implemented only in high-end cars today, Boulay said.
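
A back-of-envelope calculation makes the cost gap concrete. The sketch below uses the ASP ranges quoted above; the sensor counts are purely illustrative assumptions, not any OEM’s bill of materials.

```python
# Back-of-envelope sensor cost estimate using the ASP ranges quoted above.
# The sensor counts are illustrative assumptions, not a real bill of materials.
asp_usd = {                 # (low, high) average selling price per unit
    "camera": (30, 50),
    "radar": (30, 50),
    "lidar": (500, 1000),
}
suite = {"camera": 8, "radar": 5, "lidar": 1}   # hypothetical sensor counts

low = sum(count * asp_usd[sensor][0] for sensor, count in suite.items())
high = sum(count * asp_usd[sensor][1] for sensor, count in suite.items())
print(f"Estimated sensor cost: ${low}-${high} per vehicle")
# Under these assumptions, the single LiDAR accounts for more than half
# of the low-end total on its own.
```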

Another major challenge for OEMs is to process all the data from the myriad of sensors with very low latency to enable real-time perception of the car’s environment. Boulay commented, “Difficulties rise exponentially with autonomy level while the industry thought it would be linear—incremental, per se. Going from Level 2 to Level 3 is, in fact, an x100 step in terms of computing power and sensing capability and, therefore, cost.”
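
One way to picture the latency problem is as a per-frame processing budget: everything the sensors deliver must be fused and interpreted before the next perception cycle begins. The sketch below is a simplified budget check; the data rates, sensor counts and processing throughput are assumed figures for illustration only.

```python
# Simplified per-frame latency budget check. All figures are assumptions
# chosen for illustration, not measurements of any real platform.
sensor_rate_mbps = {"camera": 1000, "radar": 10, "lidar": 100}  # raw Mb/s each
suite = {"camera": 8, "radar": 5, "lidar": 1}                   # sensor counts
frame_period_ms = 33          # ~30 Hz perception cycle
throughput_mbps = 40_000      # assumed usable processing throughput

total_mbps = sum(sensor_rate_mbps[s] * n for s, n in suite.items())
processing_ms = total_mbps * frame_period_ms / throughput_mbps
headroom_ms = frame_period_ms - processing_ms
print(f"{total_mbps} Mb/s of sensor data; ~{processing_ms:.1f} ms of the "
      f"{frame_period_ms} ms frame consumed, {headroom_ms:.1f} ms of headroom")
# Adding sensors or raising resolution eats that headroom quickly, which is
# one way to read the jump in computing demand described above.
```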

More development is also needed to enable automated driving at night and in adverse weather conditions such as fog, heavy rain, snow and wind. Waymo recently published one video showing its robotaxis driving autonomously without a safety driver in light rain, and a second showing them at night in rainy conditions. “It is possible, and more work is needed by automotive players,” Boulay said.

L2+, L2++ and L3–

Autonomous driving is awash with technical terminology and jargon, and it is not always easy to navigate, especially after Mobileye CEO Amnon Shashua laid out a new taxonomy at CES 2023 based on four axes: eyes-on/eyes-off, hands-on/hands-off, driver versus no driver, and the requirement for a minimum-risk maneuver (MRM).

Level 2 (L2) is widely known as the stage of partial automation on the zero-to-five scale defined by the Society of Automotive Engineers (SAE). Yet the intermediate levels L2+ and L2++ were coined and popularized to bridge the gap between L2, which covers hands-off applications, and L3, which covers eyes-off applications.
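
As a rough mental model of how these labels are used in marketing, the sketch below maps them to the driver’s obligations, following the hands-off/eyes-off reading above; it is a simplification for illustration, not an official SAE or Mobileye definition.

```python
# Simplified mapping of marketed autonomy labels to driver obligations,
# following the hands-off/eyes-off distinction above. Illustrative only;
# not an official SAE or Mobileye definition.
labels = {
    "L2":   {"hands": "off", "eyes": "on",  "note": "partial automation, driver responsible"},
    "L2+":  {"hands": "off", "eyes": "on",  "note": "wider operating conditions, driver responsible"},
    "L2++": {"hands": "off", "eyes": "on",  "note": "close to L3 capability, driver still responsible"},
    "L3":   {"hands": "off", "eyes": "off", "note": "system responsible within defined conditions"},
}

for level, attrs in labels.items():
    print(f"{level}: hands-{attrs['hands']}, eyes-{attrs['eyes']} ({attrs['note']})")
```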

“The new acronyms, L2+, L2++ and L3–, are ways to buy time, as each incremental generation does not jump fully to the next engineering definition in terms of autonomy,” said Boulay, who sees them as a marketing ploy to tell consumers that cars are slightly better than the previous generation.

For eyes-off applications, different use cases are possible: highway driving at low speed, highway driving at high speed, highway driving with on/off ramps and so on. These use cases will require more sensors or higher-performing sensors, and each new generation of sensors will enable new use cases. For example, Boulay noted, a second-generation LiDAR can enable only highway driving at low speeds, while a third-generation LiDAR will enable highway driving at high speeds.

Automated driving also requires in-cabin sensing solutions, especially to monitor the driver for possible distraction or drowsiness and determine whether the driver can regain control of the car if needed.

Seeing the road ahead

A vehicle may be driving under a big, blue sky one moment and through a downpour the next. Sensors must be strategically positioned to provide continuous information about the vehicle’s environment and to monitor changing conditions. Nevertheless, there are technical limitations to sensor placement. Condensation in a headlight, for example, can prevent a LiDAR from working, and freezing temperatures in snow or cold weather can cause sensors to malfunction. Infrared sensors cannot see through glass and so cannot be placed behind a windshield.

More and more sensors are being deployed in the vehicle to proactively respond to safety issues, but their number cannot grow indefinitely for cost and integration reasons. “Nobody wants to have a car that looks like a robotaxi,” Boulay said. “Each sensor must be perfectly integrated or hidden, like radars.”

In cars, he continued, radars can be dedicated to short-/mid-range applications and to long-range applications. These are different radars, implemented in different locations of the car. “LiDARs implemented in cars are mostly for long range, but in the future, we also expect to see short-range LiDARs for use cases like automated lane change or inter-urban driving applications between highways or even for city driving, where it is necessary to monitor the surroundings of the car perfectly to avoid any blind zones.”

Sensors generate massive amounts of data every second, and systems are heavily limited by processing power. Moving forward, it will not only be a matter of the number of sensors implemented but also a question of the performance of these sensors, Boulay concluded.