How To Practically Enable Vehicle Autonomy In Urban Settings

Posted on: Oct 26, 2020

The auto industry is experiencing a rapid shift to autonomous vehicles (AVs). This evolution is spearheaded by innovative technology companies bringing cutting-edge automotive platforms to market at an unprecedented pace. Vehicles on the road today can already maneuver on their own on highways under the supervision of a human driver. The next logical step in the race to autonomy is self-driving capability in an urban setting, first with a driver present and eventually with humans acting solely as passengers.

However, driving in cities is a far more difficult problem to solve than maneuvering on highways. Urban driving presents varying speeds, vulnerable road users and a higher density of cars packed into narrower roads. To adapt to this market while transitioning to full autonomy, original equipment manufacturers (OEMs) must identify a combination of sensors that not only accurately captures the wealth of real-time visual data in metro settings, but is also cost-efficient, allowing vehicles to scale and remain affordable.

Currently, the sensor configurations required to enable urban autonomy are extremely expensive. Due to the high cost of the supporting hardware, particularly the price of lidar (light detection and ranging) sensors, these solutions are not scalable, especially considering the massive output of vehicles produced by OEMs. OEMs must identify a less expensive combination of sensors to adapt to this huge market while continuing to maintain profit margins.

One feasible solution includes eight standard cameras placed at various locations on the car. Because OEMs require redundancy, the system needs to extract depth information from at least two independent sources. To ensure accuracy, a radar sensor should be placed adjacent to each image sensor (or camera), with an additional lidar sensor integrated onto the vehicle. Currently, the aggregate cost of this configuration is around $8,000, mainly due to the steep price of lidar sensors. Given the volume of vehicles produced by OEMs, this high cost prevents scalability and practicality.

Alternatively, OEMs could consider a more cost-optimized solution that eliminates lidar. Rather than mounting standard cameras individually, arranging the image sensors in stereoscopic pairs enables the system to accurately calculate object-level depth information for other road users, eliminating the need for lidar as a second source of depth. Collectively, this configuration of sensors is cheaper by several thousand dollars and paves a path to a more practical and scalable solution.
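The depth calculation behind a stereoscopic pair is standard triangulation: for two rectified cameras with focal length f (in pixels) and baseline B (the distance between them), an object whose matched features are offset by d pixels between the two images lies at depth Z = f·B/d. A minimal sketch, with all numeric values purely illustrative rather than taken from the article:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from a rectified stereo pair via triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid stereo match")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 1000 px focal length, 30 cm baseline, 12 px disparity
depth_m = stereo_depth(1000.0, 0.30, 12.0)  # -> 25.0 m
```

Note how depth resolution degrades as disparity shrinks: distant objects produce small disparities, so small matching errors translate into large depth errors, which is one reason the baseline and camera placement matter in such a design.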

Eliminating lidar allows the alternative configuration to cost as little as one-sixth as much as a lidar-based configuration. This lower cost can enable scalability for OEMs. With some companies producing millions of vehicles annually, an inexpensive sensor integration such as this is key to remaining competitive in the long run.

With several years of experience in the auto industry, I have witnessed the progression of the technology in this segment. The next logical step in this evolution is urban vehicle autonomy. To take advantage of the alternative sensor configuration highlighted above, car companies must source stereoscopic cameras to replace monocular image sensors and the ultra-expensive lidar. By doing this, they can adapt to the market as it transitions to urban autonomy.