
Friday, July 1, 2022
eGovernment Research since 2001

Autonomous driving and the intelligent cockpit are two key application areas of artificial intelligence (AI), which can transform camera and sensor data into actionable information.

According to projections by Boston Consulting Group (BCG), the global autonomous vehicle market will reach US$42 billion by 2025, with partially autonomous vehicles accounting for a 12.4% market share. BCG said the market will continue to expand rapidly until 2035 and that smart autonomous vehicles will become an irreversible trend for the car industry.

There are three main development directions for AI applications in smart vehicles: Level 5 fully autonomous vehicles upgraded from vehicles equipped with ADAS (advanced driver-assistance systems); smart cockpit systems upgraded from smart dashboards and in-vehicle infotainment systems; and smart/autonomous commercial and public transportation fleet management systems developed to satisfy smart city/transportation demand.

LiDAR, radar, camera, and AI

On a technical level, autonomous vehicles can detect the environment around them by turning data collected through their sensors, including cameras, LiDARs, and radars, into actionable information using edge computing.

Cheng Wen-Huang, director of the Artificial Intelligence Graduate Program at the National Yang-Ming Chiao Tung University, pointed out that visual sensors in autonomous vehicles are most commonly used for detecting and marking visible objects when the vehicles are on the road. For the sensors to perform such functions, massive amounts of data are used to train deep learning and machine learning models through data-hungry technologies, he explained.

Currently, most major carmakers have adopted sensor fusion techniques to enable autonomous vehicles to make accurate decisions while on the road. The most common sensors include infrared radar, ultrasonic radar, mmWave radar, and cameras. Each sensor type is useful in different situations. For example, when cameras cannot function properly in environments with low visibility, LiDARs and radars can be used to monitor road conditions ahead of the vehicle.
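
As an illustration only (not any automaker's actual pipeline), a minimal sensor-fusion sketch might weight each sensor's distance estimate by its self-reported confidence and fall back to radar/LiDAR when the camera is unreliable. All names, thresholds, and the fusion rule below are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    distance_m: Optional[float]  # distance to nearest obstacle, None if no return
    confidence: float            # sensor self-reported confidence in [0, 1]

def fuse_obstacle_distance(camera: SensorReading,
                           radar: SensorReading,
                           lidar: SensorReading,
                           min_camera_conf: float = 0.6) -> Optional[float]:
    """Fuse three sensors into a single obstacle-distance estimate.

    If the camera is confident (good visibility), average all valid readings
    weighted by confidence; otherwise fall back to radar and LiDAR only.
    """
    readings = [camera, radar, lidar]
    if camera.confidence < min_camera_conf or camera.distance_m is None:
        readings = [radar, lidar]  # camera unreliable, e.g. fog or darkness

    valid = [r for r in readings if r.distance_m is not None and r.confidence > 0]
    if not valid:
        return None  # no usable sensor data at all
    total = sum(r.confidence for r in valid)
    return sum(r.distance_m * r.confidence for r in valid) / total

# Fog: camera confidence collapses, but radar and LiDAR still see the obstacle.
est = fuse_obstacle_distance(
    camera=SensorReading(distance_m=None, confidence=0.1),
    radar=SensorReading(distance_m=42.0, confidence=0.8),
    lidar=SensorReading(distance_m=40.0, confidence=0.8),
)
print(est)  # roughly 41 m, from radar and LiDAR alone
```

A real fusion stack would track objects over time (e.g. with a Kalman filter) rather than average single readings, but the same principle applies: no single sensor is trusted in all conditions.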

Tesla's recent announcement that it would abandon mmWave radar in favor of a pure-vision AI perception system sparked heated discussion in the industry.

Tesla's decision to remove radar from its vehicles was driven not only by its intent to lower vehicle production costs but also by the dilemma of "which systems should be used in unique situations" when vehicles have both radar and vision AI perception systems. With over 14% share of the global electric vehicle (EV) market, Tesla can fully maximize the advantages of vision AI by leveraging image data collected through millions of its vehicles sold over the last 10 years. However, for automakers that only entered the EV market recently, sensor fusion is a more practical development direction than a purely vision-based approach.

Accuracy of vision AI is determined by data volume and quality

According to Cheng, there are two reasons why ADAS and autonomous driving systems using pure vision perception technology might not be reliable.

First, there are still limitations preventing the effectiveness of some machine learning algorithms, and these limitations are difficult to overcome. Even if deep learning models can work around these limitations, most developers of ADAS and autonomous driving systems will have trouble acquiring the massive amounts of useful data needed for deep learning technologies.

Second, there is a quality difference between commercialized AI solutions, especially in terms of data acquisition. Large-scale businesses are more capable of collecting useful data than medium- and small-sized companies, and therefore have a better chance of developing high-quality AI products.
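
The point about data volume can be illustrated with a toy experiment. Using synthetic 2D "sensor features" for two object classes and a simple 1-nearest-neighbour classifier (not a real perception model), accuracy on a fixed test set generally improves as the training set grows:

```python
import random

random.seed(0)

def make_sample():
    # Two synthetic object classes (say, pedestrian vs. vehicle),
    # each drawn from a different 2D Gaussian feature distribution.
    label = random.randint(0, 1)
    cx, cy = (0.0, 0.0) if label == 0 else (2.0, 2.0)
    return (random.gauss(cx, 1.0), random.gauss(cy, 1.0)), label

def nn_predict(train, point):
    # 1-nearest-neighbour: predict the label of the closest training sample.
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(train, key=lambda s: dist2(s[0], point))[1]

test_set = [make_sample() for _ in range(200)]
accs = {}
for n in (10, 100, 1000):
    train = [make_sample() for _ in range(n)]
    correct = sum(nn_predict(train, x) == y for x, y in test_set)
    accs[n] = correct / len(test_set)
    print(f"train size {n:4d}: accuracy {accs[n]:.2f}")
```

Deep networks for object detection are far more data-hungry than this toy classifier, which is why developers without access to fleet-scale image data struggle to match the accuracy of those who have it.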

Since 2021, Huawei has unveiled its vehicle-to-everything (V2X) platform and 5G vehicle communication software, while partnering with Audi to introduce an autonomous vehicle adopting Huawei's AI chipset. Cheng noted that Huawei has accumulated a large volume of data on climate conditions, geographic locations, and traffic statuses in different cities. These data are expected to improve ADAS and autonomous driving systems and to allow autonomous vehicles to operate in different scenarios, such as in the countryside, on highways, and on urban roads, he explained.

Lee Kai-Fu, chairman and CEO of Sinovation Ventures, indicated that big data is both a crucial component of autonomous driving and a major data management challenge for businesses. He said that safely applying AI to autonomous vehicles is a key challenge for the car industry.

Era of fully intelligent vehicles?

As major automakers try to optimize ADAS by adding sensors to their vehicles, they are also placing increased focus on smart cockpits. Mercedes-Benz drew attention at CES 2021 with a 56-inch Hyperscreen it plans to fit inside its cars, and some automakers have introduced customized infotainment services and voice assistance systems that can prevent distracted driving.

According to IHS, the global smart cockpit market was valued at over US$40 billion in 2021; the figure is expected to rise to US$43.8 billion this year and US$68.1 billion by 2030. China accounted for 20% of the global market in 2021 and is expected to hold a 30% share by 2030, with a market size of CNY160 billion (US$24.16 billion).

Eyeing massive business opportunities associated with smart cockpits, Taiwan-based Foxconn, Pegatron, AU Optronics, Innolux, Adlink, Macronix, Winbond, First International Computer, and Trend Micro have all deepened their deployments in this area since last year.

Mindtronic AI technical director Mike Huang said that most traditional automakers are still approaching smart cockpits from a hardware equipment perspective. Many of them are considering integrating LCD screens, navigation systems, and networking functions into the vehicles, but few have provided practical solutions to achieve "cockpit intellectualization," he pointed out.

Huang said the concept of a smart cockpit is more than just an integration of several functions and component modules. It is a completely customized cockpit environment inside a vehicle, he emphasized.

In the past, traditional automakers would improve vehicle value by upgrading hardware components, such as the audio system and rearview mirror, Huang noted. However, after Tesla began exploring the relationship between in-vehicle hardware equipment, drivers, and passengers in the last few years, consumers and traditional automakers have broadened their imagination of smart cockpits.

The intelligent assistant, human-machine co-driving, and the third living space are seen as the three stages of the smart cockpit's future development, reflecting the shift in the relationship between driver and vehicle as the industry moves from ADAS to fully autonomous driving. Although the car industry still has a long way to go before realizing fully autonomous driving, AI technologies are undoubtedly reshaping our understanding of what driving means.


Author(s): Louise Lu; Kevin Cheng

Source: Digi Times, 10.05.2022
