A Comprehensive Overview of the Key Components of Autonomous Driving


Source: https://www.toutiao.com/a6702210202118128136/

This article gives a concise yet comprehensive overview of the key components of self-driving cars (autonomous driving systems), covering driving automation levels, sensors, software, open-source datasets, industry leaders, applications, and ongoing challenges.

Introduction

Over the past ten years, many research papers have been published in the field of self-driving. However, most focus on specific technical areas, such as visual environment perception or vehicle control. Moreover, because self-driving technology is developing so rapidly, such articles quickly go out of date.

In the past decade, with a series of breakthroughs in self-driving technology around the world, the race to commercialize self-driving cars (autonomous driving systems) has become fiercer than ever. In 2016, for example, Waymo launched its own self-driving taxi service in Arizona, attracting a great deal of attention. Waymo spent about nine years developing and refining its autonomous driving system, drawing on advanced engineering techniques such as machine learning and computer vision. These cutting-edge technologies help its driverless cars better understand the world and take the right action at the right time.

Owing to this progress, many scientific papers have been published over the past decade, and their citations have grown exponentially, as shown in Figure 1. The number of publications and citations has increased every year since 2010, peaking in the most recent year shown.

Figure 1: Number of publications and citations on autonomous driving research over the past decade

I. The autonomous driving system

An autonomous driving system enables a car to operate in real environments without human intervention. Every such system consists of two main components: hardware (sensors and hardware controllers, i.e., throttle, brakes, steering wheel, and so on) and software (the functional modules).

On the software side, many different architectures have been proposed, such as Stanley (DARPA Grand Challenge), Junior (DARPA Urban Challenge), Boss (DARPA Urban Challenge), and the Tongji autonomous driving system. The Stanley software architecture consists of four modules: sensor interface, perception, planning and control, and user interface. The Junior architecture consists of five parts: sensor interface, perception, navigation (planning and control), an online drive interface (user interface and vehicle interface), and global services. Boss uses a three-layer architecture: mission, behavior, and motion planning. The Tongji system divides the software into perception, decision and planning, control, and chassis. This article divides the software architecture into five modules: perception, localization and mapping, prediction, planning, and control, as shown in Figure 2; this decomposition is very similar to the Tongji architecture.

Figure 2: Software architecture of an autonomous driving system
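To make the five-module decomposition concrete, here is a minimal Python sketch of how such a pipeline might be wired together; the class and method names are hypothetical, not taken from any of the systems above:

```python
# A minimal, hypothetical sketch of the five-module software pipeline described
# above; class and method names are illustrative, not from any cited system.
class AutonomousDrivingPipeline:
    def __init__(self, perception, localization, prediction, planner, controller):
        self.perception = perception        # raw sensor data -> scene understanding
        self.localization = localization    # sensor data + scene -> pose and map
        self.prediction = prediction        # scene -> future trajectories of agents
        self.planner = planner              # pose, map, predictions -> trajectory
        self.controller = controller        # trajectory + vehicle state -> commands

    def step(self, sensor_data, vehicle_state):
        scene = self.perception.process(sensor_data)
        pose, world_map = self.localization.update(sensor_data, scene)
        agent_trajectories = self.prediction.forecast(scene)
        trajectory = self.planner.plan(pose, world_map, agent_trajectories)
        return self.controller.track(trajectory, vehicle_state)  # throttle/brake/steer
```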

Driving automation levels

According to SAE International (the Society of Automotive Engineers), driving automation is divided into six levels, as shown in Table 1. The human driver is responsible for driving environment monitoring (DEM) in level 0-2 systems. From level 4 upward, the human driver is no longer responsible for the dynamic driving task (DDT) fallback. At present, the most advanced systems on the road are mainly at levels 2 and 3, and industry insiders generally believe it may take a long time to reach higher levels.

Table 1: SAE levels of driving automation

Level 0 (No Automation): the human driver performs the entire dynamic driving task.
Level 1 (Driver Assistance): the system provides sustained steering or acceleration/braking support; the driver does everything else.
Level 2 (Partial Automation): the system controls both steering and acceleration/braking; the driver monitors the driving environment.
Level 3 (Conditional Automation): the system performs the whole dynamic driving task; the driver must stay ready to intervene as the fallback.
Level 4 (High Automation): the system performs the dynamic driving task and its fallback within a defined operational domain.
Level 5 (Full Automation): the system can drive everywhere, under all conditions a human driver could manage.

Sensors

The sensors installed on an autonomous vehicle are used to perceive the environment. Each sensor is chosen as a trade-off between sampling rate, field of view (FoV), accuracy, range, cost, and overall system complexity. The most commonly used are passive sensors (such as cameras), active sensors (such as lidar, radar, and ultrasonic transceivers), and other sensor types such as the Global Positioning System (GPS) and inertial measurement units (IMUs).

Cameras capture two-dimensional images by collecting the light reflected from objects in the three-dimensional environment. Image quality strongly depends on environmental conditions: different weather and lighting affect it in different ways. Computer vision and machine learning algorithms are typically used to extract useful information from the captured images and videos.

Lidar illuminates a target with pulsed laser light and measures the distance to the target by analyzing the reflected pulses. Because of its high 3D geometric accuracy, lidar is often used to build high-resolution world maps. Lidar units are typically mounted at different positions on the vehicle, such as the roof, sides, and front, to serve different functions.

By emitting electromagnetic waves and analyzing the reflections, radar can accurately measure the distance and radial velocity of a target. Radar is particularly good at detecting metallic objects, but at short range it can also detect non-metallic objects such as pedestrians and trees. Radar has been used in the automotive industry for many years and has given rise to ADAS functions such as automatic emergency braking and adaptive cruise control.

Similar to radar, ultrasonic sensors compute the distance to a target by measuring the time between transmitting an ultrasonic signal and receiving its echo. Ultrasonic sensors are commonly used for the positioning and navigation of self-driving vehicles.
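Lidar, radar, and ultrasonic sensors all rely on the same time-of-flight principle: distance equals propagation speed times round-trip time, divided by two. A minimal sketch:

```python
# Time-of-flight ranging shared by lidar, radar, and ultrasonic sensors:
# distance = propagation speed * round-trip time / 2.
SPEED_OF_LIGHT = 299_792_458.0   # m/s, for lidar and radar
SPEED_OF_SOUND = 343.0           # m/s in air at ~20 degrees C, for ultrasonic

def range_from_echo(round_trip_s, speed):
    """Distance to the target given the echo round-trip time in seconds."""
    return speed * round_trip_s / 2.0

# Example: an ultrasonic echo returning after 5.8 ms -> target roughly 1 m away.
print(range_from_echo(5.8e-3, SPEED_OF_SOUND))  # ~0.99 m
```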

GPS is a satellite-based radio navigation system operated by the United States government that provides time and position information to the autonomous driving system. However, GPS signals are easily blocked by obstacles such as buildings and mountains, so GPS tends to perform poorly in so-called urban canyons. Inertial measurement units (IMUs) are therefore usually integrated with GPS receivers to keep the vehicle localized in such places.
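As a toy illustration of why IMU integration helps in urban canyons, the sketch below dead-reckons with IMU data between GPS fixes and blends fixes in when available; a real system would use a Kalman filter, and the blending weight here is an arbitrary assumption:

```python
# A toy GPS/IMU fusion sketch: dead-reckon with IMU acceleration and blend in
# GPS fixes when available. A real system would use a Kalman filter instead.
import numpy as np

def fuse_step(pos, vel, accel, dt, gps_fix=None, gps_weight=0.3):
    """Propagate position/velocity with IMU accel; correct toward GPS if present."""
    vel = vel + accel * dt
    pos = pos + vel * dt
    if gps_fix is not None:                    # GPS visible (not blocked)
        pos = (1 - gps_weight) * pos + gps_weight * gps_fix
    return pos, vel

# Example: 1 s of driving at 10 m/s, IMU at 100 Hz, GPS fixes at 10 Hz.
pos, vel = np.zeros(2), np.array([10.0, 0.0])
for t in range(100):
    fix = np.array([0.1 * (t + 1), 0.0]) if t % 10 == 9 else None
    pos, vel = fuse_step(pos, vel, np.zeros(2), 0.01, fix)
```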

Hardware controller

The hardware controllers of a self-driving vehicle include the torque steering motor, electronic brake booster, electronic throttle, gear shifter, and parking brake. Vehicle states such as wheel speed and steering angle are sensed automatically and sent to the computer system over the Controller Area Network (CAN) bus, which allows either a human driver or the autonomous driving system to control the throttle, brakes, and steering wheel.
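As an example, wheel-speed frames can be read from the CAN bus with the python-can library; the arbitration ID and scaling below are hypothetical, since real message layouts are manufacturer-specific:

```python
# Reading a (hypothetical) wheel-speed frame from the CAN bus with python-can.
# Arbitration ID 0x1A0 and the 0.01 km/h-per-bit scaling are made up for
# illustration; real IDs and scalings are manufacturer-specific.
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

for msg in bus:
    if msg.arbitration_id == 0x1A0:           # hypothetical wheel-speed frame
        raw = int.from_bytes(msg.data[0:2], "big")
        print(f"front-left wheel speed: {raw * 0.01:.2f} km/h")
```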

Autonomous driving software

Perception: the perception module analyzes raw sensor data and outputs an understanding of the environment around the vehicle, a process similar to human visual cognition. It mainly covers object detection and tracking (free space, lanes, vehicles, pedestrians, road damage, etc.) and 3D world reconstruction (using structure from motion, stereo vision, and so on). State-of-the-art perception techniques fall into two categories: those based on classical computer vision and those based on machine learning. The former generally use explicit projective geometry models to solve visual perception problems and find the best solution with optimization methods. Machine-learning-based techniques learn a solution to a given perception problem with data-driven classification or regression models such as convolutional neural networks. SegNet and UNet, for example, have achieved excellent results in semantic image segmentation and object classification, and such networks adapt easily to similar perception tasks via transfer learning. Fusing information from multiple sensors generally yields a better understanding of the scene.
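As an illustration of the machine-learning branch of perception, the following sketch runs a pretrained semantic segmentation network from torchvision (an off-the-shelf FCN rather than the SegNet/UNet mentioned above; the weights argument assumes a recent torchvision version):

```python
# A minimal sketch of CNN-based semantic segmentation for perception, using an
# off-the-shelf torchvision model (not the SegNet/UNet from the text).
import torch
from torchvision import transforms
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(weights="DEFAULT").eval()   # pretrained segmentation network

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def segment(image):
    """image: a PIL.Image from the front camera; returns a per-pixel class map."""
    x = preprocess(image).unsqueeze(0)           # (1, 3, H, W)
    with torch.no_grad():
        logits = model(x)["out"]                 # (1, num_classes, H, W)
    return logits.argmax(dim=1).squeeze(0)       # (H, W) class indices
```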

Localization and mapping: using sensor data and perception output, the localization and mapping module estimates the vehicle's pose while building and updating a 3D world map. Since the concept of simultaneous localization and mapping (SLAM) was introduced in 1986, it has attracted wide attention in industry and academia. State-of-the-art SLAM systems are usually divided into filter-based and optimization-based approaches. Filter-based SLAM derives from Bayesian filtering: it iteratively estimates the vehicle's pose and updates the 3D map by incrementally integrating sensor data. The most commonly used filters are the extended Kalman filter (EKF), unscented Kalman filter (UKF), information filter (IF), and particle filter (PF). Optimization-based SLAM, on the other hand, first identifies constraints by finding correspondences between new observations and the map, then computes and refines the vehicle's pose and updates the 3D map. This family has two main branches: bundle adjustment (BA) and graph SLAM. The former jointly optimizes the 3D map and the camera poses by minimizing an error function with optimization techniques such as Gauss-Newton and gradient descent. The latter models localization as a graph optimization problem and solves it by minimizing an error function over the vehicle's poses.
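A minimal skeleton of the EKF predict/update cycle at the heart of filter-based SLAM is sketched below; the motion model f, observation model h, their Jacobians F and H, and the noise covariances Q and R are placeholders the reader would supply for a real system:

```python
# A minimal EKF predict/update skeleton of the kind used in filter-based SLAM.
# f, h, F, H, Q, R are placeholders to be supplied for a real system.
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate state x and covariance P through motion model f (Jacobian F)."""
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x, P, z, h, H, R):
    """Correct the prediction with measurement z via observation model h (Jacobian H)."""
    y = z - h(x)                      # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```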

Prediction: the prediction module analyzes the motion patterns of other traffic agents and predicts their future trajectories, so that the self-driving car can make appropriate navigation decisions. Current methods fall into two categories: model-based and data-driven. Model-based approaches compute an agent's future motion by propagating its kinematic state (position, velocity, and acceleration) according to the kinematics and dynamics of the underlying physical system; Mercedes-Benz's motion prediction component, for example, uses map information as a constraint when computing a vehicle's next position. A Kalman filter performs well for short-term prediction but poorly over longer horizons, because it ignores surrounding context such as the road layout and traffic rules. Building on this, pedestrian motion prediction models based on attractive and repulsive forces have been proposed. More recently, with advances in artificial intelligence and high-performance computing, data-driven techniques such as hidden Markov models (HMM), Bayesian networks (BN), and Gaussian process (GP) regression have been used to predict agent states, and researchers have applied inverse reinforcement learning (IRL) to model the environment, for example using inverse optimal control to predict pedestrian paths.
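As a toy example of the model-based approach, the sketch below propagates an agent's state with a constant-velocity kinematic model (an assumed simplification, not Mercedes-Benz's method):

```python
# A minimal sketch of model-based prediction: propagating a traffic agent's
# state with a constant-velocity kinematic model.
import numpy as np

def predict_trajectory(pos, vel, horizon_s=3.0, dt=0.1):
    """pos, vel: 2D numpy arrays; returns predicted positions every dt seconds."""
    steps = int(horizon_s / dt)
    return np.array([pos + vel * dt * (k + 1) for k in range(steps)])

# Example: an agent at (0, 0) moving 10 m/s along x.
traj = predict_trajectory(np.array([0.0, 0.0]), np.array([10.0, 0.0]))
print(traj[-1])  # position after 3 s -> [30. 0.]
```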

Planning: the planning module determines feasible, safe navigation paths for the vehicle based on the perception, localization, mapping, and prediction information. Planning tasks are mainly divided into path planning, maneuver planning, and trajectory planning. A path is a list of geometric waypoints the vehicle should follow to reach its destination without colliding with obstacles; the most common path planning techniques include Dijkstra, dynamic programming, A*, and state lattices. Maneuver planning is a higher-level process for choosing the vehicle's motion, since it takes into account both traffic rules and the states of other vehicles. Once the best path and maneuver have been found, a trajectory satisfying the motion model and state constraints must be generated to ensure safety and comfort.
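For illustration, here is a minimal A* planner on a 2D occupancy grid with a Manhattan-distance heuristic, a toy stand-in for the path planning techniques listed above:

```python
# A minimal A* path planner on a 2D occupancy grid (Manhattan heuristic).
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) tuples."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f-cost, g-cost, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None  # no collision-free path found
```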

Control: the control module sends appropriate commands to the throttle, brakes, and steering according to the planned trajectory and the estimated vehicle state, keeping the car as close as possible to the planned trajectory. Controller parameters can be estimated by minimizing an error function (the deviation) between the ideal and observed states. Proportional-integral-derivative (PID) control, the linear quadratic regulator (LQR), and model predictive control (MPC) are the most commonly used approaches. A PID controller is a control-loop feedback mechanism that minimizes the error function using proportional, integral, and derivative terms. An LQR controller applies when the system dynamics are described by a set of linear differential equations and the cost by a quadratic function. MPC is an advanced process-control technique based on a dynamic model of the process. Each of the three has its own advantages and drawbacks, so self-driving control modules generally combine them. For example, some entry-level systems use MPC and PID for low-level feedback control tasks, such as applying torque to achieve the desired wheel angles, while Baidu Apollo mixes all three: PID for feedforward control, LQR for wheel-angle control, and MPC to optimize the parameters of the PID and LQR controllers.
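A minimal discrete-time PID controller of the kind described above is sketched below; the gains in the example are illustrative, not tuned values from any real vehicle:

```python
# A minimal discrete PID controller sketch for trajectory-tracking feedback.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        """error: deviation between planned and observed state; returns actuation."""
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Example: correcting a 0.3 m cross-track error at 100 Hz (illustrative gains).
steer = PID(kp=0.8, ki=0.05, kd=0.2)
command = steer.step(error=0.3, dt=0.01)
```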

III. Open source datasets

Many open-source datasets have been released in the past decade, contributing greatly to self-driving research. Several of the most frequently used are collected here, with a brief description of each. Cityscapes is a large dataset for pixel-level and instance-level semantic image segmentation. ApolloScape supports a variety of perception tasks, such as scene parsing, vehicle instance understanding, lane segmentation, self-localization, trajectory estimation, and object detection and tracking. KITTI provides visual datasets for stereo and optical flow estimation, object detection and tracking, road segmentation, visual odometry, and semantic image segmentation. 6D-Vision uses stereo cameras to perceive the 3D environment and provides datasets for stereo, optical flow, and semantic image segmentation.
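As a small example of working with such data, the snippet below reads a Cityscapes ground-truth label image and counts pixels per label ID; the file path is hypothetical, though the *_gtFine_labelIds.png naming follows the public Cityscapes convention:

```python
# A sketch of reading a Cityscapes ground-truth label image; the path below is
# hypothetical, but the file naming follows the public Cityscapes convention.
import numpy as np
from PIL import Image

label_path = "gtFine/train/city/city_000000_000019_gtFine_labelIds.png"
labels = np.array(Image.open(label_path))        # (H, W) array of label IDs

ids, counts = np.unique(labels, return_counts=True)
for i, n in zip(ids, counts):
    print(f"label id {i}: {n} pixels")
```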

IV. Industry leaders

Recently, investors have begun putting money into companies positioned to commercialize autonomous driving. Tesla's valuation has soared since 2016, leading underwriters to speculate that the company will field a self-driving fleet within a few years. GM's share price rose 20% after reports in 2017 that it planned to build self-driving cars. As of July 2018, Waymo had tested its self-driving cars over 8 million miles on public roads in the United States [4]. In 2018, GM and Waymo reported the fewest accidents: GM had 22 crashes over 212 km, while Waymo had only three over 563 km [5]. Beyond the industry giants, world-class universities have also accelerated autonomous driving research through close industry-academia collaboration, enabling them to contribute more directly to enterprises, the economy, and society.

Application scenarios: self-driving technology can be applied to any type of vehicle, such as taxis, intercity buses, tour buses, and trucks. These vehicles can not only free people from labor-intensive and tedious work but also improve safety. For example, road-quality assessment vehicles equipped with self-driving technology can repair the road damage they detect. Furthermore, with autonomous driving technology, road participants can communicate with one another, making public transport more efficient and safer.

V. Existing challenges

Although self-driving technology has developed rapidly over the past decade, many challenges remain. The perception module performs poorly in bad weather, poor lighting, and complex urban environments. Most perception methods are also computationally intensive and cannot run in real time on embedded, resource-limited hardware. SLAM methods are still hard to apply at large scale because of long-term instability. Another important question is how to fuse sensor data to build accurate 3D semantic maps quickly and economically. Finally, when the public will truly accept self-driving cars remains an open question, one that raises serious ethical issues.

References:

[1] J. Jiao, Y. Yu, Q. Liao, H. Ye, and M. Liu, "Automatic calibration of multiple 3D lidars in urban environments," arXiv preprint arXiv:1905.04912, 2019.

[2] H. Ye, Y. Chen, and M. Liu, "Tightly coupled 3D lidar inertial odometry and mapping," in 2019 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2019.

[3] R. Fan, M. J. Bocus, Y. Zhu, J. Jiao, L. Wang, F. Ma, S. Cheng, and M. Liu, "Road crack detection using deep convolutional neural network and adaptive thresholding," arXiv preprint arXiv:1904.08582, 2019.

[4] C. Coberly, "Waymo's self-driving car fleet has racked up 8 million miles in total driving distance on public roads," https://www.techspot.com/news/75608-waymo-self-driving-car-fleetracks-up-8.html, accessed: 2019-04-21.

[5] D. Welch and E. Behrmann, "Who's winning the self-driving car race?" https://www.bloomberg.com/news/features/2018-05-07/whos-winning-the-self-driving-car-race, accessed: 2019-04-21.
