review: the planning pipeline in self-driving cars

once you understand planning in a self-driving car, all the other modules make sense.

perception layer

concepts such as lidar, radar, cameras, GPS, the CAN bus, sensor fusion, computer vision, SLAM, and AI-based detection/tracking algorithms are all perception related; they help the car understand itself and its surroundings.

perception first helps the car localize itself in the world; it then helps predict the intentions of other vehicles/pedestrians around it.
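the sensor fusion part can be sketched with a toy example. the numbers below are hypothetical, but the idea is the core of a Kalman filter update: combine two noisy estimates of the same quantity, weighting each by its certainty.

```python
# a minimal sketch of inverse-variance sensor fusion (hypothetical values):
# fusing a GPS position estimate with a lidar-based one, each with its own
# uncertainty -- the same idea a Kalman filter update applies.

def fuse(z1, var1, z2, var2):
    """Fuse two noisy estimates of the same quantity.

    The more certain sensor (smaller variance) gets the larger weight;
    the fused variance is smaller than either input variance.
    """
    w1 = var2 / (var1 + var2)
    w2 = var1 / (var1 + var2)
    fused = w1 * z1 + w2 * z2
    fused_var = (var1 * var2) / (var1 + var2)
    return fused, fused_var

# hypothetical readings: GPS says x = 10.0 m (var 4.0), lidar says x = 10.6 m (var 1.0)
x, v = fuse(10.0, 4.0, 10.6, 1.0)  # -> (10.48, 0.8): pulled toward the lidar reading
```

note how the fused estimate lands much closer to the lower-variance lidar reading, and the fused variance (0.8) is below both inputs.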

behavior layer

there are two steps here: behavior prediction and behavior planning.

based on the predicted intentions of the other agents from the previous timestep/configuration, the behavior prediction module predicts the behavior of the self-driving car for the current timestep. it is usually implemented either with model-based methods or with data-driven (AI-trained) methods, and it outputs all the feasible maneuvers for the current timestep.
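the model-based flavor can be sketched as a simple rule filter over a fixed candidate set. the maneuver names and feasibility rules below are hypothetical; a real module would reason over predicted trajectories, not booleans.

```python
# a minimal rule-based sketch (hypothetical rules) of how a model-based
# behavior prediction step could enumerate the feasible maneuvers for the
# current timestep from a simple state description.

CANDIDATES = ["keep_lane", "change_left", "change_right", "brake"]

def feasible_maneuvers(state):
    """Filter the candidate maneuvers with simple feasibility rules."""
    out = []
    for m in CANDIDATES:
        if m == "change_left" and not state["left_lane_free"]:
            continue
        if m == "change_right" and not state["right_lane_free"]:
            continue
        if m == "keep_lane" and state["obstacle_ahead"]:
            continue
        out.append(m)
    return out

state = {"left_lane_free": True, "right_lane_free": False, "obstacle_ahead": True}
maneuvers = feasible_maneuvers(state)  # -> ["change_left", "brake"]
```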

the car chooses only one maneuver at each timestep/configuration. the behavior planning module weighs all the feasible maneuvers from behavior prediction and picks the most likely one; it is usually implemented with a cost function plus constraints.
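the cost-function idea can be sketched as a weighted sum over a few cost terms. the terms and weights below are hypothetical; a real planner combines many more (comfort, legality, progress, safety margins) with carefully tuned weights.

```python
# a minimal sketch of cost-based maneuver selection: score each feasible
# maneuver with a weighted sum of hypothetical cost terms, pick the cheapest.

def maneuver_cost(m):
    """Weighted sum of hypothetical cost terms for a candidate maneuver."""
    return (1.0 * m["collision_risk"]          # safety dominates
            + 0.5 * m["deviation_from_route"]  # progress toward the goal
            + 0.1 * m["lateral_jerk"])         # comfort

candidates = {
    "keep_lane":   {"collision_risk": 0.8, "deviation_from_route": 0.0, "lateral_jerk": 0.0},
    "change_left": {"collision_risk": 0.1, "deviation_from_route": 0.2, "lateral_jerk": 0.5},
}

best = min(candidates, key=lambda name: maneuver_cost(candidates[name]))  # -> "change_left"
```

here keep_lane scores 0.8 and change_left scores 0.25, so the planner commits to the lane change; hard constraints (e.g. never cross a solid line) would be enforced before scoring, not traded off inside the sum.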

generating all the possible maneuvers is also called trajectory planning, and committing to the chosen maneuver is also called motion planning, which is local in space and time-dependent.

control layer

once motion planning has picked a maneuver, the command is sent to the vehicle control actuators, and the car's physical state is updated.
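a classic way the control layer turns a motion-planning target into an actuator command is a PID controller. the gains and setpoints below are hypothetical; real vehicles pair longitudinal (throttle/brake) and lateral (steering) controllers.

```python
# a minimal PID speed controller sketch (hypothetical gains): track a target
# speed from motion planning and output a throttle-like command.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt                     # accumulate steady-state error
        derivative = (error - self.prev_error) / dt     # react to the error trend
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.1, kd=0.05)
# target 10 m/s, currently at 8 m/s, 100 ms control period
throttle = pid.step(target=10.0, measured=8.0, dt=0.1)  # positive -> speed up, about 2.02
```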

end-to-end motion planning

deep learning has also been used to demo end-to-end motion planning, e.g. mapping camera frames directly to vehicle control outputs, frame by frame. but many situations may never appear in the training data, so it is not that realistic yet.

path/route planning

the motion planning pipeline above runs at every timestep of the self-driving car, and it operates locally.

at the high level sits path/route planning: basically, given a start point and a destination point, find a route. there are a few families of algorithms, like global graph search (e.g. Dijkstra, A*), random trees (e.g. RRT), and incremental graph search (e.g. D*).
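the global graph search family can be sketched with Dijkstra's algorithm on a toy road graph. the node names and edge costs are hypothetical; A* would add a heuristic to the priority, and RRT/D* trade optimality for sampling or replanning speed.

```python
# a minimal global graph search sketch: Dijkstra's algorithm on a toy road
# graph (hypothetical nodes and edge costs), returning the cheapest route.

import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) of the cheapest route from start to goal."""
    pq = [(0.0, start, [start])]   # priority queue ordered by path cost
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

roads = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"C": 1.0, "D": 4.0},
    "C": {"D": 1.0},
}
cost, route = dijkstra(roads, "A", "D")  # -> 4.0, ["A", "B", "C", "D"]
```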

reinforcement learning in a simulator

as mentioned in a previous blog post, it is also popular to learn behavior in a simulation environment and train it with reinforcement learning.
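the learning loop can be sketched with tabular Q-learning on a toy 1-D driving task (states, rewards, and hyperparameters are all hypothetical). real self-driving RL would use a full simulator and deep function approximation instead of a table, but the update rule is the same.

```python
# a minimal tabular Q-learning sketch: the agent learns to drive right from
# state 0 to the goal state 3, with a small per-step penalty and a reward
# at the goal. epsilon-greedy exploration, deterministic toy environment.

import random

N_STATES, GOAL = 4, 3
ACTIONS = [-1, +1]  # move left / move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]

random.seed(0)
for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        if random.random() < 0.1:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else -0.01   # small step penalty, goal reward
        # Q-learning update: move Q[s][a] toward r + gamma * max_a' Q[s2][a']
        Q[s][a] += 0.5 * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2

# the learned greedy policy should pick "right" (index 1) in every non-goal state
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES)]
```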

simulation in self-driving

how can a simulation tool chain accelerate self-driving development?

usually, a simulation environment can help verify and test the perception, behavior, and control algorithms.

if you work with reinforcement learning, a virtual simulator is required as well.

what else?