Daohu527's dig-into-apollo and YannZ/Apollo-Notes are good summaries of Apollo.
localization
rtk
ndt
the normal distributions transform (NDT) matches the sensor's view points against what we see on an existing map.
because the view points from the sensor may be a little off from the map, or the world may have changed a little between when the map was built and when the view points were recorded, NDT matches the view points against a grid of probability distributions created from the map, instead of against the map itself.
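as a concrete example, PCL ships an NDT implementation; below is a minimal sketch of matching a scan against a map with it. this is illustrative only (the parameter values are arbitrary), not Apollo's own localization code.

```cpp
#include <Eigen/Core>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/ndt.h>

using CloudT = pcl::PointCloud<pcl::PointXYZ>;

// Refine the vehicle pose by matching the current scan to the map cloud.
Eigen::Matrix4f NdtLocalize(const CloudT::Ptr& scan, const CloudT::Ptr& map,
                            const Eigen::Matrix4f& initial_guess) {
  pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
  ndt.setResolution(1.0);    // grid cell size (m); one normal distribution per cell
  ndt.setStepSize(0.1);      // line-search step size
  ndt.setTransformationEpsilon(0.01);  // convergence threshold
  ndt.setInputSource(scan);  // the sensor's view points
  ndt.setInputTarget(map);   // the existing map
  CloudT aligned;
  ndt.align(aligned, initial_guess);   // seed with e.g. the GNSS/IMU pose
  return ndt.getFinalTransformation();
}
```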
msf
perception
module overview
- edge init
it's easy to understand that EventManager
is used to register edges, since the data flow, whether through ROS nodes or through shared data, is event driven. inside EventManager
there are two variables:
event_queue_map <eventID, message_queue>
event_meta_map <eventID, message_info>
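a minimal sketch of these two maps; the types here are illustrative stand-ins, not Apollo's exact definitions:

```cpp
#include <map>
#include <memory>
#include <string>

using EventID = int;
struct Event { /* timestamp, payload reference, ... */ };
struct EventMeta { std::string name; /* publishing/subscribing subnode ids */ };
template <typename T> class MessageQueue { /* thread-safe FIFO */ };

// Inside EventManager: one message queue per registered edge event,
// plus the static description of each event.
std::map<EventID, std::shared_ptr<MessageQueue<Event>>> event_queue_map_;
std::map<EventID, EventMeta> event_meta_map_;
```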
- shareData init
each subnode and shareData is a separate ROS/data thread, started after the DAG process initialization finishes; the run() method is the event loop:
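a minimal sketch of such an event loop; the method names here are assumptions, not Apollo's exact API:

```cpp
// Each subnode thread blocks on its subscribed event, does its work,
// then publishes an event to wake up downstream subnodes.
void Subnode::Run() {
  while (!stop_) {
    Event event;
    if (!event_manager_->Subscribe(sub_event_id_, &event)) {
      continue;  // timed out waiting; try again
    }
    ProcEvents(event);                       // the subnode's actual processing
    event_manager_->Publish(pub_event_id_);  // trigger downstream subnodes
  }
}
```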
Lidar process
- hdmap ROI filter: compare the lidar cloud points against the HD map area, and filter out the cloud points outside the region of interest (ROI); a toy sketch follows this list.
- CNN segmentation: classify the cloud points as obstacles with a CNN.
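a toy sketch of the ROI-filter idea, keeping only points whose ground projection falls inside the ROI polygon via a plain ray-casting test (Apollo's actual filter is bitmap-based and more efficient):

```cpp
#include <vector>

struct Point2D { double x, y; };
struct Point3D { double x, y, z; };

// Ray-casting point-in-polygon test.
bool InPolygon(const std::vector<Point2D>& poly, double x, double y) {
  bool inside = false;
  for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
    if ((poly[i].y > y) != (poly[j].y > y) &&
        x < (poly[j].x - poly[i].x) * (y - poly[i].y) /
                (poly[j].y - poly[i].y) + poly[i].x) {
      inside = !inside;
    }
  }
  return inside;
}

// Keep only lidar points that project inside the HD-map ROI polygon.
std::vector<Point3D> RoiFilter(const std::vector<Point3D>& cloud,
                               const std::vector<Point2D>& roi) {
  std::vector<Point3D> kept;
  for (const auto& p : cloud) {
    if (InPolygon(roi, p.x, p.y)) kept.push_back(p);
  }
  return kept;
}
```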
Lidar & Radar fusion
the data fusion ROS node processes in the following way:
step 1: collect each sensor's data
the data structure holding each sensor's perceived objects by type and timestamp is:
std::map<std::string, std::map<int64_t, SensorObjects>> sensors_;
sensors_[sensor_id][timestamp]
returns that sensor's objects at a specific timestamp.
for Lidar and Radar fusion, we get a frame list; each frame has the object lists from both Lidar and Radar. so far it is basically just matching Lidar objects to Radar objects by timestamp (a sketch follows); Lidar objects and Radar objects are independent.
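a sketch of that timestamp matching on top of sensors_; the sensor ids and the nearest-timestamp rule are assumptions:

```cpp
#include <cstdint>
#include <cstdlib>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct SensorObjects { /* one sensor's perceived objects at one timestamp */ };
using TimeLine = std::map<int64_t, SensorObjects>;
std::map<std::string, TimeLine> sensors_;

// Pair every lidar frame with the radar frame nearest in time,
// so each fused frame carries both object lists.
std::vector<std::pair<const SensorObjects*, const SensorObjects*>>
BuildFrameList(int64_t max_gap_ns) {
  std::vector<std::pair<const SensorObjects*, const SensorObjects*>> frames;
  const TimeLine& lidar = sensors_["velodyne64"];   // illustrative sensor id
  const TimeLine& radar = sensors_["radar_front"];  // illustrative sensor id
  for (const auto& [ts, lidar_objs] : lidar) {
    auto it = radar.lower_bound(ts);  // first radar frame at or after ts
    if (it != radar.begin() &&
        (it == radar.end() || ts - std::prev(it)->first < it->first - ts)) {
      --it;  // the earlier radar frame is closer in time
    }
    if (it != radar.end() && std::llabs(it->first - ts) <= max_gap_ns) {
      frames.push_back({&lidar_objs, &it->second});
    }
  }
  return frames;
}
```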
prediction
in Apollo, the prediction module anticipates the future motion trajectories of the perceived obstacles.
prediction subscribes to localization, planning, and perception obstacle messages. when a localization update is received, the prediction module updates its internal status; the actual prediction is triggered when perception sends out its perception obstacle messages:
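a minimal ROS-style sketch of that wiring; the topic names and member names are assumptions:

```cpp
// Prediction subscribes to three upstream topics. Localization updates
// internal state; a perception message triggers the actual prediction.
void Prediction::Init(ros::NodeHandle* nh) {
  localization_sub_ = nh->subscribe("/apollo/localization/pose", 1,
                                    &Prediction::OnLocalization, this);
  planning_sub_ = nh->subscribe("/apollo/planning", 1,
                                &Prediction::OnPlanning, this);
  perception_sub_ = nh->subscribe("/apollo/perception/obstacles", 1,
                                  &Prediction::OnPerception, this);
}
```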
take a detailed review of OnPerception, as follows:
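a schematic sketch of the flow inside OnPerception, using the three manager classes described below; the signatures are simplified, not Apollo's exact API:

```cpp
void Prediction::OnPerception(
    const perception::PerceptionObstacles& perception_obstacles) {
  // 1. store the new obstacles (lane info attached) in the containers
  obstacles_container_->Insert(perception_obstacles);
  // 2. score the probability of every LaneSequence of every obstacle
  EvaluatorManager::instance()->Run(perception_obstacles);
  // 3. generate trajectories along the high-probability LaneSequences
  PredictorManager::instance()->Run(perception_obstacles);
  // 4. publish the resulting PredictionObstacles downstream
  Publish(PredictorManager::instance()->prediction_obstacles());
}
```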
ContainerManager
used to instantiate a container of the requested type, which stores the corresponding type of message. there are three types of container:
- ObstaclesContainer
used to store obstacle info and its lane info. obstacle info comes from perception; lane info comes from the HD map plus LaneGraph, LaneSequence, and LaneSegment. check the explanation of ObstacleContainer.
LaneGraph is namely the lane network, e.g. the pre/post lanes, neighbor lanes, or where a lane disappears. LaneSegment is an element/piece of the discretized lane network, with a start point and an end point. LaneSequence is a set of all possible next-lane choices from the current lane, based only on the lane network and the current obstacle velocity, not on planning.
- PoseContainer
used to store the vehicle's world coordinates, the coordinate transform matrix, velocity info, etc.
- ADCTrajectoryContainer
EvaluatorManager
what the evaluator does is compute the probability of each LaneSequence; the obstacle is either a vehicle, a pedestrian, or a bicycle. the probability is computed with a multi-layer perceptron (MLP) model:
- ObstacleFeature
there are 22 dimensions, which are a great resource for understanding a good prediction model in ADS.
these features include the dimensions, velocity, acceleration, and direction of the obstacle, and its relative position and angle to the boundary lane, etc.
- LaneFeature
since the LaneSequence is already discretized into LaneSegments, for each LaneSegment Apollo calculates 4 features: the angle from the lanePoint position to its direction vector; the obstacle's projected distance to the LaneSequence; the direction of the lanePoint; and the angle from the lanePoint's direction to the obstacle's direction.
for each LaneSequence only the first 40 lane features are calculated, namely from the first 10 LaneSegments.
from the 22 obstacle features and 40 lane features, the probability of each LaneSequence is computed.
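schematically, the evaluation step then looks like the following; the helper names (AppendObstacleFeatures, AppendLaneFeatures, MlpModel) are hypothetical:

```cpp
#include <vector>

// 22 obstacle features + 4 features x 10 LaneSegments = 62 inputs.
std::vector<double> BuildFeatures(const Obstacle& obstacle,
                                  const LaneSequence& sequence) {
  std::vector<double> features;
  AppendObstacleFeatures(&features, obstacle);                  // 22 values
  AppendLaneFeatures(&features, sequence, /*num_segments=*/10); // 40 values
  return features;
}

// Score every candidate LaneSequence of one obstacle with the MLP.
void EvaluateObstacle(Obstacle* obstacle, const MlpModel& model) {
  for (LaneSequence& seq : *obstacle->mutable_lane_sequences()) {
    seq.set_probability(model.Forward(BuildFeatures(*obstacle, seq)));
  }
}
```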
PredictorManager
this is where we answer the prediction question: where will the obstacle be located in the next 5 seconds? after EvaluatorManager(), we have the probability of each LaneSequence. we then choose the few LaneSequences with high probability and predict a possible obstacle motion for each of them, together with the ADCTrajectory info from the planning module.
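a schematic sketch of that selection; the threshold value and the helper GenerateLaneSequenceTrajectory are hypothetical:

```cpp
// Keep only LaneSequences whose probability clears a threshold, then
// generate one candidate trajectory per surviving sequence.
void PredictObstacle(Obstacle* obstacle, double prob_threshold) {
  for (const LaneSequence& seq : obstacle->lane_sequences()) {
    if (seq.probability() < prob_threshold) continue;
    obstacle->AddTrajectory(
        GenerateLaneSequenceTrajectory(*obstacle, seq, /*horizon_sec=*/5.0));
  }
}
```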
why a prediction module is needed
whether motion prediction is included depends on the computing ability of the ADS. on a low-power system the planning module updates at a really high frequency, so there is either no prediction, or prediction only needs to look ahead less than a few microseconds; but for a complex ADS with a low-frequency planning update, the ability to foresee the next few seconds is very helpful.
Planning
VehicleStateProvider
P&C Map
update routing response
in the PnC map, we need to query waypoints (in s, l coordinates), e.g. the current vehicle position or the target position, and find which routing laneSegment they fall in.
- get laneSegment info from routing response
- query waypoints
with UpdateRoutingResponse(), waypoints in s, l
coordinates are basically translated into a more readable representation: [route_index, roadSegid, PassageID, laneSegmentID]
generate RouteSegments
based on the current vehicle status and the routing response, get the set of all drivable paths; each RouteSegment
is one drivable path over a short horizon. the length of each RouteSegment depends on both the backward lookup length (30m by default) and the forward lookup length, which depends on the ego vehicle velocity, a time threshold (8s by default), min_distance (150m by default), and max_distance (250m by default). check here
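with the defaults quoted above, the forward lookup length could be computed along these lines; this sketch is consistent with these numbers, not the exact Apollo flags:

```cpp
// Forward lookup length: if the distance covered within the time threshold
// exceeds min_distance, look far ahead; otherwise use the minimum.
double LookForwardDistance(double velocity_mps) {
  const double kTimeThresholdSec = 8.0;   // time threshold
  const double kMinDistanceM = 150.0;     // min_distance
  const double kMaxDistanceM = 250.0;     // max_distance
  return velocity_mps * kTimeThresholdSec > kMinDistanceM ? kMaxDistanceM
                                                          : kMinDistanceM;
}
```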
- UpdateVehicleState()
it's not about updating vehicle velocity etc., but about finding the vehicle's current location adc_waypoint_
on the routing path and the index of the next routing waypoint next_routing_waypoint_index_.
- GetNeighborPassages()
basically, check the next connected and available passages, for situations where e.g. the next crossing lane is a left turn, or the current lane disappears
- GetRouteSegments()
once we get the neighbor passages, the last step is to check whether these passages are drivable: a passage is drivable only if the lane the ego vehicle is on is the same lane, or exactly the next lane, as the lane the vehicle projects onto in that passage, and the directions match; all other situations are not drivable.
additionally, adding forward and backward segments gives the current RouteSegments.
tip: a passage is the channel the vehicle drives in, i.e. a lane in the physical road; but the code only uses LaneSegment, there is no keyword Lane.
from RouteSegment to road sample points
RouteSegment is similar to a LaneSegment from the HD map, including laneID, start_s, and end_s, but with additional mapPathPoint info, e.g. heading direction and other traffic-area properties, which is used to look up the HD map.
whereas a LaneSegment is about 2m long, a RouteSegment can be much longer, containing several LaneSegments. if each LaneSegment provides a sample point, and these sample points are packaged as MapPathPoints, then a RouteSegment can be represented as a list of MapPathPoints. in practice both the start_point and end_point of each LaneSegment are also added as sample points, but overlapping points need to be removed, as the sketch below shows.
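a minimal sketch of that sample-point collection with overlap removal; the types are simplified:

```cpp
#include <cmath>
#include <vector>

struct MapPathPoint { double x, y, heading; };
struct LaneSegment { MapPathPoint start_point, end_point; };

// Collect the start/end points of every LaneSegment as sample points,
// skipping a point that overlaps the previously added one.
std::vector<MapPathPoint> SamplePoints(const std::vector<LaneSegment>& segs) {
  std::vector<MapPathPoint> pts;
  for (const auto& seg : segs) {
    for (const MapPathPoint& p : {seg.start_point, seg.end_point}) {
      if (!pts.empty() &&
          std::hypot(p.x - pts.back().x, p.y - pts.back().y) < 1e-6) {
        continue;  // duplicate of the previous sample point
      }
      pts.push_back(p);
    }
  }
  return pts;
}
```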
every two mapPathPoints can again be grouped into a LaneSegment2d, and the LaneSegment2ds in the same lane can be joined into one LaneSegment.
each small piece of LaneSegment2d helps to calculate the heading direction.
finally, RouteSegment has traffic zone info, e.g. crossroads, parking areas, etc.
ReferenceLineProvider
ReferenceLineProvider has two functions, check here:
- smoothing the sample path_points to get a list of anchor points, which are exactly the vehicle's driving waypoints. going from path_points to anchor_points is a resampling with a sparse (5m) sampling distance, additionally considering a one-sided driving correction factor.
as an example of the one-sided driving correction factor: when driving on a left curve, a human driver keeps a little to the left rather than keeping to the centerline.
- piecewise smoothing from anchor points
the basic idea is to split the anchor points into n subgroups and polyfit each subgroup; at the subgroup connection points the zeroth-order, first-order, and second-order derivatives need to be smooth. this finally reduces to an optimization problem with constraints.
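in symbols, assuming polynomial pieces f_i joined at anchor arclength s_i (this mirrors the description above, not Apollo's exact QP formulation), the smoothness conditions at each joint are:

```latex
f_i(s_i) = f_{i+1}(s_i), \qquad
f_i'(s_i) = f_{i+1}'(s_i), \qquad
f_i''(s_i) = f_{i+1}''(s_i)
```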
Frame
the previous PnC map and ReferenceLine assume an ideal driving environment; the Frame class considers obstacle behavior and traffic signs. check here
obstacle lagPrediction
predictionObstacle info is filtered by the following two cases before being used as the obstacle trajectory for planning (the two rules are sketched as one predicate after this list).
- for the latest prediction obstacles
an obstacle is considered only if its perception_confidence is larger than confidence_threshold (0.5 by default) and its distance to the ego vehicle is less than distance_threshold (30m by default)
- for history prediction obstacles
an obstacle is considered only if prediction_obstacles contains more than 3 obstacles, the obstacle has appeared more than min_appear_num (3 by default) times, and, for the same obstacle, the gap from its previous prediction to the latest prediction is no longer than max_disappear_num (5 by default)
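a sketch of that predicate; the constant names mirror the defaults above:

```cpp
// Decide whether one prediction obstacle should be used for planning,
// following the two filtering rules above.
bool KeepForPlanning(bool is_latest_frame, double perception_confidence,
                     double dist_to_ego_m, int total_obstacles,
                     int appear_num, int disappear_gap) {
  const double kConfidenceThreshold = 0.5;   // confidence_threshold
  const double kDistanceThresholdM = 30.0;   // distance_threshold
  const int kMinAppearNum = 3;               // min_appear_num
  const int kMaxDisappearNum = 5;            // max_disappear_num
  if (is_latest_frame) {
    return perception_confidence > kConfidenceThreshold &&
           dist_to_ego_m < kDistanceThresholdM;
  }
  // history frame: require enough obstacles and a stable track
  return total_obstacles > 3 && appear_num > kMinAppearNum &&
         disappear_gap <= kMaxDisappearNum;
}
```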
relative position from ego vehicle to obstacle
as we have the obstacles' LagPrediction trajectories, as well as the ReferenceLine for the ego vehicle, we now combine these two pieces of information to understand when the ego is safe and how far it can drive forward. the output is the reference line annotated with each obstacle's overlap info, including the time low_t
when the overlap begins and the time high_t
when it ends, and the start location low_s
and end location high_s
of the overlap.
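the per-obstacle overlap record can be summarized as follows (a schematic struct, not Apollo's actual type):

```cpp
// Overlap between one obstacle's predicted trajectory and the ego
// reference line, in station (s) / time (t) coordinates.
struct ObstacleOverlap {
  double low_t;   // time when the overlap begins (s)
  double high_t;  // time when the overlap ends (s)
  double low_s;   // station where the overlap starts (m)
  double high_s;  // station where the overlap ends (m)
};
```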
rule based behavior to handle overlap area
there are 11 rule-based behaviors:
- backside_vehicle
- change_lane
- crosswalk
- destination
- front_vehicle
- keep_clear
- pull_over
- reference_line_end
- rerouting
- signal_light
- stop_sign
void Planning::RunOnce() {
  // apply every rule-based traffic decider to each candidate reference line
  for (auto& ref_line_info : frame_->reference_line_info()) {
    traffic_decider.Init();
    traffic_decider.Execute(&ref_line_info);
  }
}