Serious Autonomous Vehicles



robust control theory

Posted on 2019-05-10 |

this is a review based on a CMU reference

state variable method

any Nth-order differential equation describing a control system can be reduced to N 1st-order equations, and these equations can be arranged in matrix form.

define x as the system state, y as the output, and u as the input; the state equations are then x' = Ax + Bu, y = Cx + Du.

modern (state-space) control methods can handle multiple-input multiple-output (MIMO) systems, can be optimized, and allow performance and cost models to be designed.
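As a worked example (a standard mass-spring-damper, added here for illustration, not from the original notes): the 2nd-order equation m q'' + c q' + k q = u reduces to two 1st-order equations with states x1 = q, x2 = q':

```latex
\dot{x} =
\begin{bmatrix} 0 & 1 \\ -k/m & -c/m \end{bmatrix} x
+
\begin{bmatrix} 0 \\ 1/m \end{bmatrix} u,
\qquad
y = \begin{bmatrix} 1 & 0 \end{bmatrix} x
```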

effects of uncertainty

observability

the ability to observe all of the parameters or state variables in the system

controllability

the ability to move a system from any given state to any desired state

stability

the bounded response to any bounded input

robust control theory might be stated as a worst-case analysis: bound the uncertainty.

metrics

how to model the behavior of the system under test is one of the most difficult challenges in designing a good control system.

adaptive control

set up observers for each significant state variable. At each iteration of the loop, the system learns about changes in the system parameters and moves closer to the desired behavior. The method may, however, suffer from convergence issues.

H2 or H-infinity

H2 control seeks to bound the power gain of the system, while H-infinity control seeks to bound the energy gain of the system. Gains in power or energy indicate operation near a pole of the transfer function.
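For reference, the textbook definitions behind the two names (added for context, not from the CMU notes): the H2 norm is an average, power-like gain over all frequencies, while the H-infinity norm is the worst-case energy gain:

```latex
\|G\|_2^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty}
\operatorname{tr}\left[G(j\omega)^{*}G(j\omega)\right]\,d\omega,
\qquad
\|G\|_\infty = \sup_{\omega}\ \bar{\sigma}\left(G(j\omega)\right)
```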

parameter estimation

parameter estimation works by establishing boundaries in the frequency domain that cannot be crossed if stability is to be maintained.

Lyapunov

the only universal technique for assessing non-linear systems; the method focuses on stability. Lyapunov functions, described as energy-like functions, are constructed to model the behavior of the real system.
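For the linear case, a standard construction (added for context) is the quadratic energy-like function

```latex
V(x) = x^{\top}Px,\quad P \succ 0, \qquad
\dot{V}(x) = x^{\top}\!\left(A^{\top}P + PA\right)x
```

If A^T P + P A = -Q for some Q ≻ 0, then V̇ < 0 along every trajectory of x' = Ax and the system is asymptotically stable.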

CarMaker 8.0

Posted on 2019-05-10 |

this is from the IPG open house in Shanghai

scenario generation task

data record

tracking vehicles, roads, obstacles

obj: lane, road, barriers, GPS input, vehicle position/orientation, fixed ID, type

the goal of recording is road building, which will be used in replay.

road build

GPS input + lane mark info + vehicle location –> vehicle trajectory

replay

run config, with lateral and longitudinal position as input

traffic vehicle location, speed

rearrange

input: traffic vehicle info + ego info, a list of traffic vehicle info

traffic vehicle manage:

1) maneuver control: free move

2) spawn control: lateral + longitudinal --> 23 cases

3) support for external plugins + maneuver triggers

Synthetic Scenario

junction assistant

road type + traffic rules + scenario –>

support road topology modification

support different environments: time of day, weather,

scenario editor to support OpenDRIVE import

standardization

PEGASUS + ASAM simulation standards

roads, scenarios, simulation interfaces

OpenDRIVE –> road topology

OpenSCENARIO –> abstract definitions of maneuvers & actions

Open Simulation Interface –> interface developed for PEGASUS

limitations

pre-defined route for the vehicle?

does the ego car have AI maneuvers?

Virtual Prototype

including gearbox loss model and gas model, via look-up tables

including hybrid powertrain architectures: automatic gearbox + parallel hybrid

including powertrain masses(engine, tank, gearbox, battery, motor)

including trailer data set generator

including damping top mount

Simulation test

supports: ADAS/AD, powertrain, vehicle dynamics

steering system visual case

effortless steering improves overall comfort and vehicle dynamics

reference measurements (steering-in-the-loop simulator) -> model parameter identification + software + ECU integration –> parameterization & validation -> training

how the steering system works

open loop to get mechanical characteristics(stiffness, friction..)

system performance with or without EPS

1) ideal(basis) model vs physical model

how to make the physical model co-work with the autopilot control model?

test bed

to support electrification, durability, balancing, driveability, powertrain calibration, connected powertrain

AI training with synthetic scenario

decision making

trajectory planning

image perception

Q: how to make sure the AI is robust? –>

what CarMaker can do for AI?

1) object annotation (vehicles, pedestrians) –> auto annotation

2) semantic segmentation

e.g. IPG Movie for automatic semantic segmentation

Q: what’s the hardware for ?

Cloud & CPU/GPU for Parallelization

Q: how to parallelize in Docker?

1) one test case per CPU

2) even for a single test run (with multiple sensors, multiple cars)

resources & distribution 

CPU: vehicle model, driver model, environments, ideal sensors

GPU: visualization, camera RSI, radar RSI, lidar RSI

Test runs in parallel

sensor setup (10 ultrasonic, 5 radar, 1 lidar, 1 camera)

host pc (with test manager) + 4 virtual machines

output: key figures, reports, statistics, queries

open architecture for scalable processing (on-premise and cloud)

big data analysis with DaSense by NorCom

  • how does it work?
  • external scheduler manager, PBS
  • HPC Light to support local-PC parallelization

new features in 8.0

virtual test driving 8.0

  • Simulink library (through Simscape)
  • Scenario Editor: vegetation generation, animated 3D objects, new models (vehicles, trailers, trucks, buses, buildings, houses, street furniture, pedestrians)
  • visualize road surfaces ..
  • IPG Movie
  • fisheye distortion from an external file
  • new sensor models (Lidar RSI)

Q: what's the difference between open-source and commercial tools?

Lidar RSI

Ideal perfect world –> ground truth

HiFi –> false positives & negatives

raw data –> RSI

supporting Lidar type:

moving laser & photo diode

moving mirrors 

solid state 

flash

input features :

  • Laser beam, including custom beam patterns, raytracing rays

  • Scene interaction, including atmospheric attenuation; dependency on color, material, surface, or transparency

  • Detection, including threshold, multiple echoes per beam, separability

output features:

  • sending & receiving direction of every beam
  • light intensity of every beam
  • time & length of flight
  • pulse width
  • number of interactions

Use Case: NIO Pilot

by Sun Peng

cases

inter-city, parking, closed spaces, crowded spaces

sensors: 3 front cameras, 4 surround cameras, 4 mm-wave radars, 12 ultrasonic sensors, 1 driver-monitor camera

highway pilot in June

perception: camera, radar, ultrasonic, HD map, localization

planning: path planning, maneuver decision, system control

cloud & AI

simulation usage

  • FDS -> cases -> SIL

  • platform –> cases -> regression test, abstraction & instantiation; scene reconstruction (in-house) / closed-loop SIL; traffic model training (to do)

  • integration -> HIL

what about vehicle dynamics? –> co-working of simulation and physical tests; simulation covers about 80%, and the rest comes from physical tests

data platform

upload nodes -> cloud

Medusa API server -> fleet management

Logstash –> Elasticsearch –> Kibana & Admin (tensn and Spark)

I think they are collecting data, and this data will be used for scene building and simulation in the future

data visualization

HIL

  • lane model simulation on HIL

  • fusion simulation on HIL

automation test

jenkins master –> jenkins slave (agent IPG) –> cloud

goal

simulation server <—> data center

parallel sim core + simulation monitor (data exchange service)

data processing + labelling + case management + traffic model training

(replay, SIL, REMODEL, visualization)

PID control

Posted on 2019-05-04 |

closed loop system

the set point is the desired or command value for the process variable. At any given moment, the difference between the process variable and the set point is used by the control system algorithm to determine the desired actuator output to drive the system.

in a closed-loop system, the process of reading sensors to provide constant feedback and calculating the desired actuator output is repeated continuously at a fixed loop rate.

control system performance is measured by applying a step function as the set point command variable and then measuring the response of the process variable.

rise time is the amount of time the system takes to go from 10% to 90% of the steady-state (final) value.

percent overshoot is the amount by which the process variable overshoots the final value, expressed as a percentage of the final value.

settling time is the time required for the process variable to settle to within a certain percentage (commonly 5%) of the final value.

steady-state error is the final difference between the process variable and the set point.
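As a minimal sketch of how these four metrics could be computed from a sampled step response (my illustration, not from the original post; it assumes a positive step and that the trace has settled by its last sample):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct StepMetrics {
  double rise_time = 0, percent_overshoot = 0, settling_time = 0,
         steady_state_error = 0;
};

// `pv` is the process variable sampled every `dt` seconds after a step to
// `set_point`.
StepMetrics Analyze(const std::vector<double> &pv, double dt, double set_point) {
  StepMetrics m;
  const double final_value = pv.back();
  // Rise time: time from the first 10% crossing to the first 90% crossing.
  double t10 = -1, t90 = -1;
  for (size_t i = 0; i < pv.size(); ++i) {
    if (t10 < 0 && pv[i] >= 0.1 * final_value) t10 = i * dt;
    if (t90 < 0 && pv[i] >= 0.9 * final_value) t90 = i * dt;
  }
  m.rise_time = t90 - t10;
  // Percent overshoot: peak excursion beyond the final value.
  const double peak = *std::max_element(pv.begin(), pv.end());
  m.percent_overshoot = 100.0 * (peak - final_value) / final_value;
  // Settling time: last instant the trace is outside the +/-5% band.
  for (size_t i = 0; i < pv.size(); ++i) {
    if (std::fabs(pv[i] - final_value) > 0.05 * std::fabs(final_value)) {
      m.settling_time = i * dt;
    }
  }
  m.steady_state_error = set_point - final_value;
  return m;
}
```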

disturbance rejection is a measure of how well the control system overcomes the effects of disturbances. Often a disturbance affects the process variables or the measurements of those variables, so it is important to design a control system that performs satisfactorily under worst-case conditions.

in a nonlinear system, the control parameters that produce a desired response at one operating point might not produce a satisfactory response at another operating point.

deadtime is the delay between when a process variable changes and when that change can be observed.

loop cycle is the interval of time between calls to the control algorithm; systems that change quickly or have complex behavior require faster control loop rates.

PID theory

proportional response

error is the difference between the set point and the process variable. The proportional gain (K_g) determines the ratio of the output response to the error signal.

e.g. if the error term has a magnitude of 10 and K_g is 5, the proportional response is 50.

increasing K_g will increase the speed of the control system response, but if K_g is too large, the process variable will oscillate: a high gain corrects so aggressively that the system repeatedly overshoots the set point, and the closed loop approaches instability.

integral response

the integral component sums the error term over time, so even a small error term will cause the integral component to increase slowly. The integral response continually increases unless the error is zero, and the effect is to drive the steady-state error to zero.

integral windup occurs when the integral action saturates the actuator while the controller still cannot drive the error signal toward zero
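A common mitigation is to clamp the accumulated integral so it cannot grow while the actuator is saturated. A minimal sketch (the limit is a hypothetical, actuator-dependent value; the post itself does not prescribe one):

```cpp
#include <algorithm>

double i_error = 0.0;
const double kIntegralLimit = 100.0;  // assumed bound, derived from actuator range

void UpdateIntegral(double cte, double dt) {
  i_error += cte * dt;
  // Clamping keeps the integral term from winding up during saturation.
  i_error = std::clamp(i_error, -kIntegralLimit, kIntegralLimit);
}
```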

derivative response

the derivative response is proportional to the rate of change of the process variable. Increasing the derivative time causes the control system to react more strongly and more quickly to changes in the error.

in practice, most control systems use a very small derivative time, since the derivative response is highly sensitive to noise.

tuning

which is the process of setting the optimal P, I, and D gains to get an ideal response.

trial and error

set I and D to zero and increase the P gain. Once P has been set to obtain a desirably fast response, increase I to stop the oscillations and achieve a minimal steady-state error. Once P and I have been set, increase D until the loop reaches its set point acceptably quickly.

filtering

assume a sinusoidal noise n(t) = a sin(ωt) with frequency ω; its derivative is dn/dt = aω cos(ωt), so the amplitude of the differentiated noise grows with ω.

so in practice it is necessary to limit the high-frequency gain of the derivative term, either by adding a low-pass filter to the control signal or by implementing the derivative term with a high-frequency cut-off.
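A minimal sketch of the low-pass option (the filter time constant `tau` is an assumed tuning parameter, not from the post):

```cpp
// First-order low-pass filter applied to the backward-difference derivative,
// limiting the high-frequency gain of the D term.
double FilteredDerivative(double cte, double dt, double tau,
                          double *prev_cte, double *d_filtered) {
  const double d_raw = (cte - *prev_cte) / dt;   // raw, noise-sensitive derivative
  const double alpha = dt / (tau + dt);          // smoothing factor in (0, 1)
  *d_filtered += alpha * (d_raw - *d_filtered);  // exponential smoothing
  *prev_cte = cte;
  return *d_filtered;
}
```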

Udacity Self driving Car project 9


// Sketch of the project's main loop (simplified; the uWS message plumbing
// and variable definitions from the Udacity template are elided).
int main() {
    uWS::Hub h;
    PID pid;
    pid.Init(p_init, i_init, d_init);  // initial gains found by tuning

    // The simulator message carries cte (the cross-track error) and speed.
    h.onMessage([&pid](/* ...simulator message with cte, speed, dt... */) {
        // Throttle heuristic: ease off (or brake) when the error changes fast.
        double diff = fabs(pid.p_error - cte);
        if (diff > 0.1 && diff < 0.2)
            thr = 0.0;
        else if (diff > 0.2 && speed > 30)
            thr = -0.2;

        pid.UpdateError(cte, dt);
        steer_value = -pid.TotalError(speed);
    });
}

void PID::UpdateError(double cte, double dt) {
    d_error = (cte - p_error) / dt;  // backward-difference derivative
    p_error = cte;                   // proportional term holds the latest error
    i_error += cte * dt;             // accumulate the integral of the error
}

// Gain scheduling with speed: less P and more D as the car goes faster.
double PID::TotalError(double speed) {
    return (Kp - 0.0032 * speed) * p_error + Ki * i_error +
           (Kd + 0.002 * speed) * d_error;
}

in a real self-driving control system, control is usually divided into lateral and longitudinal control, and across different scenarios there are plenty of PID controllers. Future posts will explore highway autopilot, auto parking, and urban L4.

reference

PID theory explained

PID control from Caltech

udacity pid control

Apollo 2.0 Localization Source Code

Posted on 2019-05-03 |

there are two localization methods: RTK and multi-sensor fusion (MSF). RTK uses GPS and IMU as inputs; MSF uses GPS, IMU, a lidar sensor, and the HD map as inputs. The output is a LocalizationEstimate object instance.

Status Localization::Start() {
  localization_ = localization_factory_.CreateObject(config_.type());
  localization_->Start();
  return Status::OK();
}

Status RTKLocalization::Start() {
  AdapterManager::Init(FLAGS_rtk_adapter_config_file);
  timer_ = AdapterManager::CreateTimer(ros::Duration(duration),
                                       &RTKLocalization::OnTimer, this);
  AdapterManager::GetGps();
  AdapterManager::GetImu();
  tf2_broadcaster_ = new tf2_ros::TransformBroadcaster();
  return Status::OK();
}

void RTKLocalization::OnTimer(const ros::TimerEvent &event) {
  AdapterManager::Observe();
  PublishLocalization();
  RunWatchDog();
}

void RTKLocalization::PublishLocalization() {
  LocalizationEstimate localization;
  PrepareLocalizationMsg(&localization);
  AdapterManager::PublishLocalization(localization);
  PublishPoseBroadcastTF(localization);
}

void RTKLocalization::PrepareLocalizationMsg(LocalizationEstimate *localization) {
  const auto &gps_msg = AdapterManager::GetGps()->GetLatestObserved();
  Imu imu_msg = AdapterManager::GetImu()->GetLatestObserved();
  ComposeLocalizationMsg(gps_msg, imu_msg, localization);
}

void RTKLocalization::ComposeLocalizationMsg(const localization::Gps &gps_msg,
                                             const localization::Imu &imu_msg,
                                             LocalizationEstimate *localization) {
  // add header
  // set measurement time
  // combine gps and imu
  auto mutable_pose = localization->mutable_pose();
  if (gps_msg.has_localization()) {
    const auto &pose = gps_msg.localization();
    if (pose.has_position()) { /* update mutable_pose position */ }
    if (pose.has_orientation()) { /* update mutable_pose orientation */ }
    if (pose.has_linear_velocity()) { /* ... */ }
  }
  if (imu_msg.has_linear_acceleration()) { /* update mutable_pose acceleration */ }
  if (imu_msg.has_angular_velocity()) { /* ... */ }
  if (imu_msg.has_euler_angles()) { /* ... */ }
}

MSF

vehicle localization based on multi-sensor fusion

class MSFLocalization {
  void InitParams();
  void OnPointCloud(const sensor_msgs::PointCloud2 &message);
  void OnRawImu(const drivers::gnss::Imu &imu_msg);
  void OnGnssRtkObs(const EpochObservation &raw_obs_msg);
  // ...
  void PublishPoseBroadcastTF(const LocalizationEstimate &localization);
};

Status MSFLocalization::Start() {
  AdapterManager::Init(FLAGS_msf_adapter_config_file);
  Status &&status = Init();
  AdapterManager::GetRawImu();
  AdapterManager::AddRawImuCallback(&MSFLocalization::OnRawImu, this);
  AdapterManager::GetPointCloud();
  AdapterManager::AddPointCloudCallback(&MSFLocalization::OnPointCloud, this);
  // ...
  return Status::OK();
}

there is an additional local_map module, which will be reviewed later.

Apollo 2.0 Prediction Source Code

Posted on 2019-05-03 |

Prediction

the prediction inputs are obstacles from the perception module and localization from the localization module. The outputs are obstacles with predicted trajectories.

there are three classes in the prediction module:

  • container: stores input data from subscribed channels, e.g. perception obstacles, vehicle localization
  • evaluator: predicts paths and speeds separately for any given obstacle
  • predictor: generates predicted trajectories for obstacles, e.g. lane sequence (the obstacle moves following the lanes), free movement, regional movement (moving within a possible region)
// module entry point
APOLLO_MAIN(apollo::prediction::Prediction);

Status Prediction::Init() {
  prediction_conf_.Clear();
  adapter_conf_.Clear();
  common::util::GetProtoFromFile(FLAGS_prediction_adapter_config_filename, &adapter_conf_);
  // initialize managers
  AdapterManager::Init(adapter_conf_);
  ContainerManager::instance()->Init(adapter_conf_);
  EvaluatorManager::instance()->Init(prediction_conf_);
  PredictorManager::instance()->Init(prediction_conf_);
  AdapterManager::GetLocalization();
  AdapterManager::GetPerceptionObstacles();
  AdapterManager::AddPerceptionObstaclesCallback(&Prediction::RunOnce, this);
  AdapterManager::AddLocalizationCallback(&Prediction::OnLocalization, this);
  AdapterManager::AddPlanningCallback(&Prediction::OnPlanning, this);
  return Status::OK();
}

void Prediction::RunOnce(const PerceptionObstacles &perception_obstacles) {
  ObstaclesContainer *obstacles_container = dynamic_cast<ObstaclesContainer *>(
      ContainerManager::instance()->GetContainer(AdapterConfig::PERCEPTION_OBSTACLES));
  obstacles_container->Insert(perception_obstacles);
  EvaluatorManager::instance()->Run(perception_obstacles);
  PredictorManager::instance()->Run(perception_obstacles);
  auto prediction_obstacles = PredictorManager::instance()->prediction_obstacles();
  for (auto const &prediction_obstacle : prediction_obstacles.prediction_obstacle()) {
    for (auto const &trajectory : prediction_obstacle.trajectory()) {
      for (auto const &trajectory_point : trajectory.trajectory_point()) {
        if (!IsValidTrajectoryPoint(trajectory_point)) {
          return;
        }
      }
    }
  }
  Publish(&prediction_obstacles);
}

void Prediction::OnLocalization(const LocalizationEstimate &localization);
void Prediction::OnPlanning(const planning::ADCTrajectory &adc_trajectory);

The Prediction interface defines RunOnce() and Publish(). The Prediction class defines the listeners' callbacks OnLocalization and OnPlanning, plus Start() and Stop(), and implements the interface functions. Both EvaluatorManager and PredictorManager have Init() and Run() functions, and there are a bunch of APIs from the container classes.

Evaluator

void EvaluatorManager::Init(const PredictionConf &config) {
  // ...
  switch (config.obstacle_type()) {
    case PerceptionObstacle::VEHICLE: {
      vehicle_on_lane_evaluator_ = obstacle_conf.evaluator_type();
      break;
    }
    case PerceptionObstacle::BICYCLE: {
      cyclist_on_lane_evaluator_ = obstacle_conf.evaluator_type();
      break;
    }
    case PerceptionObstacle::PEDESTRIAN: {
      break;
    }
    case PerceptionObstacle::UNKNOWN: {
      default_on_lane_evaluator_ = obstacle_conf.evaluator_type();
      break;
    }
  }
}

void EvaluatorManager::Run(const perception::PerceptionObstacles &perception_obstacles) {
  ObstaclesContainer *container = dynamic_cast<ObstaclesContainer *>(
      ContainerManager::instance()->GetContainer(AdapterConfig::PERCEPTION_OBSTACLES));
  Evaluator *evaluator = nullptr;
  for (const auto &perception_obstacle : perception_obstacles.perception_obstacle()) {
    int id = perception_obstacle.id();
    Obstacle *obstacle = container->GetObstacle(id);
    switch (perception_obstacle.type()) {
      case PerceptionObstacle::VEHICLE: {
        if (obstacle->IsOnLane()) {
          evaluator = GetEvaluator(vehicle_on_lane_evaluator_);
        }
        break;
      }
      case PerceptionObstacle::BICYCLE: {
        if (obstacle->IsOnLane()) {
          evaluator = GetEvaluator(cyclist_on_lane_evaluator_);
        }
        break;
      }
      // ...
    }
    if (evaluator != nullptr) {
      evaluator->Evaluate(obstacle);
    }
  }
}

there are a few different evaluators: MLP (multilayer perceptron) and RNN (a deep neural network); both will be discussed in detail in the future.

Predictor

the predictor generates trajectories; a few of its APIs:

// a few Predictor APIs:
virtual void Predict(Obstacle *obstacle) = 0;
void TrimTrajectories(const Obstacle *obstacle, const ADCTrajectoryContainer *adc_trajectory_container);
static Trajectory GenerateTrajectory(const std::vector<apollo::common::TrajectoryPoint> &points);

void PredictorManager::Init(const PredictionConf &config) {
  // ...
  switch (obstacle_conf.obstacle_type()) {
    case PerceptionObstacle::VEHICLE: {
      if (obstacle_conf.obstacle_status() == ObstacleConf::ON_LANE) {
        vehicle_on_lane_predictor_ = obstacle_conf.predictor_type();
      } else if (obstacle_conf.obstacle_status() == ObstacleConf::OFF_LANE) {
        vehicle_off_lane_predictor_ = obstacle_conf.predictor_type();
      }
      break;
    }
    // ...
  }
}

void PredictorManager::Run(const PerceptionObstacles &perception_obstacles) {
  ObstaclesContainer *obstacles_container = dynamic_cast<ObstaclesContainer *>(
      ContainerManager::instance()->GetContainer(AdapterConfig::PERCEPTION_OBSTACLES));
  ADCTrajectoryContainer *adc_trajectory_container = dynamic_cast<ADCTrajectoryContainer *>(
      ContainerManager::instance()->GetContainer(AdapterConfig::PLANNING_TRAJECTORY));
  Predictor *predictor = nullptr;
  for (const auto &perception_obstacle : perception_obstacles.perception_obstacle()) {
    int id = perception_obstacle.id();
    PredictionObstacle prediction_obstacle;
    Obstacle *obstacle = obstacles_container->GetObstacle(id);
    if (obstacle != nullptr) {
      switch (perception_obstacle.type()) {
        case PerceptionObstacle::VEHICLE: {
          if (obstacle->IsOnLane()) {
            predictor = GetPredictor(vehicle_on_lane_predictor_);
          } else {
            predictor = GetPredictor(vehicle_off_lane_predictor_);
          }
          break;
        }
        // ...
      }
      if (predictor != nullptr) {
        predictor->Predict(obstacle);
        if (obstacle->type() == PerceptionObstacle::VEHICLE) {
          predictor->TrimTrajectories(obstacle, adc_trajectory_container);
        }
        for (const auto &trajectory : predictor->trajectories()) {
          prediction_obstacle.add_trajectory()->CopyFrom(trajectory);
        }
      }
    }
    prediction_obstacle.mutable_perception_obstacle()->CopyFrom(perception_obstacle);
    prediction_obstacles_.add_prediction_obstacle()->CopyFrom(prediction_obstacle);
  }
}

the predictor has a few types: regional-based, free move, lane sequence, etc., which are defined as subclasses and will be discussed more in the future.

Container

the container manager class has a few subclasses, such as the ADC trajectory container, the obstacles container, and the pose container, which should be discussed in detail later.

Adapter

// in adapter_manager.cc
case AdapterConfig::LOCALIZATION:
  EnableLocalization(FLAGS_localization_topic, config);
case AdapterConfig::PERCEPTION_OBSTACLES:
  EnablePerceptionObstacles(FLAGS_perception_obstacle_topic, config);

// in message_adapters.h
using PerceptionObstaclesAdapter = Adapter<perception::PerceptionObstacles>;
using LocalizationAdapter = Adapter<apollo::localization::LocalizationEstimate>;

// in adapter_manager.h (inside REGISTER_ADAPTER(name))
static void Enable##name(const std::string &topic_name, const AdapterConfig &config) {
  instance()->InternalEnable##name(topic_name, config);
}
static name##Adapter *Get##name() {
  return instance()->InternalGet##name();
}
static void Add##name##Callback(name##Adapter::Callback callback) {
  instance()->name##_->AddCallback(callback);
}
template <class T>
static void Add##name##Callback(void (T::*fp)(const name##Adapter::DataType &data), T *obj) {
  Add##name##Callback(std::bind(fp, obj, std::placeholders::_1));
}

Apollo 2.0 Control Source Code (3)

Posted on 2019-05-01 |

the input of the Apollo control module includes chassis info, localization info, and planning info; the output is steering angle, acceleration, and throttle.

// module entry point
#define APOLLO_MAIN(APP)
int main(int argc, char **argv) {
  google::InitGoogleLogging(argv[0]);
  google::ParseCommandLineFlags(&argc, &argv, true);
  signal(SIGINT, apollo::common::apollo_app_sigint_handler);
  APP apollo_app_;
  ros::init(argc, argv, apollo_app_.Name());
  apollo_app_.Spin();  // see the previous post (1)
  return 0;
}

Status Control::Init() {
  init_time_ = Clock::NowInSeconds();
  common::util::GetProtoFromFile(FLAGS_control_conf_file, &control_conf_);
  AdapterManager::Init(FLAGS_control_adapter_config_filename);
  common::monitor::MonitorLogBuffer buffer(&monitor_logger_);
  controller_agent_.Init(&control_conf_);
  AdapterManager::GetLocalization();
  AdapterManager::GetChassis();
  AdapterManager::GetPlanning();
  AdapterManager::GetControlCommand();
  AdapterManager::GetMonitor();
  AdapterManager::AddMonitorCallback(&Control::OnMonitor, this);
  return Status::OK();
}

conf_file: /modules/control/conf/lincoln.pb.txt, which is derived from /modules/calibration/data/mkz8/calibration_table.pb.txt

the topics in control modules are :

config {
  type: LOCALIZATION
  mode: RECEIVE_ONLY
}
config {
  type: PLANNING_TRAJECTORY
  mode: RECEIVE_ONLY
}
config {
  type: CHASSIS
  mode: RECEIVE_ONLY
}
config {
  type: CONTROL_COMMAND
  mode: PUBLISH_ONLY
}
config {
  type: MONITOR
  mode: DUPLEX
}

basically, the control module receives the /localization, /planning, and /chassis topics and publishes /control_command.
the controller_agent is an interface class, so it can support user-defined controller algorithms, which only need to be configured through `control_conf`:

Status ControllerAgent::Init(const ControlConf *control_conf) {
  RegisterControllers(control_conf);
  InitializeConf(control_conf);
  for (auto &controller : controller_list_) {
    if (controller == NULL || !controller->Init(control_conf_).ok()) {
      return Status(ErrorCode);
    }
  }
  return Status::OK();
}

void ControllerAgent::RegisterControllers(const ControlConf *control_conf) {
  for (auto active_controller : control_conf->active_controllers()) {
    switch (active_controller) {
      case ControlConf::MPC_CONTROLLER:
        controller_factory_.Register(
            ControlConf::MPC_CONTROLLER,
            []() -> Controller * { return new MPCController(); });
        break;
      // case LAT_CONTROLLER
      // case LON_CONTROLLER
    }
  }
}

Status Control::Start() {
  // sleep so the advertised channels are ready
  std::this_thread::sleep_for(std::chrono::milliseconds(1000));
  timer_ = AdapterManager::CreateTimer(ros::Duration(control_conf_.control_period()),
                                       &Control::OnTimer, this);
  common::monitor::MonitorLogBuffer buffer(&monitor_logger_);
  return Status::OK();
}

void Control::OnTimer(const ros::TimerEvent &) {
  double start_timestamp = Clock::NowInSeconds();
  ControlCommand control_command;
  Status status = ProduceControlCommand(&control_command);
  double end_timestamp = Clock::NowInSeconds();
  status.Save(control_command.mutable_header()->mutable_status());
  SendCmd(&control_command);
}

Status Control::ProduceControlCommand(ControlCommand *control_command) {
  Status status = CheckInput();
  Status status_ts = CheckTimestamp();
  Status status_compute = controller_agent_.ComputeControlCommand(
      &localization_, &chassis_, &trajectory_, control_command);
  return status;
}

Status ControllerAgent::ComputeControlCommand(
    const localization::LocalizationEstimate *localization,
    const canbus::Chassis *chassis, const planning::ADCTrajectory *trajectory,
    control::ControlCommand *cmd) {
  for (auto &controller : controller_list_) {
    controller->ComputeControlCommand(localization, chassis, trajectory, cmd);
  }
  return Status::OK();
}

ControllerAgent::ComputeControlCommand() is the interface; the real control command is computed inside each registered controller. There are a few controller modes in Apollo, e.g. MPC, LAT (lateral), and LON (longitudinal).

in the next few days I will continue working on the localization, prediction, and decision modules in Apollo 2.0

Apollo 2.0 Routing Source Code (2)

Posted on 2019-05-01 |
// module entry point
APOLLO_MAIN(apollo::routing::Routing)
int main(int argc, char **argv) {
  google::InitGoogleLogging(argv[0]);
  google::ParseCommandLineFlags(&argc, &argv, true);
  signal(SIGINT, apollo::common::apollo_app_sigint_handler);
  apollo::routing::Routing apollo_app_;
  ros::init(argc, argv, apollo_app_.Name());
  apollo_app_.Spin();
  return 0;
}

`apollo_app_.Spin()` can be found [here](https://zjli2013.github.io/2019/04/28/apollo-planning-源码/)

apollo::common::Status Routing::Init() {
  const auto routing_map_file = apollo::hdmap::RoutingMapFile();
  navigator_ptr_.reset(new Navigator(routing_map_file));
  common::util::GetProtoFromFile(FLAGS_routing_conf_file, &routing_conf_);
  hdmap_ = apollo::hdmap::HDMapUtil::BaseMapPtr();
  AdapterManager::Init(FLAGS_routing_adapter_config_filename);
  AdapterManager::AddRoutingRequestCallback(&Routing::OnRoutingRequest, this);
  return apollo::common::Status::OK();
}
/*
DEFINE_string(routing_adapter_config_filename, "modules/routing/conf/adapter.conf",
              "the adapter config filename")
*/
void AdapterManager::Init() {
  // ...
  for (const auto &config : configs.config()) {
    switch (config.type()) {
      case AdapterConfig::ROUTING_REQUEST:
        EnableRoutingRequest(FLAGS_routing_request_topic, config);
        break;
      case AdapterConfig::ROUTING_RESPONSE:
        EnableRoutingResponse(FLAGS_routing_response_topic, config);
        break;
      case AdapterConfig::MONITOR:
        EnableMonitor(FLAGS_monitor_topic, config);
        break;
      // ...
    }
  }
}

where is EnableRoutingRequest() defined?

// apollo/modules/common/adapters/adapter_manager.h
#define REGISTER_ADAPTER(name)
static void Enable##name(const std::string &topic_name, const AdapterConfig &config) {
  instance()->InternalEnable##name(topic_name, config);
}
template <class T>
static void Add##name##Callback(void (T::*fp)(const name##Adapter::DataType &data), T *obj) {
  Add##name##Callback(std::bind(fp, obj, std::placeholders::_1));
}
template <class T>
static void Add##name##Callback(void (T::*fp)(const name##Adapter::DataType &data)) {
  Add##name##Callback(fp);
}

// apollo/modules/common/adapters/message_adapters.h
using RoutingRequestAdapter = Adapter<routing::RoutingRequest>;
using RoutingResponseAdapter = Adapter<routing::RoutingResponse>;

// apollo/modules/common/adapters/adapter.h
typedef typename std::function<void(const D &)> Callback;

template <class D>
void Adapter<D>::OnReceive(const D &message) {
  last_receive_time_ = apollo::common::time::Clock::NowInSeconds();
  EnqueueData(message);
  FireCallbacks(message);
}

void AddCallback(Callback callback) {
  receive_callbacks_.push_back(callback);
}

template <class D>
void Adapter<D>::FireCallbacks(const D &data) {
  for (const auto &callback : receive_callbacks_) {
    callback(data);
  }
}

The Dreamview and Planning modules have message publishing APIs, e.g.

SimulationWorldUpdater::SimulationWorldUpdater(WebSocketHandler *websocket,
                                               SimControl *sim_control,
                                               const MapService *map_service,
                                               bool routing_from_file)
    : sim_world_service_(map_service, routing_from_file),
      map_service_(map_service),
      websocket_(websocket),
      sim_control_(sim_control) {
  // ...
  websocket_->RegisterMessageHandler(
      "SendRoutingRequest",
      [this](const Json &json, WebSocketHandler::Connection *conn) {
        RoutingRequest routing_request;
        bool succeed = ConstructRoutingRequest(json, &routing_request);
        if (succeed) {
          AdapterManager::FillRoutingRequestHeader(FLAGS_dreamview_module_name,
                                                   &routing_request);
          AdapterManager::PublishRoutingRequest(routing_request);
        }
      });
}

The AdapterManager class is used to manage the connections among the modules via ROS message types.

The Routing class takes GPS and IMU input and generates routing and velocity info as output.

The Navigator class uses the A* algorithm, with the HD map, start point, and end point as input, to generate a navigation route.
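For context, a minimal A* sketch over an abstract road graph (illustrative only; the graph representation is hypothetical and this is not Apollo's actual Navigator code):

```cpp
#include <functional>
#include <queue>
#include <unordered_map>
#include <utility>
#include <vector>

struct Edge { int to; double cost; };

// Minimal A*: cheapest-path cost from `start` to `goal` over an adjacency
// list, or -1 if unreachable. `h` is a heuristic (e.g. straight-line
// distance to the goal) that must never overestimate the true cost.
double AStar(const std::vector<std::vector<Edge>> &graph,
             const std::vector<double> &h, int start, int goal) {
  using Item = std::pair<double, int>;  // (f = g + h, node)
  std::priority_queue<Item, std::vector<Item>, std::greater<Item>> open;
  std::unordered_map<int, double> g{{start, 0.0}};
  open.push({h[start], start});
  while (!open.empty()) {
    const int u = open.top().second;
    open.pop();
    if (u == goal) return g[u];
    for (const Edge &e : graph[u]) {
      const double tentative = g[u] + e.cost;
      auto it = g.find(e.to);
      if (it == g.end() || tentative < it->second) {
        g[e.to] = tentative;  // found a cheaper way to reach e.to
        open.push({tentative + h[e.to], e.to});
      }
    }
  }
  return -1.0;
}
```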

Where Will I Be in 5 Years (7)

Posted on 2019-04-30 |

Gimmick teams

One has to grow in a company that has a sense of crisis. Teams that live off investors tend to show only their prosperous side, both internally and externally, while the company's actual products and market positioning are unclear even to its own employees. A company that is not honest with its employees will not be honest with its market and customers either.

Private-enterprise culture

Having observed a private company (Great Wall Motors) up close, I realized how hard it is for private firms to clear the hurdle of internationalization. Why internationalize? Because capital markets are winner-take-all: once international players enter the domestic industry, local firms must go global to survive.

The boss's personal imprint on a private company runs too deep. Centralized management fits traditional Chinese culture but is far from modern management thinking. A collective system that stresses military-style management cannot mobilize individual initiative; it may work for shop-floor workers, but it creates a dual-track management system and internal tension. A collective system also breeds arrogance and narrow-minded conceit in some individuals, which does not help cultivate personal character. A team that represents advanced production relations and productivity will not tolerate formalism.

Decent domestic companies run either on "wolf culture" or on hard struggle. Encouraging an entrepreneurial atmosphere is fine, but the culture easily drifts toward rigidity, interpreting the rules far too literally.

Huawei, on the other hand, stresses hard work without centralizing power and gives employees shares; perhaps that is what a modern, excellent company should look like. Treating employees as "partners" naturally motivates them, instead of making them work for some boss.

Before returning to China, my idea was to take part in a team's growth, not merely to work for a company. Beyond material benefits, what people crave now is identity at work: being a "partner".

The playbook

The domestic playbook: make the announcement first.

Before reform and opening-up, the state strictly forbade private ownership, yet small farmers in Jiangsu and Zhejiang started household land contracting, which was later rolled out nationwide. Facing new situations, national leaders are also feeling their way, yet they must present a consistent message to ordinary people. So they make the announcement first, then gradually revise it in practice, sometimes ending up with practices completely opposite to what was announced. If an ordinary person takes the announcement too literally and argues with the leadership, he does not understand the playbook.

Some companies make baffling rules and grandly call them "corporate culture". For rules that obviously defy common sense, that is, this kind of "announcement", the leadership does not really know how to manage either; employees just need to understand and carry on as before. But trying to challenge the leadership's rules over them means you do not understand the playbook.

Promotion or suppression

There is a domestic saying, "standing in the wrong line": it is not about ability, you just did not follow the right person. In the American workplace, managers at least appear approachable, and wage earners basically live without existential worry, so the two sides get along and trust each other relatively easily. The domestic workplace, by contrast, has no mature system yet, so the risk of "standing in the wrong line" exists. Everyone is under pressure to make a living and inevitably becomes an implicit competitor distrustful of others, so in the domestic workplace suppression probably outweighs promotion. This is of course a bad habit.

I happened to look at a map of Michigan and a wave of fondness welled up. In America, the environment takes goodwill as the default setting; at work and in life you receive help from others, intentionally or not, and likewise help others. Goodwill is easy to express. In the domestic environment, by contrast, both the workplace and daily life carry a certain sense of suppression.

Take the sense of oppression in daily life. Friends in North America all say going back to China is great, the food is so good. In reality you hardly dare eat out, worrying whether the meat is clean, whether the oil is clean, whether seasonings that should not be there were added. When getting anything done, you constantly worry about being cheated without even knowing it. An immature social system carries hidden costs like these.

Final words

Only four days after writing the above did I get a chance to open it again. Working so much that there is no time to read is the greatest drain on a person. I hope to adapt quickly to the pace of work and life in China without being enslaved by it.

A personal recollection

P.S. During military training, running 8 km every day and standing at attention for an hour and a half, that environment actually spurred me to think about how to make high standards my default.

The road to management

I thought I was detail-oriented; the domestic work environment takes up so much personal time that there is no time to think about the big picture. Let high standards become the default.

I increasingly see that technology is flat; what companies are really willing to pay for is work processes and system building. Writing programs, deriving formulas, doing operations, even doing basic research: all become easily replaceable at a certain stage. "Walking on one leg" is high risk. A product person in any field, by contrast, has at least one strong leg plus many decent smaller ones. What organically combines all the links and positions is management, and at work one needs to consciously cultivate a systems (management) mindset.

Today's private companies are still poorly managed. Culturally, although they push some processes, there are institutional limits: on the one hand they loudly call for breaking old thinking and innovating, while on the other they constrain behavioral details and build walls in employees' minds. Establishing work processes is not a matter of a few lectures, a few trainings, or a few exhortations from leaders; it is not shouted into being but cultivated like a work habit, practiced repeatedly until it merges into the corporate culture and becomes employees' natural behavior.

At the daily level, task management and time management are both lacking. Bosses want employees to work overtime but never define the work well; there is no task management and no day-to-day evaluation mechanism. Responsibilities are unclear and effective working patterns are missing: when a problem appears, a whole crowd gathers around it, or a meeting discussion never ends. At the larger level, the work processes simply have not been established.

In future work, I should regularly step back and think about system building, rather than compete with the young on stamina.

Apollo 2.0 Planning Source Code (1)

Posted on 2019-04-28 |
// module entry point
APOLLO_MAIN(apollo::planning::Planning)
int main(int argc, char **argv) {
  google::InitGoogleLogging(argv[0]);
  google::ParseCommandLineFlags(&argc, &argv, true);
  signal(SIGINT, apollo::common::apollo_app_sigint_handler);
  apollo::planning::Planning apollo_app_;
  ros::init(argc, argv, apollo_app_.Name());
  apollo_app_.Spin();
  return 0;
}

///////////////////////////////////////////////////////
int ApolloApp::Spin() {
  ros::AsyncSpinner spinner(callback_thread_num_);
  auto status = Init();
  status = Start();
  spinner.start();
  ros::waitForShutdown();
  Stop();
  return 0;
}

///////////////////////////////////////////////////////
Status Planning::Init() {
  hdmap_ = apollo::hdmap::HDMapUtil::BaseMapPtr();
  apollo::common::util::GetProtoFromFile(FLAGS_planning_config_file, &config_);
  if (!AdapterManager::Initialized()) {
    AdapterManager::Init(FLAGS_planning_adapter_config_filename);
  }
  AdapterManager::GetLocalization();
  AdapterManager::GetChassis();
  AdapterManager::GetRoutingResponse();
  AdapterManager::GetRoutingRequest();
  if (FLAGS_enable_prediction)
    AdapterManager::GetPrediction();
  if (FLAGS_enable_traffic_light)
    AdapterManager::GetTrafficLightDetection();
  ReferenceLineProvider::instance()->Init(
      hdmap_, config_.qp_spline_reference_line_smoother_config());
  RegisterPlanners();
  planner_ = planner_factory_.CreateObject(config_.planner_type());
  return planner_->Init(config_);
}
/* DEFINE_string(planning_adapter_config_filename,
                 "modules/planning/conf/adapter.conf",
                 "The adapter configuration file")
*/

///////////////////////////////////////////////////////////////
void AdapterManager::Init(const std::string &adapter_config_filename) {
  AdapterManagerConfig configs;
  util::GetProtoFromFile(adapter_config_filename, &configs);
  Init(configs);
}

//////////////////////////////////////////////////////////////
void AdapterManager::Init(const AdapterManagerConfig &configs) {
  if (Initialized()) return;
  instance()->initialized_ = true;
  if (configs.is_ros()) {
    instance()->node_handle_.reset(new ros::NodeHandle());
  }
  for (const auto &config : configs.config()) {
    switch (config.type()) {
      case AdapterConfig::CHASSIS:
        EnableChassis(FLAGS_chassis_topic, config);
        break;
      case AdapterConfig::LOCALIZATION:
        EnableLocalization(FLAGS_localization_topic, config);
        break;
      // ...
    }
  }
}
/* DEFINE_string macros:
   FLAGS_chassis_topic -> /apollo/canbus/chassis
   FLAGS_localization_topic -> /apollo/localization/pose
*/

where is EnableChassis defined?

in /apollo/modules/common/adapters/message_adapters.h

using ChassisAdapter = Adapter<::apollo::canbus::Chassis>;
using GpsAdapter = Adapter<apollo::localization::Gps>;
using PlanningAdapter = Adapter<planning::ADCTrajectory>;

in /apollo/modules/common/adapters/adapter_manager.h

#define REGISTER_ADAPTER(name)
 public:
  static void Enable##name(const std::string &topic_name, const AdapterConfig &config) {
    // ...
  }

To be continued tomorrow.

Autonomous Driving Job Descriptions

Posted on 2019-04-08 |

光庭科技 (Wuhan)

Connected-vehicle on-board terminal products; an automotive IT company doing in-house R&D, product design, manufacturing, and sales. E.g. autonomous driving controllers, remote wireless communication terminals, stereo cameras, map sensors, and the "光谷梦4.0" autonomous driving system.

Position: perception algorithm engineer

Requirements: image/point-cloud object detection and tracking; SLAM algorithms; camera/lidar/radar sensor fusion algorithms; system integration and debugging

Bonus: ROS development experience, visual SLAM algorithms, deep learning algorithms

Position: control algorithm engineer

Requirements: control system design, signal processing, dynamic system modeling; time-/frequency-domain controller design and stability analysis; vehicle speed and steering control

Position: test lead

Requirements: product test planning and management; software testing and automated test tools

华砺智行 (Wuhan)

Intelligent infrastructure platform, providing hardware and software solutions for intelligent driving and smart cities. Products: connected-vehicle terminals, transportation industry, wireless communication, cloud platform, traffic applications, etc.

天迈科技 (Zhengzhou)

Integrated solutions for urban bus operation, management, and services. Connected-vehicle products: intelligent bus dispatching systems, remote monitoring systems, intelligent bus fare-collection systems, charging operation management.

Position: video algorithm engineer

Requirements: driver behavior and state detection algorithms; machine-learning detection performance

赢彻科技

Logistics

纵目科技

Surround-view ADAS solutions

Position: vehicle body control algorithm engineer

Requirements: design, simulation, and testing of lateral/side vehicle control algorithms; automatic control theory, vehicle chassis control, body dynamics, CarSim, Simulink

西井科技 (Shanghai)

Position: simulation platform development engineer

Requirements: designing the simulation platform architecture; sensor simulation models; 3D physics engines (Unreal, Unity); familiarity with open-source simulation platforms (CARLA, AirSim); building scenario environments

小鹏汽车 (Guangzhou)

Position: SLAM expert

Requirements: visual localization algorithms; building a 3D vision system together with the perception and map modules

Position: HD map engineer

Requirements: large-scale multi-layer HD map framework, map APIs, map data quality evaluation, map data formats

Position: motion control algorithm expert

Requirements: active control of steering, braking, and powertrain; control requirements on chassis and actuators; motion control for lane keeping, adaptive cruise, automatic lane change, etc.; field tuning

Position: planning and control expert

Requirements: algorithm development and testing; raising hardware design and integration requirements

Position: radar algorithm engineer

Requirements: algorithm simulation and validation, data analysis, embedded-platform implementation and maintenance

Position: computer vision

Requirements: scene element recognition; semantic segmentation of road and parking environments; algorithm validation

Position: sensor fusion

Requirements: test development for mm-wave radar, ultrasonic, camera, and lidar; perception data processing, real-time map building and applications; real-time object tracking and prediction

Position: project manager

Requirements: project management for L2-L4 product solutions

宇通汽车 (Shenzhen)

Position: control engineer

Requirements: trajectory tracking, vehicle control, simulation optimization

Position: behavior decision engineer

Requirements: behavior decision algorithm development; collision prediction, behavior prediction, driving-experience library, traffic-safety rule library, etc.; reinforcement-learning-based tracking (?)

Position: chief engineer

Requirements: architecture planning and design for self-driving buses; core algorithm development and testing; tracking autonomous-driving technology. Familiar with AI, machine vision, deep learning, HD maps, localization, etc.; familiar with filtering algorithms and trajectory planning

纽励科技

Position: ADAS product manager

Requirements: researching customer requirements, collecting and organizing projects; communicating technical solutions with OEMs; product improvement. Familiar with AEB, ACC, LKA, FCW, automated parking; familiar with automotive electronics hardware/software design and embedded development standards; familiar with sensors

光束汽车

Position: control algorithms

华人运通

Position: control engineer

Requirements: design, development, simulation, optimization, and testing of motion control algorithms; vehicle motion control algorithms based on driver operation data

恒大汽车

In-house development on existing vehicle platforms and products

Position: simulation and validation manager

Requirements: automotive electronic networks, diagnostics development and testing

R&D center structure: styling center, powertrain center, connected-vehicle center, vehicle engineering center, autonomous driving center

上海电气 (Shanghai Electric)

Intelligent systems for rail and public transit

Position: control algorithm engineer

Requirements: real-time data acquisition for vehicle control, theoretical modeling, system simulation, and field-validation algorithms

Magna Steyr

Position: L4 ADAS electronics expert

Requirements: ADAS systems engineering team

Position: GNSS test & validation chief engineer

牧月科技

Position: simulation system engineer

Requirements: simulation system development and testing, plus simulation software libraries for various sensors; building simulation scenarios from existing data; developing an automated simulation analysis platform

景驰科技 (文远知行)

Position: simulation algorithm engineer

Requirements: developing automated scenario-generation algorithms, interactive simulation software toolsets, and scalable computing frameworks

Tencent CSIG business group

伟世通 (Visteon)

Position: software testing

Requirements: ADAS software testing, competitor analysis

Bosch (Suzhou)

Position: control algorithm engineer

Requirements: ADAS automated parking systems

亿伽通 (Geely affiliate)

Position: algorithm engineer
Requirements: positioning product scenarios and requirements analysis; helping build an autonomous-driving data intelligence service platform; familiar with vision-based SLAM solutions

Audi China (Beijing)

Connected-vehicle team

Position: test engineer

Requirements: ADAS/HAD functional testing

Position: simulation engineer

Requirements: SiL environment setup, ADAS function simulation testing

斑马智行 (Alibaba, SAIC)

智加科技 (Suzhou)

Position: sensor calibration engineer

Requirements: sensor selection, data readout, intrinsic and extrinsic calibration, online monitoring

Position: perception algorithm engineer

Requirements: detection and tracking of static scene elements in images and point clouds; object detection and tracking on fused sensor data; serving in-house map building and the prediction/planning/decision systems

Position: mapping & localization engineer

Requirements: developing multi-sensor fusion algorithms to improve map accuracy; automating large-scale map collection, generation, annotation, and correction

Position: decision & planning engineer

Requirements: prediction, decision, and planning systems

Position: simulation engineer

Requirements: building a simulation framework for dynamic environments from real or synthetic data; designing scenarios and developing simulation test interfaces for the perception, planning, and prediction modules; developing an automated analysis platform based on simulation tests; building a scalable simulation framework

魔视智能 (Shanghai)

极目智能

Position: vehicle decision algorithm engineer

零跑科技

Position: algorithm engineer

Requirements: localization algorithm module, GPS/IMU-based dead-reckoning algorithms, visual localization algorithms

Alibaba (AI Labs)

Position: simulation algorithm engineer

吉利 (Geely)

Volvo Shanghai R&D center

中智行 (Nanjing)

L4 solutions

同元软控 (Suzhou)

Domestic CAD/CAE product supplier

Huawei 2012 Labs

Huawei HiSilicon (Shanghai)

Position: software testing

一汽红旗 (FAW Hongqi)

Position: connected-vehicle architecture designer

Requirements: T-Box device-cloud architecture, V2X

Potential candidates:

  • connected-vehicle companies: 斑马, 亿伽通, 博泰, Baidu's connected-vehicle team, 飞驰镁物, Harman, 四维图新, 梧桐车联
  • new EV players: 蔚来 (NIO), 小鹏 (XPeng), 威马 (WM Motor)

Position: software algorithm & systems engineer

Potential candidates:

  • SAIC AI Lab
  • NVIDIA
  • Ford / Argo AI
  • Momenta, PlusAI

Position: data mining engineer

Requirements: big-data modeling and analysis of user data, vehicle data, driver data, etc.

Potential candidates:

  • 浪潮 (Inspur)
  • 曙光 (Sugon)
  • Huawei Cloud
  • Alibaba Cloud
  • Baidu

华域汽车

滴滴 (DiDi) autonomous vehicles

美团 (Meituan) autonomous vehicles

Position: simulation engineer

一汽大众 (FAW-Volkswagen)

Position: decision algorithm engineer

Veoneer

Position: ADAS software development engineer

易航智能

Position: control algorithm engineer

三一 (SANY) autonomous driving

Autonomous container terminals

Position: algorithm engineer

易高美

Position: HD-map simulation engineer

Requirements: automated generation of simulation data in large volumes; designing the low-level architecture of a distributed simulation system; developing the 3D-engine editor toolchain; implementing scene rendering in the simulation system; developing automated scripting tools for large volumes of varied, randomized synthetic data. OpenGL/Direct3D graphics APIs; Unreal, Unity engines
