Serious Autonomous Vehicles



particle filter (2)

Posted on 2019-04-03 |

The previous post introduced the mathematics of Bayes filtering and the Monte Carlo localization algorithm. This post introduces sequential importance sampling. The idea of the particle filter is to approximate the posterior probability distribution p(x) with a weighted set of particles.

Monte Carlo integration

Define a continuous random variable X with probability density function p(X), and define Y = f(X). Then the expectation of the random variable Y is:

$$ E[Y] = \int f(x) \, p(x) \, dx $$

In practice, the probability density p(X) is unknown; how do we guarantee that the sampled points follow p(X)?

Direct sampling

By sampling from a uniform distribution, we can realize sampling from an arbitrary distribution.

The cumulative distribution function (CDF) of any unknown density maps onto the interval [0, 1]. By sampling uniformly on [0, 1] and inverting the function z = cdf(y), we obtain samples that follow the true probability density of y.

But if the CDF is unknown or cannot be inverted, direct sampling is infeasible.
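As a concrete illustration, here is a minimal C++ sketch of inverse-transform (direct) sampling, assuming an exponential target distribution whose CDF z = 1 - exp(-lambda*y) is known in closed form and invertible; the rate lambda is an illustrative parameter:

#include <cmath>
#include <iostream>
#include <random>

int main() {
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    const double lambda = 2.0;  // hypothetical rate parameter

    for (int i = 0; i < 5; ++i) {
        double z = uniform(gen);                  // sample uniformly on [0,1]
        double y = -std::log(1.0 - z) / lambda;   // invert the CDF: y = cdf^{-1}(z)
        std::cout << y << "\n";                   // y follows Exp(lambda)
    }
}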

Accept-reject sampling

Sample from a known probability distribution q(X), then reject some of the samples according to a rule, so that the accepted samples approximate the p(X) distribution:

$$ p(x_i) \le k \, q(x_i) $$

The limitation of this method is determining the constant k.
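A minimal C++ sketch of accept-reject sampling, assuming the target p is a standard normal density restricted to [-3, 3] and the proposal q is uniform on that interval; the envelope constant k is chosen so that p(x) <= k q(x) holds everywhere:

#include <cmath>
#include <iostream>
#include <random>
#include <vector>

int main() {
    std::mt19937 gen(7);
    std::uniform_real_distribution<double> ux(-3.0, 3.0);  // sample from q
    std::uniform_real_distribution<double> u01(0.0, 1.0);

    auto p = [](double x) { return std::exp(-0.5 * x * x) / std::sqrt(2.0 * M_PI); };
    const double q = 1.0 / 6.0;                            // uniform density on [-3,3]
    const double k = (1.0 / std::sqrt(2.0 * M_PI)) / q;    // envelope: max p / q

    std::vector<double> samples;
    while (samples.size() < 1000) {
        double x = ux(gen);
        // accept x with probability p(x) / (k * q(x)), which is <= 1 by construction
        if (u01(gen) < p(x) / (k * q)) samples.push_back(x);
    }
    std::cout << "accepted " << samples.size() << " samples\n";
}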

Importance sampling

Improve the accuracy achievable with a given number of samples. With p(x) unknown, draw samples {x_1, x_2, …, x_n} from a known density q(x) and estimate the expectation of f:

$$ E_p[f] = \int f(x) \frac{p(x)}{q(x)} q(x) \, dx \approx \frac{1}{n} \sum_i f(x_i) \frac{p(x_i)}{q(x_i)} $$

Define a new random variable:

$$ Z = f(X) \frac{p(X)}{q(X)} $$

The expectation of the original variable Y under the unknown distribution p(x) is thus converted into the expectation of the new variable Z under the known distribution q(x). A known distribution means we know how to sample from it. Here p(x)/q(x) is the weight.

so the posterior expectations can be computed as:

$$ E[f(x) \mid z] = \int f(x) \frac{p(x|z)}{q(x|z)} q(x|z) \, dx $$

and the importance weights are defined as:

$$ w(x_i) = \frac{p(x_i|z)}{q(x_i|z)} $$

the problem is that we cannot evaluate p(x|z) directly, but we can compute loose (unnormalized) importance weights:

$$ w^*(x_i) \propto \frac{p(z|x_i) \, p(x_i)}{q(x_i|z)} $$

then normalize them:

$$ w_i = \frac{w^*(x_i)}{\sum_j w^*(x_j)} $$

so the posterior expectation is approximated as:

$$ E[f(x) \mid z] \approx \sum_i w_i f(x_i) $$
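A minimal C++ sketch of self-normalized importance sampling under assumed toy densities (target N(0,1), proposal N(1,2^2)); the weights p(x_i)/q(x_i) play exactly the role described above:

#include <cmath>
#include <iostream>
#include <random>

double normal_pdf(double x, double mu, double sigma) {
    double z = (x - mu) / sigma;
    return std::exp(-0.5 * z * z) / (sigma * std::sqrt(2.0 * M_PI));
}

int main() {
    std::mt19937 gen(1);
    std::normal_distribution<double> q(1.0, 2.0);  // known proposal density
    const int N = 100000;
    double num = 0.0, den = 0.0;
    for (int i = 0; i < N; ++i) {
        double x = q(gen);
        double w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 1.0, 2.0);  // p/q
        num += w * x * x;   // f(x) = x^2, so the estimate should approach 1
        den += w;           // self-normalization of the weights
    }
    std::cout << "estimate of E_p[x^2]: " << num / den << "\n";
}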

sequential importance sampling (SIS)

consider the full posterior distribution of states x_{0:k} given measurements y_{1:k}:

$$ p(x_{0:k} \mid y_{1:k}) \propto p(y_k \mid x_k) \, p(x_k \mid x_{k-1}) \, p(x_{0:k-1} \mid y_{1:k-1}) $$

consider a sequential (recursive) form of q(x):

$$ q(x_{0:k} \mid y_{1:k}) = q(x_k \mid x_{0:k-1}, y_{1:k}) \, q(x_{0:k-1} \mid y_{1:k-1}) $$

then the unnormalized importance weights factor as:

$$ w^*_k \propto \frac{p(y_k \mid x_k) \, p(x_k \mid x_{k-1})}{q(x_k \mid x_{0:k-1}, y_{1:k})} \, w^*_{k-1} $$

namely: each particle's weight is its previous weight, multiplied by the likelihood of the new measurement and corrected by the ratio of the motion model to the proposal.

the problem with SIS is that the algorithm is degenerate: the variance of the weights increases at every step, which means the weights will converge to a single nonzero weight (w=1) with the rest being zero.

hence, resampling.

sequential importance resampling (SIR)

resampling process:

1) interpret each weight w^i as the probability of obtaining the sample index i in the set {x^i}

2) draw N samples from that discrete distribution and replace the old sample set with the new one.

SIR process:

1) draw points x_i from the q distribution.
2) calculate the weights as in the SIS iteration and normalize the weights to sum to unity.
3) if the effective number of particles is too low, run the resampling process above (see the sketch below).
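A C++ sketch of the resampling step and the effective-sample-size test; the particle type (a plain double state) and the multinomial scheme are illustrative choices (low-variance resampling is a common alternative):

#include <random>
#include <vector>

// multinomial resampling: draw N indices from the discrete distribution
// defined by the normalized weights, then replace the particle set
std::vector<double> resample(const std::vector<double>& particles,
                             const std::vector<double>& weights,
                             std::mt19937& gen) {
    // std::discrete_distribution draws index i with probability weights[i]
    std::discrete_distribution<int> pick(weights.begin(), weights.end());
    std::vector<double> resampled;
    resampled.reserve(particles.size());
    for (size_t n = 0; n < particles.size(); ++n)
        resampled.push_back(particles[pick(gen)]);
    return resampled;  // the new set carries uniform weights 1/N
}

// effective sample size used in step 3: N_eff = 1 / sum_i(w_i^2)
double effective_n(const std::vector<double>& weights) {
    double sumsq = 0.0;
    for (double w : weights) sumsq += w * w;
    return 1.0 / sumsq;
}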

particle filter

Posted on 2019-04-02 |

localizing the vehicle involves determining where on the map the vehicle is most likely to be, by matching what the vehicle sees to the map.

Markov localization, or the Bayes filter for localization, is the generalized filter. Think of the robot location as a probability distribution: each time the robot moves, the distribution becomes more diffuse (wider). Passing control data, map data, and observations into the filter concentrates (narrows) the distribution at each timestep.

state space

$$ x_k = f(x_{k-1}, v_{k-1}) $$            (1)

$$ z_k = h(x_k, w_k) $$             (2)

v and w are the process noise and measurement noise respectively, and each is Gaussian-distributed.

Bayes filter derivation

$$ p(x_k \mid z_{1:k}) = \frac{p(z_k \mid x_k) \, p(x_k \mid z_{1:k-1})}{p(z_k \mid z_{1:k-1})} $$    (b)

consider the product rule of probability:

p(a, b) = p(a|b) p(b) 

the LHS of equation (b) expands, by Bayes' rule conditioned on z_{1:k-1}, as:

$$ p(x_k \mid z_{1:k}) = p(x_k \mid z_k, z_{1:k-1}) = \frac{p(z_k \mid x_k, z_{1:k-1}) \, p(x_k \mid z_{1:k-1})}{p(z_k \mid z_{1:k-1})} $$

given x_k, and assuming z_k is independent of all previous measurements z_{1:k-1}:

$$ p(z_k \mid x_k, z_{1:k-1}) = p(z_k \mid x_k) $$

which yields equation (b).

Markov Localization

in which the true state x is unobserved and the measurements z are observed.
Assuming a first-order Markov process, the probability of the current true state is:

$$ p(x_k \mid x_{0:k-1}) = p(x_k \mid x_{k-1}) $$    (3)

similarly, the measurement depends only on the current state; it is a stochastic projection of the true state x_k:

$$ p(z_k \mid x_{0:k}) = p(z_k \mid x_k) $$    (4)

(3) is referred to as the motion model, and (4) as the measurement/observation model.

the classical problem in partially observable Markov chains is to recover the posterior distribution from all available sensor measurements and controls over all timesteps.

In particular, the localization problem here is to obtain the posterior of the current system state p(x_k | z_{1:k}) from all existing measurements, which can be solved by the Bayes filter.

P.S. the probability distribution of the current state also depends on other known inputs, e.g. map data and control data.

prediction

from the Bayes filter equation (b), p(x_k | z_{1:k-1}) must be obtained first; this is the prediction step. Physically, it estimates the system state from all previous measurements.

treating x_{k-1} as a random variable, integrating the pdf p(x_k, x_{k-1} | z_{1:k-1}) over x_{k-1} gives p(x_k | z_{1:k-1}):

$$ p(x_k \mid z_{1:k-1}) = \int p(x_k, x_{k-1} \mid z_{1:k-1}) \, dx_{k-1} $$

applying the product rule:

$$ = \int p(x_k \mid x_{k-1}, z_{1:k-1}) \, p(x_{k-1} \mid z_{1:k-1}) \, dx_{k-1} $$

by the first-order Markov assumption, the first term in the integral reduces:

$$ p(x_k \mid z_{1:k-1}) = \int p(x_k \mid x_{k-1}) \, p(x_{k-1} \mid z_{1:k-1}) \, dx_{k-1} $$

p(x_k | x_{k-1}) is determined by the system model and follows the distribution of the process noise. p(x_{k-1} | z_{1:k-1}) is known: it is the posterior state at timestep k-1. This is where the recursion comes in.

update

use equation (b) to update the current posterior state. The denominator of (b) is a constant coefficient, and p(z_k | x_k) is the likelihood, determined by the measurement model.
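To make the predict/update cycle concrete, here is a minimal 1D histogram (grid) Bayes filter sketch in C++, assuming a toy cyclic corridor map with door/no-door cells and illustrative motion and measurement noise values:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> map_doors = {1, 0, 0, 1, 0};   // hypothetical door map
    std::vector<double> bel(5, 0.2);                // uniform prior belief

    // prediction: the robot moves one cell right with p=0.8, stays with p=0.2
    auto predict = [](const std::vector<double>& b) {
        std::vector<double> out(b.size());
        for (size_t i = 0; i < b.size(); ++i)
            out[i] = 0.8 * b[(i + b.size() - 1) % b.size()] + 0.2 * b[i];
        return out;
    };

    // update: multiply by the likelihood p(z|x), renormalize (denominator of (b))
    auto update = [&map_doors](std::vector<double> b, int z) {
        double norm = 0.0;
        for (size_t i = 0; i < b.size(); ++i) {
            b[i] *= (map_doors[i] == z) ? 0.9 : 0.1;
            norm += b[i];
        }
        for (double& v : b) v /= norm;
        return b;
    };

    bel = update(predict(bel), 1);   // one filter cycle: moved, then saw a door
    for (double v : bel) std::cout << v << " ";
    std::cout << "\n";
}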

Unscented Kalman Filter

Posted on 2019-03-31 |

Nonlinear system:

x = f(x, w)         (1)

z = h(x) + v                 (2)

The random signals w and v are the process noise and the measurement noise, respectively.

CTRV state equations

For the constant turn rate and velocity magnitude (CTRV) model:

x = (px, py, v, phi, \dot{phi})

The constant velocity and constant turn rate constraints are:

Considering dv/dt = 0 and d^2 phi / dt^2 = 0, with phi a function of time, the first equation becomes:

From the original state space to the prediction space: from equations (1) and (2), the process noise w enters the state x nonlinearly, while z is linear in the measurement noise v. The UKF therefore uses an augmented state variable sigmax = [x, w]. The process noise w consists of the radial acceleration and the angular acceleration, [w_a, w_phi], and w is not a function of time. The first equation can then be expanded as:

Prediction space

By contrast, the extended Kalman filter (EKF) uses a first-order linearization. The unscented Kalman filter (UKF) maps characteristic sample points (sigma points, sigmax) of the original state space into the prediction space, and uses the weighted mean and covariance of the transformed states f(sigmax) as the prior state estimate x^- and the prior error covariance P^-.

where the weights are:

$$ w_i = \lambda / (\lambda + n_s) \quad \text{when } i = 1 $$
$$ w_i = 0.5 / (\lambda + n_s) \quad \text{when } i \ne 1 $$

$$ X^- = \sum_i w_i \, f(sigmax_i) $$
$$ P^- = \sum_i w_i \, (f(sigmax_i) - X^-)(f(sigmax_i) - X^-)^T $$

Measurement space

Map the sigma points of the original state space into the measurement space, and use the weighted mean and covariance of h(sigmax) as the prior measurement Z^- and the prior measurement covariance S^-, with the same weights as in the prediction space.

$$ Z^- = \sum_i w_i \, h(sigmax_i) $$
$$ S^- = \sum_i w_i \, (h(sigmax_i) - Z^-)(h(sigmax_i) - Z^-)^T + R $$

Kalman filter formulation: the difference between the posterior estimate (the true state value) and the prior estimate (the state value in prediction space) can be expressed as a gain K applied to the difference between the actual measurement and the prior measurement in measurement space.

$$ x - x^-  = K (z - Z^-) $$   (3)

As can be seen, the Kalman gain K measures the correlation between the state error and the measurement error. Define the cross-correlation between the prediction space and the measurement space:

$$ T = \sum_i w_i \, (f(sigmax_i) - X^-)(h(sigmax_i) - Z^-)^T $$
$$ K = T S^{-1} $$                   (4)

UKF algorithm

With (4) and (3), the Kalman gain and the state are updated respectively; the prior error covariance in the prediction space is updated by:

$$ P = P^- - K S K^T $$

P.S. For handling nonlinearity, linearization and deterministic sampling (placing sigma points) are both common approaches; this is precisely the difference between the EKF and the UKF.
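A sketch (using Eigen) of the sigma-point generation the UKF builds on; the spreading parameter lambda and the Cholesky square root are standard choices, and the function name is illustrative:

#include <cmath>
#include <Eigen/Dense>

// returns an n x (2n+1) matrix whose columns are the sigma points
Eigen::MatrixXd generateSigmaPoints(const Eigen::VectorXd& x,
                                    const Eigen::MatrixXd& P,
                                    double lambda) {
    const int n = x.size();
    Eigen::MatrixXd A = P.llt().matrixL();      // matrix square root via Cholesky
    Eigen::MatrixXd Xsig(n, 2 * n + 1);
    Xsig.col(0) = x;                            // the mean itself
    for (int i = 0; i < n; ++i) {
        Xsig.col(i + 1)     = x + std::sqrt(lambda + n) * A.col(i);
        Xsig.col(i + 1 + n) = x - std::sqrt(lambda + n) * A.col(i);
    }
    return Xsig;
}
// the weights then follow the formulas above: w_1 = lambda/(lambda + n_s),
// w_i = 0.5/(lambda + n_s) otherwise; X^- and P^- are the weighted sums over
// f applied to the columns of this matrix.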


CarND (term 2): Extended Kalman Filter

Posted on 2019-03-31 |

sensor measurement

Radar

radio detection and ranging, using radio waves to measure the distance to objects as well as their velocity and angle. Radar is used a lot in collision prevention, parking assistance, and cruise control, and radar isn't affected by weather conditions.

However, radar can't tell an object's shape correctly, and it can't detect objects that are out of its line of sight.

the radar measurement data in the EKF project is a 3D position and velocity vector (rho, theta, rho_dot) in polar coordinates: range, bearing, and range rate.

Lidar

light detection and ranging, using near-infrared light to scan objects and create a 3D map of the environment. It provides a 360-degree view and can track movements and their directions, but it depends on weather conditions and is poor at detecting the speed of other vehicles.

the lidar measurement data in the EKF project is a 2D position vector (x, y) in the Cartesian coordinate system.

compare Radar vs Lidar

server - client network

The Udacity simulator communicates with the EKF controller through a WebSocket.

While a simulator (Udacity car sim, Carla) is running, the message channel between the simulator server and the external controller needs to stay open all the time, so that the simulator feeds the controller with sensor data and the controller feeds control data back to the simulator.

uWebSockets

built on uWebSockets. The WebSocket protocol provides a full-duplex communication channel between server and client over a single TCP connection. It allows the server to send content to the client without being first requested by the client, and allows messages to be passed back and forth while keeping the connection open.

sensor raw data

every data slice includes either a lidar or a radar raw measurement, plus a ground-truth measurement.

The state variable x is described in Cartesian coordinates, so processing a radar measurement requires a coordinate transform between Cartesian and polar coordinates, which makes the measurement function nonlinear and calls for the Extended Kalman Filter.
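A sketch of the Cartesian-to-polar projection used in the radar update, assuming the state ordering (px, py, vx, vy) from the project; the guard against division by zero is an illustrative detail:

#include <algorithm>
#include <cmath>
#include <Eigen/Dense>

// h(x): project the Cartesian state into polar measurement space (rho, theta, rho_dot)
Eigen::VectorXd toPolar(const Eigen::VectorXd& x) {
    double px = x(0), py = x(1), vx = x(2), vy = x(3);
    double rho = std::sqrt(px * px + py * py);
    Eigen::VectorXd z(3);
    z << rho,
         std::atan2(py, px),                        // bearing in [-pi, pi]
         (px * vx + py * vy) / std::max(rho, 1e-6); // range rate, guarded near rho=0
    return z;
}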

EKF controller

the client side is the EKF controller, which processes the sensor measurements.

define

system state `x_`,
state prior covariance `P_`,
state transition matrix `F`,
process covariance `Q_`,
measurement matrix `H`,
measurement covariance `R_`,
Kalman filter gain `K`

as discussed in the previous blog:

void KalmanFilter::Predict()
{
  // project the state and covariance forward through the linear motion model
  x_ = F * x_;
  P_ = F * P_ * F.transpose() + Q_;
}

void KalmanFilter::Update(const VectorXd &measurement)
{
  VectorXd y;
  if (EKF) {  // EKF is a boolean flag: true for radar measurements
    // radar: residual against the nonlinear projection h(x) into polar
    // coordinates; H must then hold the Jacobian Hj evaluated at x_
    y = measurement - toPolar(x_);
  } else {
    y = measurement - H * x_;
  }
  MatrixXd Ht = H.transpose();
  MatrixXd S = H * P_ * Ht + R_;
  MatrixXd K = P_ * Ht * S.inverse();  // Eigen has no matrix division
  x_ = x_ + K * y;
  MatrixXd I = MatrixXd::Identity(x_.size(), x_.size());
  P_ = (I - K * H) * P_;
}

Kalman Filter

Posted on 2019-03-30 |

State equations

The state variable $x$ satisfies the following, where u is the control input:

$$ x = A x_prev + B u_prev + w_prev $$    (1) 

The measurement variable $z$ satisfies:

$$ z = H x + v $$                    (2)

The random signals $w$ and $v$ denote the process noise and the measurement noise respectively; they are assumed mutually independent and normally distributed.

Define the prior estimate $x^-$, i.e. the prediction of the current state based on previous states; and the posterior estimate $\hat{x}$, i.e. the current state computed given the current measurement. Define the prior error $e^-$ and the posterior error $\hat{e}$, satisfying:

$$ e^- = x - x^- $$                     (3)

$$ \hat{e} = x - \hat{x} $$                         (4)

Kalman filter formulation: the difference between the posterior estimate (the state value derived from the measurement) and the prior estimate (the predicted state value) can be expressed as a gain K applied to the difference between the measurement and the measurement predicted from the prior estimate.

$$ \hat{x} - x^- = K ( z - H x^- ) $$     (5)

K can be expressed via the prior error covariance P^-, the measurement noise covariance R, and the measurement matrix H:

$$ K = P^- H^T / ( H P^- H^T + R ) $$   (6)

It follows that:

1) As R approaches 0, K approaches the inverse of H, and then $\hat{x} = x$: when the measurement error is small, the measurement approaches the true value.

2) As P^- approaches 0, K approaches 0, so the filter trusts the prediction: the predicted value approaches the true value.

Algorithm design

The Kalman filter estimates the process state with feedback control: the filter estimates the state at a given time (time update / prediction), then obtains feedback from the (noisy) measurement (measurement update / correction).

Time update: the prior state estimate x^- and the prior error covariance P^- for the current timestep:

$$ x^- = A \hat{x}_{prev} + B u_{prev} $$  (7.1)

$$ P^- = A P_{prev} A^T + Q $$         (7.2)

where $ p(w) \sim N(0, Q) $

Measurement update: use (6) to update the Kalman gain K, use (5) to update the posterior state $\hat{x}$, and update the posterior error covariance P for the current step:

$$ P = ( I  - KH ) P^-  $$            (8)
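A minimal scalar Kalman filter sketch in C++ implementing (5)-(8) with A = H = 1 and no control input; the Q, R values and the measurement sequence are illustrative:

#include <iostream>

int main() {
    double x = 0.0, P = 1.0;            // posterior state and covariance
    const double Q = 1e-4, R = 0.01;    // hypothetical tuning values
    double measurements[] = {0.9, 1.1, 1.0, 0.95, 1.05};

    for (double z : measurements) {
        // time update (7.1)-(7.2): with A = 1, x^- = x and P^- = P + Q
        double x_prior = x;
        double P_prior = P + Q;
        // measurement update (6), (5), (8), with H = 1
        double K = P_prior / (P_prior + R);
        x = x_prior + K * (z - x_prior);
        P = (1.0 - K) * P_prior;
        std::cout << "x=" << x << " P=" << P << "\n";
    }
}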

Tuning the filter

The measurement noise covariance can generally be observed/measured; the process noise covariance Q has to be tuned by comparing against an online filter with known error. Tuning is generally an offline process. Usually the process error and the Kalman gain converge quickly and then stay constant, but the measurement error is affected by the environment and does not necessarily stay constant.

Extended Kalman filter

When the measurement is nonlinear in the system state, or the system itself is nonlinear, equations (1) and (2) become nonlinear functions:

$$  x = f(x_prev, u_prev, w) $$  (1.2) 
$$  z =  h(x, v)    $$            (2.2) 


paper reading - CARLA: an open urban driving simulator

Posted on 2019-03-30 |

CARLA is used to support training, prototyping, and validation of self-driving models, including perception and control. CARLA is used to study the performance of three approaches: 1) a classic modular pipeline that comprises a vision-based perception module, a rule-based planner, and a maneuver controller; 2) a deep network that maps sensory input to driving commands via imitation learning; 3) end-to-end reinforcement learning.

all approaches make use of a high-level topological planner. The planner takes the current position of the agent and the location of the goal as input, and uses the A* algorithm to provide a high-level plan. This plan advises the agent to turn left/right or keep straight at intersections, but provides neither a trajectory nor geometric information; it is a weaker form of common GPS guidance.

simulation engine

CARLA simulates a dynamic world and provides a simple interface between the world and an agent that interacts with the world. CARLA is designed as a server-client system, where the server runs the simulation and renders the scene, and the client API is responsible for the interaction between the agent and the server via sockets.

the client sends commands and meta-commands to the server and receives sensor readings in return. The commands control the vehicle and include steering, accelerating, and braking. Meta-commands control the behavior of the server, e.g. resetting the simulation or modifying the sensor suite.

environment

the static 3D world contains buildings and traffic signs, and the dynamic objects include vehicles and pedestrians. The behavior of non-player characters is based on the standard UE4 vehicle model (PhysXVehicles), extended with a basic controller that governs NPC behavior: lane following, respecting traffic lights and speed limits, and decision making at intersections.

pedestrians

pedestrians navigate the streets according to a town-specific navigation map, which conveys a location-based cost designed to encourage pedestrians to walk along sidewalks and marked road crossings, while still allowing them to cross the road at any point.

sensors

camera parameters include 3D location, 3D orientation with respect to the vehicle’s coordinate system, field of view, and depth of field.

the semantic segmentation pseudo-sensor provides 12 semantic classes: road, lane-marking, traffic sign, sidewalk, fence, pole, wall, building, vegetation, vehicle, pedestrian, and other.

a range of measurements is associated with the state of the agent.

measurements concerning traffic rules include the percentage of the vehicle's footprint that impinges on wrong-way lanes or sidewalks, as well as the states of the traffic lights and the speed limit at the current location of the vehicle.

CARLA provides access to the exact locations and bounding boxes of all dynamic objects.

autonomous driving

the agent interacts with the environment over discrete time steps. At each time step, the agent gets an observation, which is a tuple of sensory inputs, and must produce an action, which represents steering, throttle, and brake.

modular pipeline

the pipeline includes perception, planning, and continuous control. Local planning, driven by visual perception, is critical.

perception

using a semantic segmentation network based on RefineNet to estimate lanes, road limits, and dynamic objects; a classification model is used to determine proximity to intersections.

the local planner

coordinates low-level navigation by generating a set of waypoints: near-term goal states that represent the desired position and orientation of the car in the near future. The rule-based states are: 1) road-following, 2) left-turn, 3) right-turn, 4) intersection-forward, 5) hazard-stop. Transitions between states are performed based on estimates provided by the perception module and on topological information provided by the global planner.

continuous controller

using a PID controller, which takes the current pose, speed, and a list of waypoints as input, and outputs steering, throttle, and brake.
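A minimal PID sketch of such a controller in C++; the gains and time step in the usage comment are illustrative, not CARLA's actual values:

// classic PID: output = kp*e + ki*integral(e) + kd*de/dt
struct PID {
    double kp, ki, kd;
    double integral = 0.0, prev_error = 0.0;

    double step(double error, double dt) {
        integral += error * dt;                       // accumulate the I term
        double derivative = (error - prev_error) / dt; // finite-difference D term
        prev_error = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

// usage sketch: a steering PID driven by the cross-track error to the next waypoint
// PID steer{0.5, 0.01, 0.1};                 // illustrative gains
// double cmd = steer.step(cross_track_error, 0.05);  // 20 Hz control step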

carla release

9.1

1) enables the client to detect collisions and determine lane changes: sensor.other.collision, sensor.other.lane_detector

2) access to the road network and to waypoints near the current vehicle, allowing user-defined navigation algorithms: Map

3) supports new maps created with the external RoadRunner/VectorZero tools, in the OpenDRIVE map standard

9.2

1) simulation of traffic scenarios with Scenario Runner, e.g. following a leading vehicle, stationary object crossing, dynamic object crossing, an opposite vehicle running a red light, vehicle right/left turns, etc.

2) upgraded ROS bridge

3) vehicle navigation from the client side: BasicAgent navigates to a given location while dealing with other vehicles and traffic lights safely; RoamingAgent drives around making random choices when presented with multiple options

9.3

1) new town and new pedestrians

2) no-rendering mode: a 2D map visualization tool that displays vehicles, traffic lights, speed limits, pedestrians, roads, etc., which helps improve the server framerate

3) traffic light class on the client side: TrafficLightState

4) new sensors: ObstacleDetector, a simple raycast sensor to detect whether something is in front of the ego vehicle and what it is; and GlobalNavigationSatelliteSystem, attached to the ego vehicle to get its geolocation, based on the geo-reference defined in the OpenDRIVE file associated with each map.

9.4

1) allows the client side to change the physics properties of vehicles or their components at runtime: WheelsPhysicsControl

2) logging and playback system, which includes a camera-following mode to follow a target actor while replaying the simulation and can replay situations from different viewpoints; the logging query engine allows users to query different types of events

3) random streaming port, which makes it possible to stream sensor data on a secondary port

4) import maps: place maps as tar.gz files in the "ExportedMaps" folder

paper reading - AutonoVi: AV planning with dynamic maneuvers and traffic constraints

Posted on 2019-03-29 |

this is the advanced driver module in the AutonoVi-Sim simulation platform. The driver module's algorithm pipeline:

1) route planning by graph search over the road network

2) rule-based generation of guiding trajectories (traffic and lane-following rules)

3) generation of a set of candidate trajectories (control inputs), evaluated against a vehicle dynamics model and a collision-free model

4) selection of the most feasible trajectory through optimization

vehicle state space

the full state of the vehicle is:

X = (x, y, v, theta, throttle, steering, behavior)

the vehicle updates its plan at a fixed planning rate dt; at each planning step, the vehicle computes a target speed v and a target steering theta to be achieved by the control system

S(u, X) determines whether a set of controls is feasible: given the current state of the vehicle, S(u, X) returns false if the input u causes a loss of traction or control.

sensing and perception

the sensing module provides an approximation of the center line of the lane, the closest point on the lane center to the ego vehicle, and the friction coefficient.

route choice and behavior state

the behavior set includes merging, right turn, left turn, and keeping straight. The behavior state of the vehicle is described as a finite-state machine (turn left, turn right, merge left, merge right), which restricts the potential control decisions and adjusts the weights of the cost function.

guiding path

the ego vehicle computes a set of waypoints along the current lane at fixed time intervals; the guiding path is then constructed from these waypoints.

collision avoidance

define an obstacle domain for each neighbor of the ego vehicle, defined as the set of all controls that could lead to a collision. The obstacle domains and the set of dynamically infeasible controls form the boundary of the collision-free space for the ego vehicle.

trajectory sampling

computing the exact obstacle domain is not time-efficient; instead, a sampling strategy around theta and v is used to determine a feasible control. Each sample is referred to as a candidate control u_c.

trajectory cost function

once the set of suitable control candidates has been computed, the most feasible control is selected by minimizing the cost function over the sample points i (see the sketch after the cost terms below):

C = sum_i{ C_path(i) + C_cmft(i) + C_mnvr(i) + C_prox(i) }

1) path cost, defined as success at tracking its path and the global route.

2) comfort cost, C_cmft = ||vel_acc|| + ||theta_acc||

3) maneuver cost, which penalizes lane changes: C_mnvr = lane_change

4) proximity cost, which prevents the ego vehicle from passing too close to neighbors.
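A C++ sketch of the candidate-selection loop implied by this cost function; the Candidate fields and the unweighted sum are illustrative simplifications of the paper's formulation:

#include <limits>
#include <vector>

struct Candidate {
    double c_path, c_cmft, c_mnvr, c_prox;  // per-term costs, precomputed per sample
    bool feasible;                          // result of S(u, X) and collision checks
};

// return the index of the candidate control u_c with minimal total cost, or -1
int bestCandidate(const std::vector<Candidate>& cands) {
    int best = -1;
    double best_cost = std::numeric_limits<double>::infinity();
    for (size_t i = 0; i < cands.size(); ++i) {
        if (!cands[i].feasible) continue;   // skip dynamically infeasible controls
        double c = cands[i].c_path + cands[i].c_cmft +
                   cands[i].c_mnvr + cands[i].c_prox;
        if (c < best_cost) { best_cost = c; best = static_cast<int>(i); }
    }
    return best;
}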

control input

one PID controller drives the current speed to match the target speed; another PID controller drives the current steering angle to match the target.

Where will I be in the next 5 years (6)

Posted on 2019-03-29 |

A trip to Guangzhou

Investing in autonomous driving comes with a vague sense of insecurity. Take lidar, HD maps for L3+ localization, simulation environments, or full-stack L4 solutions: if L3+ autonomous driving does not reach mass production in the short term, the startup teams and product investments in these directions will all "die".

The fierce competition across every industry in China means that internet companies, OEMs, and startup teams all urgently need products that can land. R&D investment in a product that is 5-10 years or even further away is a burden no company can bear. Conversely, products and application scenarios that can land quickly, such as L1-L2.5 ADAS products, are where everyone is pushing hard.

Overseas, OEMs focus on R&D; domestically, OEMs focus on productization. That is the gap I have felt since returning to China. A friend told me that domestic OEMs depend heavily on their suppliers. At Great Wall, roughly speaking, the full software and hardware stack of sub-L3 ADAS products is provided by suppliers, and the OEM does not even participate in calibration/tuning 😓. The trend is for OEMs to gradually do this themselves, which is an opportunity for automotive engineers in China. By comparison, Ford and GM each have 150k+ employees, while domestic independent OEMs are about 1/20 of that scale, and most of their employees are in DRE roles.

Get close to any industry and it is all deep water. Wanting an easy job means paddling on the surface. Those who cannot judge or bear the industry's risks and only want the easy work will not last long in any industry. So, young people should live under pressure.

Automotive startups

I got to know some entrepreneurs in the automotive industry, working on things like automotive cloud, in-vehicle services, and autonomous driving software and hardware solutions. Perhaps because these startup directions merely attach themselves to concrete application scenarios such as autonomous driving and intelligent connected vehicles, the technology they create is mostly technology that is mature or emerging in other fields, e.g. cloud computing, mobile operating systems, vision AI, and 5G, transplanted onto the vehicle.

The automotive startups I have seen look more like technology transfer. Genuinely new ideas originating from the automotive domain itself seem to advance only step by step inside companies with deep technical accumulation, such as Bosch and Continental, and the standards defining new application scenarios in the industry are likewise led by these big players, e.g. AUTOSAR, LTE-V2X, and OEM-fitted ADAS requirement definitions.

Automotive startups without industry accumulation are nerve-racking to watch; they dare to play like this! Their best fate may be acquisition by an OEM, but more likely they will be squeezed out by the internet giants, silently. On the way home from Guangzhou I thought: I too used to be someone with dreams; facing the current market environment, perhaps being able to join a big platform and do something there would be consolation enough.

Starting a business is in itself a fine thing, but it requires a strong conviction that the thing will succeed. If all you want is to transplant a concept, it is doomed to fizzle.

paper reading - AutonoVi-Sim simulation platform

Posted on 2019-03-28 |

introduction to AutonoVi-Sim

  • data generation, allowing export of traffic data and virtual sensor data from the vehicle, which can be used to train DL models by generating automatically labelled classification and control data

  • dynamic traffic conditions, with varying vehicles, pedestrians, lighting, weather

  • rapid scenario construction

simulation modules

AutonoVi-Sim is divided into eight extensible modules.

roads

represented by center line, number of lanes, directions, and surface friction. Roads can be quickly constructed by drawing splines on the landscape.

road network

provides connectivity information about roads and traffic infrastructure. The road network serves routing and localization purposes.

infrastructure

represents traffic lights, signage, and any entities that modify the behavior of vehicles on the road.

environment

represents the time of day, weather, rain conditions, road friction, etc.

non-vehicle traffic

basically pedestrians and cyclists on the map; both follow safe traffic rules.

data capture

this module is used for logging data about the environment as well as sensor data from the ego vehicle

driving modules

vehicle

represented as a physics-driven entity with specific tire, steering, and sensor parameters.

the vehicle has 3 components:

* control is provided with steering, throttle, and brake inputs;

* dynamics is implemented in the Nvidia PhysX engine;

* perception is a ray cast with configurable uncertainty, detection time, classification error rate, and sensor angle/range.

a vehicle can be equipped with multiple sensors. The perception component provides an interface to a generic camera and to Monte Carlo scanning ray casts, which can be extended to lidar/camera-based NN classifiers.

driver

the driving decision module, which fuses information from the road network and the vehicle's sensors to make decisions. Currently there are three driver models:

   * lane-following driver, which employs control commands like lane-keeping ADAS

   * manual driver, which allows a human to drive the vehicle

   * autonoVi driver, which uses optimization-based maneuvering with traffic constraints to generate advanced behaviors

limitations

  • lack of calibration configuration to replicate specific sensors

  • driver modules are limited to hierarchical, rule-based approaches

  • real traffic conditions are not reproduced

thoughts

1) sensor components in simulation, e.g. lidar, camera, radar

2) sensor calibration in simulation; Apollo and Carla may have some good suggestions

3) multi-agent environments

4) a distributed framework to ensure real-time multi-agent simulation

paper reading - distributed simulation platform for autonomous driving

Posted on 2019-03-28 |

to test newly developed algorithms against the massive amount of simulation data, a distributed simulation platform based on the Spark framework is needed.

  • simulation based on synthetic data, used in control and planning

  • simulation based on real data playback, used to test function and performance of different components

in an autonomous driving system, each functional module in ROS is deployed as a node, and the communication between nodes relies on messages with well-defined formats. The test of each module is therefore independent, and we can develop a simulation module for each functional module.

anatomy of simulator

there should be a dynamic model of the car, i.e. a vehicle dynamics model; then the external environment is needed, which includes static and dynamic scenes.

the simulator can decompose the external environment into basic elements and rearrange their combinations to generate a variety of test cases, each simulating a specific scenario.

e.g. the position, speed, and next-step command of a barrier vehicle can instantiate different basic elements.

ROS based simulator

using real traffic data to reproduce real scenes requires a distributed simulation platform.

  • ROSBAG records from topics and replays messages to topics (see the sketch after this list).

    the record function creates a recording node in ROS and calls the subscribe method to receive ROS messages on all topics, then writes the messages to a bag file.

    the play function establishes a play node and calls the advertise method to send the messages in the bag to the specified topics.
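A sketch of the record/play idea using the standard rosbag C++ API; the topic name, message type, and timestamp are illustrative:

#include <rosbag/bag.h>
#include <rosbag/view.h>
#include <std_msgs/String.h>

void recordAndPlaySketch() {
    // "record" side: write a message into a bag file
    rosbag::Bag bag;
    bag.open("test.bag", rosbag::bagmode::Write);
    std_msgs::String msg;
    msg.data = "hello";
    bag.write("/chatter", ros::Time(1.0), msg);  // illustrative timestamp
    bag.close();

    // "play" side: iterate over the bag and re-read the stored messages
    bag.open("test.bag", rosbag::bagmode::Read);
    rosbag::View view(bag, rosbag::TopicQuery("/chatter"));
    for (const rosbag::MessageInstance& m : view) {
        std_msgs::String::ConstPtr s = m.instantiate<std_msgs::String>();
        if (s) {
            // a real play node would publish *s via a ros::Publisher here
        }
    }
    bag.close();
}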

Spark distributed platform

the Spark driver launches different simulation applications, e.g. localization algorithms, object recognition algorithms, vehicle decision-making and control algorithms, etc., then allocates resources to each Spark worker, which first reads the ROSbag data into memory and launches a ROS node to process the incoming data.

the interface between Spark and ROS is a Linux pipe: data written to the write end of the pipe is buffered by the kernel until it is read from the read end of the pipe.
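A minimal C++ illustration of the kernel-buffered pipe semantics just described; the producer and consumer roles are only indicated in comments:

#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int fd[2];
    if (pipe(fd) != 0) return 1;       // fd[0]: read end, fd[1]: write end

    const char* payload = "binary-chunk";
    write(fd[1], payload, strlen(payload));  // producer side (e.g. Spark worker)

    char buf[64] = {0};
    ssize_t n = read(fd[0], buf, sizeof(buf) - 1);  // consumer side (ROS node)
    printf("read %zd bytes: %s\n", n, buf);

    close(fd[0]);
    close(fd[1]);
}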

two problems: 1) Spark only supports text-based data consumption; 2) getting data from Spark memory back into a ROSbag.

Binary data streaming

the core Spark data structure is the resilient distributed dataset (RDD). The task is to process and transform binary data into a user-defined format, and to transform the output of the Spark computation into a byte stream, or even further into a generic binary file (HDFS).

1) encode and serialize the binary files (image, lidar input data) to form a binary byte stream

2) de-serialize and decode the binary stream, interpreting the byte stream into an understandable format, and perform the target computation

3) the output is then encoded and serialized before being passed into RDD partitions (e.g. HDFS) and returned to the Spark driver

data retrieval through ROSbag cache

two things: reading from memory through ROSbag play, and writing to memory through ROSbag record.

solution: design a MemoryChunkedFile class, derived from the ChunkedFile class, to read/write memory rather than files.
