introduction to Vi-Sim
data generation, allowing export of traffic data and virtual sensor data from the vehicle, which can be used to train DL models with automatically labelled classification and control data
dynamic traffic conditions, with varying vehicles, pedestrians, lighting, weather
rapid scenario construction
simulation modules
Vi-Sim is divided into 8 extensible modules.
roads
represented by a center line, number of lanes, directions, and surface friction. roads can be quickly constructed by drawing splines on the landscape
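a minimal sketch of how such a road description could look (class and field names here are my own assumptions, not Vi-Sim's API):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Road:
    """Road described by a spline center line plus per-road attributes."""
    center_spline: List[Tuple[float, float]]   # control points drawn on the landscape
    num_lanes: int = 2
    bidirectional: bool = True
    surface_friction: float = 0.9              # ~0.9 for dry asphalt, lower for rain/ice

    def lane_width(self, paved_width: float = 7.0) -> float:
        # split the paved width evenly across lanes (simplifying assumption)
        return paved_width / self.num_lanes

# example: a two-lane road sketched from three spline control points
road = Road(center_spline=[(0.0, 0.0), (50.0, 5.0), (100.0, 0.0)])
```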
road network
provides connectivity information for roads and traffic infrastructure; it is used for routing and localization.
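routing over the network can be pictured as shortest-path search on a graph of road segments; a toy sketch (segment ids and costs are made up):

```python
import heapq

# adjacency list: segment id -> [(next segment, travel cost)]
road_graph = {
    "A": [("B", 40.0), ("C", 65.0)],
    "B": [("D", 30.0)],
    "C": [("D", 20.0)],
    "D": [],
}

def route(graph, start, goal):
    """Dijkstra over road segments; returns the segment sequence to follow."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph[node]:
            heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

print(route(road_graph, "A", "D"))  # ['A', 'B', 'D']
```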
infrastructure
represents traffic lights, signage, and any entities that will modify the behavior of vehicles on the road.
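a traffic light, for instance, can be modelled as a timed state machine that approaching vehicles query (a hypothetical sketch, not Vi-Sim code):

```python
class TrafficLight:
    """Cycles green -> yellow -> red on a fixed schedule (durations in seconds)."""
    PHASES = [("green", 20.0), ("yellow", 3.0), ("red", 17.0)]

    def __init__(self):
        self.cycle = sum(d for _, d in self.PHASES)

    def state_at(self, sim_time: float) -> str:
        t = sim_time % self.cycle
        for name, duration in self.PHASES:
            if t < duration:
                return name
            t -= duration
        return "red"

light = TrafficLight()
print(light.state_at(21.0))  # 'yellow'
```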
environment
represents the time of day, weather, rain conditions, road friction, etc.
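one way to picture this module is as a single config that other modules read from, e.g. to scale road friction with rain; a hypothetical sketch:

```python
from dataclasses import dataclass

@dataclass
class Environment:
    time_of_day: float = 12.0      # hours, 0-24
    weather: str = "clear"         # 'clear' | 'rain' | 'fog'
    rain_intensity: float = 0.0    # 0 (dry) .. 1 (heavy rain)

    def friction_scale(self) -> float:
        # heavier rain -> lower surface friction (rough assumption, not Vi-Sim's model)
        return 1.0 - 0.4 * self.rain_intensity

env = Environment(time_of_day=19.5, weather="rain", rain_intensity=0.6)
print(env.friction_scale())  # ~0.76
```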
non-vehicle traffic
pedestrians and cyclists in the map; both follow safe traffic rules.
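"safe traffic rules" can be read as simple gating logic, e.g. a pedestrian only crosses when the vehicle signal is red or no vehicle is nearby (my guess at the kind of rule, not the paper's actual model):

```python
def pedestrian_may_cross(vehicle_light_state: str,
                         nearest_vehicle_dist: float,
                         safe_gap: float = 15.0) -> bool:
    """Cross only when vehicles are held by a red light or are far enough away."""
    return vehicle_light_state == "red" or nearest_vehicle_dist > safe_gap

print(pedestrian_may_cross("green", nearest_vehicle_dist=8.0))   # False
print(pedestrian_may_cross("red",   nearest_vehicle_dist=8.0))   # True
```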
data capture
this module is used for logging environment data as well as sensor data from the ego vehicle
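since the simulator already knows ground truth, logging amounts to dumping each frame's state together with its labels; a minimal sketch (file format and field names are my assumptions):

```python
import json, time

def log_frame(path, ego_state, detections, controls):
    """Append one automatically-labelled record per simulation frame."""
    record = {
        "timestamp": time.time(),
        "ego": ego_state,            # e.g. {"x": ..., "y": ..., "speed": ...}
        "detections": detections,    # ground-truth classes/poses from the simulator
        "controls": controls,        # steering/throttle/brake applied this frame
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")   # JSON-lines, one record per frame

log_frame("run01.jsonl",
          {"x": 12.3, "y": -4.1, "speed": 8.2},
          [{"class": "pedestrian", "distance": 14.0}],
          {"steering": 0.05, "throttle": 0.3, "brake": 0.0})
```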
driving modules
vehicle
represented as a physics-driven entity with specific tire, steering, and sensor parameters.
the vehicle has 3 components:
* control is provided via steering, throttle, and brake inputs;
* dynamics is implemented with the NVIDIA PhysX engine;
* perception component is a ray-cast with configurable uncertainty, detection time, classification error rate, and sensor angle/range.
a vehicle can be equipped with multiple sensors. the perception component provides a generic camera interface and Monte Carlo scanning ray-casts, which can be extended to Lidar/camera-based NN classifiers.
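the noisy ray-cast idea can be sketched roughly as below; the parameter names and the noise model are my assumptions about what "configurable uncertainty" and "classification error rate" mean (detection time is ignored here):

```python
import math
import random

class RayCastSensor:
    def __init__(self, fov_deg=90.0, max_range=50.0, n_rays=31,
                 range_noise=0.2, misclass_rate=0.05):
        self.fov = math.radians(fov_deg)      # sensor angular coverage
        self.max_range = max_range            # sensor range (m)
        self.n_rays = n_rays
        self.range_noise = range_noise        # std-dev of Gaussian range noise (m)
        self.misclass_rate = misclass_rate    # probability of a wrong class label

    def sense(self, ground_truth_hits, classes=("car", "pedestrian", "cyclist")):
        """ground_truth_hits: one (distance, true_class) per ray, or None for no hit."""
        detections = []
        for hit in ground_truth_hits:
            if hit is None:
                detections.append(None)
                continue
            dist, true_class = hit
            # perturb the true range and occasionally flip the class label
            noisy_dist = min(self.max_range, max(0.0, random.gauss(dist, self.range_noise)))
            label = true_class
            if random.random() < self.misclass_rate:
                label = random.choice([c for c in classes if c != true_class])
            detections.append((noisy_dist, label))
        return detections

sensor = RayCastSensor()
print(sensor.sense([(12.0, "pedestrian"), None, (30.0, "car")]))
```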
driver
driving decision module, which fuses information from the road network and the vehicle's sensors to make decisions. currently there are 3 driver models:
* lane-following driver, which issues control commands like a lane-keeping ADAS (see the sketch after this list)
* manual driver, which allows a human to drive the vehicle
* AutonoVi driver, which uses optimization-based maneuvering with traffic constraints to generate advanced behaviors
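a lane-following driver can be approximated with a proportional controller on cross-track and heading error relative to the lane center line; this is my own toy sketch, not the paper's controller:

```python
def lane_following_control(cross_track_err, heading_err, speed,
                           k_ct=0.8, k_head=1.5, target_speed=10.0):
    """Return (steering, throttle, brake) from lane-relative errors.

    cross_track_err: lateral offset from lane center (m, + = right of center)
    heading_err:     angle between vehicle heading and lane direction (rad)
    """
    # steer back toward the center line and align with the lane direction
    steering = -(k_ct * cross_track_err + k_head * heading_err)
    steering = max(-1.0, min(1.0, steering))     # clamp to actuator range

    # crude longitudinal control: accelerate below target speed, brake above
    if speed < target_speed:
        throttle, brake = 0.5, 0.0
    else:
        throttle, brake = 0.0, 0.3
    return steering, throttle, brake

print(lane_following_control(cross_track_err=0.4, heading_err=0.05, speed=8.0))
```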
limitations
lack of calibration configuration to replicate specific sensors
driver modules are limited to hierarchical, rule-based approaches
simulated traffic does not fully reflect real traffic conditions
thoughts
1) sensor components in simulation, e.g. Lidar, camera, Radar
2) sensor calibration in simulation; Apollo and Carla may have some good references
3) multi-agent environment
4) distributed framework to ensure real-time multi-agent simulation