CarMaker 8.0

these notes are from the IPG Open House in Shanghai

scenario generation task

data recording

tracking vehicles, roads, obstacles

objects: lanes, roads, barriers, GPS input, vehicle position/orientation, fixed ID, type

the goal of recording is road building, which is then used in replay.

road building

GPS input + lane marking info + vehicle location –> vehicle trajectory
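As a rough illustration of this road-building step (not CarMaker's internals): the sketch below projects raw GPS fixes into a local metric frame so they can be stitched into a vehicle trajectory. The projection choice and function names are my own assumptions.

```python
import math

# Hypothetical sketch: turn a list of GPS fixes into a local x/y trajectory.
# Uses a simple equirectangular projection around the first fix; the actual
# CarMaker road-building pipeline is not documented here, only the idea.
EARTH_RADIUS_M = 6_371_000.0

def gps_to_local(fixes):
    """fixes: list of (lat_deg, lon_deg) -> list of (x_m, y_m) in a local frame."""
    lat0, lon0 = fixes[0]
    lat0_rad = math.radians(lat0)
    traj = []
    for lat, lon in fixes:
        x = math.radians(lon - lon0) * math.cos(lat0_rad) * EARTH_RADIUS_M  # east
        y = math.radians(lat - lat0) * EARTH_RADIUS_M                        # north
        traj.append((x, y))
    return traj

# Example: three fixes along a road near Shanghai.
fixes = [(31.2304, 121.4737), (31.2305, 121.4739), (31.2306, 121.4741)]
print(gps_to_local(fixes))
```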

replay

run config, input as lateral and longitudinal position

traffic vehicle location, speed

rearrange

input: ego info + traffic vehicle info, i.e. a list of traffic vehicle records (see the sketch after the list below)

traffic vehicle management:

1) maneuver control: free move

2) spawn control: lateral + longitudinal position --> 23 cases

3) support for external plugins + maneuver triggers
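A hypothetical sketch of how the rearranged replay input might be laid out: one ego record plus a list of traffic vehicle records with longitudinal/lateral spawn positions and a default free-move maneuver. All field names are invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical data layout for the "rearranged" replay input: one ego record
# plus a list of traffic vehicles whose spawn points are expressed as
# longitudinal/lateral offsets. Field names are my own guesses.
@dataclass
class VehicleState:
    vehicle_id: int
    s_m: float                     # longitudinal position along the road (m)
    t_m: float                     # lateral offset from the lane centre (m)
    speed_mps: float
    maneuver: str = "free_move"    # maneuver control: free move by default

@dataclass
class ReplayInput:
    ego: VehicleState
    traffic: list = field(default_factory=list)

    def spawn(self, vehicle: VehicleState):
        # spawn control: position given as longitudinal + lateral offset
        self.traffic.append(vehicle)

run = ReplayInput(ego=VehicleState(0, s_m=0.0, t_m=0.0, speed_mps=20.0))
run.spawn(VehicleState(1, s_m=35.0, t_m=-3.5, speed_mps=18.0))
print(run)
```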

Synthetic Scenario

junction assistant

road type + traffic rules + scenario –>

support road topology modification

support different environments: time of day, weather,

scenario editor supports OpenDRIVE import

standardization

PEGASUS + ASAM simulation standards

roads, scenarios, simulation interfaces

OpenDRIVE –> road topology

OpenSCENARIO –> abstract maneuver & action definitions

Open Simulation Interface (OSI) –> interface developed in the PEGASUS project
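For the OpenDRIVE line above, a minimal sketch of reading road topology (road IDs, lengths, predecessor/successor links) from an .xodr file with Python's standard library; it only illustrates the file structure, not how CarMaker imports it.

```python
import xml.etree.ElementTree as ET

# Minimal sketch: pull road topology (id, length, predecessor/successor links)
# out of an OpenDRIVE (.xodr) file using standard OpenDRIVE elements.
def read_topology(xodr_path):
    roads = {}
    root = ET.parse(xodr_path).getroot()
    for road in root.findall("road"):
        link = road.find("link")
        pred = link.find("predecessor") if link is not None else None
        succ = link.find("successor") if link is not None else None
        roads[road.get("id")] = {
            "length_m": float(road.get("length")),
            "junction": road.get("junction"),   # "-1" means not inside a junction
            "predecessor": pred.get("elementId") if pred is not None else None,
            "successor": succ.get("elementId") if succ is not None else None,
        }
    return roads

# Usage (path is a placeholder):
# print(read_topology("recorded_road.xodr"))
```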

limitations

pre-defined routes for vehicles?

does the ego car have AI-driven maneuvers?

Virtual Prototype

including gearbox loss model and gas model, through look-up tables (toy interpolation sketch after this list)

including hybrid powertrain architectures: automatic gearbox + parallel hybrid

including powertrain masses (engine, tank, gearbox, battery, motor)

including trailer data set generator

including damping top mount
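A toy version of the look-up-table idea mentioned above: gearbox efficiency tabulated over input speed and torque and queried with bilinear interpolation. The axes and values are invented; CarMaker's actual loss model is not shown here.

```python
import bisect

# Toy gearbox-loss look-up table: efficiency as a function of input speed and
# torque, queried with bilinear interpolation. All numbers are made up.
SPEEDS_RPM = [1000.0, 2000.0, 3000.0]
TORQUES_NM = [50.0, 100.0, 200.0]
EFFICIENCY = [            # rows: speed, cols: torque
    [0.90, 0.93, 0.94],
    [0.91, 0.94, 0.95],
    [0.89, 0.93, 0.94],
]

def _bracket(axis, v):
    """Return (lower index, interpolation weight) for value v on a sorted axis."""
    i = min(max(bisect.bisect_right(axis, v) - 1, 0), len(axis) - 2)
    w = (v - axis[i]) / (axis[i + 1] - axis[i])
    return i, min(max(w, 0.0), 1.0)

def gearbox_efficiency(speed_rpm, torque_nm):
    i, wi = _bracket(SPEEDS_RPM, speed_rpm)
    j, wj = _bracket(TORQUES_NM, torque_nm)
    top = EFFICIENCY[i][j] * (1 - wj) + EFFICIENCY[i][j + 1] * wj
    bot = EFFICIENCY[i + 1][j] * (1 - wj) + EFFICIENCY[i + 1][j + 1] * wj
    return top * (1 - wi) + bot * wi

print(gearbox_efficiency(1500.0, 120.0))  # interpolated efficiency
```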

Simulation test

supports: ADAS/AD, powertrain, vehicle dynamics

steering system use case

how the steering system affects overall comfort and vehicle dynamics

reference measurements (steering-in-the-loop simulator) -> model parameter identification + software + ECU integration –> parameterization & validation -> training

how the steering system works

open-loop tests to get mechanical characteristics (stiffness, friction, ...)
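The open-loop characterization above could, in principle, feed a fit like the one below: estimate torsional stiffness and Coulomb friction from angle/torque data by least squares. The data here is synthetic and the model is deliberately simple; it is not the actual parameterization tool chain.

```python
import numpy as np

# Toy parameter identification: fit torque = k * angle + c * sign(rate) from
# open-loop measurements. Data below is synthetic; real data would come from
# the steering-in-the-loop bench.
rng = np.random.default_rng(0)
angle = 0.3 * np.sin(np.linspace(0.0, 4.0 * np.pi, 400))   # rad, sweep back and forth
rate = np.gradient(angle)                                  # crude angular-rate proxy
k_true, c_true = 40.0, 1.5                                 # Nm/rad, Nm (assumed)
torque = k_true * angle + c_true * np.sign(rate) + rng.normal(0.0, 0.1, angle.size)

A = np.column_stack([angle, np.sign(rate)])
(k_est, c_est), *_ = np.linalg.lstsq(A, torque, rcond=None)
print(f"stiffness ~ {k_est:.1f} Nm/rad, friction ~ {c_est:.2f} Nm")
```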

system performance with or without EPS

1) ideal (basis) model vs physical model

how to couple the physical model with the autopilot control model?

test bed

to support electrification, durability, balancing, driveability, powertrain calibration, connected powertrain

AI training with synthetic scenarios

decision making

trajectory planning

image perception

Q: how to make sure the AI is robust? –>

what can CarMaker do for AI?

1) object annotation (vehicles, pedestrians) –> auto annotation

2) semantic segmentation

e.g. IPGMovie for auto semantic segmentation
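A minimal sketch of what auto annotation export could look like: simulator ground-truth boxes written out as COCO-style records. The input structure and class IDs are assumptions, not IPG's actual export format.

```python
import json

# Sketch: turn simulator ground truth (class, pixel box) into a COCO-style
# annotation record. The input structure is invented for illustration.
def to_coco(frame_id, objects):
    anns = []
    for obj in objects:
        x, y, w, h = obj["bbox_xywh"]
        anns.append({
            "image_id": frame_id,
            "category_id": obj["class_id"],   # e.g. 1 = vehicle, 2 = pedestrian
            "bbox": [x, y, w, h],
            "area": w * h,
            "iscrowd": 0,
        })
    return anns

frame_objects = [
    {"class_id": 1, "bbox_xywh": [412, 230, 96, 54]},   # vehicle
    {"class_id": 2, "bbox_xywh": [150, 260, 22, 60]},   # pedestrian
]
print(json.dumps(to_coco(frame_id=17, objects=frame_objects), indent=2))
```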

Q: what’s the hardware for ?

Cloud & CPU/GPU for Parallelization

Q: how to parallelize in Docker?

1) one test case per CPU (sketch after this list)

2) even within a single test run (with multiple sensors, multiple cars)
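A generic sketch of the one-test-case-per-CPU idea using a local process pool; launching real CarMaker runs (or Docker containers) would replace the placeholder function.

```python
from concurrent.futures import ProcessPoolExecutor
import os
import time

# Sketch: fan test cases out across CPU cores, one simulation run per worker.
# run_test_case() is a stand-in for launching an actual simulation run.
def run_test_case(case_name):
    time.sleep(0.1)                 # placeholder for the simulation itself
    return case_name, "PASS"

if __name__ == "__main__":
    cases = [f"cut_in_{i:03d}" for i in range(20)]
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        for name, verdict in pool.map(run_test_case, cases):
            print(name, verdict)
```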

resources & distribution 

CPU: vehicle model, driver model, environment, ideal sensors

GPU: visualization, camera RSI, radar RSI, lidar RSI

Test runs in parallel

sensor setup (10 ultrasonic, 5 radar, 1 lidar, 1 camera)

host PC (with Test Manager) + 4 virtual machines

output: key figures, reports, statistics, queries

open architecture for scalable processing (on-premise and cloud)

big data analysis with DaSense by NorCom

  • how does it work?
  • external scheduler manager, e.g. PBS
  • HPC light to support parallelization on a local PC

new features in 8.0

virtual test driving 8.0

  • Simulink library (through Simscape)
  • Scenario Editor: vegetation generation, animated 3D objects, new models (vehicles, trailers, trucks, buses, buildings, houses, street furniture, pedestrians)
  • visualize road surfaces ...
  • IPGMovie
  • fisheye distortion from external file
  • new sensor models (Lidar RSI)

Q: what is the difference between open-source tools and commercial ones?

Lidar RSI

Ideal perfect world –> ground truth

HiFi –> false positives & negatives

raw data –> RSI

supported lidar types:

moving laser & photo diode

moving mirrors 

solid state 

flash

input features:

  • laser beam, including custom beam patterns, ray-tracing rays

  • scene interaction, including atmospheric attenuation; dependency on color, material, surface, transparency

  • detection, including threshold, multiple echoes per beam, separability (toy sketch after the output feature list)

output features:

  • sending & receiving direction of every beam
  • light intensity of every beam
  • time & lenght of light
  • pulse width
  • number of interactions
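A toy illustration of the detection features above (not IPG's RSI model): received power falls off with range squared and atmospheric attenuation, and an echo is kept only if it clears a threshold; several hits on one beam give multiple echoes. All constants are arbitrary.

```python
import math

# Toy lidar return model: received power ~ reflectivity * exp(-2*alpha*R) / R^2,
# kept only if above a detection threshold. Constants are invented and only
# illustrate the attenuation/threshold idea from the feature list.
ALPHA_PER_M = 0.5e-3    # assumed atmospheric attenuation coefficient
THRESHOLD = 1e-6        # assumed detection threshold (arbitrary units)

def echoes(hits):
    """hits: list of (range_m, reflectivity) for one beam -> detected echoes."""
    detected = []
    for r, rho in hits:
        power = rho * math.exp(-2.0 * ALPHA_PER_M * r) / (r * r)
        if power >= THRESHOLD:
            detected.append({"range_m": r, "power": power})
    return detected

# One beam hitting a partially reflective surface first, then a car, then a
# distant weak target that falls below the threshold.
print(echoes([(12.0, 0.05), (48.0, 0.8), (180.0, 0.02)]))
```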

Use Case: NIO Pilot

by Sun Peng

cases

inter-city, parking, closed space, crowded space

sensors: 3 front cameras, 4 surround cameras, 4 mm-wave radars, 12 ultrasonic sensors, 1 driver-monitor camera

highway pilot in June

perception: camera, radar, ultrasonic, HD map, localization

planning: path planning, maneuver decision, system control

cloud & AI

simulation usage

  • FDS -> cases -> SIL

  • platform –> cases -> regression test, abstraction & instantiation; scene reconstruction (in-house) / closed-loop SIL; traffic model training (to do)

  • integration -> HIL

what about vehicle dynamics? –> simulation works together with physical testing; simulation covers about 80% of cases, the rest comes from physical tests

data platform

upload nodes -> cloud

med usa API server -> fleet mgmt

Logstash –> Elasticsearch –> Kibana & admin (tensn and Spark)
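A hedged sketch of querying the Elasticsearch layer of such a data platform for one vehicle's logs in a time window; the host, index, and field names are placeholders I made up, only the query DSL shape is standard.

```python
import json
import urllib.request

# Hypothetical query against the Elasticsearch layer of the data platform:
# find log entries for one vehicle in a time window. Index and field names are guesses.
query = {
    "query": {
        "bool": {
            "must": [
                {"term": {"vehicle_id": "nio-es8-0042"}},
                {"range": {"timestamp": {"gte": "2019-06-01", "lt": "2019-06-02"}}},
            ]
        }
    },
    "size": 100,
}

req = urllib.request.Request(
    "http://localhost:9200/drive-logs/_search",   # placeholder host and index
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["hits"]["total"])
```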

I think they are collecting data now, and this data will be used for scene building and simulation in the future.

data visualization

HIL

  • lane model simulation on HIL

  • fusion simulation on HIL

test automation

Jenkins master –> Jenkins slave (IPG agent) –> cloud

goal

simulation server <—> data center

parallel sim core + simulation monitor (data exchange service)

data processing + labelling + case management + traffic model training

(replay, SIL, REMODEL, visualization)