design scenarios in ADS simulation

to design scenarios for self-driving tests, one reference is "A Framework for Automated Driving System Testable Cases and Scenarios"; most other scenario classifications share almost the same elements: ODD (operational design domain), e.g. road types, road surfaces; OEDR, namely object and event detection and response, e.g. static obstacles, other road actors, traffic signs and signals, environment conditions, special zones; and failure-mode behaviors.

in general, test cases can be grouped into black-box tests, in which the scenario partners' behavior is not pre-defined and therefore unpredictable, e.g. a random traffic flow (npc) scenario, or white-box tests, where the npc behavior is pre-defined, e.g. following user-defined routings. white-box testing is helpful for supporting performance metrics, while black-box testing is helpful for verifying the system's completeness and robustness.
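
to make the split concrete, below is a minimal sketch with the CARLA Python API; it assumes a simulator listening on localhost:2000 and enough free spawn points, and the blueprint/spawn choices are only illustrative:

```python
# a minimal sketch: black-box random traffic vs. white-box scripted npc
import random

import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprints = world.get_blueprint_library().filter("vehicle.*")
spawn_points = world.get_map().get_spawn_points()

# black-box style: random traffic flow, npc behavior is not pre-defined
npcs = []
for sp in random.sample(spawn_points, min(10, len(spawn_points))):
    npc = world.try_spawn_actor(random.choice(blueprints), sp)
    if npc is not None:
        npc.set_autopilot(True)   # traffic-manager-driven, unpredictable routing
        npcs.append(npc)

# white-box style: one npc follows a user-defined (scripted) control input
scripted = world.try_spawn_actor(random.choice(blueprints), spawn_points[0])
if scripted is not None:
    scripted.apply_control(carla.VehicleControl(throttle=0.4, steer=0.0))
```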

as for ADS testing, there are a few challenges coming from:

  • heuristic decision-making algorithms and deep-learning algorithms, which are not mathematically complete

  • test-case completeness, as the number of tests required to reach statistical significance for a safety claim would be staggering (see the back-of-the-envelope sketch after this list)

  • undefined conditions or assumptions
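
on the completeness point, a zero-failure binomial bound already shows why the numbers get staggering: the failure-free mileage needed to claim a failure rate below p per mile at confidence C is enormous for plausible values (the numbers below are purely illustrative, not a safety target):

```python
# back-of-the-envelope: failure-free miles needed for a zero-failure claim
import math

def miles_required(p_failure_per_mile: float, confidence: float) -> float:
    # solve (1 - p)^n <= 1 - C for n, i.e. n >= ln(1 - C) / ln(1 - p)
    return math.log(1.0 - confidence) / math.log(1.0 - p_failure_per_mile)

# e.g. demonstrate a rate below 1 failure per 100 million miles at 95% confidence
print(f"{miles_required(1e-8, 0.95):.3e} miles")   # ~3e8 failure-free miles
```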

a sample test scenario set may look like:

ODD       OEDR-obj            OEDR-loc                    maneuver
on ramp   static obstacles    in front of current lane
on ramp   static obstacles    in front of target lane
on ramp   dynamic obstacles   in front of current lane
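
such a matrix is straightforward to encode as data and enumerate or sample programmatically; in the sketch below the field names and the placeholder maneuvers are illustrative, not any standard schema:

```python
# enumerate the ODD/OEDR scenario matrix as plain data
from itertools import product

ODD = ["on ramp"]
OEDR_OBJ = ["static obstacles", "dynamic obstacles"]
OEDR_LOC = ["in front of current lane", "in front of target lane"]
MANEUVER = ["brake", "lane change"]   # placeholder maneuvers to fill the last column

scenario_matrix = [
    {"odd": odd, "oedr_obj": obj, "oedr_loc": loc, "maneuver": mnv}
    for odd, obj, loc, mnv in product(ODD, OEDR_OBJ, OEDR_LOC, MANEUVER)
]

for case in scenario_matrix:
    print(case)
```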

scenario_runner

carla offers a scenario engine, scenario_runner, which helps define scenarios through test_criteria and behaviors, plus a timeout.

test_criteria acts as a bottom-line boundary the scenario has to keep; if it is violated, the scenario fails, e.g. max speed limitation, collision, etc. these criteria have to be the same regardless of whether it is a simulation test or a physical test. in the old way of thinking we always tried to find a way to evaluate the simulation result and assumed it had to be very complex, but since there is no clear path to anything more complex, simple criteria are actually good. even for reinforcement learning, simple criteria are good enough for the agent to learn a driving policy.
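
as a sketch of how far simple criteria go, the snippet below checks a logged trajectory against a max-speed limit, a collision flag and a timeout; the record format, names and thresholds are assumptions for illustration, not scenario_runner's API:

```python
# evaluate a logged trajectory against a few simple pass/fail criteria
from dataclasses import dataclass

@dataclass
class Step:
    t: float        # simulation time [s]
    speed: float    # ego speed [m/s]
    collided: bool  # collision sensor flag at this step

def evaluate(trajectory: list[Step], max_speed: float = 22.2, timeout: float = 120.0) -> dict:
    # max_speed ~ 80 km/h; timeout in seconds; both are illustrative thresholds
    return {
        "max_speed_ok": all(s.speed <= max_speed for s in trajectory),
        "no_collision": not any(s.collided for s in trajectory),
        "within_timeout": (trajectory[-1].t - trajectory[0].t) <= timeout if trajectory else True,
    }

result = evaluate([Step(0.0, 10.0, False), Step(1.0, 15.0, False)])
print(result, "PASS" if all(result.values()) else "FAIL")
```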

of course, I can expect there are expert systems built around test criteria.

for simulation itself, there is another kind of metric describing how close the simulation is to the physical world, i.e. how well the simulator performs, which is beyond the scope here.

behavior is a good way to describe the dynamic process in the scenario, e.g. the npc follows its lane to the next intersection and stops in the right lane, while the ego follows the npc until the stop area.

OpenScenario has similar ideas for describing the dynamics. to support behaviors, the simulator should have features to control the ego and npc with atomic behaviors, e.g. lane-follow, lane-change, stop at intersection, etc.
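
a library-free sketch of what such an atomic behavior could look like (scenario_runner builds something similar on top of py_trees; the class names and the drive_one_step_along_lane call here are hypothetical):

```python
# one small, reusable scenario step, ticked every simulation frame
from enum import Enum, auto

class Status(Enum):
    RUNNING = auto()
    SUCCESS = auto()
    FAILURE = auto()

class AtomicBehavior:
    def __init__(self, actor):
        self.actor = actor
    def tick(self) -> Status:
        raise NotImplementedError

class FollowLaneFor(AtomicBehavior):
    """keep lane until the actor has driven a given distance in meters."""
    def __init__(self, actor, distance: float):
        super().__init__(actor)
        self.remaining = distance
    def tick(self) -> Status:
        step = self.actor.drive_one_step_along_lane()   # hypothetical simulator call
        self.remaining -= step
        return Status.SUCCESS if self.remaining <= 0 else Status.RUNNING
```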

in the lg simulator, npcs have simple AI routing and a lane-following API, so they are basically limited to pre-defined behaviors; the ego has only cruise control, but an external planner is available through ROS.
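
for reference, a minimal sketch with the LGSVL Python API that places an ego and puts an npc into lane-following; the scene and agent names are placeholders that depend on the locally installed content:

```python
# place an ego and a lane-following npc in the lg simulator
import lgsvl

sim = lgsvl.Simulator("127.0.0.1", 8181)
sim.load("BorregasAve")                      # map name: adjust to your install

spawns = sim.get_spawn()
ego_state = lgsvl.AgentState()
ego_state.transform = spawns[0]
ego = sim.add_agent("Lincoln2017MKZ (Apollo 5.0)", lgsvl.AgentType.EGO, ego_state)
# ego control beyond cruise control is expected from an external planner over ROS;
# here the vehicle is only placed

npc_state = lgsvl.AgentState()
npc_state.transform = spawns[1] if len(spawns) > 1 else spawns[0]
npc = sim.add_agent("Sedan", lgsvl.AgentType.NPC, npc_state)
npc.follow_closest_lane(True, 10.0)          # simple lane-following at 10 m/s

sim.run(15.0)                                # advance the simulation for 15 seconds
```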

for both test_criteria and behaviors, carla provides a set of existing atomic elements, which is a good way to build up complex scenarios.
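
a minimal sketch of that composition idea, with atomic behaviors as callables ticked every frame, criteria as predicates checked every frame, and a timeout bounding the whole run (the structure is illustrative, not scenario_runner's actual classes):

```python
# compose atomic behaviors and criteria into one scenario run
def run_scenario(behaviors, criteria, timeout_steps=2000):
    queue = list(behaviors)
    for _ in range(timeout_steps):
        if not all(check() for check in criteria):
            return "FAIL: criterion violated"
        if not queue:
            return "PASS"
        if queue[0]():          # behavior returns True when finished
            queue.pop(0)        # move on to the next atomic behavior
    return "FAIL: timeout"

# toy usage: two behaviors that finish after a fixed number of ticks,
# and a criterion that never trips
def ticks(n):
    state = {"left": n}
    def step():
        state["left"] -= 1
        return state["left"] <= 0
    return step

print(run_scenario([ticks(10), ticks(5)], [lambda: True]))
```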

OpenScenario

not sure whether this project is still actively developed; carla has an interface for it, and a few other open-source parsers exist. but since it is a standard used in popular projects, e.g. PEGASUS, it should be worth further study.