massively adas test pipeline

background

ADAS engineers in OEMs use Matlab/Simulink (model-based design) to develop the ADAS algorithms, e.g. AEB. The Simulink workflow is enough for function verification at L2 and below; for L2+ scenarios, these ADAS functions often need system-level testing, basically testing them in as many scenarios as possible, similar to L3 requirements.

one way to do system-level verification is through replay: the test vehicle fleet collects a large amount of data, which is then fed into a test pipeline to check whether these ADAS functions trigger correctly or miss.

for replay system test, we handle a large amount of data, e.g. PBs, so running the Simulink model directly is too slow. An ADAS test pipeline with the ability to run massively in parallel is required.

the previous blog, high concurrent ws server, described the architecture of this massively ADAS test pipeline: each ADAS model is integrated with a WebSocket client, and all of these ws clients talk to a WebSocket server, which has an API to write to the database.

encode in C

the adas Simulink model can be encoded (generated) as C, and of course also as C++, though the C++ codegen is not as mature yet; the C build scales much better than running Simulink/Matlab directly.

matlab/simulink code generation has a few target choices; since massive runs mostly happen on a Linux-like OS, here we choose the ERT target to generate the model code, after which we can build and test it as:

```sh
gcc -c adas_model.c -I .
gcc -c ert_main.c
gcc ert_main.o adas_model.o -o mytest
```

json-c

as all messages in the ADAS C model are stored in program memory, the first thing is to serialize them to JSON. Here we choose json-c:

- install on local Ubuntu

```sh
sudo apt-get install autoconf automake libtool
sh autogen.sh
./configure
make && make install
```

then the json-c headers are at:

```
/usr/local/include/json-c
```

and the libs at:

```
/usr/local/lib/libjson-c.so *.la
```

when using it we can add the following flags:

```makefile
JSON_C_DIR = /path/to/json_c/install
CFLAGS  += -I$(JSON_C_DIR)/include/json-c
LDFLAGS += -L$(JSON_C_DIR)/lib -ljson-c
```

the JSON object can be created as:

```c
#include <json-c/json.h>

/* name and vals[] are read from the model's memory */
struct json_object *json_obj = json_object_new_object();
struct json_object *json_arr = json_object_new_array();
struct json_object *json_string = json_object_new_string(name);
for (int i = 0; i < 20; i++) {
    struct json_object *json_double = json_object_new_double(vals[i]);
    json_object_array_put_idx(json_arr, i, json_double);
}
json_object_object_add(json_obj, "name", json_string);
json_object_object_add(json_obj, "signals", json_arr);
```

modern C++ JSON libs are prettier, e.g. jsoncpp, RapidJSON, and json for modern C++.

wsclient-c

the first ws client I tried is wsclient in C; with the default install, the headers and libs can be found, respectively, at:

```
/usr/local/include/wsclient
/usr/local/lib/libwsclient.so or *.a
```

when using:

```sh
gcc -o client wsclient.c -I/usr/local/include -L/usr/local/lib/ -lwsclient
```

onopen()

as we need to send custom messages through this client, and the default message is sent inside onopen(), I had to add an additional char* argument to the default onopen function pointer:

```c
int onopen(wsclient *c, char *message) {
    libwsclient_send(c, message);
    return 0;
}

void libwsclient_onopen(wsclient *client, int (*cb)(wsclient *c, char *), char *msg) {
    pthread_mutex_lock(&client->lock);
    client->onopen = cb;
    pthread_mutex_unlock(&client->lock);
}

/* the callback is later fired from the handshake thread: */
if (pthread_create(&client->handshake_thread, NULL, libwsclient_handshake_thread, (void *)client)) {
```

and the onopen callback is actually executed inside the handshake thread, into which it is not easy to pass the char* message. Further, as there is no global alive status to tell whether the client-server channel is alive, calling libwsclient_send() from another thread is trouble-prone.

it looks like wsclient-c is limited, so I switched to a C++ ws client, but I need to make sure the model C code builds with g++.

wsclient c++

websocketpp is a header-only C++ library; there are no libs after building, but it depends on boost, so to use this library we can add the headers and libs as follows:

```
/usr/local/include/websocketpp
/usr/lib/x86_64-linux-gnu/libboost_*.so
```

I am using the wsclient from the samples, and define a public method as the client process:

```cpp
int client_process(std::string& server_url, std::string& message) {
    websocket_endpoint endpoint;
    int connection_id = endpoint.connect(server_url);
    if (connection_id != -1) {
        std::cout << "> Created connection with id " << connection_id << std::endl;
    }
    connection_metadata::ptr mtdata = endpoint.get_metadata(connection_id);
    // TODO: optimize this sleeping time
    boost::this_thread::sleep(boost::posix_time::milliseconds(200));
    int retry_num = 0;
    while (mtdata->get_status() != "Open" && retry_num++ < 100) {
        std::cout << "> connection is not open " << connection_id << std::endl;
        boost::this_thread::sleep(boost::posix_time::milliseconds(100));
        connection_id = endpoint.connect(server_url);
        mtdata = endpoint.get_metadata(connection_id);
    }
    if (mtdata->get_status() != "Open") {
        std::cout << "retry failed, exit -1" << std::endl;
        return -1;
    }
    endpoint.send(connection_id, message);
    std::cout << message << " sent successfully" << std::endl;
    return 0;
}
```

there are more elegant retry client solutions.

to build our wsclient:

```sh
g++ wsclient.cpp -o ec -I/usr/local/include -L/usr/lib/x86_64-linux-gnu -lpthread -lboost_system -lboost_random -lboost_thread -lboost_chrono
```
ws server in python

we implemented a simple ws server with the websockets library:
```python
#!/usr/bin/env python3
import asyncio
import websockets
import json
from db_model import dbwriter, adas_msg, Base

class wsdb(object):
    def __init__(self, host=None, port=None):
        self.host = host
        self.port = port
        self.dbwriter_ = dbwriter()

    async def process(self, websocket, path):
        try:
            raw_ = await websocket.recv()
            jdata = json.loads(raw_)
            orm_obj = adas_msg(jdata)
            try:
                self.dbwriter_.write(orm_obj)
                self.dbwriter_.commit()
            except Exception as e:
                self.dbwriter_.rollback()
                print(e)
        except Exception as e:
            print(e)
        greeting = "hello from server"
        await websocket.send(greeting)
        print(f"> {greeting}")

    def run(self):
        if self.host and self.port:
            start_server = websockets.serve(self.process, self.host, self.port)
        else:
            start_server = websockets.serve(self.process, "localhost", 8867)
        asyncio.get_event_loop().run_until_complete(start_server)
        asyncio.get_event_loop().run_forever()

if __name__ == "__main__":
    test1 = wsdb()
    test1.run()
```

the simple ORM db_writer is built on a sqlalchemy model.

in makefile

```makefile
CC = g++
JSONC_IDIR = /usr/local/include
CFLAGS = -I. -I$(JSONC_IDIR)
OPEN_LOOP_DEPS = rtwtypes.h adas_test.h
LDIR = -L/usr/local/lib/
LIBS = -ljson-c
BOOST_LDIR = -L/usr/lib/x86_64-linux-gnu
BOOST_LIBS = -pthread -lboost_system -lboost_random -lboost_thread -lboost_chrono
JSON_DEPS = wsclientpp.h

obj = adas_test.o ert_main.o
src = *.c

$(obj): $(src) $(OPEN_LOOP_DEPS) $(JSON_DEPS)
	$(CC) -c $(src) $(CFLAGS)

mytest: $(obj)
	$(CC) -o mytest $(obj) $(CFLAGS) $(LDIR) $(LIBS) $(BOOST_LDIR) $(BOOST_LIBS)

.PHONY: clean
clean:
	rm -f *.o
```

so now we have generated C code from the ADAS Simulink model and integrated this model C code with a WebSocket client, which talks to a ws server, which in turn writes to a database, whose data can then feed a data-analysis model.

we can add a front-end web UI and a system-monitor UI if needed, but so far this ADAS test pipeline can already support a few hundred ADAS test cores running concurrently.

refer

pthread_create with multi args

wscpp retry client