Serious Autonomous Vehicles



autosar sucks

Posted on 2019-07-15 |

what is AUTOSAR

basically it’s a micro-service architecture for the vehicle EE (electrical/electronic) system.

Each micro-service is called a software component (SWC). It has uniform interfaces, while the implementation varies, as the AUTOSAR motto says: cooperate on the standard (interface), compete on the implementation.

the interconnection of micro-services goes through the virtual function bus (VFB), which works as a gateway, routing data from port A of micro-service A to port B of micro-service B.

the benefit of AUTOSAR is obvious: design the interfaces at the system level first, so any needed change can be updated quickly; once the system architecture is fixed, move on to the implementation details.

reference

input description

  • software component (micro-service) descriptions: define only the data flow and the interface functions

  • system: the system topology (interconnection among ECUs, available data buses, protocols etc.)

  • hardware: the available hardware (processors, sensors, actuators etc.)

system configuration

used to distribute the software component descriptions to the different ECUs

ECU configuration

the basic software (BSW) and run-time environment (RTE) of each ECU are configured here, based on the mapping of the application software components to each ECU.

generation of executables

in this step the software components are implemented, then built; this can be done automatically by tool-chains.

all steps up to now are supported by defined exchange formats (XML) and work methods.


basic software

each SWC has well-defined ports, either provide ports (PPort) or require ports (RPort); an SWC interface can be either a client-server interface or a sender-receiver interface.

with a PPort, the SWC implements data provision; with an RPort, the SWC implements data consumption.
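
the port wiring above can be sketched as a toy virtual function bus in plain Python; the class and method names here are mine for illustration, not the real AUTOSAR RTE API:

```python
# Toy sketch of VFB-style sender-receiver routing between SWC ports.
# Names are illustrative, not the AUTOSAR RTE API.

class VFB:
    """Routes data from a provide port (PPort) to connected require ports (RPorts)."""
    def __init__(self):
        self.wires = {}    # (swc, pport) -> list of (swc, rport)
        self.signals = {}  # (swc, rport) -> last written value

    def connect(self, src, dst):
        self.wires.setdefault(src, []).append(dst)

    def write(self, src, value):   # called by the providing SWC
        for dst in self.wires.get(src, []):
            self.signals[dst] = value

    def read(self, dst):           # called by the requiring SWC
        return self.signals.get(dst)

vfb = VFB()
vfb.connect(("SpeedSensor", "PPort_speed"), ("Dashboard", "RPort_speed"))
vfb.write(("SpeedSensor", "PPort_speed"), 42.0)
print(vfb.read(("Dashboard", "RPort_speed")))  # 42.0
```

the point of the sketch: components only know their own ports; only the VFB configuration knows the wiring, which is exactly what lets the same SWC be re-deployed to a different ECU.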

the communication manager (ComM) is a resource manager that encapsulates the communication-related basic software modules.

the actual bus states are controlled by the corresponding bus state manager, e.g. for CAN/FlexRay/LIN. when ComM requests a specific communication mode from the state manager, the manager maps that communication mode to a specific bus state.

the network management modules (NM) work in bus-sleep mode and only support broadcast communication.

the diagnostic communication manager (DCM) provides a common API for diagnostic services.

the CAN driver performs the hardware access and provides a hardware-independent API to the upper layers; it accesses the hardware resources, converts the given information into a hardware-specific format for transmission, and triggers the transmission.

runtime environment(RTE)

it sits between the basic software and the upper application software; I think it is mostly the realization of the VFB.

application software components

for now, e.g. ADAS and traditional EE applications.

play with ros

Posted on 2019-07-13 |

ros filesystem tools

first check ROS_PACKAGE_PATH, which defines the directories in which all ROS packages live.

rospack find [package-name]
rospack list
roscd [package-name]

take an example, to locate `rosbridge_websocket.launch`:

rospack find rosbridge* #rosbridge_server
roscd rosbridge_server
cd launch

another tool to view roslaunch logs: roslaunch-logs

write a .launch file

launch files use XML format and provide a convenient way to start up multiple nodes and the master; roslaunch processes them in a depth-first traversal order. usually all launch files are organized in a directory named “launch” under the ros package.

roslaunch package_name launch_file
#or
roslaunch /path/to/launch_file

a sample launch file:

<launch>
<node pkg="package_name" type=" " name=" " output=" " args=" " />
</launch>

args can define either env variables or a command.

node/type: there must be a corresponding executable with the same name as type.

rviz

rviz helps to play back sensor rosbags in the lab, and is also the visualization tool in algorithm/simulation development.

someone (in 2014) said Google’s self-driving simulation used rviz:

google simulation

a sample with rviz to visualize rosbag info:

# terminal 1
roscore
# terminal 2
rosbag play kitti.bag -l
rosrun rviz rviz -f kitti-velodyne

rosbag

the sensor ROS node collects data into a rosbag during physical or virtual tests; the rosbag is then played back to develop or verify the sensing algorithms, or used to build simulation scenes.

a few common commands; rosbag also supports interactive C++/Python APIs.

rosbag record #used to write a bag file with contents of the specified topics
rosbag info #display the contents of bag files
rosbag play #play back bag file in a time-synchronized fashion

kitti dataset

are we ready for autonomous driving? – the KITTI vision benchmark suite, a famous test dataset for self-driving perception and prediction, and also for mapping and SLAM algorithm development.

there are a few benchmarks, including:

  • stereo, basically reconstructing 3D objects from multiple 2D images.

  • optical flow, used to detect object movement (speed, direction)

  • scene flow, includes additional 3D env info and objects beyond optical flow
  • depth
  • visual odometry
  • object detection
  • object tracking
  • road/lane detection
  • semantic evaluation

catkin package

catkin is the ROS package build/management tool.

mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
catkin_create_pkg demo std_msgs rviz
cd demo
mkdir launch
echo '<launch> <node name="demo" pkg="rviz" type="rviz" /> </launch>' > launch/demo.launch
cd ~/catkin_ws
catkin_make --pkg demo

# add catkin_ws to ROS_PACKAGE_PATH
echo "export ROS_PACKAGE_PATH=\$ROS_PACKAGE_PATH:/path/to/catkin_ws/" >> ~/.bashrc

play with Docker swarm/compose

Posted on 2019-07-12 |

Docker swarm

docker swarm is Docker’s native cluster manager, with a built-in DNS service-discovery mechanism and a load-balancing mechanism. compared to k8s, it is light-weight and easy-going.

create a swarm cluster

export MASTER_IP=192.168.0.1
# on the manager node
docker swarm init --advertise-addr ${MASTER_IP}
# on each worker node, using the token printed by `swarm init`
docker swarm join --token <token> ${MASTER_IP}:2377
docker node ls
# demote/promote nodeID as manager
docker node demote/promote nodeID
# rm node
docker node rm worker1
# stop swarm mode
docker swarm leave

create service

docker service create --name service-name image
# scale service
docker service scale service-name=replicas
docker service rm service-name
docker service inspect service-name

create overlay network

docker network create --subnet=192.168.0.0/24 -d overlay ppss-net
docker network rm network-name
docker network connect network-name container-name

in swarm mode, there are three networks created by default:

  • bridge (docker0), the default network
  • docker_gwbridge, a local bridge used to connect containers hosted on the same host
  • ingress, an overlay network used in the swarm cluster

however, in swarm mode the default network for a service is bridge; to cross physical hosts, services need to go through an overlay network.

load balancing

Ingress load balancing

exposes a Docker service to the external network environment

Internal load balancing

swarm mode has a built-in DNS

Docker compose

docker compose is a management/build tool to create an application that combines a bunch of micro-services, each of which runs as a Docker container.

the docker-compose.yml configuration file has to reference each micro-service’s Dockerfile and the application startup scripts.

service startup order

the services defined in docker-compose.yml do not necessarily depend on each other, so each service can come up individually; but of course they can also be based on each other.

docker-compose.yml

best practice

  • build

    path to the build context, absolute or relative (to the .yml); Compose will build the image based on it

  • context
    sub-option under build, pointing to the directory containing the Dockerfile

  • image

    the image to be used; if not available locally, it will be pulled from the hub (vs building from a Dockerfile)

  • container_name

  • volumes
    path to attached volumes, in the format HOST:CONTAINER[:access mode]

  • network_mode

same as docker run --network

  • init

  • privileged

  • command

overrides the launch command when the service container starts

  • environment

set env variables, in the format ENV: value
if only ENV is given, the value is taken from the host machine

  • runtime: nvidia

to support nvidia-docker

e.g.

nvsmi:
  image: ubuntu:16.04
  runtime: nvidia
  environment:
    - NVIDIA_VISIBLE_DEVICES=all
  command: nvidia-smi
  • stdin_open

keep stdin open

  • tty

allocate a pseudo-terminal

sample yml from project

a sample yml for web app:

services:
  web:
    build: .
    links:
      - "db:database"
  db:
    image: postgres

a sample yml for general app CI:

services:
  build:
    build:
      context: .
      dockerfile: ./Dockerfile
    image: docker-image
    container_name: build_app
    volumes:
      - ./build_scripts:/root/build
    command: /root/build/build.sh
  run:
    build:
      context: .
      dockerfile: ./Dockerfile
    image: docker-image
    container_name: run_app
    volumes:
      - ./run_scripts:/root/run
    environment:
      - DISPLAY
      - ROS_MASTER
    network_mode: host
    runtime: nvidia
    command: /root/run/run.sh
  test:
    #TODO

docker-machine

docker-machine is a tool to provision virtual hosts, either on one physical host or across multiple physical hosts.

ros-docker

in the self-driving software stack, ROS is often used, so there is a need to deploy ROS in Docker. next time.

play with Docker

Posted on 2019-07-12 |

Docker networking

  • bridge: the default network driver, used by standalone containers; best when multiple containers need to communicate on the same host machine.

  • host: for standalone containers; the container uses the host machine’s networking directly; best when no network isolation from the host is needed.

  • overlay: connects multiple Docker daemons, enabling swarm services to communicate with each other without OS-level routing; best when containers running on different host machines need to communicate.

  • none: disable all networking

the networking base is the open source project libnetwork

containers can communicate through hostnames or through DNS (the modern Docker engine has a default built-in DNS server); in the early days an external DNS could be used, e.g. Blowb.

to distribute docker images, either push to Docker Hub, create a private cloud with ownCloud, or use the docker save/load tools:

docker save image | gzip > image.tar.gz
scp image.tar.gz remote_user@remote_hostname:
docker load -i image.tar.gz

running GUI apps in Docker

there is a benchmark GUI/OpenGL test on Linux: glxgears. to test whether the host machine supports ssh-forwarded or container-based GUI apps, we can first do the following test:

sudo apt-get install mesa-utils
glxinfo
glxgears

Docker by default is bash/text based, but `nvidia-docker` is a GUI-supported Docker engine, which requires Nvidia OpenGL drivers and Nvidia GPUs of course. since compatibility depends on both the GPU hardware version and the docker engine version, please check compatibility first.

remote hosted apps

  • configure master and worker node communication by setting IP addresses in the same domain, and setting the master node IP address as the gateway IP address for all worker nodes; basically the master node works as the switcher.

  • install xserver-common, xserver-utils, as Ubuntu by default doesn’t have an X-server.

master:~/ ssh -X user@worker
worker:~/ export DISPLAY=:0
worker:~/ ./gui_app

  • for docker containers, there is also an X authority property to take care of.

Dockerfile

when a Docker container starts, usually we want to auto-start a shell process, via CMD or ENTRYPOINT defined in the Dockerfile.

base images, e.g. Ubuntu, busybox, which serve as the base for upper applications, usually use CMD at the end of the Dockerfile; but if the Docker container is specialized for a certain application, ENTRYPOINT is usually used.

the last line of Ubuntu 16.04 Dockerfile:

CMD ["/bin/bash"]

usually in one Dockerfile, there is only one CMD or ENTRYPOINT, and it’s better written in exec format:

CMD ["executable", "param1", "param2"]

ENTRYPOINT ["executable", "param1", "param2"]

since in shell format:

CMD executable param1 param2

Docker will invoke /bin/sh first by default if no shell is defined, and this shell is always the first process in the container, which sometimes is not what we expect.

when using both CMD and ENTRYPOINT in one Dockerfile, the CMD content is appended as arguments to the ENTRYPOINT command.

a sample of Dockerfile:

FROM <image>:[<tag>] [AS <name>]
ADD [--chown=<user>:<group>] <src> ... <dest>
ADD ["<src>", ... "<dest>"]
COPY <src> ... <dest>
RUN <command>
VOLUME /mount/name
CMD ["executable", "param1", "param2"]
CMD command par1 par2

from docker image to dockerfile

we can easily pull images from the hub, but when we want to rebuild an image directly, there is also a way to recover a Dockerfile from an existing docker image: dfimage

sudo docker pull chenzj/dfimage

mount host volume to container

either define it in the Dockerfile with VOLUME, which creates a volume name in the image, or at runtime use docker run -v /host/volume/path:/container/volume/path to bind a host volume into the container.

where are you in next 5 years 8

Posted on 2019-07-09 |

How different will retirement be for a salaried employee versus a freelancer/founder/boss? Perhaps for the salaried class, retired life will be just like working life: an indescribable helplessness, never having lived with real exhilaration. Having left a job one never enjoyed, one can hardly enjoy life either. Like the old man released in The Shawshank Redemption: out of prison, yet unable to fit into society.

A cousin of mine started a business, built a fitness brand with some success, then moved on to serial entrepreneurship; the second venture is office software for township government enterprises. The two ventures are very different. My cousin even said this one will probably end within a year; life goes on, and where the next venture will be, he has no idea right now, but he isn’t worried there won’t be one.

I know a woman, born in 1988, who studied an MBA in Australia for two years and returned to China in 2012. From my perspective, she has never held a proper job since; she traveled around, learned yoga over the years, and this year (2019) opened a shop. At 32 she lives like a girl in her early twenties. Is she simply carefree, or has she turned life into something others envy? She doesn’t fit my values, but isn’t such a life worth living?

Early this year in Shenzhen, I was genuinely shocked by the young people there. At big companies (Huawei) they work overtime past midnight; young lives blaze and burn like the tropical vegetation along the road. My cousin just said: they’ve been brainwashed.

My mother has worked in several cities, as a nanny, kitchen helper, housekeeper. Wherever she goes, her clients like her; wherever she goes, she can make a living, never worrying about lacking skills or being unable to find work. Like a freelancer.

The mindsets of an employee and of a founder/boss seem fundamentally different. Entrepreneurs/freelancers always find a way out; every path in life is open to them.

By contrast, the employee’s mindset is one of being blocked at every turn, while worrying no less: sharpening one’s head to compete with wave after wave of younger newcomers, worrying about being outmatched technically by the experts, about projects being cancelled for all sorts of reasons, about the industry cooling down, about the boss making life difficult, about having no irreplaceable core skill at 30, about scraping for a living at 35, about being fired for earning more than the young, and then about one’s own health, the family’s health, and so on. The remedies adopted all revolve around these pressures; life has no direction, let alone freedom and enjoyment.

Life is more than the drudgery in front of you. Employees truly wrong their own hearts, narrowing their road and then believing it is the only way. Employment cannot resolve the anxiety; a change is necessary. In this world, besides life and death, everything is trivial. So being between jobs doesn’t matter; if I cannot find a career that truly realizes my value, I would rather live like someone unemployed.

working for a private company

There is no well-established system; the good side is that you can stake out your own territory. As long as you convince the leadership, you can make bold technical attempts, though you won’t necessarily get support.

There is no stable product-building atmosphere, so even if a project is approved, you may never see it land. For product developers, this hinders accumulation.

In such an atmosphere, trimming one’s sails to the wind becomes the survival philosophy.

One can extrapolate to the broader mass of small and mid-sized private companies, which lack sound product processes and evaluation systems even more; the boss alone has too much say, and employees have basically none.

what qualities distinguish those who strike out on their own?

First, the pressure should not come from squeezing over a single-log bridge with thousands of others. Every clearly defined position (programmer, accountant, engineer, independent online seller) is crowded. Once trapped in that mindset, thinking narrows down to fighting one’s way to the head of that profession, at a wildly disproportionate cost. In short: brainwashed.

Employment only suits the initial accumulation of capital. Those who choose to become dance teachers, yoga instructors, or café owners gradually win at both life and work, and after retirement they have no trouble knowing how to run their lives. Those who only ever work for one big company may do well at work early on, but gradually lose both life and work, and after retirement have no idea how to manage life at all.

Those willing to strike out on their own probably cherish life more, and so cannot accept burying their energy in day-after-day routine work. Many of them were forced into it: they had no chance at a secure job back then, so they had to carve out a road of their own.

Deliberately avoiding employment is probably asking for trouble too. So: equanimity.

the atmosphere for doing technology

What environment/atmosphere really suits doing technology? The North American work environment, compared with China, is relatively carefree; despite some annoying colleagues, with intention one always has the money and time to tinker with technology. Yet ordinary people in such an environment show no strong research enthusiasm. In China, by comparison, the pay and the likability of the work are lower, and there are fewer gurus around, but precisely because of the pressure of life and competition, people actually want to learn more.

On the other hand, capitalists in China seem to lack reverence for the industry; to them the industry is just another money-making tool, so of course they will not bring truly great technical progress to it. For example, when Baoneng and Evergrande each threw down a pile of money announcing they would build cars, recruiting with great fanfare across the autonomous-driving modules (perception algorithms, motion planning, decision and control, mapping and localization), I didn’t know whether to laugh or cry, and felt helpless: as an engineer you are just a chess piece, defined by a tool/algorithm/module.

Compared with the faces of these money-only capitalists, the car makers, though capitalists stand behind them too, seem to care more about the automotive industry itself. Of course, in China one must always face money and the reality of shipping. Americans can talk about vision and go 3-5 years without a product; the Chinese capital market basically doesn’t allow that. So even doing technology/research inside a Chinese car maker, you are pushed along by mass production. The overall atmosphere is impatient, and it’s hard to concentrate.

computing chips in AV

Posted on 2019-06-23 |

chips requirements in vehicle

Bosch :
Bosch

BMW:
BMW

the next-generation vehicle EE platform can be easily modularized based on the topology of a network composed of domain controllers and in-vehicle Ethernet. take Singulato iS6 as an example, which has five domains: smart driving, powertrain, chassis, smart body, smart seat.

each domain is supported by a domain controller unit (DCU), the core of which is a powerful computing chip, usually more powerful than traditional ECUs. on average, L2 requires about 10 TOPS of computing power, L3 about 60 TOPS, and L4 about 100 TOPS.

computing chips

product | MDC600 | Driver PX Pegasus | EyeQ 4 | BlueBox | R-car H3 | Journey2.0
vendor | Huawei | Nvidia | Mobileye | NXP | Renesas | Horizon
TOPS | 352 | 320 | 2.5 | 10 | |
main cores | 8 * Ascend 310 | 16 ARM | VMP | LS2084A | 4*Arm/A57 | BPU2.0
other cores | Ascend 310 | 2 TensorCore GPU | | S32V234 | 4*Arm/A53 | FPGA
AV-level | L3+ | L5 | L3 | L4(target) | IVI | L3
camera support | 16 | 10 | 8 | 8 | 8 | 4
lidar support | 8 | 6 | | | |
function safety | ASIL-D | ASIL-B | ASIL-D | ASIL-D(target) | ASIL-B | ASIL-B

R-car H3

products timeline

Tier1s/Tier2s timeline

2018 2019 2020 2021
Aptiv level3 level4
Bosch level2 level3 level4
Conti level2 level3+
Autoliv level2 level3 level4
Intel level3 level4+
Nvidia level3 level4+

Chinese OEMs timeline

2019 2020 2021 2022+
changan level4
FAW level4
GAC level3 level5
Geely level3 level5
GWM level3 level4
SAIC level3
xiaoPeng level3
WeiMa level3
Nio level2 level4+

global OEMs timeline

2018 2019 2020 2021 2022+
Ford level2 level4
GM level2 level4
Fiat-Chrysler level3+
Audi level2 level4
Mercedes level2 level3 level4
Toyota level2 level3 level4
Honda level3 level4

reference

Matrix 1.0

the five chip vendors

from GPU to ASIC

hauwei and the others

global chips vendors

L4 AI chips

the evolution of EyeQ

Mobileye tech

NXP bluebox

R-car H3 soc

NXP function safety

Horizon AI Matrix 2.0

cars, mobility, chip-to-city design and the Iphone4

2018-2019 automotive domain controller industry research report

the evolution of automotive electronics

Global L3 self-driving vehicle market insights 2019

self-driving car research report

Lidar in AV

Posted on 2019-06-22 |

science, religion, music, universe as well as other sources of beauty, are what we humans should look for. – zj


operational theory

a pulse of light is emitted and the precise time is recorded; the reflection of that pulse is detected and its precise time is recorded too. using the constant speed of light, the delay converts into distance, and with the known position and orientation of the sensor, the xyz position of the reflective surface can be calculated.
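
the calculation above is just distance = speed of light × delay / 2, since the pulse travels out and back; a minimal sketch, with an illustrative 667 ns delay:

```python
# Time-of-flight ranging: distance = c * delay / 2 (pulse travels out and back).
C = 299_792_458.0  # speed of light, m/s

def pulse_range(t_emit_s, t_detect_s):
    """Range to the reflective surface from emit/detect timestamps (seconds)."""
    return C * (t_detect_s - t_emit_s) / 2.0

# a return detected 667 ns after emission corresponds to a surface ~100 m away
print(round(pulse_range(0.0, 667e-9), 1))  # ~100.0 m
```

this is also why the high-precision clock is listed among the components: at 100 m, a 1 ns timing error already shifts the range by about 15 cm.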

components

  • laser scanner/emitter and laser detector

  • high-precision clock

  • GPS and GPS ground station
    record xyz of the scanner

  • IMU
    record angular orientation of the scanner

field of view(FOV)

azimuth scanning with fixed vertical angle resolution: each neighboring laser emitter-detector pair creates a concentric circle, and the distance between two neighboring concentric circles grows with the distance from the detected objects to the lidar.

pic of point cloud concentric circle
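
the growth of ring spacing follows from simple geometry: a beam pointed down at angle θ below horizontal, from a sensor at height h, hits the ground at radius h/tan θ. a sketch with an assumed 1.8 m mount height and a 0.33° vertical resolution:

```python
import math

# For a lidar at height h, a beam at angle theta (degrees below horizontal)
# intersects flat ground at radius r = h / tan(theta). Neighboring beams,
# separated by a fixed vertical resolution, therefore give rings whose
# spacing grows as the beams get shallower, i.e. with distance.
# h and the beam angles are illustrative.

h = 1.8          # mount height in metres (assumed)
v_res = 0.33     # vertical angle resolution in degrees

def ring_radius(theta_deg):
    return h / math.tan(math.radians(theta_deg))

for theta in (10.0, 5.0, 2.0):
    gap = ring_radius(theta - v_res) - ring_radius(theta)
    print(f"beam {theta:4.1f}° down: ring r = {ring_radius(theta):6.1f} m, "
          f"gap to next ring = {gap:5.2f} m")
```

running this shows the gap between neighboring rings at ~10 m range is centimetres, while near 50 m it is metres, which is why distant objects are sampled so sparsely.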

light source

a 950nm light source is Si-based, which makes it cheaper than the 1550nm InGaAs-based source; 1550nm is also safer for human eyes, since a powerful 950nm beam can burn the retina.

on the other hand, 1550nm is absorbed by water more easily than 950nm, which degrades its performance in rainy days.

the laser source emits lines of pulses every frame, and a few photons return to the photodetector. there are lots of environmental photons (noise); we can use a narrow-band filter to reject some of them, but not all, since solar radiation spans the range from 905nm to 1550nm.

solid-state

there are two ongoing approaches: MEMS based, and phased-array tech. MEMS uses a micro scanning mirror that rotates or vibrates to steer the laser direction; the drawback of the micro-mirror is that much of the laser energy is lost in the reflection process.

phased-array tech (Quanergy) integrates multiple micro laser emitters into one socket to steer the laser direction; its drawback at this moment is the short detection distance.

the traditional mechanical lidar (Velodyne) usually has multiple light emitters with correspondingly multiple light detectors, while a solid-state lidar depends on a single light emitter plus a scanning mirror to steer the emission direction, which makes it cheaper. for example, each mechanical emitter-detector pair costs about 200 US dollars, so a 64-line product costs about 12,800 US dollars, compared to a MEMS socket at about 200 US dollars each.

detection distance(dd) & angle resolution(ar)

detection distance with 10% reflectivity

vertical angle range(var)

vertical angle resolution(va_res)

company | product | dd(m) | var | va_res | channels
Hesai | Pandar40 | 200 | 23° | 0.33° | 40
RoboSense | RS-Lidar-32 | 200 | 40° | 0.33° | 32
Velodyne | HDL-32e | 100 | 41.3° | | 32
Quanergy | M8 | 150 | 20° | | 32
Ibeo | NSH_32 | 80 | 16° | 0.2° | 32
InnoVusion | Cheetah | 200 | 40° | 0.13° | 300

env effects

what about weather effects, e.g. snow, dust, rain? and what about environmental effects, e.g. temperature, system vibration?

fusion with camera

the speed of productivization

when I first heard about InnoVusion: the founder Bao Junwei was working at Baidu AI, got the idea to produce lidar around 2015, then left Baidu and started InnoVusion; by the end of 2016, their first product, Cheetah, was born.

this process is really speedy. one thought: the driving force, whether from the capital market or from industry needs, has become so strong that the time from a good idea to a product keeps getting shorter. the other thought: only such highly effective people will survive in this fast-iteration world.

some other founders’ stories are here: the AI masters who left Baidu

reference

Ibeo Next 3D SS-Lidar

Innovusion Cheetah Lidar

Velodyne HDL_32e product manual

principles of GNSS positioning

Posted on 2019-06-17 |

novatel introduction

GNSS architecture

a) space segment

the GNSS satellites, each of which broadcasts a signal that identifies it and provides its time, orbit and status.

b) control segment

a ground-based network of master control stations, data uploading stations and monitor stations. in the case of GPS: 2 master control stations (one primary, one backup), 4 data uploading stations, and 16 monitor stations

c) user segment

the user equipment that processes the received signals.

GNSS propagation

the layer of atmosphere that most influences the transmission of GPS signals is the ionosphere; ionospheric delays are frequency dependent. the other layer is the troposphere, whose delay is a function of local temperature, pressure and relative humidity.

some signal energy is reflected on the way to the receiver, which is called “multipath propagation”.

Antenna

each GNSS constellation has its own signal frequencies and bandwidths; an antenna must cover those signal frequencies and bandwidths.

antenna gain is defined as the relative measure of an antenna’s ability to direct or concentrate radio frequency energy in a particular direction or pattern. a minimum gain is required to achieve a minimum carrier-to-noise power ratio to track GNSS satellites.

GNSS error sources

contributing source | error range
satellite clocks | ±2 m
orbit errors | ±2.5 m
ionospheric delays | ±5 m
tropospheric delays | ±0.5 m
receiver noise | ±0.3 m
multipath | ±1 m

Resolving errors

multi-constellation & multi-frequency

multi-frequency is the most effective way to remove ionospheric error: by comparing the delays of two GNSS signals, L1 and L2, the receiver can correct for the impact of ionospheric errors.
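
the correction works because ionospheric delay scales as 1/f², so a standard ionosphere-free combination of the two pseudoranges cancels the first-order term. a sketch with illustrative numbers (the GPS L1/L2 center frequencies are real):

```python
# Dual-frequency ionosphere-free pseudorange combination (GPS L1/L2).
# Ionospheric delay ~ A/f^2, so combining two frequencies cancels the
# first-order ionospheric term. Range and delay values are illustrative.

F_L1 = 1575.42e6  # Hz
F_L2 = 1227.60e6  # Hz

def iono_free(p1_m, p2_m):
    """Ionosphere-free combination of L1/L2 pseudoranges (metres)."""
    g = (F_L1 / F_L2) ** 2
    return (g * p1_m - p2_m) / (g - 1.0)

true_range = 20_000_000.0           # ~20,000 km geometric range
i_l1 = 5.0                          # metres of ionospheric delay on L1
i_l2 = i_l1 * (F_L1 / F_L2) ** 2    # L2 delay is larger (lower frequency)

p_if = iono_free(true_range + i_l1, true_range + i_l2)
print(p_if - true_range)  # ~0: first-order ionospheric error removed
```

note the combination amplifies receiver noise somewhat; it only removes the frequency-dependent ionospheric term, not the tropospheric or multipath errors.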

multi-constellation has benefits: reduced signal acquisition time, and improved position and time accuracy.

D-GNSS

in differential GNSS (D-GNSS), a fixed GNSS receiver at a known position, referred to as a base station, sends the atmospheric-delay-related errors to the rovers, which incorporate the corrections into their position calculations.

differential positioning requires a data link between the base station and the rovers if corrections are to be applied in real time; D-GNSS works very well with base-station-to-rover separations of up to 10km.

Real time kinematic(RTK)

it uses measurements of the phase of the signal’s carrier wave, in addition to the information content of the signal, and relies on a single fixed reference station to provide real-time corrections, achieving up to centimetre-level accuracy.

the range to a satellite is calculated by multiplying the carrier wavelength by the number of whole cycles between the satellite and the rover and adding the phase difference. this results in an error equal to the error in the estimated number of cycles times the wavelength (the so-called integer ambiguity search), which is 19cm for the L1 signal.
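
the carrier-phase range and the effect of a one-cycle ambiguity error can be checked numerically; the cycle count below is illustrative:

```python
# Carrier-phase range: range = wavelength * (integer cycles + fractional phase).
# A one-cycle error in the integer ambiguity shifts the range by exactly one
# wavelength, ~19 cm for GPS L1. The cycle count is illustrative.

C = 299_792_458.0
F_L1 = 1575.42e6
LAMBDA_L1 = C / F_L1  # ~0.19 m

def carrier_range(n_cycles, frac_phase):
    """Range from integer cycle count plus fractional phase (in cycles)."""
    return LAMBDA_L1 * (n_cycles + frac_phase)

r_true = carrier_range(105_263_157, 0.25)
r_off  = carrier_range(105_263_158, 0.25)  # ambiguity wrong by one cycle
print(r_off - r_true)  # one wavelength, ~0.19 m
```

this is why RTK spends its effort on resolving the integer ambiguity: once the cycle count is fixed correctly, the remaining phase measurement is at the millimetre level.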

Precise Point Positioning(PPP)

PPP solution depends on GNSS satellite clock and orbit corrections, generated from a network of global reference stations.

GNSS + IMU

the external reference can quite effectively be provided by GNSS: it provides an absolute set of coordinates that can be used as the initial start point, as well as continuous positions and velocities thereafter, which are used to update the IMU/INS filter estimates.

additional sensors can be combined as well, such as odometers and camera vision.

challenges of GNSS in AV

talk from iMorpheus.ai

1) antenna

2) multipath mitigation

3) multi-band, multi-constellation signals

4) integrated navigation (camera )

Manhattan SC review

Posted on 2019-06-12 |

conjunctions

the seven conjunctions that can be used to connect two independent clauses:

For,   And, Nor,  But, Or,  Yet, So

a comma alone can’t connect two sentences; but two independent clauses can be connected using a semicolon (;)

a semicolon is often followed by a transition expression (however, therefore, in addition), but these expressions are not conjunctions, so semicolons, not commas, must be used to join.

noun modifiers

in the format: preposition, past participle, present participle without commas. put the noun and its modifier as close together as possible

“ comma which” is a nonessential modifier

relative pronouns

which: can’t modify people

who/whom: must modify people

whose: can modify either people or things

where: can modify a noun place; can’t modify a metaphorical place, such as a situation, case …

when: can modify a noun event or time

Noun Modifier markers

any -ing form that is not a verb and not separated from the rest of the sentence by a comma will be either a noun or a modifier of another noun.

any “comma -ing” is an adverbial modifier

adverbial modifier

in the format of: prepositional phrase, present participle with commas , past participle with commas

the adverbial modifier must modify a certain verb or clause at the right position, not be structurally closer to another verb or clause

participle modifiers

when using participles, the information presented earlier in the sentence leads to or results in the information presented later in the sentence.

subordinators

a subordinate clause provides additional info about the main clause. common subordinator markers:

although,  before,  unless, because,  that, so that, 
 if, yet, after, while, since, when. 

and use only one connecting word per “connection”.

which vs -ing

which must refer to a noun; it can’t modify a whole clause.

so if you need to modify the whole clause, use an adverbial modifier (either -ing or past participle)

quantity

countable modifiers:

many,  few, fewer,  fewest,  number of ,  numerous 

uncountable modifiers:

much, little, less, least, amount of , great 

more, enough, all works with both countable and uncountable.

parallelism

comparable sentence parts must be structurally and logically similar

comparison

comparisons are a subset of parallelism: they require parallelism between two elements, but also require that the two compared items are fundamentally the same type of thing

like,  unlike,  as, than,  as adj as, 
 different from,  in contrast to/with 

like vs as

like is used to compare nouns, pronouns or noun phrases; never put a clause or prepositional phrase after like.

as can be used to compare two clauses.

how simulation helps for autonomous vehicle

Posted on 2019-06-06 |

preface

recently I joined another Chinese auto OEM, still working on the simulation platform for autonomous vehicles. this experience is a different practice from GM or Ford.

at first, it’s a mindset update. for a long time I took for granted that I was looking to be treated greatly, like in NA companies: what the company can offer me. but it’s about freedom and responsibility. to have the freedom/right to make a difference in project direction, contents, and methods, first prove that I can take it, which is responsibility.

“you can you up” (if you think you can do it, step up and do it): that kind of philosophy is very common, and actually I’d like it to some extent.

beta version

there are ways to make a virtual simulator. traditional tool vendors include Matlab/Simulink, Prescan, VTD, ANSYS etc., and as the beta version of the simulator tool-chain, they are common in most OEMs in China. but I have to say that in China, most software tools, from engineering tools to modern HR management tools to product management tools, are still in a trial phase in Chinese companies as far as I can see, rather than a mature segment plugged into the company’s DNA. e.g. product development tools are mostly in trial and error; occasionally some PLM vendors come to introduce their products. except for CAD/CAE tools, which were introduced to China in the 80s and sound mature by now, simulation tools for autonomous vehicles, from maps and sensors to all kinds of internal algorithms, are pretty new things.

the beta version of simulation tools has its own DNA as well. first, they charge a license fee, a pretty old model from the IBM days; not many modern Internet mobility companies live this way anymore.

secondly, they can’t update frequently; one or two new product versions a year is common. that may be OK for CAD/CAE tools, since the engineering analysis process is mature and few exceptional requirements jump out in daily work. there, an update sometimes comes from the vendor itself, not a required update, maybe optimized algorithms or an added third-party function; and sometimes it comes from the users conference. the tool vendor usually organizes this kind of user conference to group users together, partly to build connections, and partly to collect end-user requirements to fix in the next product version. this is a drawback compared to an open-source tool community, where the users are much more active.

thirdly, the beta-version tools somewhat exclude the users, who can’t actually manage the tool across its whole function set. assuming simulation will be the key to making a difference in future auto products, this exclusion is unacceptable. on the other side, if autonomous simulation tools at most aid the development of new products, then they become a chicken rib: barely worth keeping, a pity to throw away. from a product view: what is the role of the simulation tool, or of simulation as a product itself, and where is its blooming point?

alpha version

why did simulation tools become a role in autonomous driving? beyond the traditional vendors, the new players mostly come from Internet companies, e.g. Google, Uber, Baidu, Tencent etc. they are pretty strong at making software products, including simulation software for the auto industry. interestingly, these Internet companies never went on to make CAD/CAE software; they actually prefer to supply the cloud/HPC infrastructure for CAD/CAE simulation.

these big mobility companies build their simulation tools to work well with their existing IT development methodology, software product ideas, and all their ICT infrastructure, which makes these tools sound hugely great, and which also makes them, rather than the OEMs, the core role. few are free or open-sourced yet.

and there are a few popular open-source simulation tools, e.g. Carla, Airsim, LG simulator etc. I actually got a chance to use Carla on a GM research team; it is still great, though I didn’t study it deeper. for me it is a great tool for AI training, which should be the key role of simulation tools in the future, since L3 and below really care little about environment info, and their rule-based decisions have no need for a large chunk of virtual testing. that’s also a difference between Chinese and NA teams: NA teams tend to invest in new tech even without immediate money back, while Chinese companies prefer immediate returns.

lg simulator

I never tried Unreal, but Unity looks pretty easy to use, though it still takes some time to get familiar with the editor and play with scenes. the LG simulator gives a great reference for building a virtual city and making autonomous vehicles run in it. the real problem for most teams is how to make this tool useful in product development.

we can think about the tool structure itself, like cloud running features, implementing measurement/verification methods, and building a scene database, or approach it from the systems-engineering point of view.

for L3 and below, simulation is not the driver but at most additional support for product development, as far as I can see. so what is simulation for? L4+, which leads to AI, or data-driven product development!

from this point, the simulation itself is not the core; the AI is. so think about that for career development.

© 2020 David Z.J. Lee