play with Docker swarm/compose

Docker swarm

docker swarm is Docker's native cluster manager, with a built-in DNS-based service discovery mechanism and built-in load balancing. Compared to k8s, it is lightweight and easy to get started with.

create a swarm cluster

export MASTER_IP=192.168.0.1
# on the manager node
docker swarm init --advertise-addr ${MASTER_IP}
# print the join command (with token) for worker nodes
docker swarm join-token worker
# on each worker node, join with the token printed above
docker swarm join --token <token> ${MASTER_IP}:2377
# list nodes (run on a manager)
docker node ls
# demote/promote a node from/to manager
docker node demote <nodeID>
docker node promote <nodeID>
# remove a node
docker node rm worker1
# leave swarm mode
docker swarm leave

create a service

docker service create --name <service-name> <image>
# scale service
docker service scale <service-name>=<replicas>
docker service rm <service-name>
docker service inspect <service-name>

create an overlay network

docker network create --subnet=192.168.0.0/24 -d overlay ppss-net
docker network rm ppss-net
# attach a running container to a network
docker network connect <network-name> <container-name>

in swarm mode, there are three networks created by default:

  • bridge, the default local bridge network (on the docker0 interface)
  • docker_gwbridge, a local bridge on each host that connects the containers on that host and gives overlay-attached containers a way out to the host's network
  • ingress, an overlay network used across the swarm cluster for routing published service ports

however, in swarm mode the default network for a service is still a local bridge; to reach each other across physical hosts, services need to be attached to an overlay network.
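
As a minimal sketch (the network name app-net is just an example), a service can be attached to a user-defined overlay network so its tasks are reachable across hosts:

# create an attachable overlay network (--attachable also lets plain containers join it)
docker network create -d overlay --attachable app-net
# attach a service to it; its tasks can be reached by service name from any node in the swarm
docker service create --name web --network app-net --replicas 2 nginx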

load balancing

Ingress load balancing

exposes a Docker service to the external network: a port published for a service is reachable on every node in the swarm and routed to the service's replicas.

Internal load balancing

swarm mode has a built-in DNS: inside the cluster, a service name resolves to a virtual IP, and requests to it are load-balanced across the service's replicas.
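
Reusing the app-net network and the web service from the sketch above, both mechanisms can be seen like this:

# ingress load balancing: publish a port; it becomes reachable on every swarm node
# and is routed to the service's replicas through the routing mesh
docker service update --publish-add 8080:80 web
curl http://${MASTER_IP}:8080
# internal load balancing: on the same overlay network, the built-in DNS resolves
# the service name to a virtual IP that balances across the replicas
docker run --rm --network app-net alpine nslookup web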

Docker compose

docker compose is a build/management tool for creating an application that combines a bunch of micro-services, each of which runs as a Docker container.

the docker-compose.yml configuration file has to reference each micro-service's Dockerfile (or image) and the scripts/commands that run the application.
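
A typical workflow with such a file looks like this (a sketch; the actual service names depend on the yml):

# build the images defined in docker-compose.yml
docker-compose build
# start the whole application in the background
docker-compose up -d
# check the running containers and follow their logs
docker-compose ps
docker-compose logs -f
# stop and remove the containers (and the default network)
docker-compose down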

service startup order

the services defined in docker-compose.yml do not necessarily depend on each other, so each service can be brought up individually, but of course they can also be made to depend on one another.
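
For instance, a single service can be started on its own, and startup order can be made explicit with depends_on (the service names db and web here are just examples):

# bring up only the database service (plus anything it depends_on)
docker-compose up -d db
# bring up the web service; with "depends_on: [db]" in the yml,
# compose starts db first (it waits for db to start, not to be "ready")
docker-compose up -d web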

docker-compose.yml

best practice

  • build

    path to the build context (the directory containing the Dockerfile); can be an absolute path or a path relative to the .yml file. Compose will build the image based on it.

  • context
    sub-option under build; points to the build context (a dockerfile sub-option can select a specific Dockerfile)

  • image

    the image to use; if it is not present locally, it will be pulled from the hub (as opposed to building from a Dockerfile)

  • container_name

  • volumes
    volumes to attach, in the format HOST:CONTAINER[:access_mode]

  • network_mode

same as docker run --network

  • init

  • privileged

  • command

overrides the default command when the service's container starts

  • environment

set environment variables, in the format ENV: value;
if only ENV is given, the value is taken from the host machine

  • runtime: nvidia

to support nvidia-docker

e.g.

nvsmi:
  image: ubuntu:16.04
  runtime: nvidia
  environment:
    - NVIDIA_VISIBLE_DEVICES=all
  command: nvidia-smi
  • stdin_open

keep stdin open (like docker run -i)

  • tty

allocate a pseudo-terminal (like docker run -t)
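
stdin_open and tty together give the container an interactive terminal, the compose counterpart of docker run -it. With a compose project you can also get a shell directly (the service name web and the bash shell are just examples, assuming the image ships bash):

# one-off container with an interactive TTY (docker-compose run allocates a TTY by default)
docker-compose run --rm web bash
# or attach to an already running service container
docker-compose exec web bash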

sample yml from project

a sample yml for web app:

services:
  web:
    build: .
    links:
      - "db:database"
  db:
    image: postgres

a sample yml for general app CI:

services:
  build:
    build:
      context: .
      dockerfile: ./Dockerfile
    image: docker-image
    container_name: build_app
    volumes:
      - ./build_scripts:/root/build
    command: /root/build/build.sh
  run:
    build:
      context: .
      dockerfile: ./Dockerfile
    image: docker-image
    container_name: run_app
    volumes:
      - ./run_scripts:/root/run
    environment:
      - DISPLAY
      - ROS_MASTER
    network_mode: host
    runtime: nvidia
    command: /root/run/run.sh
  test:
    # TODO
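
With a layout like this, each stage of the pipeline can be run as its own compose service (a sketch against the service names above):

# build stage: runs build.sh inside the build_app container and exits
docker-compose up --build build
# run stage: starts the application with host networking and the nvidia runtime
docker-compose up run
# one-off invocation that removes the container afterwards
docker-compose run --rm build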

docker-machine

docker-machine is a tool to provision virtual Docker hosts, either on one physical host or across multiple physical hosts.
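
A minimal sketch of its usage (the virtualbox driver and the host name node1 are just examples):

# create a virtual Docker host named node1 using the VirtualBox driver
docker-machine create --driver virtualbox node1
# list the machines managed by docker-machine
docker-machine ls
# point the local docker CLI at node1
eval $(docker-machine env node1)
# ssh into the machine
docker-machine ssh node1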

ros-docker

in a self-driving software stack, ROS is often used, and there is a need to deploy ROS in Docker. next time.