play with k8s

basic k8s

kubeadm init

init options can be (see the example command after this list):

  • --apiserver-bind-port int32, by default, port=6443

  • --config string, can pass in a kubeadm.config file to create a kube master node

  • --node-name string, set the node name

  • --pod-network-cidr string, used to set the IP address range for all Pods.

  • --service-cidr string, set the service CIDR; default value is 10.96.0.0/12

  • --service-dns-domain string, default value is cluster.local

  • --apiserver-advertise-address string, the IP address the API server advertises that it is listening on
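
a minimal sketch of how these options fit together, assuming the master IP from the node table below and flannel's usual pod CIDR (these values are placeholders, not prescriptions):

kubeadm init \
  --apiserver-advertise-address=192.168.0.1 \
  --apiserver-bind-port=6443 \
  --node-name=master \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --service-dns-domain=cluster.local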

node components

IP hostname components
192.168.0.1 master kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, docker, flannel, dashboard
192.168.0.2 worker kubelet, docker, flannel

ApiServer

when the kubelet first launches, it sends a bootstrapping request to kube-apiserver, which then verifies whether the token it sent matches.

--advertise-address=${master_ip}
--bind-address=${master_ip}                  # can't be 127.0.0.1
--insecure-bind-address=${master_ip}
--token-auth-file=/etc/kubernetes/token.csv
--service-node-port-range=${NODE_PORT_RANGE}
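
the token file referenced by --token-auth-file is a plain CSV with columns token, user name, user uid, and optional group names; a minimal sketch (the token value here is made up):

02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:kubelet-bootstrap"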

how to configure the master node

cluster IP

it's the service IP, which is internal to the cluster; clients usually reach it through the service name.

the cluster IP defaults are as follows:

--service-cluster-ip-range=10.254.0.0/16
--service-node-port-range=30000-32767

k8s in practice

BlueKing is a k8s solution from Tencent. here is a quickstart:

  • create a task

  • add an agent for the task

  • run the task & check the sys log

  • create task pipeline (CI/CD)

create a new service in k8s

kubectl create -f my-nginx-2.yaml
kubectl get pods -o wide
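
the content of my-nginx-2.yaml isn't shown here; a minimal sketch of what it might contain, assuming a plain nginx Deployment with two replicas (image tag and labels are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx-2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx-2
  template:
    metadata:
      labels:
        app: my-nginx-2
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80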

how do external clients access a k8s pod service?

a pod has its own IP and its own lifecycle. once a node shuts down, the controller manager can move its pods to another node. when multiple pods provide the same service to front-end users, those users don't care which pod is actually serving them, which is where the concept of service comes in:

service is an abstraction which defines a logical set of Pods and a policy by which to access them

a service can be defined in yaml or json, and the target pods are selected by a LabelSelector. a few ways to expose a service:

  • ClusterIP, the default, which only works inside the k8s cluster

  • NodePort, which uses NAT to provide external access through a dedicated port on every node, allocated from the node-port range (30000-32767 by default). no matter which node the pod is actually running on, accessing <node_ip>:<node_port> reaches the service.

  • LoadBalancer

kubectl get services
kubectl expose deployment your_deployment --type="NodePort" --port 8889
kubectl describe service your_service
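
once exposed, the node port assigned by k8s can be read back and hit from outside the cluster; a rough usage example with placeholder names:

kubectl get service your_service          # the PORT(S) column shows something like 8889:3xxxx/TCP
curl http://<node_ip>:<node_port>         # any node IP works, regardless of where the pod runs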

use persistent volume

  • access an external SQL database

  • use volume

a volume is for persistence. a k8s volume is similar to a docker volume and works as a directory: when a volume is mounted into a pod, all containers in that pod can access it.

  • EmptyDir
  • hostPath
  • external storage services (AWS, Azure); k8s can directly use cloud storage as a volume, or a distributed storage system (Ceph):

sample

apiVersion: v1
kind: Pod
metadata:
  name: using-ebs-and-ceph        # one pod mounting both an EBS volume and a CephFS volume
spec:
  containers:
  - image: busybox1
    name: using-ebs
    volumeMounts:
    - mountPath: /test-ebs
      name: ebs-volume
  - image: busybox2
    name: using-ceph
    volumeMounts:
    - name: ceph-volume
      mountPath: /test-ceph
  volumes:
  - name: ebs-volume
    awsElasticBlockStore:
      volumeID: <volume_id>
      fsType: ext4
  - name: ceph-volume
    cephfs:
      path: /path/in/ceph
      monitors:
      - "10.20.181.112:6679"
      secretFile: "/etc/ceph/admin/secret"

container communication in the same pod

first, containers in the same pod share the same network namespace, the same IPC namespace, and can share volumes.

  • shared volumes in a pod

when one container writes logs or other files to the shared directory, the other container can read them from the same directory, as sketched below.

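a minimal sketch of this pattern with an emptyDir volume (container names, images, and commands are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do date >> /data/app.log; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "until [ -f /data/app.log ]; do sleep 1; done; tail -f /data/app.log"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}                 # lives as long as the pod, shared by both containers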

  • inter-process communication (IPC)

as they share the same IPC namespace, they can communicate with each other using standard IPC mechanisms, e.g. POSIX shared memory or System V semaphores.


  • inter-container network communication

containers in a pod are reachable via localhost, as they share the same network namespace. to the outside, the observable hostname is the pod's name; since the containers share the same IP and port space, each container needs a different port for incoming connections.


basically, an external incoming HTTP request to port 80 is forwarded to port 5000 on localhost inside the pod, and that port is not visible externally.
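
a rough sketch of that setup, assuming an nginx sidecar on port 80 proxying to a hypothetical app image listening on port 5000 (all names here are made up, not from the original post):

apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-conf
data:
  default.conf: |
    server {
      listen 80;
      location / {
        proxy_pass http://127.0.0.1:5000;   # the app container, same network namespace
      }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: web-with-proxy
spec:
  containers:
  - name: web                         # only this port is meant to be reached from outside
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: conf
      mountPath: /etc/nginx/conf.d
  - name: app
    image: my-app:latest              # hypothetical image listening on port 5000
  volumes:
  - name: conf
    configMap:
      name: proxy-conf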

how two services communicate

  • ads_runner
apiVersion: v1
kind: Service
metadata:
  name: ads-runner        # k8s object names can't contain underscores
spec:
  selector:
    app: ads
    tier: api
  ports:
  - protocol: TCP
    port: 5000
    nodePort: 30400
  type: NodePort

if there is a need to autoscale the service, check k8s autoscaling based on the size of the queue.
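
queue-based autoscaling needs custom metrics; as a simpler starting point, a CPU-based sketch with kubectl (the deployment name and thresholds are assumptions):

kubectl autoscale deployment ads-runner --min=1 --max=5 --cpu-percent=80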

  • redis-job-queue
apiVersion: v1
kind: Service
metadata:
  name: redis-job-queue
spec:
  selector:
    app: redis
    tier: broker
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379      # the port exposed by the Redis pod

ads_runner can reach redis at redis-job-queue:6379 inside the k8s cluster.

redis replication has a solid async mechanism for running multiple redis instances simultaneously, so when the redis service needs to scale, it is fine to start a few replicas of it as well.

redis work queue

to set up a redis task queue (see the redis-cli sketch after this list):

  • start a storage service(redis) to hold the work queue
  • create a queue and fill it with messages; each message represents one task to be done
  • start a job that works on tasks from the queue
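
a minimal sketch with redis-cli, assuming the redis-job-queue service above and a made-up queue key job2:

redis-cli -h redis-job-queue rpush job2 "task-1" "task-2"   # fill the queue with messages
redis-cli -h redis-job-queue lpop job2                      # a worker pops one task to work on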

refer

jimmysong

BlueKing configure manage DB

k8s: volumes and persistent storage

multi-container pods and container communication in k8s

k8s doc: communicate between containers in the same pod using a shared volume

kubeMQ: k8s message queue broker

3 types of cluster networking in k8s