background
this blog tries to deploy two services in k8s: redis and dashboard; the rest is engineering.
manual deploy via kubectl
|
|
- error 1: pending pod
|
|
gives:
|
|
a few things to check:
- run swapoff -a to disable swap
- close the firewall on the working node
- run kubectl uncordon to make the node schedulable
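a minimal sketch of these checks (node name and firewall tool are assumptions; use whatever your distro runs):

```bash
# on the working node: disable swap and stop the firewall
sudo swapoff -a
sudo ufw disable              # or: sudo systemctl stop firewalld
# from the master: make the node schedulable again
kubectl uncordon <node-name>
```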
- error 2: failed create pod sandbox
|
|
the solution is to copy the k8s.gcr.io/pause:3.2 image to the ubuntu node, and restart kubelet on the working node.
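since the worker node can't pull from k8s.gcr.io directly, one way to copy the image over is docker save / scp / docker load (paths and user are assumptions):

```bash
# on a node that already has the image
docker save k8s.gcr.io/pause:3.2 -o pause.tar
scp pause.tar <user>@<ubuntu-node>:/tmp/
# on the ubuntu worker node
docker load -i /tmp/pause.tar
sudo systemctl restart kubelet
```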
- error 3: no network plugin CNI
|
|
a temp solution is to cp /run/flannel/subnet.env from the master node to the worker node, then restart kubelet on the worker node. on further study, copying subnet.env to the worker node is not the right solution: every time the worker node shuts down, this subnet.env file is deleted, and it is not recreated when the worker node reboots the next day.
so the final solution here is to pull the quay.io/coreos/flannel image to the worker node, as well as k8s.gcr.io/kube-proxy. in later k8s versions, kube-proxy acts like a proxy while the flannel daemon does the actual overlay networking, so we need both kube-proxy and flannel on the worker node to guarantee the network works.
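a sketch of pulling both images on the worker node (the tags are assumptions; match whatever versions the master is running):

```bash
# on the worker node
docker pull quay.io/coreos/flannel:v0.12.0-amd64
docker pull k8s.gcr.io/kube-proxy:v1.18.0
sudo systemctl restart kubelet
```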
we can see the busybox service is running well:
|
|
but the problem here is, we can't access this service from the host machine.
exposing an external IP to access an app in cluster
to expose a service externally, define the service as either LoadBalancer or NodePort type. but LoadBalancer requires an external, third-party implementation of the load balancer, e.g. AWS.
why the LoadBalancer service doesn't work: if you are using a custom Kubernetes cluster (minikube, kubeadm or the like), there is no LoadBalancer integrated (unlike AWS or Google Cloud). with this default setup, you can only use NodePort or an Ingress Controller.
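for reference, a minimal NodePort service sketch (name, selector and ports are assumptions, not the exact busybox service used above):

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: busybox-nodeport
spec:
  type: NodePort
  selector:
    app: busybox
  ports:
  - port: 8080          # service port inside the cluster
    targetPort: 8080    # container port
    nodePort: 30080     # port opened on every node
EOF
```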
|
|
it looks like the NodePort service doesn't work as expected:
|
|
if pods can't be deleted by kubectl delete pods xx, try kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>.
how to access k8s service outside the cluster
kubectl config
reconfigure a node’s kubelet in a live cluster
Basic workflow overview
The basic workflow for configuring a kubelet in a live cluster is as follows:
Write a YAML or JSON configuration file containing the kubelet’s configuration.
Wrap this file in a ConfigMap and save it to the Kubernetes control plane.
Update the kubelet’s corresponding Node object to use this ConfigMap.
- dump the config file of each node
|
|
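if the commands above aren't handy, here is a sketch following the reconfigure-kubelet docs, reading the live config from the kubelet's configz endpoint (the node name is an assumption, and jq must be installed):

```bash
# proxy the API server locally
kubectl proxy --port=8001 &
NODE_NAME="ubuntu"   # assumed node name
# dump the running kubelet config of that node into a local file
curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" \
  | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' \
  > kubelet_configz_${NODE_NAME}
```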
our cluster has two nodes, ubuntu and meng (as leader). with these two config files, we found two existing issues:
1) network config on the two nodes doesn't match each other
|
|
after generating the NODE config files above, we can edit these files, and then push the edited config file to the control plane:
|
|
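a sketch of that push, assuming the dumped file name from above and a placeholder configmap name:

```bash
# wrap the edited kubelet config in a ConfigMap on the control plane;
# --append-hash gives the configmap a unique, content-based suffix
kubectl -n kube-system create configmap my-node-config \
  --from-file=kubelet=kubelet_configz_${NODE_NAME} \
  --append-hash -o yaml
```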
after this setting up, we can check the new generated configmaps:
|
|
tips: a configMap is also an object in k8s, just like namespace, pods, svc; but the dumped config file only lives in /tmp, so it needs to be pushed to the control plane manually.
namely:
|
|
- set the node to use the new configMap, by kubectl edit node ${NODE_NAME}, and add the following YAML under spec:
|
|
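an equivalent way to set it without the interactive edit, as a sketch (CONFIG_MAP_NAME is whatever name the create configmap step printed):

```bash
# point the node's spec.configSource at the new ConfigMap
kubectl patch node ${NODE_NAME} -p \
  "{\"spec\":{\"configSource\":{\"configMap\":{\"name\":\"CONFIG_MAP_NAME\",\"namespace\":\"kube-system\",\"kubeletConfigKey\":\"kubelet\"}}}}"
```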
- observe that the node begins to use the new configuration
|
|
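one way to check, as a sketch: the node reports the assigned/active config under status.config:

```bash
kubectl get node ${NODE_NAME} -o jsonpath='{.status.config}'
```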
2) kubectl commands don't work on the worker node
basically, the worker node always reports the error Missing or incomplete configuration info. Please point to an existing, complete config file when running kubectl commands.
the fix is to copy /etc/kubernetes/admin.conf from master to worker, then append export KUBECONFIG=/etc/kubernetes/admin.conf to /etc/profile on the worker node.
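a sketch of those two steps (user and host names are assumptions):

```bash
# on the master
scp /etc/kubernetes/admin.conf <user>@<worker-node>:/etc/kubernetes/admin.conf
# on the worker node
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
```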
organizing cluster access using kubeconfig files
docker0 iptables transfer
when the docker engine starts, the docker0 virtual bridge is created, and this bridge adds its routing rules to the host's iptables. since docker 1.13.1, forwarding through docker0 is only allowed to localhost of the host machine; forwarding from docker0 to any non-localhost destination is dropped, which means the service can only be accessed on the host machine where the pod/container is running. in a multi-node k8s cluster, we need to enable iptables FORWARD.
append the following line to ExecStart line in file /lib/systemd/system/docker.service:
|
|
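the exact line isn't shown above; one common approach (an assumption, not necessarily what this cluster used) is to add an ExecStartPost entry that resets the FORWARD policy after dockerd starts:

```bash
# insert "ExecStartPost=/sbin/iptables -P FORWARD ACCEPT" right after the ExecStart line
sudo sed -i '/^ExecStart=/a ExecStartPost=/sbin/iptables -P FORWARD ACCEPT' \
  /lib/systemd/system/docker.service
```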
then restart docker engine:
|
|
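for completeness, reloading systemd and restarting docker looks like:

```bash
sudo systemctl daemon-reload
sudo systemctl restart docker
```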
after enabling the docker0 iptables rules, the following test service can be accessed on both nodes.
deploy redis service
create a k8s-redis image
|
|
build the image and push to both nodes.
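a minimal sketch of such an image and build (base image, tag and redis flags are assumptions):

```bash
# Dockerfile for a simple redis image
cat > Dockerfile <<EOF
FROM redis:5.0
EXPOSE 6379
CMD ["redis-server", "--protected-mode", "no"]
EOF

docker build -t k8s-redis:latest .
# "push to both nodes": either push to a registry both nodes can reach,
# or docker save / scp / docker load as with the pause image earlier
```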
deploy a redis-deployment
- create redis-deployment.yaml:
|
|
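a sketch of what redis-deployment.yaml might look like (labels and image name are assumptions):

```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: k8s-redis:latest
        imagePullPolicy: IfNotPresent   # use the locally loaded image
        ports:
        - containerPort: 6379
EOF
```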
- expose deployment as service
|
|
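a sketch of exposing it as a NodePort service (the actual node port is assigned by k8s unless pinned):

```bash
kubectl expose deployment redis-deployment --type=NodePort --port=6379
kubectl get svc redis-deployment    # shows the cluster IP and the mapped node port
```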
access as pod
|
|
as we can see here, redis-server as a pod doesn't expose any port, and the pod IP (10.4.1.18) is only accessible inside the cluster.
access as service
|
|
so basically, we can access redis as a service via the exposed port 31962 and the host node's IP (10.20.181.132), rather than the service's cluster IP (10.104.43.224).
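so from outside the cluster, access looks like this (IP and port taken from the output above):

```bash
redis-cli -h 10.20.181.132 -p 31962 ping
```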
tips: checking only the service won't tell on which node the pod is running, so we need to check the pod and get its node's IP.
with docker ExecStart patched to allow iptables FORWARD, redis-cli on both the ubuntu node and the meng node can access the service.
in summary: if we deploy a service as NodePort, we are supposed to access the service from outside of k8s with its host node's IP and the exposed port.
endpoints
k8s endpoints. what's the difference between endpoints and externalIP?
|
|
it gives us the kubernetes endpoints, which are available on both the meng and ubuntu nodes.
|
|
not every service has ENDPOINTS, which give a way to access it from outside the cluster. but a NodePort type service can bind to the running pod's host IP with the exposed port.
whenever a k8s service is exposed, either internally or externally, it goes through kube-proxy. when kube-proxy forwards traffic, it works in one of two modes: userspace or iptables.
clusterIP basically exposes the service internally, via the service's cluster IP; nodePort basically binds the service's port on each node, so we can access the service from each node with the node's IP and this fixed port.
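to see both paths side by side for the redis example above (the service name is assumed from the earlier section):

```bash
kubectl get svc redis-deployment        # CLUSTER-IP plus the 6379:3xxxx/TCP NodePort mapping
kubectl get endpoints redis-deployment  # the backing pod IP:port the service forwards to
```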
apiserver
the core of k8s, the API Server, is the RESTful API for resource POST/GET/DELETE/UPDATE. we can access it through:
|
|
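two simple ways to reach it, as a sketch: let kubectl proxy handle authentication, or ask kubectl to issue the raw request:

```bash
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/pods

kubectl get --raw /api/v1/nodes
```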
- check apiServer IP
|
|
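a sketch of checking where the apiserver lives:

```bash
kubectl cluster-info                # prints the API server URL
kubectl get endpoints kubernetes    # the apiserver endpoint, typically <masterIP>:6443
```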
if we check the LISTEN ports on both worker and master nodes, there are many k8s-related ports; some are accessible, while some are not.
k8s dashboard
the following is from the dashboard doc in Chinese
- download src
|
|
- clear old dashboard resources
if there is an old dashboard running, clear it first.
|
|
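a sketch of clearing it (the namespace and labels mirror the upstream manifests; adjust if the old dashboard was installed differently):

```bash
# newer dashboards live in their own namespace
kubectl delete ns kubernetes-dashboard
# older deployments in kube-system are labeled k8s-app=kubernetes-dashboard
kubectl -n kube-system delete deployment,svc,secret,sa -l k8s-app=kubernetes-dashboard
```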
- start a fresh dashboard
|
|
or use the src from github/dashboard/recommended.yaml, and run:
|
|
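a sketch of applying the upstream manifest directly (the version tag is an assumption; pick the release matching your cluster):

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
```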
admin-user.yaml is defined with admin authorization. if it is not defined or applied, then when logging in to the dashboard web UI, it gives errors like:
|
|
so there are two tips when creating the dashboard:
- auth/admin-user.yaml is required
- add a NodePort type service to expose the dashboard; otherwise, we can't access the dashboard from the host machine.
refer from deploy dashboard && metrics-server:
- create external-http.yaml to expose a NodePort service
- create admin-user.yaml for admin management (a sketch follows below)
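a sketch of admin-user.yaml, mirroring the names used in the dashboard docs:

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
```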
- get the ServiceAccount token
|
|
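assuming the admin-user ServiceAccount above, its Bearer token can be printed like this:

```bash
kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
```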
- go to https://<nodeIP>:6443; tips, the dashboard service is using https
- login dashboard
there are two ways to auth to login dashboard:
– kubeconfig, the config file used to access the cluster
– token, every ServiceAccount has a secret with a valid Bearer Token that can be used to log in to the Dashboard.
- system checks
|
|
metrics-server
metrics-server is a replacement for Heapster.
|
|
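a sketch of installing it and checking that metrics flow (the release version is an assumption):

```bash
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
# once the pod is ready, resource metrics become available
kubectl top nodes
kubectl top pods -A
```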
roles
the right way to create a role:
- create a ServiceAccount
- bind a role to the ServiceAccount (the cluster-admin role is needed)
- make a ClusterRoleBinding for the ServiceAccount (see the sketch after this list)
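the same three steps done imperatively, as a sketch (account and binding names are assumptions):

```bash
kubectl create serviceaccount my-admin -n kube-system
kubectl create clusterrolebinding my-admin-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:my-admin
```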
list all container images in all ns
|
|
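a sketch of one way to do it, following the pattern from the k8s docs:

```bash
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr -s ' ' '\n' | sort | uniq -c
```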