Environment preparation

  • Master nodes
172.16.244.14
172.16.244.16
172.16.244.18
  • Worker nodes
172.16.244.25
172.16.244.27
  • Master node VIP address: 172.16.243.13

  • Deployment tool: Ansible/kubeasz

Initialize the environment

Install Ansible

apt update
apt-get install -y ansible expect git
git clone https://github.com/easzlab/kubeasz
cd kubeasz
cp -r * /etc/ansible/

Configure ansible for password-free login

ssh-keygen -t rsa -b 2048 # Generate the key pair used for password-free login
./tools/yc-ssh-key-copy.sh hosts root 'rootpassword' # Distribute the public key to every host in the hosts file

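To confirm password-free login works, SSH to any node from the deployment machine; it should not ask for a password:

ssh root@172.16.244.14 hostname # should print the hostname without prompting for a password
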
Prepare binary files

cd tools
./easzup -D # By default, all files are downloaded to the /etc/ansible/bin/ directory

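You can spot-check that the download finished before moving on (paths assume the defaults mentioned above):

ls /etc/ansible/bin # kube-apiserver, kubectl, etcd, docker, etc. should be present
/etc/ansible/bin/kubectl version --client # sanity-check one of the downloaded binaries
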
Configure the /etc/ansible/hosts file as follows:

[kube-master]
172.16.244.14
172.16.244.16
172.16.244.18

[etcd]
172.16.244.14 NODE_NAME=etcd1
172.16.244.16 NODE_NAME=etcd2
172.16.244.18 NODE_NAME=etcd3

# haproxy-keepalived
[haproxy]
172.16.244.14
172.16.244.16
172.16.244.18

[kube-node]
172.16.244.25
172.16.244.27

# [optional] loadbalance for accessing k8s from outside
[ex-lb]
172.16.244.14 LB_ROLE=backup EX_APISERVER_VIP=172.16.243.13 EX_APISERVER_PORT=8443
172.16.244.16 LB_ROLE=backup EX_APISERVER_VIP=172.16.243.13 EX_APISERVER_PORT=8443
172.16.244.18 LB_ROLE=master EX_APISERVER_VIP=172.16.243.13 EX_APISERVER_PORT=8443

# [optional] ntp server for the cluster
[chrony]
172.16.244.18

[all:vars]
# ---------- Main Variables ---------------
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
#CLUSTER_NETWORK="flannel"
CLUSTER_NETWORK="calico"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.101.0.0/16"

# NodePort Range
NODE_PORT_RANGE="20000-40000"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local."

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/ansible"

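Before running any playbook, make sure Ansible can reach every host defined in the inventory above:

cd /etc/ansible
ansible all -m ping # every host should answer with "pong"
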
Deploy K8S cluster

Initialize configuration

cd /etc/ansible
ansible-playbook 01.prepare.yml

This process mainly does three things:

chrony role: cluster node time synchronization [optional]
deploy role: create the CA certificate, kubeconfig, and kube-proxy.kubeconfig
prepare role: distribute the CA certificate, install the kubectl client, and configure the environment

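A quick way to confirm this step succeeded is to check that the CA material and kubectl client landed where the ca_dir/bin_dir variables above point (adjust the paths if you changed them):

ls /etc/kubernetes/ssl # the distributed ca.pem should be here
ls /opt/kube/bin/kubectl # kubectl client installed by the prepare role
kubectl config view # kubeconfig generated by the deploy role
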
Install etcd cluster

ansible-playbook 02.etcd.yml

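It is worth verifying etcd health before continuing; the sketch below assumes the kubeasz default certificate locations, so adjust the paths if yours differ:

for ip in 172.16.244.14 172.16.244.16 172.16.244.18; do
  ETCDCTL_API=3 /opt/kube/bin/etcdctl \
    --endpoints=https://${ip}:2379 \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/etcd/ssl/etcd.pem \
    --key=/etc/etcd/ssl/etcd-key.pem \
    endpoint health # each endpoint should report "is healthy"
done
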
Install docker

ansible-playbook 03.docker.yml

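A quick check on any node confirms the container runtime is up:

systemctl is-active docker # should print "active"
docker version # client and server versions should both be reported
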
Deploy kubernetes master

ansible-playbook 04.kube-master.yml

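Once the masters are up, the control-plane components can be checked from any master:

kubectl get componentstatuses # scheduler, controller-manager and etcd should all be Healthy
kubectl get node # the three masters should register (initially Ready,SchedulingDisabled)
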
Deploy kubernetes node

ansible-playbook 05.kube-node.yml

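On the worker nodes the kubelet and kube-proxy run as systemd services, so a quick status check is enough:

systemctl is-active kubelet kube-proxy # both should print "active" on 172.16.244.25/27
kubectl get node # the two workers should now appear alongside the masters
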
Deploy the kubernetes network plugin (calico is used here)

ansible-playbook 06.network.yml

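After the network playbook finishes, check that calico is healthy (calicoctl is installed by kubeasz under /opt/kube/bin; skip that command if it is not present):

kubectl -n kube-system get pod -o wide | grep calico # one calico-node pod per node, all Running
calicoctl node status # BGP sessions to the other nodes should be Established
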
Deploy ingress/k8s dashboard/coredns

Configure the SSL certificate used by the ingress. The default repository here deploys Traefik 1.7.12; we plan to upgrade to 2.0 later.

# kubectl create secret tls traefik-cert --key=test.cn.key --cert=test.cn.pem -n kube-system
secret/traefik-cert created

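A quick check confirms the secret was created correctly:

kubectl -n kube-system get secret traefik-cert # TYPE should be kubernetes.io/tls with 2 data items
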
Deploy cluster extension

ansible-playbook 07.cluster-addon.yml

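When the playbook completes, the addons from the heading above should all be running:

kubectl -n kube-system get pod # coredns, dashboard and traefik-ingress pods should be Running
kubectl cluster-info # prints the cluster service endpoints
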
Master node untaint

After the default deployment, the master nodes are tainted and excluded from scheduling, as shown below:

# kubectl get node
NAME STATUS ROLES AGE VERSION
172.16.244.14 Ready,SchedulingDisabled master 91m v1.16.2
172.16.244.16 Ready,SchedulingDisabled master 91m v1.16.2
172.16.244.18 Ready,SchedulingDisabled master 91m v1.16.2
172.16.244.25 Ready node 90m v1.16.2
172.16.244.27 Ready node 90m v1.16.2
# kubectl describe node 172.16.244.14 |grep Taint
Taints: node.kubernetes.io/unschedulable:NoSchedule

Because machine resources are limited, the masters are also made schedulable:

# kubectl patch node 172.16.244.14 -p '{"spec":{"unschedulable":false}}' # repeat for 172.16.244.16 and 172.16.244.18
# kubectl get node
NAME STATUS ROLES AGE VERSION
172.16.244.14 Ready master 95m v1.16.2
172.16.244.16 Ready master 95m v1.16.2
172.16.244.18 Ready master 95m v1.16.2
172.16.244.25 Ready node 94m v1.16.2
172.16.244.27 Ready node 94m v1.16.2

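The SchedulingDisabled status comes from the node being cordoned (spec.unschedulable=true), so kubectl uncordon is an equivalent way to re-enable scheduling on a master:

kubectl uncordon 172.16.244.14 # same effect as the patch command above
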
Deploy external load balancer (Keepalived+Haproxy)

ansible-playbook roles/ex-lb/ex-lb.yml

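You can verify the load balancer by hitting the apiserver through the VIP; any JSON reply (even 401/403) proves keepalived holds the VIP and haproxy forwards to the masters:

ping -c 2 172.16.243.13 # the VIP should answer once keepalived is up
curl -k https://172.16.243.13:8443/ # EX_APISERVER_PORT as configured in the hosts file
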
Deploy Rancher

Install Helm 3

Reference: https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-rancher/

mkdir -p /opt/soft && cd /opt/soft
wget https://get.helm.sh/helm-v3.0.1-linux-amd64.tar.gz
tar xf helm-v3.0.1-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/

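Confirm the binary is on the PATH:

helm version # should report v3.0.1
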
Create certificates

Because we already have a certificate for our own domain, it is loaded into a Kubernetes TLS secret here. Alternatively, you can use the cert-manager tool to issue a Rancher-generated certificate or use Let's Encrypt.

kubectl create namespace cattle-system # the namespace must exist before the secret can be created in it
kubectl -n cattle-system create secret tls tls-rancher-ingress --cert=test.cn.pem --key=test.cn.key

Install Rancher

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update
helm install rancher rancher-latest/rancher \
--namespace cattle-system \
--set hostname=rancher-cicd.test.cn --set ingress.tls.source=secret

Check ingress, resources and deployment status

# kubectl get ingress --all-namespaces
NAMESPACE NAME HOSTS ADDRESS PORTS AGE
cattle-system rancher rancher-cicd.test.cn 80, 443 20h
# kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "rancher" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment "rancher" rollout to finish: 2 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
# kubectl -n cattle-system get deploy rancher
NAME READY UP-TO-DATE AVAILABLE AGE
rancher 3/3 3 3 5m5s

At this point the entire K8S cluster has been built. If everything goes smoothly, the whole process takes only about ten minutes; the important thing is to plan the cluster in advance.