Introduction
Kubernetes (k8s) is a container orchestration tool that has become hugely popular in recent years. It offers a highly available architecture, application deployment, automated rollout, and scaling.
This article focuses on installation.
Kubernetes Cluster Components
- Node
  - A machine managed by the Master; it can be physical or virtual
  - Each Node runs Pod workloads, which are managed and started through kubelet
- Pod
  - The smallest unit of execution in Kubernetes; made up of one or more containers
  - Created, deleted, and started on a Node
  - Its lifecycle has 4 states (Pending, Running, Succeeded, Failed)
  - Has a shared volume that every container in the Pod can access (see the example manifest after this list)
- Selector
  - A query that matches objects against their labels
- Replication Controller
  - Defines the desired number of Pods
  - On the master, the Controller Manager creates, monitors, starts, and stops Pods according to the RC definition
  - If any Pod fails, it brings the cluster back to the desired state
- Label
  - Identifiable attributes of an object, defined as key-value pairs and used for management and selection
- Service
  - Provides an interface for accessing Pods from outside the cluster
  - NodePort
  - LoadBalancer
  - Ingress
by yeasy.gitbooks.io
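As a quick, hypothetical illustration of the Pod and Label concepts above (the demo name, app label, and nginx image are placeholders, not part of this setup):
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    app: demo          # Label: key-value pair that Selectors can match
spec:
  containers:
  - name: web
    image: nginx       # a Pod wraps one or more containers
EOF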
Environment
- CPU
  - 2 vCPU
- Memory
  - 2 GB

IP | Hostname
192.168.137.144 | k8smaster
192.168.137.145 | k8snode1
192.168.137.146 | k8snode2
Configure Hostname
$ sudo hostnamectl set-hostname k8smaster   # on the master
$ sudo hostnamectl set-hostname k8snode1    # on worker 1
$ sudo hostnamectl set-hostname k8snode2    # on worker 2
After setting the hostnames, a name-resolution problem appears: the machines cannot resolve the new names locally. Fix it as follows.
$ sudo vim /etc/hosts
192.168.137.144 k8smaster
192.168.137.145 k8snode1
192.168.137.146 k8snode2
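A quick sanity check that the new names now resolve; it should print the address from /etc/hosts:
$ getent hosts k8smaster
192.168.137.144 k8smaster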
Disable Swap
All machines should have dedicated CPU/memory; swap should not be used, as it degrades performance.
$ sudo swapoff -a   # disable swap immediately
$ sudo swapon -s    # verify; no output means swap is off
To make this permanent, comment out the swap entry in /etc/fstab:
$ sudo vim /etc/fstab
...
#/dev/mapper/ubuntu--vg-swap_1 none swap sw 0 0
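To confirm the change, the Swap line reported by free should be all zeros:
$ free -m | grep -i swap
Swap:             0           0           0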
Prerequisites for Kubernetes
$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
Docker must also be installed; that is not covered here. See the official Docker documentation.
Installing Kubernetes
$ sudo su -c "curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -"
Create the Kubernetes repository file:
$ sudo vim /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
Update the package cache and install the components:
$ sudo apt update
$ sudo apt-get install -y kubelet kubeadm kubectl
- kubelet: the node agent that runs and manages Pods on each machine
- kubeadm: the tool that bootstraps the cluster
- kubectl: the CLI used to talk to the API server
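Optionally, pin these packages so a routine apt upgrade cannot move the cluster to an incompatible version:
$ sudo apt-mark hold kubelet kubeadm kubectl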
Initialize the Kubernetes Cluster
Run the following on the master host:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.137.144 --kubernetes-version v1.16.0
# --apiserver-advertise-address: the IP address the API server advertises on
# --apiserver-bind-port: the API server port, 6443 by default
# --pod-network-cidr: the IP range (CIDR) allocated to Pods
# --kubernetes-version: the Kubernetes version to deploy
...
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7zd2c5.w9825shkwsa7vjat
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.137.144:6443 --token nh519e.dwtglafz2gquwl7u \
--discovery-token-ca-cert-hash sha256:8a8dd1183dc570532f1ed4dd2f9f63b3a32d5753e78e37882f3f90ab63d344d7
Run the following as a regular user so kubectl can reach the cluster. The admin config is copied to $HOME/.kube/config; when kubectl runs, it reads the API server address and credentials from that file.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
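To confirm that kubectl can now reach the cluster, it should report the API server at the advertise address used above:
$ kubectl cluster-info
Kubernetes master is running at https://192.168.137.144:6443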
If the Node join token is lost, retrieve it as follows:
$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
nh519e.dwtglafz2gquwl7u 23h 2019-09-25T12:46:34+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
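If the discovery hash was lost as well, kubeadm can print a fresh, complete join command:
$ kubeadm token create --print-join-command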
To undo the above, run kubeadm reset, which removes the cluster state.
For TLS errors, see troubleshooting-kubeadm.
Deploy Network for Pod
/proc/sys/net/bridge/bridge-nf-call-iptables must be set to 1:
$ sudo vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1 # bridged traffic is also passed to iptables for processing
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
$ sudo sysctl --system
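If those bridge sysctls do not exist, the br_netfilter kernel module probably is not loaded; load it and re-apply:
$ lsmod | grep br_netfilter    # no output means the module is not loaded
$ sudo modprobe br_netfilter
$ sudo sysctl --system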
A Pod network CIDR was already defined during init, but a network add-on still has to be deployed. Pick the Kubernetes CNI plugin that suits your environment; Flannel is used here, and it provides an overlay network for Kubernetes (k8s). If you want a custom CIDR, change Network in Flannel's net-conf.json and run init again with the new CIDR, as shown in the fragment below.
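For reference, the relevant fragment of kube-flannel.yml looks roughly like this; Network must match the --pod-network-cidr passed to kubeadm init:
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }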
Create the Pod network with the Flannel YAML:
$ wget https://raw.githubusercontent.com/coreos/flannel/32a765fd19ba45b387fdc5e3812c41fff47cfd55/Documentation/kube-flannel.yml
$ sudo vim kube-flannel.yml # add "cniVersion": "0.3.0" to cni-conf.json
"name": "cbr0",
"cniVersion": "0.3.0",
"plugins": [
...
$ kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
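To watch the Flannel DaemonSet pods start on each node (this assumes the manifest's default app=flannel pod label):
$ kubectl -n kube-system get pods -l app=flannel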
Once this completes, kubeadm leaves a network configuration file at /etc/cni/net.d/10-flannel.conflist.
The official documentation explains that the kubelet reads files from --cni-conf-dir (default /etc/cni/net.d) and uses the CNI configuration found there to set up each Pod's network.
$ cat /etc/cni/net.d/10-flannel.conflist
{
"name": "cbr0",
"cniVersion": "0.3.0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
An issue has already been filed about this problem.
A cluster can run only one Pod network (unless you use multus-cni).
Verify the Nodes
Check the node status:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 5m58s v1.16.0
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8smaster Ready master 6m15s v1.16.0 192.168.137.144 <none> Ubuntu 16.04.4 LTS 4.4.0-116-generic docker://19.3.2
List the Pods in every namespace; when a Kubernetes cluster is bootstrapped, the following system Pods are created by default.
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5644d7b6d9-dzrvb 1/1 Running 0 6m13s
kube-system coredns-5644d7b6d9-lmvgx 1/1 Running 0 6m13s
kube-system etcd-k8smaster 1/1 Running 0 5m11s
kube-system kube-apiserver-k8smaster 1/1 Running 0 5m26s
kube-system kube-controller-manager-k8smaster 1/1 Running 0 5m23s
kube-system kube-flannel-ds-amd64-w4zqk 1/1 Running 0 5m6s
kube-system kube-proxy-dmz84 1/1 Running 0 6m13s
kube-system kube-scheduler-k8smaster 1/1 Running 0 5m29s
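To list the namespaces themselves (a fresh v1.16 cluster typically has default, kube-node-lease, kube-public, and kube-system):
$ kubectl get namespaces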
Join Workers with K8s Master
The init step above ended by printing a join token. Use it to connect each worker to the master; run the following on every worker:
$ sudo kubeadm join 192.168.137.144:6443 --token nh519e.dwtglafz2gquwl7u \
--discovery-token-ca-cert-hash sha256:8a8dd1183dc570532f1ed4dd2f9f63b3a32d5753e78e37882f3f90ab63d344d7
...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
If joining a node fails with errors, run the following and then retry:
$ sudo kubeadm reset
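Note that kubeadm reset does not clean up everything; as its own output points out, leftover CNI configuration and iptables rules may need manual removal, for example:
$ sudo rm -rf /etc/cni/net.d
$ sudo iptables -F && sudo iptables -t nat -F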
Verify the Nodes
A Ready status indicates success:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 7m51s v1.16.0
k8snode1 Ready <none> 25s v1.16.0
k8snode2 Ready <none> 18s v1.16.0
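The <none> under ROLES is purely cosmetic; if desired, a worker can be labeled so the column reads worker:
$ kubectl label node k8snode1 node-role.kubernetes.io/worker=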
Solve the error
The following errors are caused by a problem in the CNI file that Flannel configured; adding cniVersion, as shown earlier, fixes them.
Sep 22 22:50:41 k8smaster kubelet[50085]: E0922 22:50:41.918617 50085 kubelet.go:2187] Container runtime network not ready: NetworkReady=false rea
Sep 22 22:50:46 k8smaster kubelet[50085]: W0922 22:50:46.475206 50085 cni.go:202] Error validating CNI config &{cbr0 false [0xc000b0fac0 0xc000b0
Sep 22 22:50:46 k8smaster kubelet[50085]: W0922 22:50:46.475266 50085 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni
Sep 22 22:50:46 k8smaster kubelet[50085]: E0922 22:50:46.919835 50085 kubelet.go:2187] Container runtime network not ready: NetworkReady=false rea
Sep 22 22:50:51 k8smaster kubelet[50085]: W0922 22:50:51.478988 50085 cni.go:202] Error validating CNI config &{cbr0 false [0xc000308580 0xc00030
Sep 22 22:50:51 k8smaster kubelet[50085]: W0922 22:50:51.479048 50085 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni
Sep 22 22:50:51 k8smaster kubelet[50085]: E0922 22:50:51.922779 50085 kubelet.go:2187] Container runtime network not ready: NetworkReady=false rea
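These messages can be followed live while debugging, and kubelet should be restarted once the conflist is fixed:
$ journalctl -u kubelet -f
$ sudo systemctl restart kubelet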
Additional information
$ sudo vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf # the cluster was set up with kubeadm, so kubelet's settings can be changed here; see "kubeadm and kubelet communication" below
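After editing any systemd drop-in, reload systemd and restart kubelet for the change to take effect:
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet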