How to Deploy a Kubernetes Cluster on CentOS 7.2 with kubeadm

2019-10-10 14:21:36 王振洲

Observe the Docker images pulled onto the control node:

# docker images
REPOSITORY                        TAG         IMAGE ID      CREATED       SIZE
gcr.io/google_containers/kube-apiserver-amd64      v1.6.4       4e3810a19a64    2 days ago     150.6 MB
gcr.io/google_containers/kube-controller-manager-amd64  v1.6.4       0ea16a85ac34    2 days ago     132.8 MB
gcr.io/google_containers/kube-proxy-amd64        v1.6.4       e073a55c288b    2 days ago     109.2 MB
gcr.io/google_containers/kube-scheduler-amd64      v1.6.4       1fab9be555e1    2 days ago     76.75 MB
gcr.io/google_containers/etcd-amd64           3.0.17       243830dae7dd    12 weeks ago    168.9 MB
gcr.io/google_containers/pause-amd64           3.0         99e59f495ffa    12 months ago    746.9 kB

Following the instructions printed by the init command, set up kubectl access:

# cp /etc/kubernetes/admin.conf $HOME/
# chown $(id -u):$(id -g) $HOME/admin.conf
# export KUBECONFIG=$HOME/admin.conf
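The exported KUBECONFIG only lasts for the current shell session. One way to make it persistent (an extra step, not part of the original instructions) is to append the export to root's shell profile:

```shell
# Persist the KUBECONFIG setting across root login shells.
# Assumes admin.conf was copied to $HOME as in the step above.
echo 'export KUBECONFIG=$HOME/admin.conf' >> "$HOME/.bashrc"

# Confirm the line was recorded:
grep 'KUBECONFIG' "$HOME/.bashrc"
```

After the next login, kubectl commands should work without re-exporting the variable.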

Control node isolation

By default kubeadm taints the master so no workload pods are scheduled on it. To allow the control node to also run pods, remove the master taint:

# kubectl taint nodes --all node-role.kubernetes.io/master-
node "kube-master" tainted

Install a pod network

The flannel manifests can be obtained by cloning the flannel repository:

# git clone https://github.com/coreos/flannel.git

Then apply the RBAC rules and the flannel DaemonSet:

# kubectl apply -f flannel/Documentation/kube-flannel-rbac.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created

# kubectl apply -f flannel/Documentation/kube-flannel.yml
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created

Add worker nodes

# kubeadm join --token <token> 192.168.120.122:6443

The command produces the following output:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.120.122:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.120.122:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.120.122:6443"
[discovery] Successfully established connection with API Server "192.168.120.122:6443"
[bootstrap] Detected server version: v1.6.4
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
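If the token printed by kubeadm init has been lost, it can be listed again on the master with `kubeadm token list`. Bootstrap tokens follow a fixed format: six lowercase alphanumeric characters, a dot, then sixteen more. A small sketch validating that format before passing a token to `kubeadm join` (the sample value is illustrative, not a real token):

```shell
# kubeadm bootstrap tokens have the form "abcdef.0123456789abcdef":
# [a-z0-9]{6} "." [a-z0-9]{16}.
TOKEN="abcdef.0123456789abcdef"   # illustrative value only
if echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format OK"
else
  echo "token format invalid" >&2
fi
```

Catching a mistyped token this way fails fast, before the join attempt times out against the API server.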

Observe the cluster state from the control node

# kubectl get nodes
NAME     STATUS  AGE    VERSION
kube-agent1  Ready   16m    v1.6.3
kube-agent2  Ready   16m    v1.6.3
kube-agent3  Ready   16m    v1.6.3
kube-master  Ready   37m    v1.6.3
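For scripting, the `kubectl get nodes` output can be parsed to check that every node reports Ready. A sketch, with the table above inlined in place of a live call (an illustration, not part of the original steps):

```shell
# Count nodes whose STATUS column (field 2) is not "Ready".
# The sample output above is inlined; in a live cluster use instead:
#   NODES=$(kubectl get nodes --no-headers)
NODES='kube-agent1  Ready   16m    v1.6.3
kube-agent2  Ready   16m    v1.6.3
kube-agent3  Ready   16m    v1.6.3
kube-master  Ready   37m    v1.6.3'
NOT_READY=$(echo "$NODES" | awk '$2 != "Ready" {n++} END {print n+0}')
echo "nodes not ready: $NOT_READY"
```

Looping on this count until it reaches zero is a simple way to wait for newly joined workers to come up.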

# kubectl get pods --all-namespaces -o wide
NAMESPACE   NAME                 READY   STATUS  RESTARTS  AGE    IP        NODE
kube-system  etcd-kube-master           1/1    Running  0     32m    192.168.120.122  kube-master
kube-system  kube-apiserver-kube-master      1/1    Running  7     32m    192.168.120.122  kube-master
kube-system  kube-controller-manager-kube-master  1/1    Running  0     32m    192.168.120.122  kube-master
kube-system  kube-dns-3913472980-3x9wh       3/3    Running  0     37m    10.244.0.2    kube-master
kube-system  kube-flannel-ds-1m4wz         2/2    Running  0     18m    192.168.120.122  kube-master
kube-system  kube-flannel-ds-3jwf5         2/2    Running  0     17m    192.168.120.123  kube-agent1
kube-system  kube-flannel-ds-41qbs         2/2    Running  4     17m    192.168.120.125  kube-agent3
kube-system  kube-flannel-ds-ssjct         2/2    Running  4     17m    192.168.120.124  kube-agent2
kube-system  kube-proxy-0mmfc           1/1    Running  0     17m    192.168.120.124  kube-agent2
kube-system  kube-proxy-23vwr           1/1    Running  0     17m    192.168.120.125  kube-agent3
kube-system  kube-proxy-5q8vq           1/1    Running  0     17m    192.168.120.123  kube-agent1
kube-system  kube-proxy-8srwn           1/1    Running  0     37m    192.168.120.122  kube-master
kube-system  kube-scheduler-kube-master      1/1    Running  0     32m    192.168.120.122  kube-master
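Note the RESTARTS column: two of the flannel pods have already restarted four times, which is worth keeping an eye on. Restarting pods can be surfaced with a short filter; a sketch with a few rows from the listing above inlined in place of a live `kubectl get pods --all-namespaces --no-headers` call:

```shell
# Print pods whose RESTARTS column (field 5) is nonzero.
# Sample rows from the listing above are inlined; in a live cluster,
# pipe `kubectl get pods --all-namespaces --no-headers` into awk.
PODS='kube-system  kube-flannel-ds-1m4wz  2/2  Running  0  18m
kube-system  kube-flannel-ds-41qbs  2/2  Running  4  17m
kube-system  kube-flannel-ds-ssjct  2/2  Running  4  17m'
echo "$PODS" | awk '$5 > 0 {print $2 " restarted " $5 " times"}'
```

Running this periodically highlights crash-looping pods without scanning the full table by eye.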