Deploying a Kubernetes Cluster on CentOS 7.2 with kubeadm

2019-10-10 14:21:36 王振洲

This article follows the official Kubernetes documentation "Installing Kubernetes on Linux with kubeadm" to deploy a Kubernetes cluster on CentOS 7.2 with kubeadm, and records solutions to a few problems encountered while following that document.

Operating system version

# cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core)

Kernel version

# uname -r
3.10.0-327.el7.x86_64

Cluster nodes

192.168.120.122 kube-master
192.168.120.123 kube-agent1
192.168.120.124 kube-agent2
192.168.120.125 kube-agent3

That is, the cluster consists of one control-plane node and three worker nodes.
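
If these hostnames are not resolvable through DNS, one way to handle name resolution (an assumption; the original does not describe this step) is to append the same node list to /etc/hosts on every node:

# cat <<EOF >> /etc/hosts
192.168.120.122 kube-master
192.168.120.123 kube-agent1
192.168.120.124 kube-agent2
192.168.120.125 kube-agent3
EOF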

Preparation before deployment

Configure access to Google sites

The packages used by this deployment method are provided by Google-hosted repositories, so the cluster nodes must be able to reach the external network; how to arrange that is left to the reader.

Disable the firewall

# systemctl stop firewalld.service && systemctl disable firewalld.service

Disable SELinux

# setenforce 0
# sed -i.bak 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

Configure the yum repository

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
    https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
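
As an optional sanity check (not part of the original steps), the repository metadata can be refreshed to confirm the new repo is reachable:

# yum makecache fast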

Install kubelet and kubeadm

Install the following packages on all nodes:

# yum install -y docker kubelet kubeadm kubectl kubernetes-cni
# systemctl enable docker && systemctl start docker
# systemctl enable kubelet && systemctl start kubelet
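
At this point it is normal for kubelet to keep restarting, because it has no configuration until kubeadm init (or kubeadm join) generates one; checking its status is optional and not part of the original steps:

# systemctl status kubelet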

Then set the kernel parameters:

# sysctl net.bridge.bridge-nf-call-iptables=1
# sysctl net.bridge.bridge-nf-call-ip6tables=1
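
These sysctl commands only affect the running kernel; to keep the settings across reboots they can also be written to a drop-in file (a common practice, not spelled out in the original):

# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# sysctl --system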

Initialize the control-plane node

# kubeadm init --pod-network-cidr=10.244.0.0/16

Because flannel will be used to build the pod network in this cluster, the --pod-network-cidr flag must be added; the flannel manifest itself is applied after initialization (see the sketch at the end of this section).

Note: initialization is rather slow, because the process pulls a number of Docker images.

The output of the command is as follows:

Initializing your master...
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.4
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.120.122]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 1377.560339 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 6.039626 seconds
[token] Using token: 60bc68.e94800f3c5c4c2d5
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

 sudo cp /etc/kubernetes/admin.conf $HOME/
 sudo chown $(id -u):$(id -g) $HOME/admin.conf
 export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

 kubeadm join --token <token> 192.168.120.122:6443
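
As a sketch of the remaining steps hinted at in the output above (the flannel manifest URL is the path commonly published by the flannel project at the time, and <token> must be replaced with the token printed by your own kubeadm init):

# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Once the flannel pods are running, run the kubeadm join command shown above on each worker node as root, then confirm on the master that all four nodes eventually report Ready:

# kubectl get nodes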