Background
Kubernetes (commonly abbreviated k8s, since there are 8 letters between the k and the s) is Google's open-source container cluster management system. It provides mechanisms for deploying, maintaining, and scaling applications, and makes it easy to manage containerized applications running across a cluster.
Why use a cluster manager as complex as Kubernetes? I started with Docker's built-in Swarm, which gets a Docker cluster running very quickly and simply. But with the Swarm built into Docker 1.13, the VIP load balancer sometimes failed to map ports to the external network correctly, or reported addresses as already in use. That was a problem for high availability, and since I could not find a workaround, I switched to k8s.
Test environment
Tencent Cloud, CentOS 7.3 (64-bit). Install the required packages:
yum install -y yum-utils    # provides yum-config-manager
yum-config-manager --add-repo https://docs.docker.com/v1.13/engine/installation/linux/repo_files/centos/docker.repo
yum makecache fast
yum -y install docker-engine-1.13.1
yum install epel-release -y
yum remove -y docker-engine*
yum install -y kubernetes etcd docker flannel
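Optionally, a quick sanity check that the packages are in place before continuing. None of this is required for the setup, and the exact version strings will depend on the repositories in use:

# confirm the distro packages landed and print their versions
rpm -qa | grep -E 'kubernetes|etcd|flannel|docker'
docker --version
kubelet --version
etcdctl --version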
Modify the configuration files
Note: replace 10.135.163.237 in the commands below with your own server's IP.
sed -i "s/localhost:2379/10.135.163.237:2379/g" /etc/etcd/etcd.conf
sed -i "s/localhost:2380/10.135.163.237:2380/g" /etc/etcd/etcd.conf
sed -i "s#10.135.163.237:2379#10.135.163.237:2379,http://127.0.0.1:2379#g" /etc/etcd/etcd.conf
sed -i "s/127.0.0.1:2379/10.135.163.237:2379/g" /etc/kubernetes/apiserver
sed -i "s/--insecure-bind-address=127.0.0.1/--insecure-bind-address=0.0.0.0/g" /etc/kubernetes/apiserver
sed -i "s/--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota/--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota/g" /etc/kubernetes/apiserver
sed -i "s/--hostname-override=127.0.0.1/--hostname-override=10.135.163.237/g" /etc/kubernetes/kubelet
sed -i "s/127.0.0.1:8080/10.135.163.237:8080/g" /etc/kubernetes/kubelet
sed -i "s/--address=127.0.0.1/--address=0.0.0.0/g" /etc/kubernetes/kubelet
sed -i "s/127.0.0.1:8080/10.135.163.237:8080/g" /etc/kubernetes/config
sed -i "s/127.0.0.1:2379/10.135.163.237:2379/g" /etc/sysconfig/flanneld
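After running the substitutions it is worth confirming that the server IP actually landed in every file. A simple spot-check, purely for convenience and using the same file paths as above:

# show every line that now references the server IP
grep -n '10.135.163.237' /etc/etcd/etcd.conf /etc/kubernetes/apiserver /etc/kubernetes/kubelet /etc/kubernetes/config /etc/sysconfig/flanneld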
Edit the hosts file
vi /etc/hosts
# append the following line:
10.135.163.237 k8s_master
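If you prefer not to open an editor, the same entry can be appended non-interactively; this is only an equivalent alternative to the vi step above:

# append the hosts entry only if it is not already present
grep -q 'k8s_master' /etc/hosts || echo '10.135.163.237 k8s_master' >> /etc/hosts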
Configure the network
systemctl enable etcd.service
systemctl start etcd.service
etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
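flanneld will read this key from the etcd prefix configured in /etc/sysconfig/flanneld, which defaults to /atomic.io/network. To confirm the key was written, you can read it back:

# read the network config back from etcd
etcdctl get /atomic.io/network/config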
Start the services
service docker start
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
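Once everything is up, the master should register itself as a node after a short delay. Assuming kubectl talks to the local apiserver on the insecure port 8080 opened above, a quick check looks like:

# the node list should eventually show 10.135.163.237 in Ready state
kubectl get nodes
kubectl cluster-info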
First demo
Create a file named a.yaml with the following content:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.alauda.cn/yubang/paas_base_test
        ports:
        - containerPort: 80
        command: ["/bin/bash", "/var/start.sh"]
        resources:
          limits:
            cpu: 0.5
            memory: 64Mi
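The manifest by itself does not start anything; it still has to be submitted to the apiserver. A minimal way to run and inspect the demo (the expose step is optional and only one way of reaching the pods from outside the cluster):

# create the deployment and watch the pods come up
kubectl create -f a.yaml
kubectl get deployments
kubectl get pods -o wide

# optionally expose the deployment on a NodePort
kubectl expose deployment my-app --port=80 --type=NodePort
kubectl get svc my-app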