Bootstrapping a Secure Kubernetes Cluster with kubeadm
Official documentation reference:
Environment Preparation
CentOS 7
At least 2 GB of RAM
At least 2 CPUs
Full network connectivity between all machines in the cluster (firewalld, SELinux, and NetworkManager disabled)
A unique hostname, MAC address, and product UUID on every node
The required ports not already in use.
Swap disabled; otherwise the kubelet will not work properly
How to verify that the MAC address and product_uuid are unique on every node
Use ip link or ifconfig -a to read the MAC addresses of the network interfaces
Use sudo cat /sys/class/dmi/id/product_uuid to check the product UUID
(A UUID is a number generated on a machine and guaranteed to be unique across all machines in space and time; on Linux it is stored in the file /sys/class/dmi/id/product_uuid.)
Kubernetes uses these values to uniquely identify the nodes in the cluster. If they are not unique on every node, the installation may fail.
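To compare these values across all machines in one pass, a quick sketch like the one below can help; it assumes root SSH access from the Master to every node, and the node list is just this article's example IPs:

for host in 192.168.214.166 192.168.214.167 192.168.214.168; do
  echo "== $host =="
  ssh root@$host "ip link | awk '/ether/ {print \$2}'; cat /sys/class/dmi/id/product_uuid"
done

If any MAC address or UUID shows up twice (common with cloned VMs), fix it before continuing.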
Required Ports
Master node:
Protocol | Direction | Port Range | Purpose |
TCP | In | 6443* | Kubernetes API server |
TCP | In | 2379-2380 | etcd (an external etcd cluster with custom ports may be used instead) |
TCP | In | 10250 | kubelet API |
TCP | In | 10251 | kube-scheduler |
TCP | In | 10252 | kube-controller-manager |
TCP | In | 10255 | Read-only kubelet API |
Node (worker) nodes:
Protocol | Direction | Port Range | Purpose |
TCP | In | 10250 | kubelet API |
TCP | In | 10255 | Read-only kubelet API |
TCP | In | 30000-32767 | NodePort Services** |
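Before going further, you can spot-check that none of these ports is already bound; a minimal sketch (run on each node, trimming the port list to that node's role):

for port in 6443 2379 2380 10250 10251 10252 10255; do
  ss -tln | grep -q ":$port " && echo "port $port is already in use" || echo "port $port is free"
done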
Starting the Deployment
Master node IP: 192.168.214.166
Node1 node IP: 192.168.214.167
Node2 node IP: 192.168.214.168
On all nodes, set the hostname and add name resolution entries
[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# bash
[root@master ~]# vim /etc/hosts
192.168.214.166 master
192.168.214.167 node1
192.168.214.168 node2
On all nodes, disable the firewall, SELinux, and NetworkManager
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
[root@master ~]# sed -i "s/SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
[root@master ~]# setenforce 0
[root@master ~]# systemctl stop NetworkManager
[root@master ~]# systemctl disable NetworkManager
On all nodes, disable swap
[root@master ~]# swapoff -a
[root@master ~]# sed -i '/^.*swap.*/d' /etc/fstab
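To confirm swap is really off, both now and after a reboot, checks like these should all come back empty or zero:

[root@master ~]# swapon --show            # should print nothing
[root@master ~]# free -h | grep -i swap   # should show 0B
[root@master ~]# grep swap /etc/fstab     # should print nothing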
Install Docker on all nodes
# Remove any Docker packages already on the machine
[root@master ~]# yum remove docker docker-common container-selinux docker-selinux docker-engine-selinux docker-ce docker-ee docker-engine
# Install yum-utils, which provides yum-config-manager; device-mapper-persistent-data and lvm2 are the two packages required by the devicemapper storage driver.
[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# yum makecache
[root@master ~]# yum install docker-ce -y
Note: Docker's cgroup driver must match the kubelet's cgroup driver; here we set both to systemd.
Modify Docker's startup parameters
sed -i 's#ExecStart=.*#ExecStart=/usr/bin/dockerd -s overlay2 --storage-opt overlay2.override_kernel_check=true --exec-opt native.cgroupdriver=systemd --log-driver=json-file --log-opt max-size=100m --log-opt max-file=10#g' /usr/lib/systemd/system/docker.service
sed -i '/ExecStartPost=.*/d' /usr/lib/systemd/system/docker.service
sed -i '/ExecStart=.*/aExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT' /usr/lib/systemd/system/docker.service
sed -i '/Environment=.*/d' /usr/lib/systemd/system/docker.service
# Set the log driver to json-file, cap each log file at 100m, and keep at most 10 rotated files.
# Set the storage driver to overlay2.
# Set the cgroup driver to systemd.
Start Docker on all nodes and enable it at boot
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl start docker.service
[root@node2 ~]# systemctl enable docker.service
Verify that Docker started and that the parameters took effect
[root@master ~]# ps -ef | grep docker
root      3458     1  0 06:56 ?        00:00:00 /usr/bin/dockerd -s overlay2 --storage-opt overlay2.override_kernel_check=true --exec-opt native.cgroupdriver=systemd --log-driver=json-file --log-opt max-size=100m --log-opt max-file=10
root      3477  3458  0 06:56 ?        00:00:01 containerd --config /var/run/docker/containerd/containerd.toml --log-level info
root      3615  1139  0 07:00 pts/0    00:00:00 grep --color=auto docker
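Besides ps, docker info reports the active drivers directly; assuming the unit edits above took effect, the following should show the overlay2 storage driver and the systemd cgroup driver:

[root@master ~]# docker info 2>/dev/null | grep -iE 'storage driver|cgroup driver'
Storage Driver: overlay2
Cgroup Driver: systemd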
Install kubelet, kubeadm, and kubectl on all nodes
kubelet runs on every node in the cluster and is responsible for starting Pods and containers.
kubeadm is used to initialize the cluster.
kubectl is the Kubernetes command-line tool. With kubectl you can deploy and manage applications, inspect all kinds of resources, and create, delete, and update components.
kubeadm does not install kubelet or kubectl for you, so we have to install matching versions of these packages ourselves.
For the concepts, components, and terminology of a Kubernetes cluster, see my earlier article.
Note: in a cluster bootstrapped by kubeadm, the Master components are handed to the kubelet to manage; that is, kube-scheduler, kube-apiserver, kube-controller-manager, kube-proxy, and flannel all start and run as containers.
[root@master ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
[root@master ~]# yum repolist    # confirm the repo is configured correctly
[root@master ~]# yum install kubelet kubeadm kubectl -y
[root@master ~]# rpm -ql kubelet
/etc/kubernetes/manifests             # manifest directory
/etc/sysconfig/kubelet                # configuration file
/etc/systemd/system/kubelet.service   # systemd unit
/usr/bin/kubelet                      # main binary
[root@master ~]# systemctl enable kubelet && systemctl start kubelet
Modify kernel parameters on all nodes
[root@master ~]# sed -i '/net.bridge.bridge-nf-call-iptables/d' /usr/lib/sysctl.d/00-system.conf
[root@master ~]# sed -i '/net.bridge.bridge-nf-call-ip6tables/d' /usr/lib/sysctl.d/00-system.conf
[root@master ~]# sed -i '$a net.bridge.bridge-nf-call-iptables = 1' /usr/lib/sysctl.d/00-system.conf
[root@master ~]# sed -i '$a net.bridge.bridge-nf-call-ip6tables = 1' /usr/lib/sysctl.d/00-system.conf
[root@master ~]# sysctl --system
[root@master ~]# [ -f /proc/sys/fs/may_detach_mounts ] && sed -i "/fs.may_detach_mounts/ d" /etc/sysctl.conf
[root@master ~]# [ -f /proc/sys/fs/may_detach_mounts ] && echo "fs.may_detach_mounts=1" >> /etc/sysctl.conf
[root@master ~]# sysctl -p
fs.may_detach_mounts = 1
Note: CentOS 7.4 introduced a new parameter to control kernel behavior, /proc/sys/fs/may_detach_mounts. It defaults to 0 and must be set to 1 when containers run on the system.
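A quick way to confirm the bridge settings took effect (these keys only exist once the br_netfilter module is loaded; run modprobe br_netfilter first if sysctl reports an unknown key):

[root@master ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1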
Configure the kubelet startup parameters (take care to use each node's own name)
[root@master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cluster-dns=172.17.0.10 --cluster-domain=cluster.local --hostname-override=master --provider-id=master --pod_infra_container_image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 --max-pods=40 --cert-dir=/var/lib/kubelet/pki --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --root-dir=/var/lib/kubelet --authentication-token-webhook --resolv-conf=/etc/resolv.conf --rotate-certificates --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true,CustomPodDNS=true --pod-manifest-path=/etc/kubernetes/manifests"
[root@master ~]# systemctl daemon-reload
--cluster-dns=172.17.0.10 # A list of DNS server IP addresses. For Pods with dnsPolicy=ClusterFirst, this value is used as the containers' DNS server.
Note: all DNS servers in the list must serve an identical set of records, or name resolution in the cluster may not work correctly. There is no guarantee which DNS server will be contacted for name resolution.
--cluster-domain=cluster.local # The cluster's domain. If set, the kubelet configures all containers to search this domain in addition to the host's search domains.
--hostname-override=master # If non-empty, use this string as the node's identity instead of the actual hostname. If --cloud-provider is set, the cloud provider determines the node name.
--provider-id=master # The unique identifier of the node in the machine database, i.e. the cloud provider.
--pod_infra_container_image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 # The pause container image Kubernetes uses for Pods.
Reference: http://www.itboth.com/d/MrQjym/nginx-kubernetes
--max-pods=40 # The maximum number of Pods.
--cert-dir=/var/lib/kubelet/pki # The directory where TLS certificates are located. If --tls-cert-file and --tls-private-key-file are provided, this flag is ignored.
--network-plugin=cni # <Warning: Alpha feature> The network plugin invoked for various events in the kubelet/Pod lifecycle. Only takes effect when container-runtime is set to docker.
--cni-conf-dir=/etc/cni/net.d # <Warning: Alpha feature> The directory in which to search for CNI config files. Only takes effect when container-runtime is set to docker.
--cni-bin-dir=/opt/cni/bin # <Warning: Alpha feature> A list of directories in which to search for CNI plugin binaries. Only takes effect when container-runtime is set to docker.
--root-dir=/var/lib/kubelet # The path to the kubelet's files. Some volumes live under this directory, so be sure to put it on a large disk.
--authentication-token-webhook # Use the TokenReview API to authenticate bearer tokens.
--resolv-conf=/etc/resolv.conf # If a Pod's dnsPolicy is set to "Default", it inherits the name resolution configuration from the node it runs on.
--feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true,CustomPodDNS=true --rotate-certificates
# Enable client and server certificate rotation <Warning: Beta feature>. When the certificate approaches expiry, the kubelet requests a new one from the kube-apiserver, automatically renewing its client certificate.
--pod-manifest-path=/etc/kubernetes/manifests # The directory the kubelet watches for static Pod manifests.
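To check that these extra arguments are actually picked up, you can inspect the unit and the running process; a rough sketch, assuming the standard kubeadm RPM layout where a systemd drop-in sources /etc/sysconfig/kubelet:

[root@master ~]# systemctl cat kubelet | grep EnvironmentFile   # should list /etc/sysconfig/kubelet
[root@master ~]# ps -ef | grep [k]ubelet                        # once running, the flags above should appear here

Until kubeadm init has run, the kubelet restarts in a loop waiting for its configuration; that is expected.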
So far every node is identical: SELinux, the firewall, and swap are off; Docker is installed and configured; the kubelet is installed and configured; and the kernel parameters are modified.
Next, deploy the Master.
To cope with unreliable network access, we pull the required images by hand in advance and re-tag them:
On the Master, prepare:
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.6
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
[root@master ~]# docker images
REPOSITORY                           TAG             IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.13.1         fdb321fd30a0   2 months ago    80.2MB
k8s.gcr.io/kube-apiserver            v1.13.1         40a63db91ef8   2 months ago    181MB
k8s.gcr.io/kube-scheduler            v1.13.1         ab81d7360408   2 months ago    79.6MB
k8s.gcr.io/kube-controller-manager   v1.13.1         26e6f1db2a52   2 months ago    146MB
k8s.gcr.io/coredns                   1.2.6           f59dcacceff4   3 months ago    40MB
k8s.gcr.io/etcd                      3.2.24          3cab8e1b9802   5 months ago    220MB
quay.io/coreos/flannel               v0.10.0-amd64   f0fad859c909   13 months ago   44.6MB
k8s.gcr.io/pause                     3.1             da86e6ba6ca1   14 months ago   742kB
On the Nodes, prepare:
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
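The pull/tag/remove cycle for the k8s.gcr.io images is mechanical, so it can also be written as a loop; a minimal sketch equivalent to the Master commands above (coredns and flannel come from other registries and are handled separately):

for img in kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 kube-scheduler:v1.13.1 kube-proxy:v1.13.1 pause:3.1 etcd:3.2.24; do
  docker pull mirrorgooglecontainers/$img                  # pull from the mirror
  docker tag mirrorgooglecontainers/$img k8s.gcr.io/$img   # re-tag under the name kubeadm expects
  docker rmi mirrorgooglecontainers/$img                   # drop the mirror tag
done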
Initialize the Master with kubeadm
--apiserver-advertise-address
The IP address the Master uses to talk to the other nodes in the cluster. If not specified, the IP of the interface with the default gateway is used.
--apiserver-bind-port
The port the Master's API server listens on.
--pod-network-cidr
The Pod network range. How this flag is used depends on the network add-on; this article uses the classic flannel network.
--service-cidr
The Service network range. Note that it must be the CIDR containing the cluster DNS address passed to the kubelet.
--service-dns-domain
The DNS domain used by the internal Kubernetes Service network. Note that it must match the value passed to the kubelet.
--token-ttl
The lifetime of the bootstrap token.
[root@master ~]# kubeadm init --apiserver-advertise-address 192.168.214.166 --apiserver-bind-port=6443 --pod-network-cidr=172.16.0.0/16 --service-cidr=172.17.0.0/16 --service-dns-domain=cluster.local --token-ttl=2400h0m0s --kubernetes-version=v1.13.1
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.2. Latest validated version: 18.06
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.214.166 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.214.166 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.17.0.1 192.168.214.166]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.508526 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: n6312v.ewq7swb59ceu2fce
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.214.166:6443 --token n6312v.ewq7swb59ceu2fce --discovery-token-ca-cert-hash sha256:25369a6cbe5abc31a3177d28e302c9ea9236766e4447051ad259bef6c209df67
Notes:
Before initializing, kubeadm runs preflight checks: the Docker version, the kubelet, swap, and so on. [preflight]
It generates the CA certificate and key. [certs]
It generates the apiserver certificate and key.
It generates the remaining certificates and keys, under /etc/kubernetes/pki.
It generates the KubeConfig files under /etc/kubernetes/; kubelet (kubelet.conf), kubectl (admin.conf), and the other components need these files to talk to the Master. [kubeconfig]
It generates the static Pod manifests under /etc/kubernetes/manifests/; the kubelet uses these YAML files to start each Master component.
It labels the Master node with node-role.kubernetes.io/master="" so that it takes no part in Pod scheduling, and configures the RBAC rules.
It installs the essential add-ons, CoreDNS and kube-proxy.
Finally it reports success, along with some hints.
Also note that the token must be saved: nodes need it to join the cluster and it cannot be displayed again later. If you do lose it, run kubeadm token list on the Master to view tokens, or kubeadm token create to create a new one. [bootstraptoken]
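If you would rather not assemble the join command by hand, kubeadm can also print a ready-made one, and the CA cert hash can be recomputed from the CA certificate (both commands as documented upstream; run them on the Master):

[root@master ~]# kubeadm token create --print-join-command
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'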
If a swap error appears during execution, add the flag --ignore-preflight-errors=Swap.
If you need to redo kubeadm init, it is best to clean up the previous initialization and the container services completely first:
(1) kubeadm reset
(2) systemctl stop kubelet
(3) docker stop $(docker ps -qa) && docker rm $(docker ps -qa)   # If other services run on this Docker host, do not use this command; instead identify the Kubernetes-related containers by hand and delete only those.
(4) systemctl start kubelet
(5) kubeadm init
Configure kubectl
kubectl is the command-line tool for managing a Kubernetes cluster.
As root on the Master, run the following commands to configure kubectl:
[root@master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
[root@master ~]# source /etc/profile
[root@master ~]# echo $KUBECONFIG
/etc/kubernetes/admin.conf
# The error below showed up during installation precisely because this variable was not set;
# /etc/kubernetes/admin.conf carries the credentials produced by the cluster initialization.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Configure kubectl auto-completion:
Since version 1.3, kubectl has shipped a completion command that can be used for shell auto-completion.
# yum install -y bash-completion
# locate bash_completion    (if the locate command is missing, install it with: yum install -y mlocate)
/usr/share/bash-completion/bash_completion
# source /usr/share/bash-completion/bash_completion
# source <(kubectl completion bash)
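To make completion survive new shells, you can persist both source lines, for example:

# echo "source /usr/share/bash-completion/bash_completion" >> ~/.bashrc
# echo "source <(kubectl completion bash)" >> ~/.bashrc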
Check the component status with kubectl
[root@master ~]# kubectl get componentstatus    # short name: cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
List the Pods in the cluster; at this point CoreDNS is Pending while the Master components are Running
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-8r2tf         0/1     Pending   0          5h57m
kube-system   coredns-86c58d9df4-nltw6         0/1     Pending   0          5h57m
kube-system   etcd-master                      1/1     Running   0          5h57m
kube-system   kube-apiserver-master            1/1     Running   0          5h57m
kube-system   kube-controller-manager-master   1/1     Running   0          5h56m
kube-system   kube-proxy-xcpcg                 1/1     Running   0          5h57m
kube-system   kube-scheduler-master            1/1     Running   0          5h57m
For the Kubernetes cluster to work, a Pod network must be installed; without it, Pods cannot talk to each other.
Although the core components above show Running, they are not on the Pod network (it does not exist yet) but on the host network. Let's verify this, using kube-apiserver as the example.
[root@master ~]# kubectl get pods -n kube-system kube-apiserver-master
NAME                    READY   STATUS    RESTARTS   AGE
kube-apiserver-master   1/1     Running   3          1d
# Find the kube-apiserver container ID
[root@master ~]# docker ps | grep apiserver
c120c761b764 9df3c00f55e6 "kube-apiserver --..." 33 minutes ago
# Check the network attribute of the corresponding pause container
[root@master ~]# docker inspect c120c761b764
"NetworkMode": "host",
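The same fact can be read straight from the API object instead of going through docker inspect; the hostNetwork field of the Pod spec should come back true:

[root@master ~]# kubectl get pod kube-apiserver-master -n kube-system -o jsonpath='{.spec.hostNetwork}'
true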
Next, install the Pod network
[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
[root@master ~]# sed -i 's#"Network": "10.244.0.0/16",#"Network": "172.16.0.0/16",#g' kube-flannel.yml
[root@master ~]# sed -i 's#quay.io/coreos/flannel:v0.9.1-amd64#quay.io/coreos/flannel:v0.10.0-amd64#g' kube-flannel.yml
[root@master ~]# kubectl apply -f kube-flannel.yml
## kube-flannel.yml for reference
[root@master ~]# vim kube-flannel.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "172.16.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conf
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
Note: the Network value in the flannel file must match the Pod network passed to kubeadm init; we set 172.16.0.0/16 (the default is 10.244.0.0/16). The flannel version is changed to v0.10.0 to match the image we downloaded earlier.
Check that the CoreDNS Pods are now running; if they are, the Master installation succeeded
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-8r2tf         1/1     Running   0          6h16m
kube-system   coredns-86c58d9df4-nltw6         1/1     Running   0          6h16m
kube-system   etcd-master                      1/1     Running   0          6h15m
kube-system   kube-apiserver-master            1/1     Running   0          6h15m
kube-system   kube-controller-manager-master   1/1     Running   0          6h15m
kube-system   kube-flannel-ds-amd64-b9kfs      1/1     Running   0          8m49s
kube-system   kube-proxy-xcpcg                 1/1     Running   0          6h16m
kube-system   kube-scheduler-master            1/1     Running   0          6h15m
Check the Master node's status; it now shows Ready
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   6h23m   v1.13.3
# Verify from the command line
[root@master ~]# curl --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt --key /etc/kubernetes/pki/apiserver-kubelet-client.key https://192.168.214.166:6443
Join the Node machines to the cluster
Simply run the last command from the Master initialization output on each node:
[root@node1 ~]# kubeadm join 192.168.214.166:6443 --token n6312v.ewq7swb59ceu2fce --discovery-token-ca-cert-hash sha256:25369a6cbe5abc31a3177d28e302c9ea9236766e4447051ad259bef6c209df67
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.2. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.214.166:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.214.166:6443"
[discovery] Requesting info from "https://192.168.214.166:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.214.166:6443"
[discovery] Successfully established connection with API Server "192.168.214.166:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Check the cluster status; the cluster has been deployed successfully
[root@master my.conf]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   12h    v1.13.3
node1    Ready    <none>   28m    v1.13.3
node2    Ready    <none>   153m   v1.13.3
[root@master my.conf]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE    IP                NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-8r2tf         1/1     Running   0          12h    172.16.0.3        master   <none>           <none>
kube-system   coredns-86c58d9df4-nltw6         1/1     Running   0          12h    172.16.0.2        master   <none>           <none>
kube-system   etcd-master                      1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
kube-system   kube-apiserver-master            1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
kube-system   kube-flannel-ds-dms4z            1/1     Running   0          136m   192.168.214.166   master   <none>           <none>
kube-system   kube-flannel-ds-gf4zk            1/1     Running   6          28m    192.168.214.167   node1    <none>           <none>
kube-system   kube-flannel-ds-wfbh5            1/1     Running   2          136m   192.168.214.168   node2    <none>           <none>
kube-system   kube-proxy-d486m                 1/1     Running   0          28m    192.168.214.167   node1    <none>           <none>
kube-system   kube-proxy-qpntl                 1/1     Running   0          154m   192.168.214.168   node2    <none>           <none>
kube-system   kube-proxy-xcpcg                 1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
kube-system   kube-scheduler-master            1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
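As a final smoke test you can run a throwaway workload across the new nodes; a minimal sketch (the deployment name nginx-test and the image are arbitrary examples, not part of the cluster setup):

[root@master ~]# kubectl create deployment nginx-test --image=nginx
[root@master ~]# kubectl scale deployment nginx-test --replicas=2
[root@master ~]# kubectl expose deployment nginx-test --port=80 --type=NodePort
[root@master ~]# kubectl get pods -o wide    # the Pods should land on node1/node2 with 172.16.x.x addresses
[root@master ~]# kubectl get svc nginx-test  # note the NodePort, then curl any node IP on that port
[root@master ~]# kubectl delete svc,deployment nginx-test   # clean up afterwards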
Tearing Down the Cluster
Remove a Node
kubectl drain <node-name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node-name>
[root@master ~]# kubectl drain node1 --delete-local-data --force --ignore-daemonsets
node/node1 cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-qmdxs, kube-proxy-rzcpr
node/node1 drained
[root@master ~]# kubectl delete node node1
node "node1" deleted
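The drain/delete above only removes the node from the API server's point of view. On the removed node itself you usually also want to wipe the local kubeadm state, along these lines (the iptables flush is the cleanup the Kubernetes docs suggest after a reset; skip it if the machine hosts other services):

[root@node1 ~]# kubeadm reset
[root@node1 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X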
After the nodes are removed, run the following to reset the cluster:
[root@master ~]# kubeadm reset