
Building a Highly Available Kubernetes Cluster with Kubeadm, Step by Step - Part 1

The core of a Kubernetes cluster is its master node. By default, however, there is only one master node, and once it fails the whole cluster becomes "paralyzed": cluster management, Pod scheduling, and so on can no longer be performed, even though some user Pods may keep running. This clearly does not meet the requirements for a Kubernetes cluster running in production; we need a highly available Kubernetes cluster.

That said, official Kubernetes support for building high-availability clusters is still quite limited: only rough deployment procedures exist for a handful of cloud providers, e.g. using the kube-up.sh script on GCE, or using kops on AWS.

A highly available Kubernetes cluster is the inevitable direction of Kubernetes' evolution. The official document "Building High-Availability Clusters" gives a rough outline of how to build an HA cluster today. Kubeadm has also put HA on the milestone plan for upcoming releases, and a proposal draft for deploying an HA cluster with kubeadm has already been published.

Before kubeadm can truly bootstrap an HA Kubernetes cluster automatically, how should we build one ourselves? This article explores, step by step, an approach and the concrete procedure for building an HA k8s cluster. Note: the HA cluster built here has only been tested in a lab environment and has not been run in production, so there may be flaws in some details I have not yet encountered.

I. Test Environment

High availability for a Kubernetes cluster is essentially high availability of the master nodes, so we requested three Alibaba Cloud ECS instances in the US West region to serve as the three master nodes, and used hostnamectl to change their static hostnames to shaolin, wudang, and emei:

shaolin: 10.27.53.32
wudang: 10.24.138.208
emei: 10.27.52.72
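
The rename itself is a single hostnamectl call on each node, for example:

root@shaolin:~# hostnamectl --static set-hostname shaolin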

All three hosts run Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-63-generic x86_64) and are operated as the root user.

The Docker version is as follows:

root@shaolin:~# docker version
Client:
 Version:      17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:14:09 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.1-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:14:09 2017
 OS/Arch:      linux/amd64
 Experimental: false

The installation steps for Docker CE on Ubuntu can be found here. My servers are in the US West region, so the "Great Firewall" is not an issue for me. If your hosts are in mainland China, check the installation logs for errors and decide for yourself whether you need to configure a registry mirror (accelerator). Also, the Docker version used here is rather new; the version most often mentioned and best validated on the Kubernetes site is still docker 1.12.x, which you may prefer to install instead.
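
For reference, configuring such an accelerator usually amounts to no more than a registry-mirrors entry in Docker's daemon.json. A minimal sketch (the mirror URL is a placeholder you must replace with your own):

// /etc/docker/daemon.json
{
  "registry-mirrors": ["https://your-mirror.example.com"]
}

root@shaolin:~# systemctl restart docker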

II. The Approach to Master Node High Availability

From exploring the single-master node setup, we know the following Kubernetes components run on the master node:

  • kube-apiserver: the core of the cluster; the cluster API endpoint and the communication hub between all components; also responsible for cluster security control;
  • etcd: the cluster's data store;
  • kube-scheduler: the Pod scheduling center of the cluster;
  • kube-controller-manager: the cluster state manager. When the cluster state deviates from the desired state, kcm works to bring it back; for example, when a pod dies, kcm creates a new pod to restore the replica count expected by the corresponding replica set;
  • kubelet: the kubernetes node agent, responsible for talking to the docker engine on the node;
  • kube-proxy: one per node, responsible for forwarding traffic from a service VIP to its endpoint pods, currently implemented mainly with iptables rules.

High availability of a Kubernetes cluster means high availability of the master nodes, which ultimately comes down to high availability of the components listed above. Our approach, then, is to work out how to make each of these components highly available. Judging from the official Kubernetes material and a few proposal drafts, building everything from scratch "the hard way" does not seem wise ^0^; converting a k8s cluster created by kubeadm into an HA cluster looks more feasible. Here is my plan:

img{512x368}

As mentioned above, the idea is to start from a kubeadm-bootstrapped Kubernetes cluster and, by gradually modifying configuration or replacing components, arrive at the final HA k8s cluster. The figure above shows the final picture of that HA cluster, in which:

  • kube-apiserver: thanks to the apiserver being stateless, the apiserver on every master node is active and handles the traffic distributed to it by the Load Balancer;
  • etcd: the central store of cluster state. The etcd instances on the master nodes form an etcd cluster, so that all apiservers share the same cluster state and data;
  • kube-controller-manager: kcm has built-in leader election; the kcm instances on the masters form a group, but only the elected leader does the work. Each master's kcm connects to the apiserver on its own node;
  • kube-scheduler: the scheduler also has built-in leader election; the scheduler instances on the masters form a group, but only the elected leader does the work. Each master's scheduler connects to the apiserver on its own node;
  • kubelet: since all master components run as containers, the kubelet on a master node that carries no workload mostly just manages those component containers. Each master's kubelet connects to the apiserver on its own node;
  • kube-proxy: because the master nodes carry no workload, kube-proxy on a master node only serves a few special services such as kube-dns. Since kubeadm exposes no kube-proxy configuration we could adjust, kube-proxy has to connect to the apiserver port exposed by the Load Balancer.

Next, we follow this plan step by step and transform the single-master k8s cluster started by kubeadm into the HA cluster we want.

III. Step 1: Install a single-master k8s cluster with kubeadm

Some time has passed since I first installed a kubernetes 1.5.1 cluster with kubeadm, and both kubernetes and kubeadm have changed a bit since then. The latest release of both kubernetes and kubeadm is currently 1.6.2:

root@wudang:~# kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

root@wudang:~# docker images
REPOSITORY                                               TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/kube-proxy-amd64                v1.6.2              7a1b61b8f5d4        3 weeks ago         109 MB
gcr.io/google_containers/kube-controller-manager-amd64   v1.6.2              c7ad09fe3b82        3 weeks ago         133 MB
gcr.io/google_containers/kube-apiserver-amd64            v1.6.2              e14b1d5ee474        3 weeks ago         151 MB
gcr.io/google_containers/kube-scheduler-amd64            v1.6.2              b55f2a2481b9        3 weeks ago         76.8 MB
... ...

Although kubeadm has been updated, the installation process has not changed much; only the key steps are listed here, and the detailed output is omitted.

First, install the required packages on the shaolin node:

root@shaolin:~# apt-get update && apt-get install -y apt-transport-https

root@shaolin:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK

root@shaolin:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb http://apt.kubernetes.io/ kubernetes-xenial main
> EOF

root@shaolin:~# apt-get update

root@shaolin:~# apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Next, bootstrap the cluster with kubeadm. Note: since the flannel network plugin has never worked well for me on Alibaba Cloud, I am sticking with the weave network here.

root@shaolin:~/k8s-install# kubeadm init --apiserver-advertise-address 10.27.53.32
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.2
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.03.1-ce. Max validated version: 1.12
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [shaolin kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.27.53.32]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 17.045449 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 5.008588 seconds
[token] Using token: a8dd42.afdb86eda4a8c987
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token abcdefghijklmn 10.27.53.32:6443

root@shaolin:~/k8s-install# pods
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE       IP            NODE
kube-system   etcd-shaolin                      1/1       Running   0          34s       10.27.53.32   shaolin
kube-system   kube-apiserver-shaolin            1/1       Running   0          35s       10.27.53.32   shaolin
kube-system   kube-controller-manager-shaolin   1/1       Running   0          23s       10.27.53.32   shaolin
kube-system   kube-dns-3913472980-tkr91         0/3       Pending   0          1m        <none>
kube-system   kube-proxy-bzvvk                  1/1       Running   0          1m        10.27.53.32   shaolin
kube-system   kube-scheduler-shaolin            1/1       Running   0          46s       10.27.53.32   shaolin

Installing the weave network on k8s 1.6.2 differs slightly from before, because k8s 1.6 enables a more secure mechanism and by default uses RBAC to grant workloads running on the cluster only limited permissions. The weave network plugin yaml we need is weave-daemonset-k8s-1.6.yaml:

root@shaolin:~/k8s-install# kubectl apply -f https://git.io/weave-kube-1.6
clusterrole "weave-net" created
serviceaccount "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created

If your weave pod fails to start with a log similar to the following:

Network 172.30.0.0/16 overlaps with existing route 172.16.0.0/12 on host.

you need to change the weave network's IPALLOC_RANGE (here I used 172.32.0.0/16):

//weave-daemonset-k8s-1.6.yaml
... ...
spec:
  template:
    metadata:
      labels:
        name: weave-net
    spec:
      hostNetwork: true
      hostPID: true
      containers:
        - name: weave
          env:
            - name: IPALLOC_RANGE
              value: 172.32.0.0/16
... ...
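
If you hit this, one workable flow (a sketch; the local file name is arbitrary) is to download the yaml, edit IPALLOC_RANGE as shown above, then delete the failed objects and re-apply the edited file:

root@shaolin:~/k8s-install# curl -L -o weave-daemonset-k8s-1.6.yaml https://git.io/weave-kube-1.6
root@shaolin:~/k8s-install# kubectl delete -f weave-daemonset-k8s-1.6.yaml
root@shaolin:~/k8s-install# kubectl apply -f weave-daemonset-k8s-1.6.yaml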

With the master up, we join the wudang and emei nodes as k8s minion nodes to verify that the cluster was set up correctly. This also gets kubelet and kube-proxy running on wudang and emei, and both components can be reused directly in the later "conversion" steps:

Taking the emei node as an example:

root@emei:~# kubeadm join --token abcdefghijklmn 10.27.53.32:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.03.1-ce. Max validated version: 1.12
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "10.27.53.32:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.27.53.32:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://10.27.53.32:6443"
[discovery] Successfully established connection with API Server "10.27.53.32:6443"
[bootstrap] Detected server version: v1.6.2
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

Create an nginx service backed by multiple pods and check that cross-node networking works; I won't go through the details here.
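
If you want a concrete check, something along these lines will do (a sketch; the name and replica count are arbitrary):

root@shaolin:~/k8s-install# kubectl run my-nginx --image=nginx --replicas=2 --port=80
root@shaolin:~/k8s-install# kubectl expose deployment my-nginx --port=80
root@shaolin:~/k8s-install# kubectl get pods -o wide

With the pods spread across nodes, curl the service's ClusterIP from each node to confirm cross-node connectivity.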

The single-master kubernetes cluster after installation looks like the figure below:

img{512x368}

IV. Step 2: Build an etcd cluster for the HA k8s cluster

All k8s cluster state and data are stored in etcd, so a highly available k8s cluster cannot exist without a highly available etcd cluster. We need to provide an HA etcd cluster for the final HA k8s cluster. How do we do that?

In the current k8s cluster, the etcd on the shaolin master node stores all of the cluster's data and state. We need to bring up etcd instances on wudang and emei as well and have them, together with the existing etcd, form a highly available cluster that holds the cluster data and state. Let's break this down into a few smaller steps:

0. Start the kubelet service on the emei and wudang nodes

The etcd cluster could be set up in a completely standalone way, independent of any k8s component. Here, however, I use the same approach as on the master: the etcd instances started by kubelet on wudang and emei become two members of the etcd cluster. Right now wudang and emei are acting as k8s minion nodes, so we first need to clean them up:

root@shaolin:~/k8s-install # kubectl drain wudang --delete-local-data --force --ignore-daemonsets
node "wudang" cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-proxy-mxwp3, weave-net-03jbh; Deleting pods with local storage: weave-net-03jbh
pod "my-nginx-2267614806-fqzph" evicted
node "wudang" drained

root@wudang:~# kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml", assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

root@shaolin:~/k8s-install # kubectl drain emei --delete-local-data --force --ignore-daemonsets
root@emei:~# kubeadm reset

root@shaolin:~/k8s-install# kubectl delete node/wudang
root@shaolin:~/k8s-install# kubectl delete node/emei

In our plan, the etcd cluster members are started automatically by the kubelet on each node, while kubelet itself is started by systemd during system init with the following configuration:

root@wudang:~# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_EXTRA_ARGS

So we first need to get kubelet running on wudang and emei; taking the wudang node as an example:

root@wudang:~# systemctl enable kubelet
root@wudang:~# systemctl start kubelet

Check the kubelet service log:

root@wudang:~# journalctl -u kubelet -f

May 10 10:58:41 wudang systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 10 10:58:41 wudang kubelet[27179]: I0510 10:58:41.798507   27179 feature_gate.go:144] feature gates: map[]
May 10 10:58:41 wudang kubelet[27179]: error: failed to run Kubelet: invalid kubeconfig: stat /etc/kubernetes/kubelet.conf: no such file or directory
May 10 10:58:41 wudang systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 10:58:41 wudang systemd[1]: kubelet.service: Unit entered failed state.
May 10 10:58:41 wudang systemd[1]: kubelet.service: Failed with result 'exit-code'.

kubelet fails to start because the configuration file /etc/kubernetes/kubelet.conf is missing. We have to turn to the shaolin node for help: copy that file from shaolin onto wudang and emei, along with shaolin's /etc/kubernetes/pki directory:

root@wudang:~# kubectl --kubeconfig=/etc/kubernetes/kubelet.conf config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.27.53.32:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:shaolin
  name: system:node:shaolin@kubernetes
current-context: system:node:shaolin@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:shaolin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

root@wudang:~# ls /etc/kubernetes/pki
apiserver.crt  apiserver-kubelet-client.crt  ca.crt  ca.srl              front-proxy-ca.key      front-proxy-client.key  sa.pub
apiserver.key  apiserver-kubelet-client.key ca.key  front-proxy-ca.crt  front-proxy-client.crt  sa.key
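
The copy itself can be done with scp, assuming root ssh access between the nodes (shown here for wudang; repeat for emei):

root@shaolin:~# scp /etc/kubernetes/kubelet.conf root@10.24.138.208:/etc/kubernetes/
root@shaolin:~# scp -r /etc/kubernetes/pki root@10.24.138.208:/etc/kubernetes/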

After systemctl daemon-reload; systemctl restart kubelet, check the kubelet service log again and you will see that kubelet is now running!

Taking the wudang node as an example:

root@wudang:~# journalctl -u kubelet -f
-- Logs begin at Mon 2017-05-08 15:12:01 CST. --
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.213529   26907 factory.go:54] Registering systemd factory
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.213674   26907 factory.go:86] Registering Raw factory
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.213813   26907 manager.go:1106] Started watching for new ooms in manager
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.216383   26907 oomparser.go:185] oomparser using systemd
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.217415   26907 manager.go:288] Starting recovery of all containers
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.285428   26907 manager.go:293] Recovery completed
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.344425   26907 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
May 11 10:37:07 wudang kubelet[26907]: E0511 10:37:07.356188   26907 eviction_manager.go:214] eviction manager: unexpected err: failed GetNode: node 'wudang' not found
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.358402   26907 kubelet_node_status.go:77] Attempting to register node wudang
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.363083   26907 kubelet_node_status.go:80] Successfully registered node wudang

For the moment, we let the kubelets on wudang and emei keep connecting to the apiserver on the shaolin node.

1. Build an etcd cluster on the emei and wudang nodes

Using /etc/kubernetes/manifests/etcd.yaml on the shaolin node as the template, we derive the etcd.yaml for wudang and emei; the main changes are in the containers' command section:

/etc/kubernetes/manifests/etcd.yaml on wudang:

spec:
  containers:
  - command:
    - etcd
    - --name=etcd-wudang
    - --initial-advertise-peer-urls=http://10.24.138.208:2380
    - --listen-peer-urls=http://10.24.138.208:2380
    - --listen-client-urls=http://10.24.138.208:2379,http://127.0.0.1:2379
    - --advertise-client-urls=http://10.24.138.208:2379
    - --initial-cluster-token=etcd-cluster
    - --initial-cluster=etcd-wudang=http://10.24.138.208:2380,etcd-emei=http://10.27.52.72:2380
    - --initial-cluster-state=new
    - --data-dir=/var/lib/etcd
    image: gcr.io/google_containers/etcd-amd64:3.0.17

/etc/kubernetes/manifests/etcd.yaml on emei:

spec:
  containers:
  - command:
    - etcd
    - --name=etcd-emei
    - --initial-advertise-peer-urls=http://10.27.52.72:2380
    - --listen-peer-urls=http://10.27.52.72:2380
    - --listen-client-urls=http://10.27.52.72:2379,http://127.0.0.1:2379
    - --advertise-client-urls=http://10.27.52.72:2379
    - --initial-cluster-token=etcd-cluster
    - --initial-cluster=etcd-emei=http://10.27.52.72:2380,etcd-wudang=http://10.24.138.208:2380
    - --initial-cluster-state=new
    - --data-dir=/var/lib/etcd
    image: gcr.io/google_containers/etcd-amd64:3.0.17

Once each file is placed in its node's /etc/kubernetes/manifests directory, the kubelet on that node automatically starts the corresponding etcd pod!

root@shaolin:~# pods
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   etcd-emei                         1/1       Running   0          11s       10.27.52.72     emei
kube-system   etcd-shaolin                      1/1       Running   0          25m       10.27.53.32     shaolin
kube-system   etcd-wudang                       1/1       Running   0          24s       10.24.138.208   wudang

Let's check the current state of the etcd cluster (note that the v3 etcdctl commands used below require export ETCDCTL_API=3):

# etcdctl endpoint status --endpoints=10.27.52.72:2379,10.24.138.208:2379
10.27.52.72:2379, 6e80adf8cd57f826, 3.0.17, 25 kB, false, 17, 660
10.24.138.208:2379, f3805d1ab19c110b, 3.0.17, 25 kB, true, 17, 660

Note: the output columns are, from left to right: endpoint URL, ID, version, database size, leadership status, raft term, and raft index.
From this we can see that the etcd on wudang (10.24.138.208) has been elected leader of the cluster.

Let's exercise the etcd cluster by putting a few keys:

On the wudang node (note: export ETCDCTL_API=3 first):

root@wudang:~# etcdctl put foo bar
OK
root@wudang:~# etcdctl put foo1 bar1
OK
root@wudang:~# etcdctl get foo
foo
bar

On the emei node:

root@emei:~# etcdctl get foo
foo
bar

At this point, the kubernetes cluster looks like the diagram below:

img{512x368}

2. Mirror the data from the etcd on shaolin into the etcd cluster

kubernetes 1.6.2 uses etcd 3.x by default. etcdctl 3.x provides a make-mirror feature for synchronizing data between etcd clusters, so we can use etcdctl make-mirror to copy the k8s cluster data from the etcd on shaolin into the etcd cluster we just created. Run the following on the emei node:

root@emei:~# etcdctl make-mirror --no-dest-prefix=true  127.0.0.1:2379  --endpoints=10.27.53.32:2379 --insecure-skip-tls-verify=true
... ...
261
302
341
380
420
459
498
537
577
616
655

... ...

etcdctl make-mirror prints a log line every 30s, but those logs tell you nothing about the progress of the sync. Moreover, etcdctl make-mirror appears to be a streaming sync with no defined end point, so you have to judge for yourself whether all the data has been copied over, for example by fetching some key and comparing the two sides:

# etcdctl get --from-key /api/v2/registry/clusterrolebindings/cluster-admin

.. ..
compact_rev_key
122912

Alternatively, use the endpoint status command and compare the database sizes on both sides. Once they are roughly equal, you can stop the make-mirror run.
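
For example (a sketch, reusing the endpoint status command from above; compare the db size column of the source etcd against the new cluster):

root@emei:~# etcdctl endpoint status --endpoints=10.27.53.32:2379
root@emei:~# etcdctl endpoint status --endpoints=10.24.138.208:2379,10.27.52.72:2379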

3. Point the apiserver on shaolin at the etcd cluster, then stop and delete the etcd on shaolin

Edit /etc/kubernetes/manifests/kube-apiserver.yaml on the shaolin node so that shaolin's kube-apiserver connects to the etcd on the emei node:

Change the --etcd-servers line to:
- --etcd-servers=http://10.27.52.72:2379

After saving the change, kubelet automatically restarts kube-apiserver, and the restarted kube-apiserver works fine!
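
A quick way to confirm this (a sketch) is to watch the apiserver pod come back and then run any kubectl command against it:

root@shaolin:~/k8s-install# kubectl get pods -n kube-system -o wide | grep kube-apiserver
root@shaolin:~/k8s-install# kubectl get nodes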

Next, we stop and remove the etcd on shaolin (including its data directory):

root@shaolin:~# rm /etc/kubernetes/manifests/etcd.yaml
root@shaolin:~# rm -fr /var/lib/etcd

List the cluster's pods again and you will see that etcd-shaolin is gone.

The k8s cluster now looks like the diagram below:

img{512x368}

4. Recreate the etcd on shaolin and join it to the etcd cluster as a member

First, add the etcd-shaolin member to the existing etcd cluster:

root@wudang:~/kubernetes-conf-shaolin/manifests# etcdctl member add etcd-shaolin --peer-urls=http://10.27.53.32:2380
Member 3184cfa57d8ef00c added to cluster 140cec6dd173ab61

Then, on the shaolin node, modify the original etcd.yaml as follows:

// /etc/kubernetes/manifests/etcd.yaml
... ...
spec:
  containers:
  - command:
    - etcd
    - --name=etcd-shaolin
    - --initial-advertise-peer-urls=http://10.27.53.32:2380
    - --listen-peer-urls=http://10.27.53.32:2380
    - --listen-client-urls=http://10.27.53.32:2379,http://127.0.0.1:2379
    - --advertise-client-urls=http://10.27.53.32:2379
    - --initial-cluster-token=etcd-cluster
    - --initial-cluster=etcd-shaolin=http://10.27.53.32:2380,etcd-wudang=http://10.24.138.208:2380,etcd-emei=http://10.27.52.72:2380
    - --initial-cluster-state=existing
    - --data-dir=/var/lib/etcd
    image: gcr.io/google_containers/etcd-amd64:3.0.17

After saving the change, kubelet automatically brings etcd-shaolin back up:

root@shaolin:~/k8s-install# pods
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   etcd-emei                         1/1       Running   0          3h        10.27.52.72     emei
kube-system   etcd-shaolin                      1/1       Running   0          8s        10.27.53.32     shaolin
kube-system   etcd-wudang                       1/1       Running   0          3h        10.24.138.208   wudang

Check the etcd cluster status:

root@shaolin:~# etcdctl endpoint status --endpoints=10.27.52.72:2379,10.24.138.208:2379,10.27.53.32:2379
10.27.52.72:2379, 6e80adf8cd57f826, 3.0.17, 11 MB, false, 17, 34941
10.24.138.208:2379, f3805d1ab19c110b, 3.0.17, 11 MB, true, 17, 34941
10.27.53.32:2379, 3184cfa57d8ef00c, 3.0.17, 11 MB, false, 17, 34941

We can see that the database size and raft state are consistent across the three etcd instances, and the etcd on the wudang node is the leader!

5. Point the apiserver on shaolin back at etcd-shaolin

// /etc/kubernetes/manifests/kube-apiserver.yaml

... ...
- --etcd-servers=http://127.0.0.1:2379
... ...

After the change takes effect and the apiserver restarts, the kubernetes cluster looks like the diagram below:

img{512x368}
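
As a final sanity check for this part (a sketch), confirm that all three etcd members are healthy and that the local apiserver still answers:

root@shaolin:~# etcdctl endpoint health --endpoints=10.27.53.32:2379,10.24.138.208:2379,10.27.52.72:2379
root@shaolin:~# kubectl get componentstatuses
root@shaolin:~# kubectl get pods --all-namespaces -o wide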

The second part is here.

Anomalies Caused by Changing Node Hostnames in a Kubernetes Cluster

Besides the Kubernetes 1.3.7 cluster we use in production, I also keep a Kubernetes 1.5.1 test environment, partly for validating various technical approaches and partly for tracking the latest Kubernetes developments. The anomaly recorded in this post happened in that test cluster.

I. Background

A couple of days ago I was setting up Ceph in the Kubernetes test environment. To make installation with ceph-deploy easier, I used hostnamectl to replace the long, unwieldy hostnames assigned by Alibaba Cloud with shorter, more meaningful ones:

iZ25beglnhtZ -> yypdmaster
iz2ze39jeyizepdxhwqci6z -> yypdnode

Taking yypdmaster as an example, the change goes like this:

# hostnamectl --static set-hostname yypdmaster
# hostnamectl status
Static hostname: yypdmaster
Transient hostname: iZ25beglnhtZ
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 91aa4b8f2556de49e743dc2f53e8a5c4
           Boot ID: 5d0e642ebafa460086388da4177e488e
    Virtualization: kvm
  Operating System: Ubuntu 16.04.1 LTS
            Kernel: Linux 4.4.0-58-generic
      Architecture: x86-64

# cat /etc/hostname
yypdmaster

hostnamectl does not touch /etc/hosts, so I manually added an entry mapping yypdmaster to its IP:

xx.xx.xx.xx yypdmaster

After logging in again, the hostname status shows that the Transient hostname is gone and only the static hostname remains:

# hostnamectl status
   Static hostname: yypdmaster
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 91aa4b8f2556de49e743dc2f53e8a5c4
           Boot ID: 5d0e642ebafa460086388da4177e488e
    Virtualization: kvm
  Operating System: Ubuntu 16.04.1 LTS
            Kernel: Linux 4.4.0-58-generic
      Architecture: x86-64

The other host was renamed the same way. After the change, the whole k8s cluster kept working normally, so at first I assumed the hostname change had no effect on the cluster at all.

II. The Cluster "Crash"

Yesterday, while testing cross-node Cephfs mounting, I found that kubectl exec from yypdmaster into a pod on the other node no longer worked; it timed out connecting to port 10250. The error log showed that the k8s components on yypdmaster were trying to reach port 10250 on yypdnode, i.e. the port kubelet listens on, via yypdnode's public IP. Because of Alibaba Cloud security group rules, that port is not reachable from the public network, so the timeout itself made sense. But why had the cluster been fine up to now, and why was it suddenly not using the internal IP?
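
One way to see which addresses the apiserver has recorded for a node (a sketch) is to look at the Addresses section of the node object:

# kubectl describe node yypdnode | grep -A 4 Addresses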

I tried restarting the kubelet service on yypdnode, with no apparent effect. While I was still puzzling over this, I noticed the cluster seemed to have "crashed"; here is the pod listing from that moment:

# kubectl get pod --all-namespaces -o wide

NAMESPACE                    NAME                                    READY     STATUS             RESTARTS   AGE       IP             NODE
default                      ceph-pod2                               1/1       Unknown            0          26m       172.30.192.4   iz2ze39jeyizepdxhwqci6z
default                      ceph-pod2-with-secret                   1/1       Unknown            0          38m       172.30.192.2   iz2ze39jeyizepdxhwqci6z
default                      ceph-pod2-with-secret-on-master         1/1       Unknown            0          34m       172.30.0.51    iz25beglnhtz
default                      nginx-kit-3630450072-2c0jk              0/2       Pending            0          12m       <none>
default                      nginx-kit-3630450072-3n50m              2/2       Unknown            20         35d       172.30.0.44    iz25beglnhtz
default                      nginx-kit-3630450072-90v4q              0/2       Pending            0          12m       <none>
default                      nginx-kit-3630450072-j8qrk              2/2       Unknown            20         72d       172.30.0.47    iz25beglnhtz
kube-system                  dummy-2088944543-9382n                  1/1       Running            0          12m       xx.xx.xx.xx   yypdmaster
kube-system                  dummy-2088944543-93f4c                  1/1       Unknown            16         130d      xx.xx.xx.xx   iz25beglnhtz
kube-system                  elasticsearch-logging-v1-dhl35          1/1       Running            0          12m       172.30.192.6   yypdnode
kube-system                  elasticsearch-logging-v1-s3sbj          1/1       Unknown            9          35d       172.30.0.45    iz25beglnhtz
kube-system                  elasticsearch-logging-v1-t8wg0          1/1       Unknown            29         68d       172.30.0.43    iz25beglnhtz
kube-system                  elasticsearch-logging-v1-zdp19          1/1       Running            0          12m       172.30.0.3     yypdmaster
kube-system                  etcd-iz25beglnhtz                       1/1       Unknown            17         130d      xx.xx.xx.xx   iz25beglnhtz
kube-system                  etcd-yypdmaster                         1/1       Running            17         17m       xx.xx.xx.xx   yypdmaster
kube-system                  fluentd-es-v1.22-ggvv4                  1/1       NodeLost           24         68d       172.30.0.46    iz25beglnhtz
kube-system                  fluentd-es-v1.22-rj871                  1/1       Running            0          17m       172.30.0.1     yypdmaster
kube-system                  fluentd-es-v1.22-xn77x                  1/1       NodeLost           0          6d        172.30.192.0   iz2ze39jeyizepdxhwqci6z
kube-system                  fluentd-es-v1.22-z82rz                  1/1       Running            0          18m       172.30.192.5   yypdnode
kube-system                  kibana-logging-3746979809-dplzv         1/1       Running            0          12m       172.30.0.4     yypdmaster
kube-system                  kibana-logging-3746979809-lq9m3         1/1       Unknown            9          35d       172.30.0.49    iz25beglnhtz
kube-system                  kube-apiserver-iz25beglnhtz             1/1       Unknown            19         104d      xx.xx.xx.xx   iz25beglnhtz
kube-system                  kube-apiserver-yypdmaster               1/1       Running            19         17m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-controller-manager-iz25beglnhtz    1/1       Unknown            21         130d      xx.xx.xx.xx   iz25beglnhtz
kube-system                  kube-controller-manager-yypdmaster      1/1       Running            21         17m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-discovery-1769846148-wh1z4         1/1       Unknown            12         73d       xx.xx.xx.xx   iz25beglnhtz
kube-system                  kube-discovery-1769846148-z2v87         0/1       Pending            0          12m       <none>
kube-system                  kube-dns-2924299975-206tg               4/4       Unknown            129        130d      172.30.0.48    iz25beglnhtz
kube-system                  kube-dns-2924299975-g1kks               4/4       Running            0          12m       172.30.0.5     yypdmaster
kube-system                  kube-proxy-3z29k                        1/1       Running            0          18m       yy.yy.yy.yy    yypdnode
kube-system                  kube-proxy-kfzxv                        1/1       Running            0          17m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-proxy-n2xmf                        1/1       NodeLost           16         130d      xx.xx.xx.xx   iz25beglnhtz

Looking at this output, a few anomalies stand out:

  • unusual Pod statuses: Unknown and NodeLost;
  • the NODE column shows four nodes: yypdmaster, yypdnode, iz25beglnhtz, and iz2ze39jeyizepdxhwqci6z.

After waiting a while with no improvement, I restarted kubelet on the master and the docker engine on both nodes, but the problem persisted.

Looking at the pods in the Running state:

# kubectl get pod --all-namespaces -o wide|grep Running
kube-system                  dummy-2088944543-9382n                  1/1       Running            0          18m       xx.xx.xx.xx   yypdmaster
kube-system                  elasticsearch-logging-v1-dhl35          1/1       Running            0          18m       172.30.192.6   yypdnode
kube-system                  elasticsearch-logging-v1-zdp19          1/1       Running            0          18m       172.30.0.3     yypdmaster
kube-system                  etcd-yypdmaster                         1/1       Running            17         23m       xx.xx.xx.xx   yypdmaster
kube-system                  fluentd-es-v1.22-rj871                  1/1       Running            0          23m       172.30.0.1     yypdmaster
kube-system                  fluentd-es-v1.22-z82rz                  1/1       Running            0          24m       172.30.192.5   yypdnode
kube-system                  kibana-logging-3746979809-dplzv         1/1       Running            0          18m       172.30.0.4     yypdmaster
kube-system                  kube-apiserver-yypdmaster               1/1       Running            19         23m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-controller-manager-yypdmaster      1/1       Running            21         23m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-dns-2924299975-g1kks               4/4       Running            0          18m       172.30.0.5     yypdmaster
kube-system                  kube-proxy-3z29k                        1/1       Running            0          24m       yy.yy.yy.yy    yypdnode
kube-system                  kube-proxy-kfzxv                        1/1       Running            0          23m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-scheduler-yypdmaster               1/1       Running            22         23m       xx.xx.xx.xx   yypdmaster
kube-system                  kubernetes-dashboard-3109525988-cj74d   1/1       Running            0          18m       172.30.0.6     yypdmaster
mioss-namespace-s0fcvegcmw   console-sm7cg2-101699315-f3g55          1/1       Running            0          18m       172.30.0.7     yypdmaster

The Kubernetes cluster did not seem to have truly "crashed", but judging from the NODE column, every normally running pod belongs to either yypdmaster or yypdnode, while iz25beglnhtz and iz2ze39jeyize
