Building a Highly Available Kubernetes Cluster with Kubeadm, Step by Step - Part 1

The core of a Kubernetes cluster is its master node, but by default there is only one master node. Once that master node runs into trouble, the Kubernetes cluster is effectively "paralyzed": cluster management, Pod scheduling and so on can no longer be carried out, even though some user Pods may keep running. This obviously does not meet the requirements for a Kubernetes cluster running in production; we need a highly available Kubernetes cluster.

However, official Kubernetes support for building high-availability clusters is still quite limited; only rough deployment methods are offered for a handful of cloud providers, such as using the kube-up.sh script on GCE or using kops on AWS.

A highly available Kubernetes cluster is an inevitable direction of Kubernetes' evolution. The official document "Building High-Availability Clusters" gives a rough outline of how to build an HA cluster today, Kubeadm has put HA on the milestone plan for later releases, and a draft proposal for deploying an HA cluster with kubeadm has already been published.

Before kubeadm can truly bootstrap an HA Kubernetes cluster automatically, how should we go about building one ourselves? This article explores, step by step, an approach to and the concrete procedure for building an HA k8s cluster. Note, however, that the HA k8s cluster built here has only been tested in the lab and has not yet run in production, so the approach may have flaws in details not yet discovered.

I. Test Environment

High availability for a Kubernetes cluster mainly means high availability for the master nodes, so we requested three Alibaba Cloud ECS instances in the US West region to serve as three master nodes, and used hostnamectl to change their static hostnames to shaolin, wudang and emei (one command per host, as sketched below):

shaolin: 10.27.53.32
wudang: 10.24.138.208
emei: 10.27.52.72
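
Each rename is a single hostnamectl call, run on the corresponding node:

# hostnamectl --static set-hostname shaolin
# hostnamectl --static set-hostname wudang
# hostnamectl --static set-hostname emei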

All three hosts run Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-63-generic x86_64), and the root user is used throughout.

The Docker version is as follows:

root@shaolin:~# docker version
Client:
 Version:      17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:14:09 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.1-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:14:09 2017
 OS/Arch:      linux/amd64
 Experimental: false

The installation steps for Docker CE on Ubuntu can be found here. Since my servers are in the US West, there is no "Great Firewall" problem; if your hosts are in mainland China, decide whether you need a registry mirror/accelerator based on whether errors show up during installation. Also note that the Docker version used here is rather new; the version most often mentioned on the Kubernetes site, and the best-tested one, is still Docker 1.12.x, and you can install that version instead.
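
For reference, a minimal sketch of installing Docker CE on Ubuntu 16.04 from Docker's own apt repository (the repository URL and package name below follow Docker's standard installation guide and are not taken from this article, so adjust as needed):

# apt-get update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# apt-get update && apt-get install -y docker-ce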

II. The Approach to Master Node High Availability

From exploring the single-master setup, we know that the following Kubernetes components run on the master node:

  • kube-apiserver: the core of the cluster, the cluster API endpoint and the communication hub for all cluster components; also responsible for cluster security control;
  • etcd: the data center of the cluster;
  • kube-scheduler: the Pod scheduling center of the cluster;
  • kube-controller-manager: the cluster state manager. When the actual cluster state differs from the desired state, kcm works to bring the cluster back to the desired state; for example, when a pod dies, kcm creates a new pod to restore the state expected by the corresponding replica set;
  • kubelet: the kubernetes node agent, responsible for talking to the docker engine on its node;
  • kube-proxy: one per node, responsible for forwarding traffic from a service VIP to the endpoint pods, currently implemented mainly by setting iptables rules.

The high availability of a Kubernetes cluster is the high availability of its master nodes, and that in turn boils down to the high availability of the components listed above that run on the master node. So our task is to work out how to make each of these components highly available. Judging from the official Kubernetes material and a few proposal drafts, building everything from scratch "the hard way" does not seem wise; converting a k8s cluster created by kubeadm into an HA k8s cluster looks more feasible. Here is my plan:

[figure: the target architecture of the HA k8s cluster]

As mentioned above, the idea is to start from a kubernetes cluster bootstrapped by kubeadm and gradually modify or replace its configuration until it becomes an HA k8s cluster. The figure above shows the final picture of the HA cluster, in which:

  • kube-apiserver: thanks to the apiserver being stateless, the apiserver on every master node is active and handles traffic distributed to it by the Load Balancer;
  • etcd: the centralized store for cluster state. The etcd instances on the master nodes form an etcd cluster, so that the apiservers share the cluster state and data;
  • kube-controller-manager: kcm has built-in leader election; the kcm instances on the masters form a group, but only the one elected leader actually works. The kcm on each master node connects to the apiserver on the same node;
  • kube-scheduler: the scheduler also has built-in leader election; the scheduler instances on the masters form a group, but only the elected leader actually works. The scheduler on each master node connects to the apiserver on the same node;
  • kubelet: since the master components all run as containers, the kubelet on a master node that carries no workload mostly just manages these master component containers. The kubelet on each master node connects to the apiserver on the same node;
  • kube-proxy: since the master nodes carry no workload, kube-proxy on a master only serves a few special services such as kube-dns. Because kubeadm does not expose any externally tunable configuration for kube-proxy, kube-proxy has to connect to the apiserver port exposed by the Load Balancer.

Next, we follow this plan step by step, transforming the single-master k8s cluster bootstrapped by kubeadm and gradually evolving it into the HA cluster we want.

III. Step 1: Install a single-master k8s cluster with kubeadm

Quite some time has passed since I first installed a kubernetes 1.5.1 cluster with kubeadm, and both kubernetes and kubeadm have changed since then. The latest releases of kubernetes and kubeadm are currently 1.6.2:

root@wudang:~# kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

root@wudang:~# docker images
REPOSITORY                                               TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/kube-proxy-amd64                v1.6.2              7a1b61b8f5d4        3 weeks ago         109 MB
gcr.io/google_containers/kube-controller-manager-amd64   v1.6.2              c7ad09fe3b82        3 weeks ago         133 MB
gcr.io/google_containers/kube-apiserver-amd64            v1.6.2              e14b1d5ee474        3 weeks ago         151 MB
gcr.io/google_containers/kube-scheduler-amd64            v1.6.2              b55f2a2481b9        3 weeks ago         76.8 MB
... ...

Although kubeadm has a newer version, the installation process has not changed much. Only the key steps are listed here; some of the detailed output is omitted.

We first install the required packages on the shaolin node:

root@shaolin:~# apt-get update && apt-get install -y apt-transport-https

root@shaolin:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK

root@shaolin:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb http://apt.kubernetes.io/ kubernetes-xenial main
> EOF

root@shaolin:~# apt-get update

root@shaolin:~# apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Next, bootstrap the cluster with kubeadm. Note: since the flannel network plugin never worked well on Aliyun, we use the weave network here instead.

root@shaolin:~/k8s-install# kubeadm init --apiserver-advertise-address 10.27.53.32
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.2
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.03.1-ce. Max validated version: 1.12
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [shaolin kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.27.53.32]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 17.045449 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 5.008588 seconds
[token] Using token: a8dd42.afdb86eda4a8c987
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token abcdefghijklmn 10.27.53.32:6443

root@shaolin:~/k8s-install# pods
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE       IP            NODE
kube-system   etcd-shaolin                      1/1       Running   0          34s       10.27.53.32   shaolin
kube-system   kube-apiserver-shaolin            1/1       Running   0          35s       10.27.53.32   shaolin
kube-system   kube-controller-manager-shaolin   1/1       Running   0          23s       10.27.53.32   shaolin
kube-system   kube-dns-3913472980-tkr91         0/3       Pending   0          1m        <none>
kube-system   kube-proxy-bzvvk                  1/1       Running   0          1m        10.27.53.32   shaolin
kube-system   kube-scheduler-shaolin            1/1       Running   0          46s       10.27.53.32   shaolin

Installing the weave network for k8s 1.6.2 is slightly different from before, because k8s 1.6 enables a more secure mechanism and by default uses RBAC to grant limited permissions to workloads running on the cluster. The weave network plugin yaml we are going to use is weave-daemonset-k8s-1.6.yaml:

root@shaolin:~/k8s-install# kubectl apply -f https://git.io/weave-kube-1.6
clusterrole "weave-net" created
serviceaccount "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created

If your weave pod fails to start and the reason looks like the following log:

Network 172.30.0.0/16 overlaps with existing route 172.16.0.0/12 on host.

you need to change the IPALLOC_RANGE of your weave network (I used 172.32.0.0/16 here):

//weave-daemonset-k8s-1.6.yaml
... ...
spec:
  template:
    metadata:
      labels:
        name: weave-net
    spec:
      hostNetwork: true
      hostPID: true
      containers:
        - name: weave
          env:
            - name: IPALLOC_RANGE
              value: 172.32.0.0/16
... ...
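
A minimal sketch of applying the modified manifest, assuming it has been saved locally as the weave-daemonset-k8s-1.6.yaml mentioned above:

root@shaolin:~/k8s-install# curl -L https://git.io/weave-kube-1.6 -o weave-daemonset-k8s-1.6.yaml
root@shaolin:~/k8s-install# vi weave-daemonset-k8s-1.6.yaml    # add the IPALLOC_RANGE env entry shown above
root@shaolin:~/k8s-install# kubectl apply -f weave-daemonset-k8s-1.6.yaml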

After the master is up, we join wudang and emei as k8s minion nodes to verify that the cluster setup is correct. This step also installs kubelet and kube-proxy on wudang and emei, both of which can be reused directly in the later "transformation" process.

Take the emei node as an example:

root@emei:~# kubeadm join --token abcdefghijklmn 10.27.53.32:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.03.1-ce. Max validated version: 1.12
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "10.27.53.32:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.27.53.32:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://10.27.53.32:6443"
[discovery] Successfully established connection with API Server "10.27.53.32:6443"
[bootstrap] Detected server version: v1.6.2
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

Create an nginx service with multiple pods to verify that the cluster network works; the details are not repeated here.
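
For completeness, a minimal sketch of such a test (the deployment/service names and the plain nginx image below are illustrative, not from the original setup):

root@shaolin:~# kubectl run my-nginx --image=nginx --replicas=2 --port=80
root@shaolin:~# kubectl expose deployment my-nginx --type=NodePort --port=80
root@shaolin:~# kubectl get pods -o wide    # the pods should land on the minion nodes
root@shaolin:~# curl http://<node-ip>:<node-port>    # should return the nginx welcome page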

After installation, the single-master kubernetes cluster looks like this:

[figure: the single-master kubernetes cluster after installation]

IV. Step 2: Build an etcd cluster for the HA k8s cluster

The state and data of a k8s cluster are stored in etcd, so a highly available k8s cluster cannot do without a highly available etcd cluster. We need to provide an HA etcd cluster for the final HA k8s cluster. How?

In the current k8s cluster, the etcd on the shaolin master node holds all of the cluster's data and state. We need to bring up etcd instances on wudang and emei as well, and have them form, together with the existing etcd, a highly available cluster that holds the cluster data and state. We break this down into a few smaller steps:

0. Start the kubelet service on the emei and wudang nodes

The etcd cluster could be built in a completely standalone way, unrelated to any k8s component. Here, however, I use the same approach as on the master: the etcd instances started by the kubelet on wudang and emei become the two members of the etcd cluster. Right now wudang and emei play the role of k8s minion nodes, so we first need to clean up the data on these two nodes:

root@shaolin:~/k8s-install # kubectl drain wudang --delete-local-data --force --ignore-daemonsets
node "wudang" cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-proxy-mxwp3, weave-net-03jbh; Deleting pods with local storage: weave-net-03jbh
pod "my-nginx-2267614806-fqzph" evicted
node "wudang" drained

root@wudang:~# kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml", assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

root@shaolin:~/k8s-install # kubectl drain emei --delete-local-data --force --ignore-daemonsets
root@emei:~# kubeadm reset

root@shaolin:~/k8s-install# kubectl delete node/wudang
root@shaolin:~/k8s-install# kubectl delete node/emei

In our plan, the etcd cluster is started automatically by the kubelet on each node, while the kubelet itself is started by systemd at system init with the following configuration:

root@wudang:~# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_EXTRA_ARGS

We first need to get kubelet running on the wudang and emei nodes, using wudang as the example:

root@wudang:~# systemctl enable kubelet
root@wudang:~# systemctl start kubelet

Check the kubelet service log:

root@wudang:~# journalctl -u kubelet -f

May 10 10:58:41 wudang systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 10 10:58:41 wudang kubelet[27179]: I0510 10:58:41.798507   27179 feature_gate.go:144] feature gates: map[]
May 10 10:58:41 wudang kubelet[27179]: error: failed to run Kubelet: invalid kubeconfig: stat /etc/kubernetes/kubelet.conf: no such file or directory
May 10 10:58:41 wudang systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 10:58:41 wudang systemd[1]: kubelet.service: Unit entered failed state.
May 10 10:58:41 wudang systemd[1]: kubelet.service: Failed with result 'exit-code'.

kubelet fails to start because the configuration file /etc/kubernetes/kubelet.conf is missing. We have to borrow it from the shaolin node: copy the file of the same name from shaolin to wudang and emei, together with shaolin's /etc/kubernetes/pki directory.
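
A minimal sketch of the copy, run from the shaolin node (IPs as listed in the test environment above):

root@shaolin:~# scp /etc/kubernetes/kubelet.conf root@10.24.138.208:/etc/kubernetes/
root@shaolin:~# scp -r /etc/kubernetes/pki root@10.24.138.208:/etc/kubernetes/
root@shaolin:~# scp /etc/kubernetes/kubelet.conf root@10.27.52.72:/etc/kubernetes/
root@shaolin:~# scp -r /etc/kubernetes/pki root@10.27.52.72:/etc/kubernetes/

After the copy, wudang (and likewise emei) has the following kubeconfig and pki contents: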

root@wudang:~# kubectl --kubeconfig=/etc/kubernetes/kubelet.conf config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.27.53.32:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:shaolin
  name: system:node:shaolin@kubernetes
current-context: system:node:shaolin@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:shaolin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

root@wudang:~# ls /etc/kubernetes/pki
apiserver.crt  apiserver-kubelet-client.crt  ca.crt  ca.srl              front-proxy-ca.key      front-proxy-client.key  sa.pub
apiserver.key  apiserver-kubelet-client.key ca.key  front-proxy-ca.crt  front-proxy-client.crt  sa.key

After running systemctl daemon-reload; systemctl restart kubelet and checking the kubelet service log again, you will find that kubelet is up!

Again using the wudang node as an example:

root@wudang:~# journalctl -u kubelet -f
-- Logs begin at Mon 2017-05-08 15:12:01 CST. --
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.213529   26907 factory.go:54] Registering systemd factory
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.213674   26907 factory.go:86] Registering Raw factory
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.213813   26907 manager.go:1106] Started watching for new ooms in manager
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.216383   26907 oomparser.go:185] oomparser using systemd
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.217415   26907 manager.go:288] Starting recovery of all containers
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.285428   26907 manager.go:293] Recovery completed
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.344425   26907 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
May 11 10:37:07 wudang kubelet[26907]: E0511 10:37:07.356188   26907 eviction_manager.go:214] eviction manager: unexpected err: failed GetNode: node 'wudang' not found
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.358402   26907 kubelet_node_status.go:77] Attempting to register node wudang
May 11 10:37:07 wudang kubelet[26907]: I0511 10:37:07.363083   26907 kubelet_node_status.go:80] Successfully registered node wudang

For the moment, we let the kubelets on wudang and emei keep talking to the apiserver on the shaolin node.

1. Build an etcd cluster on the emei and wudang nodes

Using /etc/kubernetes/manifests/etcd.yaml on the shaolin node as the blueprint, we write the etcd.yaml for wudang and emei. The main changes are in the containers' command section:

/etc/kubernetes/manifests/etcd.yaml on wudang:

spec:
  containers:
  - command:
    - etcd
    - --name=etcd-wudang
    - --initial-advertise-peer-urls=http://10.24.138.208:2380
    - --listen-peer-urls=http://10.24.138.208:2380
    - --listen-client-urls=http://10.24.138.208:2379,http://127.0.0.1:2379
    - --advertise-client-urls=http://10.24.138.208:2379
    - --initial-cluster-token=etcd-cluster
    - --initial-cluster=etcd-wudang=http://10.24.138.208:2380,etcd-emei=http://10.27.52.72:2380
    - --initial-cluster-state=new
    - --data-dir=/var/lib/etcd
    image: gcr.io/google_containers/etcd-amd64:3.0.17

/etc/kubernetes/manifests/etcd.yaml on emei:

spec:
  containers:
  - command:
    - etcd
    - --name=etcd-emei
    - --initial-advertise-peer-urls=http://10.27.52.72:2380
    - --listen-peer-urls=http://10.27.52.72:2380
    - --listen-client-urls=http://10.27.52.72:2379,http://127.0.0.1:2379
    - --advertise-client-urls=http://10.27.52.72:2379
    - --initial-cluster-token=etcd-cluster
    - --initial-cluster=etcd-emei=http://10.27.52.72:2380,etcd-wudang=http://10.24.138.208:2380
    - --initial-cluster-state=new
    - --data-dir=/var/lib/etcd
    image: gcr.io/google_containers/etcd-amd64:3.0.17

After placing these two files into /etc/kubernetes/manifests on their respective nodes, the kubelet on each node automatically starts the corresponding etcd pod!

root@shaolin:~# pods
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   etcd-emei                         1/1       Running   0          11s       10.27.52.72     emei
kube-system   etcd-shaolin                      1/1       Running   0          25m       10.27.53.32     shaolin
kube-system   etcd-wudang                       1/1       Running   0          24s       10.24.138.208   wudang

Let's check the current state of the etcd cluster:

# etcdctl endpoint status --endpoints=10.27.52.72:2379,10.24.138.208:2379
10.27.52.72:2379, 6e80adf8cd57f826, 3.0.17, 25 kB, false, 17, 660
10.24.138.208:2379, f3805d1ab19c110b, 3.0.17, 25 kB, true, 17, 660

Note: the output columns are, from left to right: endpoint URL, ID, version, database size, leadership status, raft term, and raft index.
From this we can see that the etcd on wudang (10.24.138.208) has been elected cluster leader.

Let's exercise the etcd cluster by putting a few keys:

On the wudang node (note: export ETCDCTL_API=3 first):

root@wudang:~# etcdctl put foo bar
OK
root@wudang:~# etcdctl put foo1 bar1
OK
root@wudang:~# etcdctl get foo
foo
bar

On the emei node:

root@emei:~# etcdctl get foo
foo
bar

At this point, the state of the kubernetes cluster looks like the following diagram:

[figure: cluster state after the two-member etcd cluster is running on wudang and emei]

2. Mirror the etcd data from shaolin into the etcd cluster

kubernetes 1.6.2 uses etcd 3.x by default, and etcdctl 3.x provides a make-mirror feature for mirroring data between etcd clusters. We can therefore use etcdctl make-mirror to copy the k8s cluster data from the etcd on shaolin into the newly created etcd cluster. Run the following command on the emei node:

root@emei:~# etcdctl make-mirror --no-dest-prefix=true  127.0.0.1:2379  --endpoints=10.27.53.32:2379 --insecure-skip-tls-verify=true
... ...
261
302
341
380
420
459
498
537
577
616
655

... ...

etcdctl make-mirror prints a log line every 30 seconds, but these logs do not tell you how far the sync has progressed. Moreover, etcdctl make-mirror appears to be a streaming, continuous sync with no defined end point, so you have to judge for yourself whether all the data has been copied, for example by reading some key and comparing the two sides:

# etcdctl get --from-key /api/v2/registry/clusterrolebindings/cluster-admin

.. ..
compact_rev_key
122912

Alternatively, compare the database sizes reported by the endpoint status command on both sides; once they are roughly equal, you can stop make-mirror.
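
A minimal sketch of that comparison (ETCDCTL_API=3 assumed, endpoints as used above):

root@emei:~# etcdctl endpoint status --endpoints=10.27.53.32:2379    # the source etcd on shaolin
root@emei:~# etcdctl endpoint status --endpoints=10.27.52.72:2379,10.24.138.208:2379    # the destination etcd cluster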

3. Point the apiserver on shaolin at the etcd cluster, then stop and remove the etcd on shaolin

Modify /etc/kubernetes/manifests/kube-apiserver.yaml on the shaolin node so that the kube-apiserver on shaolin connects to the etcd on the emei node:

Modify the following line so that it reads:
- --etcd-servers=http://10.27.52.72:2379

After saving the change, the kubelet automatically restarts kube-apiserver, and the restarted kube-apiserver works fine.

Next, we stop and remove the etcd on shaolin (and delete its data directory):

root@shaolin:~# rm /etc/kubernetes/manifests/etcd.yaml
root@shaolin:~# rm -fr /var/lib/etcd

List the cluster pods again and you will see that etcd-shaolin is gone.

At this point the k8s cluster looks like this:

[figure: cluster state after the apiserver on shaolin switches to the new etcd cluster and the local etcd is removed]

4. Recreate the etcd on shaolin and join it to the etcd cluster as a member

First, we add the etcd-shaolin member to the existing etcd cluster:

root@wudang:~/kubernetes-conf-shaolin/manifests# etcdctl member add etcd-shaolin --peer-urls=http://10.27.53.32:2380
Member 3184cfa57d8ef00c added to cluster 140cec6dd173ab61

Then, on the shaolin node, modify the original etcd.yaml as follows:

// /etc/kubernetes/manifests/etcd.yaml
... ...
spec:
  containers:
  - command:
    - etcd
    - --name=etcd-shaolin
    - --initial-advertise-peer-urls=http://10.27.53.32:2380
    - --listen-peer-urls=http://10.27.53.32:2380
    - --listen-client-urls=http://10.27.53.32:2379,http://127.0.0.1:2379
    - --advertise-client-urls=http://10.27.53.32:2379
    - --initial-cluster-token=etcd-cluster
    - --initial-cluster=etcd-shaolin=http://10.27.53.32:2380,etcd-wudang=http://10.24.138.208:2380,etcd-emei=http://10.27.52.72:2380
    - --initial-cluster-state=existing
    - --data-dir=/var/lib/etcd
    image: gcr.io/google_containers/etcd-amd64:3.0.17

After saving the change, the kubelet automatically brings etcd-shaolin back up:

root@shaolin:~/k8s-install# pods
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   etcd-emei                         1/1       Running   0          3h        10.27.52.72     emei
kube-system   etcd-shaolin                      1/1       Running   0          8s        10.27.53.32     shaolin
kube-system   etcd-wudang                       1/1       Running   0          3h        10.24.138.208   wudang

Check the etcd cluster status:

root@shaolin:~# etcdctl endpoint status --endpoints=10.27.52.72:2379,10.24.138.208:2379,10.27.53.32:2379
10.27.52.72:2379, 6e80adf8cd57f826, 3.0.17, 11 MB, false, 17, 34941
10.24.138.208:2379, f3805d1ab19c110b, 3.0.17, 11 MB, true, 17, 34941
10.27.53.32:2379, 3184cfa57d8ef00c, 3.0.17, 11 MB, false, 17, 34941

We can see that the database size and raft status of the three etcd instances are consistent, and the etcd on the wudang node is the leader.

5. Point the apiserver's etcd-servers on shaolin back to etcd-shaolin

// /etc/kubernetes/manifests/kube-apiserver.yaml

... ...
- --etcd-servers=http://127.0.0.1:2379
... ...

Once the change takes effect and the apiserver restarts, the kubernetes cluster looks like the following diagram:

[figure: cluster state at the end of step 2, with a three-member etcd cluster across shaolin, wudang and emei]

Part 2 is here.

An Anomaly in a Kubernetes Cluster Caused by Changing Node Hostnames

Besides the Kubernetes 1.3.7 cluster used in production, I also keep a Kubernetes 1.5.1 test environment, used both to validate various technical solutions and to keep track of the latest Kubernetes developments. The anomaly recorded in this post happened in that test cluster.

I. Background

A couple of days ago I set up Ceph in the Kubernetes test environment. To make installation via ceph-deploy easier, I used the hostnamectl command to change the long, unwieldy default hostnames provided by Alibaba Cloud into short, more meaningful ones:

iZ25beglnhtZ -> yypdmaster
iz2ze39jeyizepdxhwqci6z -> yypdnode

Taking yypdmaster as an example, the change went like this:

# hostnamectl --static set-hostname yypdmaster
# hostnamectl status
Static hostname: yypdmaster
Transient hostname: iZ25beglnhtZ
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 91aa4b8f2556de49e743dc2f53e8a5c4
           Boot ID: 5d0e642ebafa460086388da4177e488e
    Virtualization: kvm
  Operating System: Ubuntu 16.04.1 LTS
            Kernel: Linux 4.4.0-58-generic
      Architecture: x86-64

# cat /etc/hostname
yypdmaster

hostnamectl does not touch /etc/hosts, so I manually added the IP for yypdmaster there:

xx.xx.xx.xx yypdmaster

After logging in again, the Transient hostname is gone from the status output and only the static hostname remains:

# hostnamectl status
   Static hostname: yypdmaster
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 91aa4b8f2556de49e743dc2f53e8a5c4
           Boot ID: 5d0e642ebafa460086388da4177e488e
    Virtualization: kvm
  Operating System: Ubuntu 16.04.1 LTS
            Kernel: Linux 4.4.0-58-generic
      Architecture: x86-64

The other host was changed in the same way. After the hostname change the whole k8s cluster kept working normally, so at first I assumed the change had no impact on the cluster.

II. The Cluster "Crash"

Yesterday, while testing cross-node CephFS mounting, I found that kubectl exec from yypdmaster into a pod on the other node no longer worked, failing with a timeout connecting to port 10250. Judging from the error log, the k8s components on yypdmaster were trying to reach port 10250 on yypdnode, the port its kubelet listens on, via yypdnode's public IP. Because of the Aliyun security group rules this port cannot be reached from the public network, so the timeout itself is reasonable. But why had the cluster been fine before, why did this suddenly appear, and why wasn't the internal IP being used?

I tried restarting the kubelet on yypdnode, with no apparent effect. While I was still puzzled, I noticed that the cluster seemed to have "crashed"; here is the pod listing at that moment:

# kubectl get pod --all-namespaces -o wide

NAMESPACE                    NAME                                    READY     STATUS             RESTARTS   AGE       IP             NODE
default                      ceph-pod2                               1/1       Unknown            0          26m       172.30.192.4   iz2ze39jeyizepdxhwqci6z
default                      ceph-pod2-with-secret                   1/1       Unknown            0          38m       172.30.192.2   iz2ze39jeyizepdxhwqci6z
default                      ceph-pod2-with-secret-on-master         1/1       Unknown            0          34m       172.30.0.51    iz25beglnhtz
default                      nginx-kit-3630450072-2c0jk              0/2       Pending            0          12m       <none>
default                      nginx-kit-3630450072-3n50m              2/2       Unknown            20         35d       172.30.0.44    iz25beglnhtz
default                      nginx-kit-3630450072-90v4q              0/2       Pending            0          12m       <none>
default                      nginx-kit-3630450072-j8qrk              2/2       Unknown            20         72d       172.30.0.47    iz25beglnhtz
kube-system                  dummy-2088944543-9382n                  1/1       Running            0          12m       xx.xx.xx.xx   yypdmaster
kube-system                  dummy-2088944543-93f4c                  1/1       Unknown            16         130d      xx.xx.xx.xx   iz25beglnhtz
kube-system                  elasticsearch-logging-v1-dhl35          1/1       Running            0          12m       172.30.192.6   yypdnode
kube-system                  elasticsearch-logging-v1-s3sbj          1/1       Unknown            9          35d       172.30.0.45    iz25beglnhtz
kube-system                  elasticsearch-logging-v1-t8wg0          1/1       Unknown            29         68d       172.30.0.43    iz25beglnhtz
kube-system                  elasticsearch-logging-v1-zdp19          1/1       Running            0          12m       172.30.0.3     yypdmaster
kube-system                  etcd-iz25beglnhtz                       1/1       Unknown            17         130d      xx.xx.xx.xx   iz25beglnhtz
kube-system                  etcd-yypdmaster                         1/1       Running            17         17m       xx.xx.xx.xx   yypdmaster
kube-system                  fluentd-es-v1.22-ggvv4                  1/1       NodeLost           24         68d       172.30.0.46    iz25beglnhtz
kube-system                  fluentd-es-v1.22-rj871                  1/1       Running            0          17m       172.30.0.1     yypdmaster
kube-system                  fluentd-es-v1.22-xn77x                  1/1       NodeLost           0          6d        172.30.192.0   iz2ze39jeyizepdxhwqci6z
kube-system                  fluentd-es-v1.22-z82rz                  1/1       Running            0          18m       172.30.192.5   yypdnode
kube-system                  kibana-logging-3746979809-dplzv         1/1       Running            0          12m       172.30.0.4     yypdmaster
kube-system                  kibana-logging-3746979809-lq9m3         1/1       Unknown            9          35d       172.30.0.49    iz25beglnhtz
kube-system                  kube-apiserver-iz25beglnhtz             1/1       Unknown            19         104d      xx.xx.xx.xx   iz25beglnhtz
kube-system                  kube-apiserver-yypdmaster               1/1       Running            19         17m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-controller-manager-iz25beglnhtz    1/1       Unknown            21         130d      xx.xx.xx.xx   iz25beglnhtz
kube-system                  kube-controller-manager-yypdmaster      1/1       Running            21         17m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-discovery-1769846148-wh1z4         1/1       Unknown            12         73d       xx.xx.xx.xx   iz25beglnhtz
kube-system                  kube-discovery-1769846148-z2v87         0/1       Pending            0          12m       <none>
kube-system                  kube-dns-2924299975-206tg               4/4       Unknown            129        130d      172.30.0.48    iz25beglnhtz
kube-system                  kube-dns-2924299975-g1kks               4/4       Running            0          12m       172.30.0.5     yypdmaster
kube-system                  kube-proxy-3z29k                        1/1       Running            0          18m       yy.yy.yy.yy    yypdnode
kube-system                  kube-proxy-kfzxv                        1/1       Running            0          17m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-proxy-n2xmf                        1/1       NodeLost           16         130d      xx.xx.xx.xx   iz25beglnhtz

Looking at this output, several things stand out:

  • Unusual pod states: Unknown and NodeLost;
  • The Node column shows four nodes: yypdmaster, yypdnode, iz25beglnhtz and iz2ze39jeyizepdxhwqci6z.

After waiting a while with no improvement, I restarted the kubelet on the master and the docker engines on both nodes, but the problem remained.

Looking at the pods in Running state:

# kubectl get pod --all-namespaces -o wide|grep Running
kube-system                  dummy-2088944543-9382n                  1/1       Running            0          18m       xx.xx.xx.xx   yypdmaster
kube-system                  elasticsearch-logging-v1-dhl35          1/1       Running            0          18m       172.30.192.6   yypdnode
kube-system                  elasticsearch-logging-v1-zdp19          1/1       Running            0          18m       172.30.0.3     yypdmaster
kube-system                  etcd-yypdmaster                         1/1       Running            17         23m       xx.xx.xx.xx   yypdmaster
kube-system                  fluentd-es-v1.22-rj871                  1/1       Running            0          23m       172.30.0.1     yypdmaster
kube-system                  fluentd-es-v1.22-z82rz                  1/1       Running            0          24m       172.30.192.5   yypdnode
kube-system                  kibana-logging-3746979809-dplzv         1/1       Running            0          18m       172.30.0.4     yypdmaster
kube-system                  kube-apiserver-yypdmaster               1/1       Running            19         23m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-controller-manager-yypdmaster      1/1       Running            21         23m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-dns-2924299975-g1kks               4/4       Running            0          18m       172.30.0.5     yypdmaster
kube-system                  kube-proxy-3z29k                        1/1       Running            0          24m       yy.yy.yy.yy    yypdnode
kube-system                  kube-proxy-kfzxv                        1/1       Running            0          23m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-scheduler-yypdmaster               1/1       Running            22         23m       xx.xx.xx.xx   yypdmaster
kube-system                  kubernetes-dashboard-3109525988-cj74d   1/1       Running            0          18m       172.30.0.6     yypdmaster
mioss-namespace-s0fcvegcmw   console-sm7cg2-101699315-f3g55          1/1       Running            0          18m       172.30.0.7     yypdmaster

It seems the Kubernetes cluster had not really "crashed". Looking at the Node column, though, the healthy pods all belong to either yypdmaster or yypdnode, while iz25beglnhtz and iz2ze39jeyize…

Mounting CephFS Across Nodes in a Kubernetes Cluster

Running stateful services or applications in a Kubernetes cluster is never easy. For example, I used CephRBD in a project before, and despite a few hiccups it generally ran well. Recently, however, I found that CephRBD cannot satisfy the requirement for cross-node mounting, so I had to look elsewhere. Since CephFS and CephRBD come from the same family, CephFS naturally became the first candidate to evaluate. This post records the evaluation of cross-node CephFS mounting, partly as a memo and partly as reference material for anyone with similar needs.

I. The Problem with CephRBD

First, a few words about the CephRBD problem. The project recently had this requirement: let Pods in the cluster share external distributed storage, i.e. multiple Pods mount the same storage simultaneously, which would greatly simplify the system design and reduce complexity. Previously each CephRBD volume was mounted into a single Pod. Does CephRBD support being mounted by multiple Pods at the same time? The official documentation gives a negative answer: a Persistent Volume backed by CephRBD supports only two access modes, ReadWriteOnce and ReadOnlyMany; ReadWriteMany is not supported. So for Pods that need read-write access, a CephRBD pv can be mounted by only one node at a time.

Let's verify this "unfortunate" fact.

We first create a test image named foo1. Here I use the CephRBD API service written for the project; you can also create the image manually with the ceph/rbd commands:

# curl -v  -H "Content-type: application/json" -X POST -d '{"kind": "Images","apiVersion": "v1", "metadata": {"name": "foo1", "capacity": 512}}' http://192.168.3.22:8080/api/v1/pools/rbd/images
... ...
{
  "errcode": 0,
  "errmsg": "ok"
}

# curl http://192.168.3.22:8080/api/v1/pools/rbd/images
{
  "Kind": "ImagesList",
  "APIVersion": "v1",
  "Items": [
    {
      "name": "foo1"
    }
  ]
}
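
Without such an API service, the same thing can be done by hand; a minimal sketch with the rbd CLI (a 512 MB image in the default rbd pool):

# rbd create foo1 --size 512
# rbd ls
foo1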

Create a pv and a pvc from the following files:

//ceph-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  capacity:
    storage: 512Mi
  accessModes:
    - ReadWriteMany
  rbd:
    monitors:
      - ceph_monitor_ip:port
    pool: rbd
    image: foo1
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle

//ceph-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: foo-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 512Mi
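
Assuming the two files above are saved as ceph-pv.yaml and ceph-pvc.yaml, create them in the usual way:

# kubectl create -f ceph-pv.yaml
persistentvolume "foo-pv" created
# kubectl create -f ceph-pvc.yaml
persistentvolumeclaim "foo-claim" created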

After they are created:

# kubectl get pv
NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                        REASON    AGE
foo-pv              512Mi      RWO           Recycle         Bound     default/foo-claim                      20h

# kubectl get pvc
NAME                 STATUS    VOLUME              CAPACITY   ACCESSMODES   AGE
foo-claim            Bound     foo-pv              512Mi      RWO           20h

Create a Pod that mounts the image above:

// ceph-pod2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod2
spec:
  containers:
  - name: ceph-ubuntu2
    image: ubuntu:14.04
    command: ["tail", "-f", "/var/log/bootstrap.log"]
    volumeMounts:
    - name: ceph-vol2
      mountPath: /mnt/cephrbd/data
      readOnly: false
  volumes:
  - name: ceph-vol2
    persistentVolumeClaim:
      claimName: foo-claim

Once it is up, we can look at the data under the mount point:

# kubectl exec ceph-pod2 ls /mnt/cephrbd/data
1.txt
lost+found

We start another pod on the same kubernetes node (just change the pod name in ceph-pod2.yaml above to ceph-pod3) mounting the same pv:

NAMESPACE                    NAME                                    READY     STATUS    RESTARTS   AGE       IP             NODE
default                      ceph-pod2                               1/1       Running   0          3m        172.16.57.9    xx.xx.xx.xx
default                      ceph-pod3                               1/1       Running   0          0s        172.16.57.10    xx.xx.xx.xx
# kubectl exec ceph-pod3 ls /mnt/cephrbd/data
1.txt
lost+found

We write a file from ceph-pod2 and read it back from ceph-pod3:

# kubectl exec ceph-pod2 -- bash -c "for i in {1..10}; do sleep 1; echo 'pod2: Hello, World'>> /mnt/cephrbd/data/foo.txt ; done "
root@node1:~/k8stest/k8s-cephrbd/footest# kubectl exec ceph-pod3 cat /mnt/cephrbd/data/foo.txt
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World

So far so good: multiple Pods on the same node can mount the same CephRBD image in ReadWrite mode.

Next, we start a Pod on another node that tries to mount the same pv. That Pod stays in Pending state; kubectl describe shows:

Events:
  FirstSeen    LastSeen    Count    From            SubobjectPath    Type        Reason        Message
  ---------    --------    -----    ----            -------------    --------    ------        -------
.. ...
  2m        37s        2    {kubelet yy.yy.yy.yy}            Warning        FailedMount    Unable to mount volumes for pod "ceph-pod2-master_default(a45f62aa-2bc3-11e7-9baa-00163e1625a9)": timeout expired waiting for volumes to attach/mount for pod "ceph-pod2-master"/"default". list of unattached/unmounted volumes=[ceph-vol2]
  2m        37s        2    {kubelet yy.yy.yy.yy}            Warning        FailedSync    Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "ceph-pod2-master"/"default". list of unattached/unmounted volumes=[ceph-vol2]

The error log in kubelet.log:

I0428 11:39:15.737729    1241 reconciler.go:294] MountVolume operation started for volume "kubernetes.io/rbd/a45f62aa-2bc3-11e7-9baa-00163e1625a9-foo-pv" (spec.Name: "foo-pv") to pod "a45f62aa-2bc3-11e7-9baa-00163e1625a9" (UID: "a45f62aa-2bc3-11e7-9baa-00163e1625a9").
I0428 11:39:15.939183    1241 operation_executor.go:768] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/923700ff-12c2-11e7-9baa-00163e1625a9-default-token-40z0x" (spec.Name: "default-token-40z0x") pod "923700ff-12c2-11e7-9baa-00163e1625a9" (UID: "923700ff-12c2-11e7-9baa-00163e1625a9").
E0428 11:39:17.039656    1241 disk_manager.go:56] failed to attach disk
E0428 11:39:17.039722    1241 rbd.go:228] rbd: failed to setup mount /var/lib/kubelet/pods/a45f62aa-2bc3-11e7-9baa-00163e1625a9/volumes/kubernetes.io~rbd/foo-pv rbd: image foo1 is locked by other nodes
E0428 11:39:17.039857    1241 nestedpendingoperations.go:254] Operation for "\"kubernetes.io/rbd/a45f62aa-2bc3-11e7-9baa-00163e1625a9-foo-pv\" (\"a45f62aa-2bc3-11e7-9baa-00163e1625a9\")" failed. No retries permitted until 2017-04-28 11:41:17.039803969 +0800 CST (durationBeforeRetry 2m0s). Error: MountVolume.SetUp failed for volume "kubernetes.io/rbd/a45f62aa-2bc3-11e7-9baa-00163e1625a9-foo-pv" (spec.Name: "foo-pv") pod "a45f62aa-2bc3-11e7-9baa-00163e1625a9" (UID: "a45f62aa-2bc3-11e7-9baa-00163e1625a9") with: rbd: image foo1 is locked by other nodes

Note the line "rbd: image foo1 is locked by other nodes". The experiment confirms that, at present, a CephRBD image can only be mounted by one k8s node at a time.

II. Install mds in the Ceph Cluster to Support CephFS

This time I deployed a fresh Ceph cluster on two Ubuntu 16.04 VMs. The process was much the same as my first Ceph deployment, so I won't repeat it here. For Ceph to support CephFS we need the mds component, and with the earlier groundwork, installing mds via the ceph-deploy tool is very simple:

# ceph-deploy mds create yypdmaster yypdnode
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.37): /usr/bin/ceph-deploy mds create yypdmaster yypdnode
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f60fb5e71b8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7f60fba4e140>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  mds                           : [('yypdmaster', 'yypdmaster'), ('yypdnode', 'yypdnode')]
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts yypdmaster:yypdmaster yypdnode:yypdnode
[yypdmaster][DEBUG ] connected to host: yypdmaster
[yypdmaster][DEBUG ] detect platform information from remote host
[yypdmaster][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to yypdmaster
[yypdmaster][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[yypdmaster][DEBUG ] create path if it doesn't exist
[yypdmaster][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.yypdmaster osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-yypdmaster/keyring
[yypdmaster][INFO  ] Running command: systemctl enable ceph-mds@yypdmaster
[yypdmaster][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@yypdmaster.service to /lib/systemd/system/ceph-mds@.service.
[yypdmaster][INFO  ] Running command: systemctl start ceph-mds@yypdmaster
[yypdmaster][INFO  ] Running command: systemctl enable ceph.target
[yypdnode][DEBUG ] connected to host: yypdnode
[yypdnode][DEBUG ] detect platform information from remote host
[yypdnode][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to yypdnode
[yypdnode][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[yypdnode][DEBUG ] create path if it doesn't exist
[yypdnode][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.yypdnode osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-yypdnode/keyring
[yypdnode][INFO  ] Running command: systemctl enable ceph-mds@yypdnode
[yypdnode][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@yypdnode.service to /lib/systemd/system/ceph-mds@.service.
[yypdnode][INFO  ] Running command: systemctl start ceph-mds@yypdnode
[yypdnode][INFO  ] Running command: systemctl enable ceph.target

Very smooth. After installation, mds can be seen running on either node:

# ps -ef|grep ceph
ceph      7967     1  0 17:23 ?        00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
ceph     15674     1  0 17:32 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id yypdnode --setuser ceph --setgroup ceph
ceph     18019     1  0 17:35 ?        00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id yypdnode --setuser ceph --setgroup ceph

mds stores the metadata of cephfs. My ceph is version 10.2.7:

# ceph -v
ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185)

Although multiple active mds daemons can run in parallel, the official documentation recommends keeping one active mds and leaving the others as standby (see the fsmap line in the cluster status below):

# ceph -s
    cluster ffac3489-d678-4caf-ada2-3dd0743158b6
    ... ...
      fsmap e6: 1/1/1 up {0=yypdnode=up:active}, 1 up:standby
     osdmap e19: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v192498: 576 pgs, 5 pools, 126 MB data, 238 objects
            44365 MB used, 31881 MB / 80374 MB avail
                 576 active+clean

III. Create a Filesystem and Test Mounting

We create a filesystem on ceph:

# ceph osd pool create cephfs_data 128
pool 'cephfs_data' created

# ceph osd pool create cephfs_metadata 128
pool 'cephfs_metadata' created

# ceph fs new test_fs cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1

# ceph fs ls
name: test_fs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

Note that the current stable ceph release supports only a single filesystem; support for multiple filesystems is still an experimental feature:

# ceph osd pool create cephfs1_data 128
# ceph osd pool create cephfs1_metadata 128
# ceph fs new test_fs1 cephfs1_metadata cephfs1_data
Error EINVAL: Creation of multiple filesystems is disabled.  To enable this experimental feature, use 'ceph fs flag set enable_multiple true'

On a physical host, cephfs can be mounted with the mount command, with mount.ceph (apt-get install ceph-fs-common), or with ceph-fuse (apt-get install ceph-fuse). We start with the mount command.

We mount the cephfs created above at /mnt on the host:

# mount -t ceph ceph_mon_host:6789:/ /mnt -o name=admin,secretfile=admin.secret

# cat admin.secret    // the key from ceph.client.admin.keyring
AQDITghZD+c/DhAArOiWWQqyMAkMJbWmHaxjgQ==

Check the cephfs info:

# df -h
ceph_mon_host:6789:/   79G   45G   35G  57% /mnt

As you can see, cephfs presents the full disk space of the two physical nodes as its own capacity.

When mounting via ceph-fuse we can also restrict access to the mounted path. Let's create a user foo that only has read-only access to the /ceph-volume1-test path:

# ceph auth get-or-create client.foo mon 'allow *' mds 'allow r path=/ceph-volume1-test' osd 'allow *'
# ceph-fuse -n client.foo -m 10.47.217.91:6789 /mnt -r /ceph-volume1-test
ceph-fuse[10565]: starting ceph client2017-05-03 16:07:25.958903 7f1a14fbff00 -1 init, newargv = 0x557e350defc0 newargc=11
ceph-fuse[10565]: starting fuse

Look at the mounted path and try to create a file:

# cd /mnt
root@yypdnode:/mnt# ls
1.txt
root@yypdnode:/mnt# touch 2.txt
touch: cannot touch '2.txt': Permission denied

Since the foo user only has read-only permission on /ceph-volume1-test, creating a file fails.

IV. Mounting CephFS Across Nodes in Kubernetes

In k8s, CephFS can be mounted in at least two ways: directly from the Pod spec, or through a pv and pvc. Let's look at both.

1. Mounting CephFS directly from a Pod

//ceph-pod2-with-secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod2-with-secret
spec:
  containers:
  - name: ceph-ubuntu2
    image: ubuntu:14.04
    command: ["tail", "-f", "/var/log/bootstrap.log"]
    volumeMounts:
    - name: ceph-vol2
      mountPath: /mnt/cephfs/data
      readOnly: false
  volumes:
  - name: ceph-vol2
    cephfs:
      monitors:
      - ceph_mon_host:6789
      user: admin
      secretFile: "/etc/ceph/admin.secret"
      readOnly: false

Note: make sure the file /etc/ceph/admin.secret exists on every node.
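
One way to prepare that file, sketched under the assumption that the admin key is taken straight from the ceph admin keyring and then copied to each k8s node:

root@yypdmaster:~# ceph auth get-key client.admin > /etc/ceph/admin.secret
root@yypdmaster:~# scp /etc/ceph/admin.secret root@yypdnode:/etc/ceph/admin.secret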

Check what the Pod has mounted:

# docker ps|grep pod
bc96431408c7        ubuntu:14.04                                                  "tail -f /var/log/boo"   About a minute ago   Up About a minute                                                                        k8s_ceph-ubuntu2.66c44128_ceph-pod2-with-secret_default_3d8a05f8-33c3-11e7-bcd9-6640d35a0e90_fc483b8a
bcc65ab82069        gcr.io/google_containers/pause-amd64:3.0                      "/pause"                 About a minute ago   Up About a minute                                                                        k8s_POD.d8dbe16c_ceph-pod2-with-secret_default_3d8a05f8-33c3-11e7-bcd9-6640d35a0e90_02381204

root@yypdnode:~# docker exec bc96431408c7 ls /mnt/cephfs/data
1.txt
apps
ceph-volume1-test
test1.txt

We then start a Pod mounting the same cephfs on another node, to see whether cross-node mounting works:

# kubectl get pods

NAMESPACE                    NAME                                    READY     STATUS    RESTARTS   AGE       IP             NODE
default                      ceph-pod2-with-secret                   1/1       Running   0          3m        172.30.192.2   iz2ze39jeyizepdxhwqci6z
default                      ceph-pod2-with-secret-on-master         1/1       Running   0          3s        172.30.0.51    iz25beglnhtz
... ...

# kubectl exec ceph-pod2-with-secret-on-master ls /mnt/cephfs/data
1.txt
apps
ceph-volume1-test
test1.txt

As you can see, different nodes can mount the same CephFS. Let's do some writes on the mounted cephfs from one of the pods:

# kubectl exec ceph-pod2-with-secret-on-master -- bash -c "for i in {1..10}; do sleep 1; echo 'pod2-with-secret-on-master: Hello, World'>> /mnt/cephfs/data/foo.txt ; done "
root@yypdmaster:~/k8stest/cephfstest/footest# kubectl exec ceph-pod2-with-secret-on-master cat /mnt/cephfs/data/foo.txt
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World

2. Mounting CephFS via PV and PVC

The pv and pvc for cephfs are written much like the rbd ones above:

//ceph-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  capacity:
    storage: 512Mi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - ceph_mon_host:6789
    path: /
    user: admin
    secretRef:
      name: ceph-secret
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle

//ceph-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: foo-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 512Mi

The pod that uses the pvc:

//ceph-pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod2
spec:
  containers:
  - name: ceph-ubuntu2
    image: ubuntu:14.04
    command: ["tail", "-f", "/var/log/bootstrap.log"]
    volumeMounts:
    - name: ceph-vol2
      mountPath: /mnt/cephfs/data
      readOnly: false
  volumes:
  - name: ceph-vol2
    persistentVolumeClaim:
      claimName: foo-claim

Create the pv and pvc:

# kubectl create -f ceph-pv.yaml
persistentvolume "foo-pv" created
# kubectl create -f ceph-pvc.yaml
persistentvolumeclaim "foo-claim" created

# kubectl get pvc
NAME        STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
foo-claim   Bound     foo-pv    512Mi      RWX           4s
# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM               REASON    AGE
foo-pv    512Mi      RWX           Recycle         Bound     default/foo-claim             24s

Start the pod and inspect the mount via exec:

# docker ps|grep pod
a6895ec0274f        ubuntu:14.04                                                  "tail -f /var/log/boo"   About a minute ago   Up About a minute                                                                        k8s_ceph-ubuntu2.66c44128_ceph-pod2_default_4e4fc8d4-33c6-11e7-bcd9-6640d35a0e90_1b37ed76
52b6811a6584        gcr.io/google_containers/pause-amd64:3.0                      "/pause"                 About a minute ago   Up About a minute                                                                        k8s_POD.d8dbe16c_ceph-pod2_default_4e4fc8d4-33c6-11e7-bcd9-6640d35a0e90_27e5f988
55b96edbf4bf        ubuntu:14.04                                                  "tail -f /var/log/boo"   14 minutes ago       Up 14 minutes                                                                            k8s_ceph-ubuntu2.66c44128_ceph-pod2-with-secret_default_9d383b0c-33c4-11e7-bcd9-6640d35a0e90_1656e5e0
f8b699bc0459        gcr.io/google_containers/pause-amd64:3.0                      "/pause"                 14 minutes ago       Up 14 minutes                                                                            k8s_POD.d8dbe16c_ceph-pod2-with-secret_default_9d383b0c-33c4-11e7-bcd9-6640d35a0e90_effdfae7
root@yypdnode:~# docker exec a6895ec0274f ls /mnt/cephfs/data
1.txt
apps
ceph-volume1-test
foo.txt
test1.txt

# docker exec a6895ec0274f cat /mnt/cephfs/data/foo.txt
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World

V. The PV's Status

As long as you do not delete the pvc, all is well:

# kubectl get pv
NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                        REASON    AGE
foo-pv              512Mi      RWX           Recycle         Bound     default/foo-claim                      1h

# kubectl get pvc
NAME                 STATUS    VOLUME              CAPACITY   ACCESSMODES   AGE
foo-claim            Bound     foo-pv              512Mi      RWX           1h

But if you delete the pvc, the pv's status turns to Failed.

After deleting the pvc:

# kubectl get pv
NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                        REASON    AGE
foo-pv              512Mi      RWX           Recycle         Failed    default/foo-claim                      2h

# kubectl describe pv/foo-pv
Name:        foo-pv
Labels:        <none>
Status:        Failed
Claim:        default/foo-claim
Reclaim Policy:    Recycle
Access Modes:    RWX
Capacity:    512Mi
Message:    No recycler plugin found for the volume!
Source:
    Type:        RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:    [xx.xx.xx.xx:6789]
    RBDImage:        foo1
    FSType:        ext4
    RBDPool:        rbd
    RadosUser:        admin
    Keyring:        /etc/ceph/keyring
    SecretRef:        &{ceph-secret}
    ReadOnly:        false
Events:
  FirstSeen    LastSeen    Count    From                SubobjectPath    Type        Reason            Message
  ---------    --------    -----    ----                -------------    --------    ------            -------
  29s        29s        1    {persistentvolume-controller }            Warning        VolumeFailedRecycle    No recycler plugin found for the volume!

We specified persistentVolumeReclaimPolicy: Recycle in the pv, but neither cephrbd nor cephfs has a corresponding recycler plugin, so the pv's status becomes Failed and it can only be deleted and recreated by hand.
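
A possible way to avoid the Failed state, if manual cleanup is acceptable, is to use the Retain reclaim policy instead of Recycle; a minimal sketch using kubectl patch:

# kubectl patch pv foo-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

With Retain, deleting the pvc leaves the pv in Released state rather than Failed, and it is up to the administrator to decide when to clean it up or reuse the underlying storage.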



