
Installing Kubernetes with kubeadm

In my post "When Docker Meets systemd" I mentioned the task I have been working on over the past couple of days: using kubeadm to install and deploy the latest Kubernetes release, k8s 1.5.1, on Ubuntu 16.04.

In the middle of the year, Docker announced that the swarmkit toolkit would be integrated into the Docker engine, an announcement that caused quite a stir in the lightweight-container world. Developers are lazy, after all ^0^: with docker swarmkit built in, what incentive is left to install yet another container orchestration tool, even if the docker engine is not quite the heavily-used IE browser of its day? In response to this market move by Docker Inc., Kubernetes, the leader in container cluster management and service orchestration, released Kubernetes 1.4.0 three months later. That version introduced the kubeadm tool. kubeadm works a bit like the swarm kit tooling integrated into the docker engine: it aims to improve the developer experience of installing, debugging and using k8s and to lower the barrier to entry. In theory, two commands, init and join, are all it takes to bring up a complete Kubernetes cluster.

However, like swarmkit when it first landed in the docker engine, kubeadm is still under active development and not all that stable. Even in the current latest k8s 1.5.1 release it remains in Alpha, and the official docs advise against using it for production clusters. Every time you run kubeadm init, it prints the following warning:

[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.

That said, the k8s 1.3.7 cluster we deployed earlier has been running well, which gives us the confidence to keep going down the k8s road and do it properly. But deploying and managing k8s really is tedious, so we decided to find out whether kubeadm could deliver a better-than-expected experience. The lessons learned installing kubernetes 1.3.7 on aliyun ubuntu 14.04 gave me a tiny bit of confidence, yet the actual installation was still full of twists and turns. That has as much to do with kubeadm being unstable as with the quality of cni and the third-party network add-ons; a problem on either side will make your install a bumpy ride.

I. Environment and Constraints

Of the three operating systems kubeadm supports, Ubuntu 16.04+, CentOS 7 and HypriotOS v1.0.1+, we chose Ubuntu 16.04. Since Alibaba Cloud does not yet offer an official 16.04 image, we spun up two Ubuntu 14.04 ECS instances and upgraded them by hand to Ubuntu 16.04.1 via apt-get; the exact version is Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-58-generic x86_64).

Ubuntu 16.04 uses systemd as its init system; for installing and configuring Docker you can refer to my post "When Docker Meets systemd". For the Docker version I picked the latest stable release available at the time: 1.12.5.
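As a rough sketch of what that amounts to (assuming the apt.dockerproject.org repository, which shows up in the apt-get update output further down, is already configured; the exact version string is an assumption, so list the available candidates first):

# apt-cache madison docker-engine                          //list the versions the repo offers
# apt-get install -y docker-engine=1.12.5-0~ubuntu-xenial  //version string assumed; pick one from the list above
# systemctl start docker && systemctl enable docker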

# docker version
Client:
 Version:      1.12.5
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   7392c3b
 Built:        Fri Dec 16 02:42:17 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.5
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   7392c3b
 Built:        Fri Dec 16 02:42:17 2016
 OS/Arch:      linux/amd64

As for the Kubernetes version, as mentioned above we use the newly released Kubernetes 1.5.1. 1.5.1 is an urgent fix on top of 1.5.0, mainly "to address default flag values which in isolation were not problematic, but in concert could result in an insecure cluster". The official recommendation is to skip 1.5.0 and go straight to 1.5.1.

Let me stress this again: installing, configuring and getting Kubernetes working end to end is hard, doing it on Alibaba Cloud is harder still, and sometimes you need a bit of luck. Kubernetes, Docker, cni and the various network add-ons are all under active development; a step, tip or trick that works today may be outdated tomorrow, so keep that in mind when following the steps in this post ^0^.

II. Preparing the Packages

We spun up two new ECS instances for this install, one as the master node and one as the minion node. With kubeadm's default installation, the master node does not take part in Pod scheduling and carries no workload, i.e. no non-core Pods will be created on it. That restriction can be lifted with the kubectl taint command, but more on that later.

Cluster topology:

master node: 10.47.217.91, hostname: iZ25beglnhtZ
minion node: 10.28.61.30, hostname: iZ2ze39jeyizepdxhwqci6Z

The main reference for this installation is the official Kubernetes guide "Installing Kubernetes on Linux with kubeadm".

In this section we prepare the packages, i.e. download kubeadm and the k8s core components needed for this install onto the two nodes above. Note: if you have an accelerator (proxy), the steps below will go remarkably smoothly; if not, ... :( . The following commands must be run on both nodes.

1. Add the apt key

# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK

2. Add the Kubernetes apt source and update the package index

Add the Kubernetes source under sources.list.d:

# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
  deb http://apt.kubernetes.io/ kubernetes-xenial main
  EOF

# cat /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main

Update the package index:

# apt-get update
... ...
Hit:2 http://mirrors.aliyun.com/ubuntu xenial InRelease
Hit:3 https://apt.dockerproject.org/repo ubuntu-xenial InRelease
Get:4 http://mirrors.aliyun.com/ubuntu xenial-security InRelease [102 kB]
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [6,299 B]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [1,739 B]
Get:6 http://mirrors.aliyun.com/ubuntu xenial-updates InRelease [102 kB]
Get:7 http://mirrors.aliyun.com/ubuntu xenial-proposed InRelease [253 kB]
Get:8 http://mirrors.aliyun.com/ubuntu xenial-backports InRelease [102 kB]
Fetched 568 kB in 19s (28.4 kB/s)
Reading package lists... Done

3. Download the Kubernetes core components

For this install, the Kubernetes core components, including kubelet, kubeadm, kubectl and kubernetes-cni, can all be pulled in with apt-get.

# apt-get install -y kubelet kubeadm kubectl kubernetes-cni
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
  libtimedate-perl
Use 'apt autoremove' to remove it.
The following additional packages will be installed:
  ebtables ethtool socat
The following NEW packages will be installed:
  ebtables ethtool kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 37.6 MB of archives.
After this operation, 261 MB of additional disk space will be used.
Get:2 http://mirrors.aliyun.com/ubuntu xenial/main amd64 ebtables amd64 2.0.10.4-3.4ubuntu1 [79.6 kB]
Get:6 http://mirrors.aliyun.com/ubuntu xenial/main amd64 ethtool amd64 1:4.5-1 [97.5 kB]
Get:7 http://mirrors.aliyun.com/ubuntu xenial/universe amd64 socat amd64 1.7.3.1-1 [321 kB]
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.3.0.1-07a8a2-00 [6,877 kB]
Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.5.1-00 [15.1 MB]
Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.5.1-00 [7,954 kB]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.6.0-alpha.0-2074-a092d8e0f95f52-00 [7,120 kB]
Fetched 37.6 MB in 36s (1,026 kB/s)
... ...
Unpacking kubeadm (1.6.0-alpha.0-2074-a092d8e0f95f52-00) ...
Processing triggers for systemd (229-4ubuntu13) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up ebtables (2.0.10.4-3.4ubuntu1) ...
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Setting up ethtool (1:4.5-1) ...
Setting up kubernetes-cni (0.3.0.1-07a8a2-00) ...
Setting up socat (1.7.3.1-1) ...
Setting up kubelet (1.5.1-00) ...
Setting up kubectl (1.5.1-00) ...
Setting up kubeadm (1.6.0-alpha.0-2074-a092d8e0f95f52-00) ...
Processing triggers for systemd (229-4ubuntu13) ...
Processing triggers for ureadahead (0.100.0-19) ...
... ...

The downloaded kube components are not started automatically. Under /lib/systemd/system we can see kubelet.service:

# ls /lib/systemd/system|grep kube
kubelet.service

//kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

The kubelet version:

# kubelet --version
Kubernetes v1.5.1

With the k8s core components in place, the next step is to bootstrap the kubernetes cluster. That is also where the problems begin, and those problems and how we worked through them are the real point of this post.

III. Initializing the Cluster

As mentioned earlier, in theory kubeadm builds a cluster with just init and join; init initializes the cluster on the master node. Unlike pre-1.4 deployments, the k8s core components installed by kubeadm all run as containers on the master node. So before kubeadm init, it is best to hook an accelerator proxy up to the docker engine on the master node (see the proxy sketch after the image list below), because kubeadm pulls quite a few core component images from the gcr.io/google_containers repository, roughly these:

gcr.io/google_containers/kube-controller-manager-amd64   v1.5.1                     cd5684031720        2 weeks ago         102.4 MB
gcr.io/google_containers/kube-apiserver-amd64            v1.5.1                     8c12509df629        2 weeks ago         124.1 MB
gcr.io/google_containers/kube-proxy-amd64                v1.5.1                     71d2b27b03f6        2 weeks ago         175.6 MB
gcr.io/google_containers/kube-scheduler-amd64            v1.5.1                     6506e7b74dac        2 weeks ago         53.97 MB
gcr.io/google_containers/etcd-amd64                      3.0.14-kubeadm             856e39ac7be3        5 weeks ago         174.9 MB
gcr.io/google_containers/kubedns-amd64                   1.9                        26cf1ed9b144        5 weeks ago         47 MB
gcr.io/google_containers/dnsmasq-metrics-amd64           1.0                        5271aabced07        7 weeks ago         14 MB
gcr.io/google_containers/kube-dnsmasq-amd64              1.4                        3ec65756a89b        3 months ago        5.13 MB
gcr.io/google_containers/kube-discovery-amd64            1.0                        c5e0c9a457fc        3 months ago        134.2 MB
gcr.io/google_containers/exechealthz-amd64               1.2                        93a43bfb39bf        3 months ago        8.375 MB
gcr.io/google_containers/pause-amd64                     3.0                        99e59f495ffa        7 months ago        746.9 kB
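As promised above, one common way to hang a proxy off the docker engine on a systemd host is a drop-in for docker.service; the addresses below are only placeholders for whatever accelerator you actually use:

///etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:8118" "HTTPS_PROXY=http://127.0.0.1:8118" "NO_PROXY=localhost,127.0.0.1,mirrors.aliyun.com"

# systemctl daemon-reload
# systemctl restart docker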

In the kubeadm docs, installing the Pod network is a separate step; kubeadm init does not pick and install a default Pod network for you. Our first choice is Flannel, not only because our previous cluster ran flannel and ran it reliably, but also because Flannel is the overlay network add-on CoreOS built specifically for k8s; the flannel repository's README even states: "flannel is a network fabric for containers, designed for Kubernetes". To use Flannel, the kubeadm docs require passing the option --pod-network-cidr=10.244.0.0/16 to init.

1. Run kubeadm init

Run the kubeadm init command:

# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "2e7da9.7fc5668ff26430c7"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready //may hang here if no accelerator proxy is configured
[apiclient] All control plane components are healthy after 54.789750 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.003053 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 62.503441 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=2e7da9.7fc5668ff26430c7 123.56.200.187

What changes on the master node after a successful init? The k8s core components are all up and running:

# ps -ef|grep kube
root      2477  2461  1 16:36 ?        00:00:04 kube-proxy --kubeconfig=/run/kubeconfig
root     30860     1 12 16:33 ?        00:01:09 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local
root     30952 30933  0 16:33 ?        00:00:01 kube-scheduler --address=127.0.0.1 --leader-elect --master=127.0.0.1:8080
root     31128 31103  2 16:33 ?        00:00:11 kube-controller-manager --address=127.0.0.1 --leader-elect --master=127.0.0.1:8080 --cluster-name=kubernetes --root-ca-file=/etc/kubernetes/pki/ca.pem --service-account-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem --insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16
root     31223 31207  2 16:34 ?        00:00:10 kube-apiserver --insecure-bind-address=127.0.0.1 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota --service-cluster-ip-range=10.96.0.0/12 --service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem --client-ca-file=/etc/kubernetes/pki/ca.pem --tls-cert-file=/etc/kubernetes/pki/apiserver.pem --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --token-auth-file=/etc/kubernetes/pki/tokens.csv --secure-port=6443 --allow-privileged --advertise-address=123.56.200.187 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --anonymous-auth=false --etcd-servers=http://127.0.0.1:2379
root     31491 31475  0 16:35 ?        00:00:00 /usr/local/bin/kube-discovery

And most of them run as containers:

# docker ps
CONTAINER ID        IMAGE                                                           COMMAND                  CREATED                  STATUS                  PORTS               NAMES
c16c442b7eca        gcr.io/google_containers/kube-proxy-amd64:v1.5.1                "kube-proxy --kubecon"   6 minutes ago            Up 6 minutes                                k8s_kube-proxy.36dab4e8_kube-proxy-sb4sm_kube-system_43fb1a2c-cb46-11e6-ad8f-00163e1001d7_2ba1648e
9f73998e01d7        gcr.io/google_containers/kube-discovery-amd64:1.0               "/usr/local/bin/kube-"   8 minutes ago            Up 8 minutes                                k8s_kube-discovery.7130cb0a_kube-discovery-1769846148-6z5pw_kube-system_1eb97044-cb46-11e6-ad8f-00163e1001d7_fd49c2e3
dd5412e5e15c        gcr.io/google_containers/kube-apiserver-amd64:v1.5.1            "kube-apiserver --ins"   9 minutes ago            Up 9 minutes                                k8s_kube-apiserver.1c5a91d9_kube-apiserver-iz25beglnhtz_kube-system_eea8df1717e9fea18d266103f9edfac3_8cae8485
60017f8819b2        gcr.io/google_containers/etcd-amd64:3.0.14-kubeadm              "etcd --listen-client"   9 minutes ago            Up 9 minutes                                k8s_etcd.c323986f_etcd-iz25beglnhtz_kube-system_3a26566bb004c61cd05382212e3f978f_06d517eb
03c2463aba9c        gcr.io/google_containers/kube-controller-manager-amd64:v1.5.1   "kube-controller-mana"   9 minutes ago            Up 9 minutes                                k8s_kube-controller-manager.d30350e1_kube-controller-manager-iz25beglnhtz_kube-system_9a40791dd1642ea35c8d95c9e610e6c1_3b05cb8a
fb9a724540a7        gcr.io/google_containers/kube-scheduler-amd64:v1.5.1            "kube-scheduler --add"   9 minutes ago            Up 9 minutes                                k8s_kube-scheduler.ef325714_kube-scheduler-iz25beglnhtz_kube-system_dc58861a0991f940b0834f8a110815cb_9b3ccda2
.... ...

However, these core components are not running on the pod network (correct, the pod network has not been created yet); they use the host network instead. Take the kube-apiserver pod as an example:

kube-system   kube-apiserver-iz25beglnhtz            1/1       Running   0          1h        10.47.217.91   iz25beglnhtz

kube-apiserver's IP is the host IP, which suggests the container is on the host network; this can be confirmed from the network attributes of its corresponding pause container:

# docker ps |grep apiserver
a5a76bc59e38        gcr.io/google_containers/kube-apiserver-amd64:v1.5.1            "kube-apiserver --ins"   About an hour ago   Up About an hour                        k8s_kube-apiserver.2529402_kube-apiserver-iz25beglnhtz_kube-system_25d646be9a0092138dc6088fae6f1656_ec0079fc
ef4d3bf057a6        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 About an hour ago   Up About an hour                        k8s_POD.d8dbe16c_kube-apiserver-iz25beglnhtz_kube-system_25d646be9a0092138dc6088fae6f1656_bbfd8a31

Inspecting the pause container shows the value of its NetworkMode:

"NetworkMode": "host",

If something goes wrong partway through kubeadm init, e.g. you forgot to set up the proxy and init hangs, you may hit ctrl+c to abort it. After fixing the configuration and re-running kubeadm init, you may then see the following kubeadm output:

# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Some fatal errors occurred:
    Port 10250 is in use
    /etc/kubernetes/manifests is not empty
    /etc/kubernetes/pki is not empty
    /var/lib/kubelet is not empty
    /etc/kubernetes/admin.conf already exists
    /etc/kubernetes/kubelet.conf already exists
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`

kubeadm automatically checks whether the environment contains "leftovers" from a previous run. If it does, they must be cleaned up before running init again. We can use "kubeadm reset" to clean the environment and start over.

# kubeadm reset
[preflight] Running pre-flight checks
[reset] Draining node: "iz25beglnhtz"
[reset] Removing node: "iz25beglnhtz"
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf]

2. Install the flannel pod network

After kubeadm init, if you poke around the cluster state or the core components' logs, you will notice some "anomalies"; for example, the kubelet log keeps repeating this error:

Dec 26 16:36:48 iZ25beglnhtZ kubelet[30860]: E1226 16:36:48.365885   30860 docker_manager.go:2201] Failed to setup network for pod "kube-dns-2924299975-pddz5_kube-system(43fd7264-cb46-11e6-ad8f-00163e1001d7)" using network plugins "cni": cni config unintialized; Skipping pod

With kubectl get pod --all-namespaces -o wide, you will also find the kube-dns pod stuck in ContainerCreating.

None of this matters yet, because we have not installed a Pod network for the cluster. As noted above, we are going with Flannel, so we run the following install command:

#kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created

After a short wait, let's look at the cluster on the master node again:

# ps -ef|grep kube|grep flannel
root      6517  6501  0 17:20 ?        00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root      6573  6546  0 17:20 ?        00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-s0c5g                 1/1       Running   0          50m
kube-system   etcd-iz25beglnhtz                      1/1       Running   0          50m
kube-system   kube-apiserver-iz25beglnhtz            1/1       Running   0          50m
kube-system   kube-controller-manager-iz25beglnhtz   1/1       Running   0          50m
kube-system   kube-discovery-1769846148-6z5pw        1/1       Running   0          50m
kube-system   kube-dns-2924299975-pddz5              4/4       Running   0          49m
kube-system   kube-flannel-ds-5ww9k                  2/2       Running   0          4m
kube-system   kube-proxy-sb4sm                       1/1       Running   0          49m
kube-system   kube-scheduler-iz25beglnhtz            1/1       Running   0          49m

At the very least, all the core cluster components are up and running. It looks like a success.
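A couple of extra read-only sanity checks one can run at this point (my addition, not strictly necessary):

# kubectl get componentstatuses    //scheduler, controller-manager and etcd should all report Healthy
# kubectl cluster-info             //prints the apiserver and kube-dns endpoints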

3. Minion node: join the cluster

Next, it is the minion node's turn to join the cluster. This is where kubeadm's second command comes in: kubeadm join.

Run on the minion node (note: make sure port 9898 on the master node is open in the firewall):

# kubeadm join --token=2e7da9.7fc5668ff26430c7 123.56.200.187
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://123.56.200.187:9898/cluster-info/v1/?token-id=2e7da9"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://123.56.200.187:6443]
[bootstrap] Trying to connect to endpoint https://123.56.200.187:6443
[bootstrap] Detected server version: v1.5.1
[bootstrap] Successfully established connection with endpoint "https://123.56.200.187:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:iZ2ze39jeyizepdxhwqci6Z | CA: false
Not before: 2016-12-26 09:31:00 +0000 UTC Not After: 2017-12-26 09:31:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

Also very smooth. On the minion node the k8s components look like this:

d85cf36c18ed        gcr.io/google_containers/kube-proxy-amd64:v1.5.1      "kube-proxy --kubecon"   About an hour ago   Up About an hour                        k8s_kube-proxy.36dab4e8_kube-proxy-lsn0t_kube-system_b8eddf1c-cb4e-11e6-ad8f-00163e1001d7_5826f32b
a60e373b48b8        gcr.io/google_containers/pause-amd64:3.0              "/pause"                 About an hour ago   Up About an hour                        k8s_POD.d8dbe16c_kube-proxy-lsn0t_kube-system_b8eddf1c-cb4e-11e6-ad8f-00163e1001d7_46bfcf67
a665145eb2b5        quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64   "/bin/sh -c 'set -e -"   About an hour ago   Up About an hour                        k8s_install-cni.17d8cf2_kube-flannel-ds-tr8zr_kube-system_06eca729-cb72-11e6-ad8f-00163e1001d7_01e12f61
5b46f2cb0ccf        gcr.io/google_containers/pause-amd64:3.0              "/pause"                 About an hour ago   Up About an hour                        k8s_POD.d8dbe16c_kube-flannel-ds-tr8zr_kube-system_06eca729-cb72-11e6-ad8f-00163e1001d7_ac880d20

On the master node we check the current cluster status:

# kubectl get nodes
NAME                      STATUS         AGE
iz25beglnhtz              Ready,master   1h
iz2ze39jeyizepdxhwqci6z   Ready          21s

The k8s cluster has been created "successfully"! Or has it? The real "fun" was only just beginning :(!

IV. Flannel Pod Network Problems

The afterglow of the successful join had barely faded when I discovered problems with the Flannel pod network, and the troubleshooting officially began :(.

1. flannel on the minion node errors out intermittently

Right after the join everything looked fine, but before long an error showed up in kubectl get pod --all-namespaces:

kube-system   kube-flannel-ds-tr8zr                  1/2       CrashLoopBackOff   189        16h

We found that a container in the flannel pod on the minion node was at fault; the specific error we traced was:

# docker logs bc0058a15969
E1227 06:17:50.605110       1 main.go:127] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-tr8zr': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-tr8zr: dial tcp 10.96.0.1:443: i/o timeout

10.96.0.1 is the cluster IP of the apiserver service on the pod network, and the flannel component on the minion node cannot reach that cluster IP! What makes it stranger is that sometimes, after the Pod has been restarted many times or deleted and recreated, it suddenly goes back to running; very erratic behaviour.
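A crude way to reproduce the symptom from the minion node is to poke the service VIP directly; any HTTP response, even a 401, means the VIP is reachable, while a connect timeout matches the flannel error above (the 5s timeout is arbitrary):

# curl -k --connect-timeout 5 https://10.96.0.1:443/version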

At least two open issues on flannel's github are closely related to this problem:

https://github.com/coreos/flannel/issues/545

https://github.com/coreos/flannel/issues/535

There is no clear fix yet. When the flannel pod on the minion node recovered to running on its own, we simply carried on.

2. A workaround for the flannel pod failing to start on the minion node

In the issue below, a number of developers discuss one possible cause of the flannel pod failing to start on the minion node, together with a temporary workaround:

https://github.com/kubernetes/kubernetes/issues/34101

The gist is that kube-proxy on the minion node is using the wrong interface, and the problem can be worked around with the command below. Run on the minion node:

#  kubectl -n kube-system get ds -l 'component=kube-proxy' -o json | jq '.items[0].spec.template.spec.containers[0].command |= .+ ["--cluster-cidr=10.244.0.0/16"]' | kubectl apply -f - && kubectl -n kube-system delete pods -l 'component=kube-proxy'
daemonset "kube-proxy" configured
pod "kube-proxy-lsn0t" deleted
pod "kube-proxy-sb4sm" deleted

After running it, the flannel pods' status:

kube-system   kube-flannel-ds-qw291                  2/2       Running   8          17h
kube-system   kube-flannel-ds-x818z                  2/2       Running   17         1h

After 17 restarts, the flannel pod on the minion node finally came up OK. Its flannel container's startup log:

# docker logs 1f64bd9c0386
I1227 07:43:26.670620       1 main.go:132] Installing signal handlers
I1227 07:43:26.671006       1 manager.go:133] Determining IP address of default interface
I1227 07:43:26.670825       1 kube.go:233] starting kube subnet manager
I1227 07:43:26.671514       1 manager.go:163] Using 59.110.67.15 as external interface
I1227 07:43:26.671575       1 manager.go:164] Using 59.110.67.15 as external endpoint
I1227 07:43:26.746811       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
I1227 07:43:26.749785       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I1227 07:43:26.752343       1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
I1227 07:43:26.755126       1 manager.go:246] Lease acquired: 10.244.1.0/24
I1227 07:43:26.755444       1 network.go:58] Watching for L3 misses
I1227 07:43:26.755475       1 network.go:66] Watching for new subnet leases
I1227 07:43:27.755830       1 network.go:153] Handling initial subnet events
I1227 07:43:27.755905       1 device.go:163] calling GetL2List() dev.link.Index: 10
I1227 07:43:27.756099       1 device.go:168] calling NeighAdd: 123.56.200.187, ca:68:7c:9b:cc:67

The issue also mentions that explicitly specifying --api-advertise-addresses at kubeadm init time avoids this problem. For now, though, do not list more than one IP after it: although the docs say multiple IPs are supported, in practice, when you explicitly pass two or more, like this:

#kubeadm init --api-advertise-addresses=10.47.217.91,123.56.200.187 --pod-network-cidr=10.244.0.0/16

the master initializes successfully, but when the minion node runs the join command, it panics:

# kubeadm join --token=92e977.f1d4d090906fc06a 10.47.217.91
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
... ...
[bootstrap] Successfully established connection with endpoint "https://10.47.217.91:6443"
[bootstrap] Successfully established connection with endpoint "https://123.56.200.187:6443"
E1228 10:14:05.405294   28378 runtime.go:64] Observed a panic: "close of closed channel" (close of closed channel)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:70
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:63
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:49
/usr/local/go/src/runtime/asm_amd64.s:479
/usr/local/go/src/runtime/panic.go:458
/usr/local/go/src/runtime/chan.go:311
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:85
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:96
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:97
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:52
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:93
/usr/local/go/src/runtime/asm_amd64.s:2086
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
panic: close of closed channel [recovered]
    panic: close of closed channel

goroutine 29 [running]:
panic(0x1342de0, 0xc4203eebf0)
    /usr/local/go/src/runtime/panic.go:500 +0x1a1
k8s.io/kubernetes/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:56 +0x126
panic(0x1342de0, 0xc4203eebf0)
    /usr/local/go/src/runtime/panic.go:458 +0x243
k8s.io/kubernetes/cmd/kubeadm/app/node.EstablishMasterConnection.func1.1()
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:85 +0x29d
k8s.io/kubernetes/pkg/util/wait.JitterUntil.func1(0xc420563ee0)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:96 +0x5e
k8s.io/kubernetes/pkg/util/wait.JitterUntil(0xc420563ee0, 0x12a05f200, 0x0, 0xc420022e01, 0xc4202c2060)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:97 +0xad
k8s.io/kubernetes/pkg/util/wait.Until(0xc420563ee0, 0x12a05f200, 0xc4202c2060)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:52 +0x4d
k8s.io/kubernetes/cmd/kubeadm/app/node.EstablishMasterConnection.func1(0xc4203a82f0, 0xc420269b90, 0xc4202c2060, 0xc4202c20c0, 0xc4203d8d80, 0x401, 0x480, 0xc4201e75e0, 0x17, 0xc4201e7560, ...)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:93 +0x100
created by k8s.io/kubernetes/cmd/kubeadm/app/node.EstablishMasterConnection
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:94 +0x3ed

The join panic is discussed in detail in this issue: https://github.com/kubernetes/kubernetes/issues/36988

3. open /run/flannel/subnet.env: no such file or directory

As mentioned earlier, for security reasons the master node carries no workload by default and takes no part in pod scheduling. With so few machines, we have to put the master node to work as well. The following command lets the master node take part in pod scheduling:

# kubectl taint nodes --all dedicated-
node "iz25beglnhtz" tainted

Next we create a deployment; the manifest is as follows:

//run-my-nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.10.1
        ports:
        - containerPort: 80

After creating it, we found that the my-nginx pod scheduled onto the master started fine, but the pod on the minion node kept failing, with the following reason:

Events:
  FirstSeen    LastSeen    Count    From                    SubObjectPath    Type        Reason        Message
  ---------    --------    -----    ----                    -------------    --------    ------        -------
  28s        28s        1    {default-scheduler }                    Normal        Scheduled    Successfully assigned my-nginx-2560993602-0440x to iz2ze39jeyizepdxhwqci6z
  27s        1s        26    {kubelet iz2ze39jeyizepdxhwqci6z}            Warning        FailedSync    Error syncing pod, skipping: failed to "SetupNetwork" for "my-nginx-2560993602-0440x_default" with SetupNetworkError: "Failed to setup network for pod \"my-nginx-2560993602-0440x_default(ba5ce554-cbf1-11e6-8c42-00163e1001d7)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"

Indeed, /run/flannel/subnet.env does not exist on the minion node, but the master node does have it:

// /run/flannel/subnet.env

FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

So we manually created /run/flannel/subnet.env on the minion node, copied in the contents of the master node's file of the same name, and saved it. A short while later, the my-nginx pod on the minion node went from error to running.
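For reference, the workaround amounts to something like this on the minion node. I copied the master's values verbatim; strictly speaking FLANNEL_SUBNET should be this node's own lease (10.244.1.1/24 in our case), so treat the values as something to adapt rather than gospel:

# mkdir -p /run/flannel
# cat > /run/flannel/subnet.env <<EOF
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF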

4. no IP addresses available in network: cbr0

We then changed the earlier my-nginx deployment's replicas to 3 and created a my-nginx service in front of the deployment's pods:

//my-nginx-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30062
    protocol: TCP
  selector:
    run: my-nginx

After the change, we tested connectivity with curl localhost:30062. Requests load-balanced through the VIP to the my-nginx pod on the master node all got responses, but requests balanced to pods on the minion node hung until they timed out. Checking the pod info revealed that the my-nginx pod newly scheduled onto the minion node had not started OK; the error:

Events:
  FirstSeen    LastSeen    Count    From                    SubObjectPath    Type        Reason        Message
  ---------    --------    -----    ----                    -------------    --------    ------        -------
  2m        2m        1    {default-scheduler }                    Normal        Scheduled    Successfully assigned my-nginx-1948696469-ph11m to iz2ze39jeyizepdxhwqci6z
  2m        0s        177    {kubelet iz2ze39jeyizepdxhwqci6z}            Warning        FailedSync    Error syncing pod, skipping: failed to "SetupNetwork" for "my-nginx-1948696469-ph11m_default" with SetupNetworkError: "Failed to setup network for pod \"my-nginx-1948696469-ph11m_default(3700d74a-cc12-11e6-8c42-00163e1001d7)\" using network plugins \"cni\": no IP addresses available in network: cbr0; Skipping pod"

Looking at /var/lib/cni/networks/cbr0 on the minion node, we found the following files:

10.244.1.10   10.244.1.12   10.244.1.14   10.244.1.16   10.244.1.18   10.244.1.2    10.244.1.219  10.244.1.239  10.244.1.3   10.244.1.5   10.244.1.7   10.244.1.9
10.244.1.100  10.244.1.120  10.244.1.140  10.244.1.160  10.244.1.180  10.244.1.20   10.244.1.22   10.244.1.24   10.244.1.30  10.244.1.50  10.244.1.70  10.244.1.90
10.244.1.101  10.244.1.121  10.244.1.141  10.244.1.161  10.244.1.187  10.244.1.200  10.244.1.220  10.244.1.240  10.244.1.31  10.244.1.51  10.244.1.71  10.244.1.91
10.244.1.102  10.244.1.122  10.244.1.142  10.244.1.162  10.244.1.182  10.244.1.201  10.244.1.221  10.244.1.241  10.244.1.32  10.244.1.52  10.244.1.72  10.244.1.92
10.244.1.103  10.244.1.123  10.244.1.143  10.244.1.163  10.244.1.183  10.244.1.202  10.244.1.222  10.244.1.242  10.244.1.33  10.244.1.53  10.244.1.73  10.244.1.93
10.244.1.104  10.244.1.124  10.244.1.144  10.244.1.164  10.244.1.184  10.244.1.203  10.244.1.223  10.244.1.243  10.244.1.34  10.244.1.54  10.244.1.74  10.244.1.94
10.244.1.105  10.244.1.125  10.244.1.145  10.244.1.165  10.244.1.185  10.244.1.204  10.244.1.224  10.244.1.244  10.244.1.35  10.244.1.55  10.244.1.75  10.244.1.95
10.244.1.106  10.244.1.126  10.244.1.146  10.244.1.166  10.244.1.186  10.244.1.205  10.244.1.225  10.244.1.245  10.244.1.36  10.244.1.56  10.244.1.76  10.244.1.96
10.244.1.107  10.244.1.127  10.244.1.147  10.244.1.167  10.244.1.187  10.244.1.206  10.244.1.226  10.244.1.246  10.244.1.37  10.244.1.57  10.244.1.77  10.244.1.97
10.244.1.108  10.244.1.128  10.244.1.148  10.244.1.168  10.244.1.188  10.244.1.207  10.244.1.227  10.244.1.247  10.244.1.38  10.244.1.58  10.244.1.78  10.244.1.98
10.244.1.109  10.244.1.129  10.244.1.149  10.244.1.169  10.244.1.189  10.244.1.208  10.244.1.228  10.244.1.248  10.244.1.39  10.244.1.59  10.244.1.79  10.244.1.99
10.244.1.11   10.244.1.13   10.244.1.15   10.244.1.17   10.244.1.19   10.244.1.209  10.244.1.229  10.244.1.249  10.244.1.4   10.244.1.6   10.244.1.8   last_reserved_ip
10.244.1.110  10.244.1.130  10.244.1.150  10.244.1.170  10.244.1.190  10.244.1.21   10.244.1.23   10.244.1.25   10.244.1.40  10.244.1.60  10.244.1.80
10.244.1.111  10.244.1.131  10.244.1.151  10.244.1.171  10.244.1.191  10.244.1.210  10.244.1.230  10.244.1.250  10.244.1.41  10.244.1.61  10.244.1.81
10.244.1.112  10.244.1.132  10.244.1.152  10.244.1.172  10.244.1.192  10.244.1.211  10.244.1.231  10.244.1.251  10.244.1.42  10.244.1.62  10.244.1.82
10.244.1.113  10.244.1.133  10.244.1.153  10.244.1.173  10.244.1.193  10.244.1.212  10.244.1.232  10.244.1.252  10.244.1.43  10.244.1.63  10.244.1.83
10.244.1.114  10.244.1.134  10.244.1.154  10.244.1.174  10.244.1.194  10.244.1.213  10.244.1.233  10.244.1.253  10.244.1.44  10.244.1.64  10.244.1.84
10.244.1.115  10.244.1.135  10.244.1.155  10.244.1.175  10.244.1.195  10.244.1.214  10.244.1.234  10.244.1.254  10.244.1.45  10.244.1.65  10.244.1.85
10.244.1.116  10.244.1.136  10.244.1.156  10.244.1.176  10.244.1.196  10.244.1.215  10.244.1.235  10.244.1.26   10.244.1.46  10.244.1.66  10.244.1.86
10.244.1.117  10.244.1.137  10.244.1.157  10.244.1.177  10.244.1.197  10.244.1.216  10.244.1.236  10.244.1.27   10.244.1.47  10.244.1.67  10.244.1.87
10.244.1.118  10.244.1.138  10.244.1.158  10.244.1.178  10.244.1.198  10.244.1.217  10.244.1.237  10.244.1.28   10.244.1.48  10.244.1.68  10.244.1.88
10.244.1.119  10.244.1.139  10.244.1.159  10.244.1.179  10.244.1.199  10.244.1.218  10.244.1.238  10.244.1.29   10.244.1.49  10.244.1.69  10.244.1.89

Every IP in the 10.244.1.x range has been used up, so naturally there are no available IPs left for new pods. Why the range filled up is still unclear. The two open issues below are related:

https://github.com/containernetworking/cni/issues/306

https://github.com/kubernetes/kubernetes/issues/21656

Inside the /var/lib/cni/networks/cbr0 directory, the following command releases IPs that were presumably leaked by the kubelet:

for hash in $(tail -n +1 * | grep '^[A-Za-z0-9]*$' | cut -c 1-8); do if [ -z $(docker ps -a | grep $hash | awk '{print $1}') ]; then grep -irl $hash ./; fi; done | xargs rm

After running it, the directory listing becomes:

ls -l
total 32
drw-r--r-- 2 root root 12288 Dec 27 17:11 ./
drw-r--r-- 3 root root  4096 Dec 27 13:52 ../
-rw-r--r-- 1 root root    64 Dec 27 17:11 10.244.1.2
-rw-r--r-- 1 root root    64 Dec 27 17:11 10.244.1.3
-rw-r--r-- 1 root root    64 Dec 27 17:11 10.244.1.4
-rw-r--r-- 1 root root    10 Dec 27 17:11 last_reserved_ip

The pod still failed, but this time the reason had changed:

Events:
  FirstSeen    LastSeen    Count    From                    SubObjectPath    Type        Reason        Message
  ---------    --------    -----    ----                    -------------    --------    ------        -------
  23s        23s        1    {default-scheduler }                    Normal        Scheduled    Successfully assigned my-nginx-1948696469-7p4nn to iz2ze39jeyizepdxhwqci6z
  22s        1s        22    {kubelet iz2ze39jeyizepdxhwqci6z}            Warning        FailedSync    Error syncing pod, skipping: failed to "SetupNetwork" for "my-nginx-1948696469-7p4nn_default" with SetupNetworkError: "Failed to setup network for pod \"my-nginx-1948696469-7p4nn_default(a40fe652-cc14-11e6-8c42-00163e1001d7)\" using network plugins \"cni\": \"cni0\" already has an IP address different from 10.244.1.1/24; Skipping pod"

And the files under /var/lib/cni/networks/cbr0 started piling up rapidly again! We were stuck.

5. flannel vxlan doesn't work; switching the backend to udp doesn't help either

By this point I was close to exhausted, so I ran kubeadm reset on both nodes and prepared to start from scratch.

After kubeadm reset, the bridge device cni0 and the flannel.1 interface created by flannel are still around. To make sure the environment is truly back to its initial state, we can delete both devices with the following commands:

# ifconfig  cni0 down
# brctl delbr cni0
# ip link delete flannel.1

"Tempered" by the earlier problems, the fresh init and join went remarkably smoothly, and this time the minion node showed no anomalies.

#  kubectl get nodes -o wide
NAME                      STATUS         AGE       EXTERNAL-IP
iz25beglnhtz              Ready,master   5m        <none>
iz2ze39jeyizepdxhwqci6z   Ready          51s       <none>

# kubectl get pod --all-namespaces
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
default       my-nginx-1948696469-71h1l              1/1       Running   0          3m
default       my-nginx-1948696469-zwt5g              1/1       Running   0          3m
default       my-ubuntu-2560993602-ftdm6             1/1       Running   0          3m
kube-system   dummy-2088944543-lmlbh                 1/1       Running   0          5m
kube-system   etcd-iz25beglnhtz                      1/1       Running   0          6m
kube-system   kube-apiserver-iz25beglnhtz            1/1       Running   0          6m
kube-system   kube-controller-manager-iz25beglnhtz   1/1       Running   0          6m
kube-system   kube-discovery-1769846148-l5lfw        1/1       Running   0          5m
kube-system   kube-dns-2924299975-mdq5r              4/4       Running   0          5m
kube-system   kube-flannel-ds-9zwr1                  2/2       Running   0          5m
kube-system   kube-flannel-ds-p7xh2                  2/2       Running   0          1m
kube-system   kube-proxy-dwt5f                       1/1       Running   0          5m
kube-system   kube-proxy-vm6v2                       1/1       Running   0          1m
kube-system   kube-scheduler-iz25beglnhtz            1/1       Running   0          6m

Next we created the my-nginx deployment and service again to test flannel connectivity. curl against the my-nginx service's nodeport reached the two nginx pods on the master, but the pod on the minion node was still unreachable.

The flannel container's log on the master:

I1228 02:52:22.097083       1 network.go:225] L3 miss: 10.244.1.2
I1228 02:52:22.097169       1 device.go:191] calling NeighSet: 10.244.1.2, 46:6c:7a:a6:06:60
I1228 02:52:22.097335       1 network.go:236] AddL3 succeeded
I1228 02:52:55.169952       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:00.801901       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:03.801923       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:04.801764       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:05.801848       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:06.888269       1 network.go:225] L3 miss: 10.244.1.2
I1228 02:53:06.888340       1 device.go:191] calling NeighSet: 10.244.1.2, 46:6c:7a:a6:06:60
I1228 02:53:06.888507       1 network.go:236] AddL3 succeeded
I1228 02:53:39.969791       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:45.153770       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:48.154822       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:49.153774       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:50.153734       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:52.154056       1 network.go:225] L3 miss: 10.244.1.2
I1228 02:53:52.154110       1 device.go:191] calling NeighSet: 10.244.1.2, 46:6c:7a:a6:06:60
I1228 02:53:52.154256       1 network.go:236] AddL3 succeeded

The log is full of "Ignoring not a miss" lines, which suggests something is off with the vxlan network. The problem looks very close to the one described in this issue:

https://github.com/coreos/flannel/issues/427
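Before switching backends, a few read-only iproute2 commands can help confirm what the vxlan device actually looks like on each node (my addition, not taken from the issue):

# ip -d link show flannel.1        //vxlan parameters: VNI, UDP port, local address
# bridge fdb show dev flannel.1    //forwarding entries flannel programs for remote nodes
# ip neigh show dev flannel.1      //the L3 entries matching the "NeighSet"/"L3 miss" log lines above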

Flannel uses vxlan as its backend by default, on UDP port 8472, the kernel's default vxlan port. Flannel also supports a udp backend on UDP port 8285. So we tried switching flannel's backend. The steps are:

  • download https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml locally;
  • edit kube-flannel.yml: in the net-conf.json property, add a "Backend" field:
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "udp",
        "Port": 8285
      }
    }
---
... ...
  • uninstall and reinstall the pod network
# kubectl delete -f kube-flannel.yml
configmap "kube-flannel-cfg" deleted
daemonset "kube-flannel-ds" deleted

# kubectl apply -f kube-flannel.yml
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created

# netstat -an|grep 8285
udp        0      0 123.56.200.187:8285     0.0.0.0:*

Testing showed the udp port is open: running tcpdump -i flannel0 on both nodes shows udp packets being sent and received. But the pod network between the two nodes still does not work.
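The captures were roughly along these lines (eth0 as the underlying interface is an assumption; substitute the host's actual NIC):

# tcpdump -i flannel0 -nn icmp            //pod-to-pod traffic as seen inside the tunnel device
# tcpdump -i eth0 -nn udp port 8285       //the encapsulated flannel udp traffic between the two hosts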

6. failed to register network: failed to acquire lease: node "iz25beglnhtz" not found

Under normal circumstances, the flannel pod startup logs on the master and minion nodes look like this:

flannel running on the master node:

I1227 04:56:16.577828       1 main.go:132] Installing signal handlers
I1227 04:56:16.578060       1 kube.go:233] starting kube subnet manager
I1227 04:56:16.578064       1 manager.go:133] Determining IP address of default interface
I1227 04:56:16.578576       1 manager.go:163] Using 123.56.200.187 as external interface
I1227 04:56:16.578616       1 manager.go:164] Using 123.56.200.187 as external endpoint
E1227 04:56:16.579079       1 network.go:106] failed to register network: failed to acquire lease: node "iz25beglnhtz" not found
I1227 04:56:17.583744       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
I1227 04:56:17.585367       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I1227 04:56:17.587765       1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
I1227 04:56:17.589943       1 manager.go:246] Lease acquired: 10.244.0.0/24
I1227 04:56:17.590203       1 network.go:58] Watching for L3 misses
I1227 04:56:17.590255       1 network.go:66] Watching for new subnet leases
I1227 07:43:27.164103       1 network.go:153] Handling initial subnet events
I1227 07:43:27.164211       1 device.go:163] calling GetL2List() dev.link.Index: 5
I1227 07:43:27.164350       1 device.go:168] calling NeighAdd: 59.110.67.15, ca:50:97:1f:c2:ea

flannel running on the minion node:

# docker logs 1f64bd9c0386
I1227 07:43:26.670620       1 main.go:132] Installing signal handlers
I1227 07:43:26.671006       1 manager.go:133] Determining IP address of default interface
I1227 07:43:26.670825       1 kube.go:233] starting kube subnet manager
I1227 07:43:26.671514       1 manager.go:163] Using 59.110.67.15 as external interface
I1227 07:43:26.671575       1 manager.go:164] Using 59.110.67.15 as external endpoint
I1227 07:43:26.746811       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
I1227 07:43:26.749785       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I1227 07:43:26.752343       1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
I1227 07:43:26.755126       1 manager.go:246] Lease acquired: 10.244.1.0/24
I1227 07:43:26.755444       1 network.go:58] Watching for L3 misses
I1227 07:43:26.755475       1 network.go:66] Watching for new subnet leases
I1227 07:43:27.755830       1 network.go:153] Handling initial subnet events
I1227 07:43:27.755905       1 device.go:163] calling GetL2List() dev.link.Index: 10
I1227 07:43:27.756099       1 device.go:168] calling NeighAdd: 123.56.200.187, ca:68:7c:9b:cc:67

But during the testing for problem 5 above, the flannel containers' startup logs contained the following error:

master node:

# docker logs c2d1cee3df3d
I1228 06:53:52.502571       1 main.go:132] Installing signal handlers
I1228 06:53:52.502735       1 manager.go:133] Determining IP address of default interface
I1228 06:53:52.503031       1 manager.go:163] Using 123.56.200.187 as external interface
I1228 06:53:52.503054       1 manager.go:164] Using 123.56.200.187 as external endpoint
E1228 06:53:52.503869       1 network.go:106] failed to register network: failed to acquire lease: node "iz25beglnhtz" not found
I1228 06:53:52.503899       1 kube.go:233] starting kube subnet manager
I1228 06:53:53.522892       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
I1228 06:53:53.524325       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I1228 06:53:53.526622       1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
I1228 06:53:53.528438       1 manager.go:246] Lease acquired: 10.244.0.0/24
I1228 06:53:53.528744       1 network.go:58] Watching for L3 misses
I1228 06:53:53.528777       1 network.go:66] Watching for new subnet leases

minion node:

# docker logs dcbfef45308b
I1228 05:28:05.012530       1 main.go:132] Installing signal handlers
I1228 05:28:05.012747       1 manager.go:133] Determining IP address of default interface
I1228 05:28:05.013011       1 manager.go:163] Using 59.110.67.15 as external interface
I1228 05:28:05.013031       1 manager.go:164] Using 59.110.67.15 as external endpoint
E1228 05:28:05.013204       1 network.go:106] failed to register network: failed to acquire lease: node "iz2ze39jeyizepdxhwqci6z" not found
I1228 05:28:05.013237       1 kube.go:233] starting kube subnet manager
I1228 05:28:06.041602       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
I1228 05:28:06.042863       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I1228 05:28:06.044896       1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
I1228 05:28:06.046497       1 manager.go:246] Lease acquired: 10.244.1.0/24
I1228 05:28:06.046780       1 network.go:98] Watching for new subnet leases
I1228 05:28:07.047052       1 network.go:191] Subnet added: 10.244.0.0/24

Both nodes show the "register network" failure: failed to register network: failed to acquire lease: node "xxxx" not found. It is hard to say whether these two errors are what breaks the network between the nodes; over the whole test run the problem came and went. A similar problem is discussed in the flannel issue below:

https://github.com/coreos/flannel/issues/435

The many problems with the Flannel pod network convinced me to put Flannel aside, for now, in the kubeadm-built kubernetes cluster.

V. Calico Pod Network

Among the pod network add-ons Kubernetes supports there are, besides Flannel, also Calico, Weave Net and others. Here we try Calico, a pod network built on the BGP border gateway protocol. The Calico project has dedicated documentation for installing its Pod network on a kubeadm-built K8s cluster, and we meet all the requirements and constraints it lists, for example:

the master node carries the kubeadm.alpha.kubernetes.io/role: master label:

# kubectl get nodes -o wide --show-labels
NAME           STATUS         AGE       EXTERNAL-IP   LABELS
iz25beglnhtz   Ready,master   3m        <none>        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubeadm.alpha.kubernetes.io/role=master,kubernetes.io/hostname=iz25beglnhtz

Before installing calico, we once again run kubeadm reset to reset the environment and delete the network devices created by flannel; refer to the commands in the sections above.
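Concretely, that boils down to running the following on both nodes (the same commands already used in the Flannel sections):

# kubeadm reset
# ifconfig cni0 down
# brctl delbr cni0
# ip link delete flannel.1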

1. Initialize the cluster

With calico, kubeadm init no longer needs the --pod-network-cidr=10.244.0.0/16 option:

# kubeadm init --api-advertise-addresses=10.47.217.91
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "531b3f.3bd900d61b78d6c9"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 13.527323 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 0.503814 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 1.503644 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=531b3f.3bd900d61b78d6c9 10.47.217.91

2. Create the calico network

# kubectl apply -f http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml
configmap "calico-config" created
daemonset "calico-etcd" created
service "calico-etcd" created
daemonset "calico-node" created
deployment "calico-policy-controller" created
job "configure-calico" created

The actual creation takes a while, because calico needs to pull a few images:

# docker images
REPOSITORY                                               TAG                        IMAGE ID            CREATED             SIZE
quay.io/calico/node                                      v1.0.0                     74bff066bc6a        7 days ago          256.4 MB
calico/ctl                                               v1.0.0                     069830246cf3        8 days ago          43.35 MB
calico/cni                                               v1.5.5                     ada87b3276f3        12 days ago         67.13 MB
gcr.io/google_containers/etcd                            2.2.1                      a6cd91debed1        14 months ago       28.19 MB

calico created two network devices locally on the master node:

# ip a
... ...
47: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 192.168.91.0/32 scope global tunl0
       valid_lft forever preferred_lft forever
48: califa32a09679f@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 62:39:10:55:44:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 0

3. Minion node join

Run the following command to add the minion node to the cluster:

# kubeadm join --token=531b3f.3bd900d61b78d6c9 10.47.217.91

calico also created a network device on the minion node:

57988: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 192.168.136.192/32 scope global tunl0
       valid_lft forever preferred_lft forever

After a successful join, let's check the cluster status:

# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE       IP             NODE
kube-system   calico-etcd-488qd                          1/1       Running   0          18m       10.47.217.91   iz25beglnhtz
kube-system   calico-node-jcb3c                          2/2       Running   0          18m       10.47.217.91   iz25beglnhtz
kube-system   calico-node-zthzp                          2/2       Running   0          4m        10.28.61.30    iz2ze39jeyizepdxhwqci6z
kube-system   calico-policy-controller-807063459-f21q4   1/1       Running   0          18m       10.47.217.91   iz25beglnhtz
kube-system   dummy-2088944543-rtsfk                     1/1       Running   0          23m       10.47.217.91   iz25beglnhtz
kube-system   etcd-iz25beglnhtz                          1/1       Running   0          23m       10.47.217.91   iz25beglnhtz
kube-system   kube-apiserver-iz25beglnhtz                1/1       Running   0          23m       10.47.217.91   iz25beglnhtz
kube-system   kube-controller-manager-iz25beglnhtz       1/1       Running   0          23m       10.47.217.91   iz25beglnhtz
kube-system   kube-discovery-1769846148-51wdk            1/1       Running   0          23m       10.47.217.91   iz25beglnhtz
kube-system   kube-dns-2924299975-fhf5f                  4/4       Running   0          23m       192.168.91.1   iz25beglnhtz
kube-system   kube-proxy-2s7qc                           1/1       Running   0          4m        10.28.61.30    iz2ze39jeyizepdxhwqci6z
kube-system   kube-proxy-h2qds                           1/1       Running   0          23m       10.47.217.91   iz25beglnhtz
kube-system   kube-scheduler-iz25beglnhtz                1/1       Running   0          23m       10.47.217.91   iz25beglnhtz

All components are OK. That looks like a good omen! But whether the cross-node pod network actually works still needs to be verified.

4. Probing cross-node pod network connectivity

We reuse the my-nginx-svc.yaml and run-my-nginx.yaml from the flannel tests above to create the my-nginx service and my-nginx deployment. Note: before that, run "kubectl taint nodes --all dedicated-" on the master node so that the master node will carry workload.
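Putting those steps together (the manifests are the same run-my-nginx.yaml and my-nginx-svc.yaml shown in the Flannel sections; kubectl apply would work just as well):

# kubectl taint nodes --all dedicated-
# kubectl create -f run-my-nginx.yaml
# kubectl create -f my-nginx-svc.yaml
# curl http://localhost:30062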

Unfortunately the result was much like flannel's: http requests dispatched to the pods on the master node got nginx's response, while the pod on the minion node remained unreachable.

I did not want to burn much more time on calico here; I wanted to move on quickly and see whether the next candidate, weave net, fits the bill.

Because of an inexplicable wordpress problem this article could not be published in one piece, so it has been split into two parts. This is part one; for part two please head over here.

Providing Storage Volumes for a Kubernetes Cluster with Ceph RBD

Once you set off down the Kubernetes road, you will find it is not an easy one; it is full of thorns. Even if the Kubernetes cluster you build is small, it still needs to be "fully equipped", otherwise you cannot really put kubernetes to work, or rather a half-finished Kubernetes cluster is unlikely to support the workloads you need it to. That is exactly the case in the product I am currently working on: K8s alone is not enough. To support "stateful services", we also need to give Kubernetes a storage backend for the Persistent Volume mechanism, so that when a Pod is scheduled or migrated across k8s nodes, data that must persist is not wiped, and the Pod's Containers always mount the same Volume no matter which node they land on.

Kubernetes supports many Volume types; here we choose Ceph RBD (Rados Block Device). We picked Ceph for roughly three reasons:

  • Ceph has matured steadily over years of development;
  • Ceph is easy to install on Ubuntu 14.04.x (apt-get is all it takes), and even without any tuning (tuning requires a deep understanding of how Ceph works) its performance basically meets our needs;
  • Ceph offers object storage, block storage and a filesystem interface, although here we probably only need block storage.

Even so, integrating Ceph with K8s still means stepping into a few pits, and we will walk through them in detail below.

I. Environment and Prerequisites

We again use two Alibaba Cloud ECS nodes, with OS and kernel version: Ubuntu 14.04.4 LTS (GNU/Linux 3.19.0-70-generic x86_64).

For Ceph we use the latest Ceph LTS release in the current Ubuntu 14.04 source: JEWEL 10.2.3.

The Kubernetes version is 1.3.7, from our previous installation.

II. How the Ceph Installation Works

A Ceph distributed storage cluster is made up of several components: Ceph Monitor, Ceph OSD and Ceph MDS. If you only use object storage and block storage, MDS is not required (we will not install MDS this time); it is only needed when you want CephFS.

Ceph's installation model is somewhat similar to k8s: a deploy node remotely drives the other nodes to create, prepare and activate the Ceph components on each of them. The official manual illustrates it as follows:

[Figure: the ceph-deploy admin/deploy node driving the other Ceph nodes, from the official manual]

Mapped onto our actual environment, my installation plan is:

admin-node, deploy-node(ceph-deploy):10.47.136.60  iZ25cn4xxnvZ
mon.node1,(mds.node1): 10.47.136.60  iZ25cn4xxnvZ
osd.0: 10.47.136.60   iZ25cn4xxnvZ
osd.1: 10.46.181.146 iZ25mjza4msZ

In effect, the two Aliyun ECS nodes take on all of the roles above. But hostnames like iZ25cn4xxnvZ are painfully unreadable; in the long run it is better to switch to simple names like node1 and node2. By editing /etc/hostname and /etc/hosts on each ECS instance, we rename iZ25cn4xxnvZ to node1 and iZ25mjza4msZ to node2:

10.47.136.60 (node1):

# cat /etc/hostname
node1

# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1    localhost.localdomain    localhost

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.47.136.60 admin
10.47.136.60 node1
10.47.136.60 iZ25cn4xxnvZ
10.46.181.146 node2

----------------------------------
10.46.181.146 (node2):

# cat /etc/hostname
node2

# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1    localhost.localdomain    localhost

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.46.181.146  node2
10.46.181.146  iZ25mjza4msZ
10.47.136.60  node1

The environment plan above therefore becomes:

admin-node, deploy-node(ceph-deploy):node1 10.47.136.60
mon.node1, (mds.node1) : node1  10.47.136.60
osd.0:  node1 10.47.136.60
osd.1:  node2 10.46.181.146

III. Ceph Installation Steps

1. Install ceph-deploy

Ceph provides a one-stop installation tool, ceph-deploy, to assist with installing a Ceph cluster. On the deploy node, the first thing to install is ceph-deploy itself. The ceph-deploy in the official Ubuntu 14.04 source is version 1.4.0, which is rather old, so we add the Ceph source and install the latest ceph-deploy:

# wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
OK

# echo deb https://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
deb https://download.ceph.com/debian-jewel/ trusty main

#apt-get update
... ...

# apt-get install ceph-deploy
Reading package lists... Done
Building dependency tree
Reading state information... Done
.... ...
The following NEW packages will be installed:
  ceph-deploy
0 upgraded, 1 newly installed, 0 to remove and 105 not upgraded.
Need to get 96.4 kB of archives.
After this operation, 622 kB of additional disk space will be used.
Get:1 https://download.ceph.com/debian-jewel/ trusty/main ceph-deploy all 1.5.35 [96.4 kB]
Fetched 96.4 kB in 1s (53.2 kB/s)
Selecting previously unselected package ceph-deploy.
(Reading database ... 153022 files and directories currently installed.)
Preparing to unpack .../ceph-deploy_1.5.35_all.deb ...
Unpacking ceph-deploy (1.5.35) ...
Setting up ceph-deploy (1.5.35) ...

Note: ceph-deploy only needs to be installed on the admin/deploy node.

2. Preliminary setup

As with installing k8s, before ceph-deploy runs the actual installation, make sure NTP is enabled on every Ceph node. It is also advisable to create a dedicated installation account on each node, the account ceph-deploy will use when it logs in over ssh. This account has two constraints:

  • it must have sudo privileges;
  • it must be able to run sudo without being prompted for a password.

We name this account cephd. We need to create a cephd user on every ceph node (including the admin/deploy node) and add it to the sudo group.

Run the following on every node:

useradd -d /home/cephd -m cephd
passwd cephd

Grant sudo privileges:
echo "cephd ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephd
sudo chmod 0440 /etc/sudoers.d/cephd

On the admin node (deploy node), log in as cephd and set up passwordless ssh from the deploy node to each of the other nodes, leaving the passphrase empty:

Run on the deploy node:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cephd/.ssh/id_rsa):
Created directory '/home/cephd/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cephd/.ssh/id_rsa.
Your public key has been saved in /home/cephd/.ssh/id_rsa.pub.
The key fingerprint is:
....

Copy the deploy node's public key to the other nodes:

$ ssh-copy-id cephd@node1
The authenticity of host 'node1 (10.47.136.60)' can't be established.
ECDSA key fingerprint is d2:69:e2:3a:3e:4c:6b:80:15:30:17:8e:df:3b:62:1f.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephd@node1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'cephd@node1'"
and check to make sure that only the key(s) you wanted were added.

Likewise, run ssh-copy-id cephd@node2. Once done, test the passwordless login:

$ ssh node1
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.19.0-70-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
New release '16.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

Welcome to aliyun Elastic Compute Service!

Finally, create and edit ~/.ssh/config on the deploy node. This step is recommended by the official Ceph docs; it saves you from having to pass the --username {username} option every time you run ceph-deploy.

//~/.ssh/config
Host node1
   Hostname node1
   User cephd
Host node2
   Hostname node2
   User cephd
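Before moving on, it is worth a quick sanity check from the deploy node (as cephd) that both the key-based login and the passwordless sudo behave as expected; a minimal check might look like this, with neither command prompting for a password:

$ ssh node2 hostname          # should print: node2
$ ssh node2 sudo whoami       # should print: root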

3. Installing Ceph

This part follows the manual deployment section of the official Ceph documentation.

If Ceph has been installed before, you can first run the following commands to get a clean environment:

ceph-deploy purgedata node1 node2
ceph-deploy forgetkeys
ceph-deploy purge node1 node2

Now we can install Ceph from scratch. On the deploy node, create a cephinstall directory and run the subsequent steps from inside it.
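Nothing fancy is needed for that working directory; as the cephd user on the deploy node it is simply:

$ mkdir ~/cephinstall && cd ~/cephinstall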

First we create a ceph cluster, which is done by running ceph-deploy new {initial-monitor-node(s)}. According to the plan above, our ceph monitor node is node1, so we run the following command to create a cluster named ceph:

$ ceph-deploy new node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephd/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.35): /usr/bin/ceph-deploy new node1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f71d2051938>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f71d19f5710>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['node1']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /sbin/initctl version
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /bin/ip link show
[node1][INFO  ] Running command: sudo /bin/ip addr show
[node1][DEBUG ] IP addresses found: [u'101.201.78.51', u'192.168.16.1', u'10.47.136.60', u'172.16.99.0', u'172.16.99.1']
[ceph_deploy.new][DEBUG ] Resolving host node1
[ceph_deploy.new][DEBUG ] Monitor node1 at 10.47.136.60
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.47.136.60']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

After the new command finishes, ceph-deploy creates a few helper files in the current directory:

# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

$ cat ceph.conf
[global]
fsid = f5166c78-e3b6-4fef-b9e7-1ecf7382fd93
mon_initial_members = node1
mon_host = 10.47.136.60
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Since we only have two OSD nodes, we need to make a small adjustment to ceph.conf before installing any further.

Under the [global] section, add the following line:
osd pool default size = 2

Save ceph.conf and exit. Next, run the following command to install the Ceph binary packages needed at runtime on node1 and node2:

# ceph-deploy install node1 node2
.... ...
[node2][INFO  ] Running command: sudo ceph --version
[node2][DEBUG ] ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)

During this step ceph-deploy logs in to each node over SSH, runs apt-get update and installs the various Ceph packages. It may take a while (depending on your network), so please be patient.

4. Initializing the ceph monitor node

With the Ceph binaries in place, the first thing to bring up is the cluster's monitor node. In the cephinstall working directory on the deploy node, run:

# ceph-deploy mon create-initial

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.35): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f0f7ea2fe60>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7f0f7ee93de8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1
[ceph_deploy.mon][DEBUG ] detecting platform for host node1...
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
....

[iZ25cn4xxnvZ][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.iZ25cn4xxnvZ.asok mon_status
[ceph_deploy.mon][INFO  ] mon.iZ25cn4xxnvZ monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpP_SmXX
[iZ25cn4xxnvZ][DEBUG ] connected to host: iZ25cn4xxnvZ
[iZ25cn4xxnvZ][DEBUG ] detect platform information from remote host
[iZ25cn4xxnvZ][DEBUG ] detect machine type
[iZ25cn4xxnvZ][DEBUG ] find the location of an executable
[iZ25cn4xxnvZ][INFO  ] Running command: /sbin/initctl version
[iZ25cn4xxnvZ][DEBUG ] get remote short hostname
[iZ25cn4xxnvZ][DEBUG ] fetch remote file
[iZ25cn4xxnvZ][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.iZ25cn4xxnvZ.asok mon_status
[iZ25cn4xxnvZ][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-iZ25cn4xxnvZ/keyring auth get-or-create client.admin osd allow * mds allow * mon allow *
[iZ25cn4xxnvZ][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-iZ25cn4xxnvZ/keyring auth get-or-create client.bootstrap-mds mon allow profile bootstrap-mds
[iZ25cn4xxnvZ][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-iZ25cn4xxnvZ/keyring auth get-or-create client.bootstrap-osd mon allow profile bootstrap-osd
[iZ25cn4xxnvZ][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-iZ25cn4xxnvZ/keyring auth get-or-create client.bootstrap-rgw mon allow profile bootstrap-rgw
... ...
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpP_SmXX

This step went smoothly. After the command completes we can see a few changes:

Several *.keyring files now sit in the current directory; they are needed for secure access between the Ceph components:

# ls -l
total 216
-rw------- 1 root root     71 Nov  3 17:24 ceph.bootstrap-mds.keyring
-rw------- 1 root root     71 Nov  3 17:25 ceph.bootstrap-osd.keyring
-rw------- 1 root root     71 Nov  3 17:25 ceph.bootstrap-rgw.keyring
-rw------- 1 root root     63 Nov  3 17:24 ceph.client.admin.keyring
-rw-r--r-- 1 root root    242 Nov  3 16:40 ceph.conf
-rw-r--r-- 1 root root 192336 Nov  3 17:25 ceph-deploy-ceph.log
-rw------- 1 root root     73 Nov  3 16:28 ceph.mon.keyring
-rw-r--r-- 1 root root   1645 Oct 16  2015 release.asc

On node1 (the monitor node) we can see that ceph-mon is up and running:

cephd@node1:~/cephinstall$ ps -ef|grep ceph
ceph     32326     1  0 14:19 ?        00:00:00 /usr/bin/ceph-mon --cluster=ceph -i node1 -f --setuser ceph --setgroup ceph

If you need to stop ceph-mon manually, you can use the stop ceph-mon-all command.
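Since Ubuntu 14.04 manages the Ceph daemons with Upstart, stopping and restarting the monitor by hand looks roughly like this (a sketch; the id is the short hostname the monitor was created with):

$ sudo stop ceph-mon-all
$ sudo start ceph-mon id=node1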

5. Preparing the ceph OSD nodes

At this point ceph-mon is running successfully; the only hurdle left is the OSDs. Bringing up an OSD node takes two steps: prepare and activate. OSD nodes are where the data actually lives, so ceph-osd should be given dedicated storage, normally a separate disk. Our environment does not have one, so we create a directory on the local disk for each OSD instead.

Run on the deploy node:

ssh node1
sudo mkdir /var/local/osd0
exit

ssh node2
sudo mkdir /var/local/osd1
exit

Next we can run the prepare step, which creates in the osd0 and osd1 directories the files needed later for activation and for the running OSDs:

cephd@node1:~/cephinstall$ ceph-deploy osd prepare node1:/var/local/osd0 node2:/var/local/osd1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephd/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.35): /usr/bin/ceph-deploy osd prepare node1:/var/local/osd0 node2:/var/local/osd1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('node1', '/var/local/osd0', None), ('node2', '/var/local/osd1', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f072603e8c0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f0726492d70>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/var/local/osd0: node2:/var/local/osd1:
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /sbin/initctl version
[node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /var/local/osd0 journal None activate False
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /var/local/osd0
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node1][WARNIN] populate_data_path: Preparing osd data dir /var/local/osd0
[node1][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/local/osd0/ceph_fsid.782.tmp
[node1][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/local/osd0/fsid.782.tmp
[node1][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/local/osd0/magic.782.tmp
[node1][INFO  ] checking OSD status...
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node1 is now ready for osd use.
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2
... ...
[node2][INFO  ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node2 is now ready for osd use.

prepare does not start the ceph OSDs; that is activate's job.

6. Activating the ceph OSD nodes

Next, we activate each OSD node:

$ ceph-deploy osd activate node1:/var/local/osd0 node2:/var/local/osd1

... ...
[node1][WARNIN] got monmap epoch 1
[node1][WARNIN] command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /var/local/osd0/activate.monmap --osd-data /var/local/osd0 --osd-journal /var/local/osd0/journal --osd-uuid 6def4f7f-4f37-43a5-8699-5c6ab608c89c --keyring /var/local/osd0/keyring --setuser ceph --setgroup ceph
[node1][WARNIN] Traceback (most recent call last):
[node1][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[node1][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5011, in run
[node1][WARNIN]     main(sys.argv[1:])
[node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4962, in main
[node1][WARNIN]     args.func(args)
[node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3324, in main_activate
[node1][WARNIN]     init=args.mark_init,
[node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3144, in activate_dir
[node1][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3249, in activate
[node1][WARNIN]     keyring=keyring,
[node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2742, in mkfs
[node1][WARNIN]     '--setgroup', get_ceph_group(),
[node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2689, in ceph_osd_mkfs
[node1][WARNIN]     raise Error('%s failed : %s' % (str(arguments), error))
[node1][WARNIN] ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', '0', '--monmap', '/var/local/osd0/activate.monmap', '--osd-data', '/var/local/osd0', '--osd-journal', '/var/local/osd0/journal', '--osd-uuid', '6def4f7f-4f37-43a5-8699-5c6ab608c89c', '--keyring', '/var/local/osd0/keyring', '--setuser', 'ceph', '--setgroup', 'ceph'] failed : 2016-11-04 14:25:40.325009 7fd1aa73f800 -1 filestore(/var/local/osd0) mkfs: write_version_stamp() failed: (13) Permission denied
[node1][WARNIN] 2016-11-04 14:25:40.325032 7fd1aa73f800 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13
[node1][WARNIN] 2016-11-04 14:25:40.325075 7fd1aa73f800 -1  ** ERROR: error creating empty object store in /var/local/osd0: (13) Permission denied
[node1][WARNIN]
[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init upstart --mount /var/local/osd0

The activation failed: activating the very first node already produced the error log above. The meaning of the error is obvious enough: a permissions problem.

ceph-deploy tries to start ceph-osd on node1 as ceph:ceph, but the permissions on /var/local/osd0 look like this:

$ ls -l /var/local
drwxr-sr-x 2 root staff 4096 Nov  4 14:25 osd0

osd0 is owned by root, so a ceph-osd process started as the ceph user naturally has no permission to create files and write data under /var/local/osd0. Plenty of people have raised this in the Ceph issue tracker, and a temporary fix has been suggested:

Give osd0 and osd1 to ceph:ceph:

node1:
sudo chown -R ceph:ceph /var/local/osd0

node2:
sudo chown -R ceph:ceph /var/local/osd1

With the permissions fixed, we run activate again:

$ ceph-deploy osd activate node1:/var/local/osd0 node2:/var/local/osd1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephd/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.35): /usr/bin/ceph-deploy osd activate node1:/var/local/osd0 node2:/var/local/osd1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f3c90c678c0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f3c910bbd70>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('node1', '/var/local/osd0', None), ('node2', '/var/local/osd1', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node1:/var/local/osd0: node2:/var/local/osd1:
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /sbin/initctl version
[node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host node1 disk /var/local/osd0
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init upstart --mount /var/local/osd0
[node1][WARNIN] main_activate: path = /var/local/osd0
[node1][WARNIN] activate: Cluster uuid is f5166c78-e3b6-4fef-b9e7-1ecf7382fd93
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node1][WARNIN] activate: Cluster name is ceph
[node1][WARNIN] activate: OSD uuid is 6def4f7f-4f37-43a5-8699-5c6ab608c89c
[node1][WARNIN] activate: OSD id is 0
[node1][WARNIN] activate: Initializing OSD...
[node1][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/local/osd0/activate.monmap
[node1][WARNIN] got monmap epoch 1
[node1][WARNIN] command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /var/local/osd0/activate.monmap --osd-data /var/local/osd0 --osd-journal /var/local/osd0/journal --osd-uuid 6def4f7f-4f37-43a5-8699-5c6ab608c89c --keyring /var/local/osd0/keyring --setuser ceph --setgroup ceph
[node1][WARNIN] activate: Marking with init system upstart
[node1][WARNIN] activate: Authorizing OSD key...
[node1][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.0 -i /var/local/osd0/keyring osd allow * mon allow profile osd
[node1][WARNIN] added key for osd.0
[node1][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/local/osd0/active.4616.tmp
[node1][WARNIN] activate: ceph osd.0 data dir is ready at /var/local/osd0
[node1][WARNIN] activate_dir: Creating symlink /var/lib/ceph/osd/ceph-0 -> /var/local/osd0
[node1][WARNIN] start_daemon: Starting ceph osd.0...
[node1][WARNIN] command_check_call: Running command: /sbin/initctl emit --no-wait -- ceph-osd cluster=ceph id=0
[node1][INFO  ] checking OSD status...
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[node1][WARNIN] there is 1 OSD down
[node1][WARNIN] there is 1 OSD out

[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[node2][INFO  ] Running command: sudo /sbin/initctl version
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host node2 disk /var/local/osd1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[node2][DEBUG ] find the location of an executable
[node2][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init upstart --mount /var/local/osd1
[node2][WARNIN] main_activate: path = /var/local/osd1
[node2][WARNIN] activate: Cluster uuid is f5166c78-e3b6-4fef-b9e7-1ecf7382fd93
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node2][WARNIN] activate: Cluster name is ceph
[node2][WARNIN] activate: OSD uuid is 4733f683-0376-4708-86a6-818af987ade2
[node2][WARNIN] allocate_osd_id: Allocating OSD id...
[node2][WARNIN] command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 4733f683-0376-4708-86a6-818af987ade2
[node2][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/local/osd1/whoami.27470.tmp
[node2][WARNIN] activate: OSD id is 1
[node2][WARNIN] activate: Initializing OSD...
[node2][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/local/osd1/activate.monmap
[node2][WARNIN] got monmap epoch 1
[node2][WARNIN] command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 1 --monmap /var/local/osd1/activate.monmap --osd-data /var/local/osd1 --osd-journal /var/local/osd1/journal --osd-uuid 4733f683-0376-4708-86a6-818af987ade2 --keyring /var/local/osd1/keyring --setuser ceph --setgroup ceph
[node2][WARNIN] activate: Marking with init system upstart
[node2][WARNIN] activate: Authorizing OSD key...
[node2][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.1 -i /var/local/osd1/keyring osd allow * mon allow profile osd
[node2][WARNIN] added key for osd.1
[node2][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/local/osd1/active.27470.tmp
[node2][WARNIN] activate: ceph osd.1 data dir is ready at /var/local/osd1
[node2][WARNIN] activate_dir: Creating symlink /var/lib/ceph/osd/ceph-1 -> /var/local/osd1
[node2][WARNIN] start_daemon: Starting ceph osd.1...
[node2][WARNIN] command_check_call: Running command: /sbin/initctl emit --no-wait -- ceph-osd cluster=ceph id=1
[node2][INFO  ] checking OSD status...
[node2][DEBUG ] find the location of an executable
[node2][INFO  ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json

No errors this time! But are the OSDs really running? We still need to confirm.

First we use the ceph-deploy admin command to push the *.keyring files to every node, so that the ceph command can connect to the monitor from any of them.

Note: before running ceph-deploy admin, the following entry has to be added to /etc/hosts on the deploy node:

10.47.136.60 admin

Run ceph-deploy admin:

$ ceph-deploy admin admin node1 node2
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephd/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.35): /usr/bin/ceph-deploy admin admin node1 node2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f072ee3b758>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['admin', 'node1', 'node2']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f072f6cf5f0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to admin
[admin][DEBUG ] connection detected need for sudo
[admin][DEBUG ] connected to host: admin
[admin][DEBUG ] detect platform information from remote host
[admin][DEBUG ] detect machine type
[admin][DEBUG ] find the location of an executable
[admin][INFO  ] Running command: sudo /sbin/initctl version
[admin][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node1
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /sbin/initctl version
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node2
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[node2][INFO  ] Running command: sudo /sbin/initctl version
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Next, check the state of the OSD nodes in the cluster:

$ ceph osd tree
ID WEIGHT  TYPE NAME             UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.07660 root default
-2 0.03830     host node1
 0 0.03830         osd.0            down        0          1.00000
-3 0.03830     host iZ25mjza4msZ
 1 0.03830         osd.1            down        0          1.00000

Sure enough, both OSD nodes are down; neither managed to start. So where is the problem?

Let's look at the log on node1: /var/log/ceph/ceph-osd.0.log:

2016-11-04 15:33:17.088971 7f568d6db800  0 pidfile_write: ignore empty --pid-file
2016-11-04 15:33:17.102052 7f568d6db800  0 filestore(/var/lib/ceph/osd/ceph-0) backend generic (magic 0xef53)
2016-11-04 15:33:17.102071 7f568d6db800 -1 filestore(/var/lib/ceph/osd/ceph-0) WARNING: max attr value size (1024) is smaller than osd_max_object_name_len (2048).  Your backend filesystem appears to not support attrs large enough to handle the configured max rados name size.  You may get unexpected ENAMETOOLONG errors on rados operations or buggy behavior
2016-11-04 15:33:17.102410 7f568d6db800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2016-11-04 15:33:17.102425 7f568d6db800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option
2016-11-04 15:33:17.102445 7f568d6db800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: splice is supported
2016-11-04 15:33:17.119261 7f568d6db800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2016-11-04 15:33:17.127630 7f568d6db800  0 filestore(/var/lib/ceph/osd/ceph-0) limited size xattrs
2016-11-04 15:33:17.128125 7f568d6db800  1 leveldb: Recovering log #38
2016-11-04 15:33:17.136595 7f568d6db800  1 leveldb: Delete type=3 #37
2016-11-04 15:33:17.136656 7f568d6db800  1 leveldb: Delete type=0 #38
2016-11-04 15:33:17.136845 7f568d6db800  0 filestore(/var/lib/ceph/osd/ceph-0) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2016-11-04 15:33:17.137064 7f568d6db800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2016-11-04 15:33:17.137068 7f568d6db800  1 journal _open /var/lib/ceph/osd/ceph-0/journal fd 18: 5368709120 bytes, block size 4096 bytes, directio = 1, aio = 0
2016-11-04 15:33:17.137897 7f568d6db800  1 journal _open /var/lib/ceph/osd/ceph-0/journal fd 18: 5368709120 bytes, block size 4096 bytes, directio = 1, aio = 0
2016-11-04 15:33:17.138243 7f568d6db800  1 filestore(/var/lib/ceph/osd/ceph-0) upgrade
2016-11-04 15:33:17.138453 7f568d6db800 -1 osd.0 0 backend (filestore) is unable to support max object name[space] len
2016-11-04 15:33:17.138481 7f568d6db800 -1 osd.0 0    osd max object name len = 2048
2016-11-04 15:33:17.138485 7f568d6db800 -1 osd.0 0    osd max object namespace len = 256
2016-11-04 15:33:17.138488 7f568d6db800 -1 osd.0 0 (36) File name too long
2016-11-04 15:33:17.138895 7f568d6db800  1 journal close /var/lib/ceph/osd/ceph-0/journal
2016-11-04 15:33:17.140041 7f568d6db800 -1  ** ERROR: osd init failed: (36) File name too long

And indeed there is an error:

2016-11-04 15:33:17.138481 7f568d6db800 -1 osd.0 0    osd max object name len = 2048
2016-11-04 15:33:17.138485 7f568d6db800 -1 osd.0 0    osd max object namespace len = 256
2016-11-04 15:33:17.138488 7f568d6db800 -1 osd.0 0 (36) File name too long
2016-11-04 15:33:17.138895 7f568d6db800  1 journal close /var/lib/ceph/osd/ceph-0/journal
2016-11-04 15:33:17.140041 7f568d6db800 -1  ** ERROR: osd init failed: (36) File name too long

Searching the official Ceph documentation further, the filesystem recommendations doc mentions that ext4 is not recommended as Ceph's backing filesystem; if you use ext4 anyway, you should add the following settings to ceph.conf:

osd max object name len = 256
osd max object namespace len = 64
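Rather than editing /etc/ceph/ceph.conf by hand on every node, one way to roll the change out from the deploy node is to append the two lines to the ceph.conf in our cephinstall directory, push it with ceph-deploy and re-run activate (a sketch; --overwrite-conf is needed because the nodes already have a conf file in place):

$ cat >> ceph.conf <<EOF
osd max object name len = 256
osd max object namespace len = 64
EOF
$ ceph-deploy --overwrite-conf config push node1 node2
$ ceph-deploy osd activate node1:/var/local/osd0 node2:/var/local/osd1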

The config had already been distributed to every node, so the two lines above must end up in /etc/ceph/ceph.conf on each of them (edited by hand on every node, or pushed from the deploy node as sketched above), and the OSD nodes then need to be activated again; I won't repeat that step here. After re-activating, let's look at the ceph osd status:

$ ceph osd tree
ID WEIGHT  TYPE NAME             UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.07660 root default
-2 0.03830     host node1
 0 0.03830         osd.0              up  1.00000          1.00000
-3 0.03830     host iZ25mjza4msZ
 1 0.03830         osd.1              up  1.00000          1.00000

 $ ceph -s
    cluster f5166c78-e3b6-4fef-b9e7-1ecf7382fd93
     health HEALTH_OK
     monmap e1: 1 mons at {node1=10.47.136.60:6789/0}
            election epoch 3, quorum 0 node1
     osdmap e11: 2 osds: 2 up, 2 in
            flags sortbitwise
      pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
            37834 MB used, 38412 MB / 80374 MB avail
                  64 active+clean

$ !ps
ps -ef|grep ceph
ceph       17139       1  0 16:20 ?        00:00:00 /usr/bin/ceph-osd --cluster=ceph -i 0 -f --setuser ceph --setgroup ceph

We can see that ceph-osd on the OSD nodes starts normally and the cluster state is active+clean. With that, the Ceph cluster installation is done (we do not need the Ceph MDS component for now).

IV. Creating a Pod That Uses Ceph RBD as Its Backing Volume

In this section we integrate Ceph RBD with Kubernetes. The examples/volumes/rbd directory in the official Kubernetes source tree contains an example of using ceph rbd as a kubernetes pod volume; let's try to get it running.

The example ships two pod spec files: rbd.json and rbd-with-secret.json. Since our Ceph install kept the default authentication protocol cephx (the Ceph authentication protocol) in ceph.conf:

auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

we will therefore use the rbd-with-secret.json pod spec to create the example Pod. For brevity, only the volumes part of the json file is quoted here:

// rbd-with-secret.json from the example

{
    ... ...
        "volumes": [
            {
                "name": "rbdpd",
                "rbd": {
                    "monitors": [
                           "10.16.154.78:6789",
                           "10.16.154.82:6789",
                           "10.16.154.83:6789"
                                 ],
                    "pool": "kube",
                    "image": "foo",
                    "user": "admin",
                    "secretRef": {
                           "name": "ceph-secret"
                                         },
                    "fsType": "ext4",
                    "readOnly": true
                }
            }
        ]
    }
}

The volumes section carries the information tied closely to ceph rbd; the rough meaning of each field is:

name: the volume's name; nothing special, it is what it sounds like.
rbd.monitors: the monitor components of the ceph cluster mentioned earlier; list the address of every monitor in the cluster here;
rbd.pool: the Ceph pool name, used to logically partition the objects stored in Ceph. The default pool is "rbd";
rbd.image: the Ceph block device image file;
rbd.user: the user name the ceph client uses to access the ceph storage cluster. Ceph has its own user management system; a user is usually written as TYPE.ID, e.g. client.admin (which should remind you of the file ceph.client.admin.keyring). client is a type and admin is the user; in practice the type is almost always client.
secretRef: the name of the referenced k8s secret object.

Of the fields above there are two values we cannot supply yet: rbd.image and secretRef; time to fill in the blanks. Working as root, we create a k8s-cephrbd working directory; first we need to use Ceph's rbd tool to create the image the Pod will use:

# rbd create foo -s 1024

# rbd list
foo

We create a 1024 MB ceph image named foo in the rbd pool (no pool name was given in the command above, and by default images are created in the rbd pool); the output of rbd list tells us the foo image was created successfully. Next, we try to map the foo image into the kernel and format it:

root@node1:~# rbd map foo
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (6) No such device or address

The map operation fails. The error message does give us a clue, though: "RBD image feature set mismatch". Newer Ceph releases enable a number of features on images by default, which rbd info shows:

# rbd info foo
rbd image 'foo':
    size 1024 MB in 256 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.10612ae8944a
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:

The foo image has layering, exclusive-lock, object-map, fast-diff and deep-flatten enabled. Unfortunately, the 3.19 kernel on my Ubuntu 14.04 only supports the layering feature and none of the others, so we have to disable them by hand:

# rbd feature disable foo exclusive-lock, object-map, fast-diff, deep-flatten
root@node1:/var/log/ceph# rbd info foo
rbd image 'foo':
    size 1024 MB in 256 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.10612ae8944a
    format: 2
    features: layering
    flags:

Disabling features like this every time is quite a hassle; the once-and-for-all fix is to add the following line to /etc/ceph/ceph.conf on every cluster node:

rbd_default_features = 1 # the integer value of the bit that corresponds to layering alone
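Appending it on each node is a one-liner (run on node1 and node2):

$ echo "rbd_default_features = 1" | sudo tee -a /etc/ceph/ceph.conf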

With that set, check how the configuration changed with:

# ceph --show-config|grep rbd|grep features
rbd_default_features = 1

This image-features issue is covered in more detail in the post by zphj1987.

Let's map the foo image again:

# rbd map foo
/dev/rbd0

# ls -l /dev/rbd0
brw-rw---- 1 root disk 251, 0 Nov  5 10:33 /dev/rbd0

Once mapped, we can format it just like any empty image, here as ext4 (this formatting step is actually unnecessary, as you will see in a later section):

# mkfs.ext4 /dev/rbd0
mke2fs 1.42.9 (4-Feb-2014)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

Next we create the ceph-secret k8s Secret object, which the k8s volume plugin uses to access the Ceph cluster.

Get the keyring value of client.admin and base64-encode it:

# ceph auth get-key client.admin
AQBiKBxYuPXiJRAAsupnTBsURoWzb0k00oM3iQ==

# echo "AQBiKBxYuPXiJRAAsupnTBsURoWzb0k00oM3iQ=="|base64
QVFCaUtCeFl1UFhpSlJBQXN1cG5UQnNVUm9XemIwazAwb00zaVE9PQo=
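As an aside, the key can also be piped straight into base64, skipping the copy/paste step (note that the echo form above also encodes the trailing newline that echo appends):

$ ceph auth get-key client.admin | base64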

Create a ceph-secret.yaml file under k8s-cephrbd; the key field under data is the encoded value obtained above:

//ceph-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFCaUtCeFl1UFhpSlJBQXN1cG5UQnNVUm9XemIwazAwb00zaVE9PQo=

Create ceph-secret:

# kubectl create -f ceph-secret.yaml
secret "ceph-secret" created

# kubectl get secret
NAME                  TYPE                                  DATA      AGE
ceph-secret           Opaque                                1         16s

With that, the complete rbd-with-secret.json looks like this:

{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "rbd2"
    },
    "spec": {
        "containers": [
            {
                "name": "rbd-rw",
                "image": "kubernetes/pause",
                "volumeMounts": [
                    {
                        "mountPath": "/mnt/rbd",
                        "name": "rbdpd"
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "rbdpd",
                "rbd": {
                    "monitors": [
                        "10.47.136.60:6789"
                                 ],
                    "pool": "rbd",
                    "image": "foo",
                    "user": "admin",
                    "secretRef": {
                        "name": "ceph-secret"
                        },
                    "fsType": "ext4",
                    "readOnly": true
                }
            }
        ]
    }
}

Based on this Pod spec, create the pod that uses ceph rbd as its backing storage:

# kubectl create -f rbd-with-secret.json
pod "rbd2" created

# kubectl get pod
NAME                        READY     STATUS    RESTARTS   AGE
rbd2                        1/1       Running   0          16s

# rbd showmapped
id pool image snap device
0  rbd  foo   -    /dev/rbd0

# mount
... ...
/dev/rbd0 on /var/lib/kubelet/plugins/kubernetes.io/rbd/rbd/rbd-image-foo type ext4 (rw)

In my environment the pod was actually scheduled onto the other k8s node, node2:

# docker ps
CONTAINER ID        IMAGE                                          COMMAND                  CREATED             STATUS              PORTS                    NAMES
32f92243f911        kubernetes/pause                               "/pause"                 2 minutes ago       Up 2 minutes                                 k8s_rbd-rw.c1dc309e_rbd2_default_6b6541b9-a306-11e6-ba01-00163e1625a9_a6bb1b20

#docker inspect 32f92243f911
... ...
"Mounts": [
            {
                "Source": "/var/lib/kubelet/pods/6b6541b9-a306-11e6-ba01-00163e1625a9/volumes/kubernetes.io~secret/default-token-40z0x",
                "Destination": "/var/run/secrets/kubernetes.io/serviceaccount",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/kubelet/pods/6b6541b9-a306-11e6-ba01-00163e1625a9/etc-hosts",
                "Destination": "/etc/hosts",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/kubelet/pods/6b6541b9-a306-11e6-ba01-00163e1625a9/containers/rbd-rw/a6bb1b20",
                "Destination": "/dev/termination-log",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/kubelet/pods/6b6541b9-a306-11e6-ba01-00163e1625a9/volumes/kubernetes.io~rbd/rbdpd",
                "Destination": "/mnt/rbd",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
... ...

V. Kubernetes Persistent Volumes and Persistent Volume Claims

The previous section walked through combining Kubernetes volumes with Ceph RBD, but a k8s volume alone still cannot fully satisfy real production needs for persistent storage, because a k8s volume shares its lifetime with the pod: once the pod is deleted, the data in the volume is gone with it. So k8s also provides the Persistent Volume (PV) and Persistent Volume Claim (PVC) pair, which do what the name says: even if the pod mounting it is deleted, the PV remains and so does the data on it.

Since the groundwork has already been laid, here are just the steps for using a PV and PVC:

1. Creating the disk image

$ rbd create ceph-image -s 128 # only 128M here so the later format step is quick -- Demo-sized only.

# rbd create ceph-image -s 128
# rbd info rbd/ceph-image
rbd image 'ceph-image':
    size 128 MB in 32 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.37202ae8944a
    format: 2
    features: layering
    flags:

If you do not create ceph-image first, the Pod will run into errors later when it starts, for example it will sit in the ContainerCreating state forever:

# kubectl get pod
NAME                        READY     STATUS              RESTARTS   AGE
ceph-pod1                   0/1       ContainerCreating   0          13s

If that happens, check /var/log/upstart/kubelet.log, where you may find errors like these:

I1107 06:02:27.500247   22037 operation_executor.go:768] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/01d049c6-9430-11e6-ba01-00163e1625a9-default-token-40z0x" (spec.Name: "default-token-40z0x") pod "01d049c6-9430-11e6-ba01-00163e1625a9" (UID: "01d049c6-9430-11e6-ba01-00163e1625a9").
I1107 06:03:08.499628   22037 reconciler.go:294] MountVolume operation started for volume "kubernetes.io/rbd/ea848a49-a46b-11e6-ba01-00163e1625a9-ceph-pv" (spec.Name: "ceph-pv") to pod "ea848a49-a46b-11e6-ba01-00163e1625a9" (UID: "ea848a49-a46b-11e6-ba01-00163e1625a9").
E1107 06:03:09.532348   22037 disk_manager.go:56] failed to attach disk
E1107 06:03:09.532402   22037 rbd.go:228] rbd: failed to setup mount /var/lib/kubelet/pods/ea848a49-a46b-11e6-ba01-00163e1625a9/volumes/kubernetes.io~rbd/ceph-pv rbd: map failed exit status 2 rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (2) No such file or directory

2. Creating the PV

We reuse the ceph-secret object created earlier. The PV spec ceph-pv.yaml looks like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 10.47.136.60:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle

Create it:

# kubectl create -f ceph-pv.yaml
persistentvolume "ceph-pv" created

# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
ceph-pv   1Gi        RWO           Recycle         Available                       7s

3. Creating the PVC

A PVC is a Pod's claim on a PV; turning the claim into a resource of its own makes it easier to manage and to reuse across pods. Our PVC spec ceph-pvc.yaml is as follows:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Create it:

# kubectl create -f ceph-pvc.yaml
persistentvolumeclaim "ceph-claim" created

# kubectl get pvc
NAME         STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
ceph-claim   Bound     ceph-pv   1Gi        RWO           12s

4. Creating a pod that mounts the Ceph RBD

The pod spec ceph-pod1.yaml is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
  - name: ceph-busybox1
    image: busybox
    command: ["sleep", "600000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim

Create the pod:

# kubectl create -f ceph-pod1.yaml
pod "ceph-pod1" created

# kubectl get pod
NAME                        READY     STATUS              RESTARTS   AGE
ceph-pod1                   0/1       ContainerCreating   0          13s

The Pod is still in the ContainerCreating state. Creating a pod, especially one that mounts a PV, takes a little while, so be patient. In the meantime we can take a look at /var/log/upstart/kubelet.log:

I1107 11:44:38.768541   22037 mount_linux.go:272] `fsck` error fsck from util-linux 2.20.1

fsck.ext2: Bad magic number in super-block while trying to open /dev/rbd1
/dev/rbd1:
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

E1107 11:44:38.774080   22037 mount_linux.go:110] Mount failed: exit status 32
Mounting arguments: /dev/rbd1 /var/lib/kubelet/plugins/kubernetes.io/rbd/rbd/rbd-image-ceph-image ext4 [defaults]
Output: mount: wrong fs type, bad option, bad superblock on /dev/rbd1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

I1107 11:44:38.839148   22037 mount_linux.go:292] Disk "/dev/rbd1" appears to be unformatted, attempting to format as type: "ext4" with options: [-E lazy_itable_init=0,lazy_journal_init=0 -F /dev/rbd1]
I1107 11:44:39.152689   22037 mount_linux.go:297] Disk successfully formatted (mkfs): ext4 - /dev/rbd1 /var/lib/kubelet/plugins/kubernetes.io/rbd/rbd/rbd-image-ceph-image
I1107 11:44:39.220223   22037 operation_executor.go:768] MountVolume.SetUp succeeded for volume "kubernetes.io/rbd/811a57ee-a49c-11e6-ba01-00163e1625a9-ceph-pv" (spec.Name: "ceph-pv") pod "811a57ee-a49c-11e6-ba01-00163e1625a9" (UID: "811a57ee-a49c-11e6-ba01-00163e1625a9").

We can see that k8s discovered via fsck that the image is empty with no filesystem on it, so it formatted it as ext4 by default and then mounted it. After waiting a bit, ceph-pod1 is running:

# kubectl get pod
NAME                        READY     STATUS    RESTARTS   AGE
ceph-pod1                   1/1       Running   0          4m

# docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
f50bb8c31b0f        busybox                                               "sleep 600000"           4 hours ago         Up 4 hours                              k8s_ceph-busybox1.c0c0379f_ceph-pod1_default_811a57ee-a49c-11e6-ba01-00163e1625a9_9d910a29

# docker exec 574b8069e548 df -h
Filesystem                Size      Used Available Use% Mounted on
none                     39.2G     20.9G     16.3G  56% /
tmpfs                     1.9G         0      1.9G   0% /dev
tmpfs                     1.9G         0      1.9G   0% /sys/fs/cgroup
/dev/vda1                39.2G     20.9G     16.3G  56% /dev/termination-log
/dev/vda1                39.2G     20.9G     16.3G  56% /etc/resolv.conf
/dev/vda1                39.2G     20.9G     16.3G  56% /etc/hostname
/dev/vda1                39.2G     20.9G     16.3G  56% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
/dev/rbd1               120.0M      1.5M    109.5M   1% /usr/share/busybox
tmpfs                     1.9G     12.0K      1.9G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                     1.9G         0      1.9G   0% /proc/kcore
tmpfs                     1.9G         0      1.9G   0% /proc/timer_list
tmpfs                     1.9G         0      1.9G   0% /proc/timer_stats
tmpfs                     1.9G         0      1.9G   0% /proc/sched_debug

VI. A Simple Test

In this section we run a simple test of ceph rbd acting as a k8s PV. The test steps:

1) write data to the mounted ceph rbd from inside the container;
2) delete ceph-pod1;
3) recreate a pod using the same claim and check whether the data is still there.

First we write data into the ceph rbd volume mounted by ceph-pod1 with touch and vi: inside container f50bb8c31b0f we create /usr/share/busybox/hello-ceph.txt, write the single line "hello ceph" into it and save.

# docker exec -it f50bb8c31b0f touch /usr/share/busybox/hello-ceph.txt
# docker exec -it f50bb8c31b0f vi /usr/share/busybox/hello-ceph.txt
# docker exec -it f50bb8c31b0f cat /usr/share/busybox/hello-ceph.txt
hello ceph
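Driving vi through docker exec is a bit clumsy; a non-interactive way to write the same test file (against the same container) would be the following, where the second command should print "hello ceph":

# docker exec f50bb8c31b0f sh -c 'echo "hello ceph" > /usr/share/busybox/hello-ceph.txt'
# docker exec f50bb8c31b0f cat /usr/share/busybox/hello-ceph.txt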

Next, delete ceph-pod1:

# kubectl get pod
NAME                        READY     STATUS    RESTARTS   AGE
ceph-pod1                   1/1       Running   0          4h

# kubectl delete pod/ceph-pod1
pod "ceph-pod1" deleted

# kubectl get pod
NAME                        READY     STATUS        RESTARTS   AGE
ceph-pod1                   1/1       Terminating   0          4h

# kubectl get pv,pvc
NAME         CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                REASON    AGE
pv/ceph-pv   1Gi        RWO           Recycle         Bound     default/ceph-claim             4h
NAME             STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
pvc/ceph-claim   Bound     ceph-pv   1Gi        RWO           4h

Deleting ceph-pod1 takes a while, during which the pod stays in the "Terminating" state. Meanwhile, the pod's deletion has not affected the pv and pvc objects at all; they are both still there.

Finally, we create another pod that uses the same pvc. To avoid any "unnecessary" trouble, we write a new spec file named ceph-pod2.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod2
spec:
  containers:
  - name: ceph-busybox2
    image: busybox
    command: ["sleep", "600000"]
    volumeMounts:
    - name: ceph-vol2
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol2
    persistentVolumeClaim:
      claimName: ceph-claim

Create ceph-pod2:

# kubectl create -f ceph-pod2.yaml
pod "ceph-pod2" created

root@node1:~/k8stest/k8s-cephrbd# kubectl get pod
NAME                        READY     STATUS    RESTARTS   AGE
ceph-pod2                   1/1       Running   0          14s

root@node1:~/k8stest/k8s-cephrbd# docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
574b8069e548        busybox                                               "sleep 600000"           11 seconds ago      Up 10 seconds                           k8s_ceph-busybox2.c5e637a1_ceph-pod2_default_f4aeebd6-a4c3-11e6-ba01-00163e1625a9_fc94c0fe

Check whether the data is still there:

# docker exec -it 574b8069e548 cat /usr/share/busybox/hello-ceph.txt
hello ceph

The data is read back by ceph-pod2 completely intact!

VII. Wrapping Up

This is only the beginning of integrating k8s with Ceph; plenty more features and pits remain to be explored. I have noticed my posts getting longer and longer recently. The reason? My feeling is that the target systems keep getting bigger and more complex. Going deeper into K8s is really just the process of digging more pits for myself ^_^.

Me? I am either on my way to fill a pit, or already sitting in one :).

BTW, the references:
1. The official Ceph documentation
2. OpenShift's documentation on integrating K8s with Ceph RBD
3. The Persistent Volumes section of the official Kubernetes documentation
4. The post by blogger zphj1987
