An Anomaly Caused by Changing Node Hostnames in a Kubernetes Cluster

In addition to the Kubernetes 1.3.7 cluster we use in production, I also maintain a Kubernetes 1.5.1 test environment. It serves two purposes: validating various technical approaches, and keeping up with the latest Kubernetes developments. The anomaly described in this post occurred in that test cluster.

1. How It Started

A couple of days ago I set up Ceph in the Kubernetes test environment. To make installation with ceph-deploy easier, I used the hostnamectl command to change the long, unwieldy default hostnames assigned by Alibaba Cloud into short, more meaningful ones:

iZ25beglnhtZ -> yypdmaster
iz2ze39jeyizepdxhwqci6z -> yypdnode

Taking yypdmaster as an example, the change looked like this:

# hostnamectl --static set-hostname yypdmaster
# hostnamectl status
Static hostname: yypdmaster
Transient hostname: iZ25beglnhtZ
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 91aa4b8f2556de49e743dc2f53e8a5c4
           Boot ID: 5d0e642ebafa460086388da4177e488e
    Virtualization: kvm
  Operating System: Ubuntu 16.04.1 LTS
            Kernel: Linux 4.4.0-58-generic
      Architecture: x86-64

# cat /etc/hostname
yypdmaster

hostnamectl does not modify /etc/hosts, so I manually added an entry mapping yypdmaster to its IP in /etc/hosts:

xx.xx.xx.xx yypdmaster

After logging in again, the hostname status shows that the Transient hostname is gone and only the static hostname remains:

# hostnamectl status
   Static hostname: yypdmaster
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 91aa4b8f2556de49e743dc2f53e8a5c4
           Boot ID: 5d0e642ebafa460086388da4177e488e
    Virtualization: kvm
  Operating System: Ubuntu 16.04.1 LTS
            Kernel: Linux 4.4.0-58-generic
      Architecture: x86-64

The other host was renamed the same way. After the change, the whole k8s cluster kept working normally, so at first I assumed that renaming the hosts had no impact on the running cluster.

2. The Cluster "Crash"

Yesterday, while testing cross-node CephFS mounts, I found that running kubectl exec from yypdmaster against a pod on the other node no longer worked: the connection to port 10250 timed out. Worse, the error log showed that the k8s components on yypdmaster were trying to reach port 10250 on yypdnode, the port its kubelet listens on, via yypdnode's public IP. Because of Alibaba Cloud security-group rules that port is not reachable from the public network, so the timeout itself was expected. But why had the cluster been fine until now? Why did this suddenly appear? And why wasn't the internal IP being used?
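
kubectl exec is proxied by the apiserver to the kubelet on the target node (port 10250 by default), and the address the apiserver dials is taken from the addresses recorded on the Node object. A quick diagnostic sketch for checking what addresses are registered for a node (the exact output layout varies between versions):

# list nodes with their addresses, then show the Addresses block for yypdnode
kubectl get nodes -o wide
kubectl describe node yypdnode | grep -A 4 Addresses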

I tried restarting the kubelet on yypdnode, but it didn't seem to help. While I was still puzzling over it, I noticed the cluster appeared to have "crashed". Here is the pod listing at that moment:

# kubectl get pod --all-namespaces -o wide

NAMESPACE                    NAME                                    READY     STATUS             RESTARTS   AGE       IP             NODE
default                      ceph-pod2                               1/1       Unknown            0          26m       172.30.192.4   iz2ze39jeyizepdxhwqci6z
default                      ceph-pod2-with-secret                   1/1       Unknown            0          38m       172.30.192.2   iz2ze39jeyizepdxhwqci6z
default                      ceph-pod2-with-secret-on-master         1/1       Unknown            0          34m       172.30.0.51    iz25beglnhtz
default                      nginx-kit-3630450072-2c0jk              0/2       Pending            0          12m       <none>
default                      nginx-kit-3630450072-3n50m              2/2       Unknown            20         35d       172.30.0.44    iz25beglnhtz
default                      nginx-kit-3630450072-90v4q              0/2       Pending            0          12m       <none>
default                      nginx-kit-3630450072-j8qrk              2/2       Unknown            20         72d       172.30.0.47    iz25beglnhtz
kube-system                  dummy-2088944543-9382n                  1/1       Running            0          12m       xx.xx.xx.xx   yypdmaster
kube-system                  dummy-2088944543-93f4c                  1/1       Unknown            16         130d      xx.xx.xx.xx   iz25beglnhtz
kube-system                  elasticsearch-logging-v1-dhl35          1/1       Running            0          12m       172.30.192.6   yypdnode
kube-system                  elasticsearch-logging-v1-s3sbj          1/1       Unknown            9          35d       172.30.0.45    iz25beglnhtz
kube-system                  elasticsearch-logging-v1-t8wg0          1/1       Unknown            29         68d       172.30.0.43    iz25beglnhtz
kube-system                  elasticsearch-logging-v1-zdp19          1/1       Running            0          12m       172.30.0.3     yypdmaster
kube-system                  etcd-iz25beglnhtz                       1/1       Unknown            17         130d      xx.xx.xx.xx   iz25beglnhtz
kube-system                  etcd-yypdmaster                         1/1       Running            17         17m       xx.xx.xx.xx   yypdmaster
kube-system                  fluentd-es-v1.22-ggvv4                  1/1       NodeLost           24         68d       172.30.0.46    iz25beglnhtz
kube-system                  fluentd-es-v1.22-rj871                  1/1       Running            0          17m       172.30.0.1     yypdmaster
kube-system                  fluentd-es-v1.22-xn77x                  1/1       NodeLost           0          6d        172.30.192.0   iz2ze39jeyizepdxhwqci6z
kube-system                  fluentd-es-v1.22-z82rz                  1/1       Running            0          18m       172.30.192.5   yypdnode
kube-system                  kibana-logging-3746979809-dplzv         1/1       Running            0          12m       172.30.0.4     yypdmaster
kube-system                  kibana-logging-3746979809-lq9m3         1/1       Unknown            9          35d       172.30.0.49    iz25beglnhtz
kube-system                  kube-apiserver-iz25beglnhtz             1/1       Unknown            19         104d      xx.xx.xx.xx   iz25beglnhtz
kube-system                  kube-apiserver-yypdmaster               1/1       Running            19         17m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-controller-manager-iz25beglnhtz    1/1       Unknown            21         130d      xx.xx.xx.xx   iz25beglnhtz
kube-system                  kube-controller-manager-yypdmaster      1/1       Running            21         17m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-discovery-1769846148-wh1z4         1/1       Unknown            12         73d       xx.xx.xx.xx   iz25beglnhtz
kube-system                  kube-discovery-1769846148-z2v87         0/1       Pending            0          12m       <none>
kube-system                  kube-dns-2924299975-206tg               4/4       Unknown            129        130d      172.30.0.48    iz25beglnhtz
kube-system                  kube-dns-2924299975-g1kks               4/4       Running            0          12m       172.30.0.5     yypdmaster
kube-system                  kube-proxy-3z29k                        1/1       Running            0          18m       yy.yy.yy.yy    yypdnode
kube-system                  kube-proxy-kfzxv                        1/1       Running            0          17m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-proxy-n2xmf                        1/1       NodeLost           16         130d      xx.xx.xx.xx   iz25beglnhtz

Looking at this output, a few abnormalities stand out:

  • Unusual pod statuses: Unknown and NodeLost
  • The NODE column shows four nodes: yypdmaster, yypdnode, iz25beglnhtz and iz2ze39jeyizepdxhwqci6z

After waiting a while with no improvement, I restarted the kubelet on the master and the docker engine on both nodes, but the problem persisted.
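
The symptoms are consistent with each kubelet having re-registered itself under its new hostname after the rename, leaving the old Node objects behind as stale, NotReady entries whose pods then drifted into Unknown/NodeLost. A hedged sketch of how one might confirm this and, if confirmed, clean up the stale registrations (deleting a Node object only removes the registration, not the host):

# the old hostnames should show up as extra, NotReady nodes
kubectl get nodes

# remove the stale Node objects left over from before the rename
kubectl delete node iz25beglnhtz iz2ze39jeyizepdxhwqci6z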

Looking only at the pods in the Running state:

# kubectl get pod --all-namespaces -o wide|grep Running
kube-system                  dummy-2088944543-9382n                  1/1       Running            0          18m       xx.xx.xx.xx   yypdmaster
kube-system                  elasticsearch-logging-v1-dhl35          1/1       Running            0          18m       172.30.192.6   yypdnode
kube-system                  elasticsearch-logging-v1-zdp19          1/1       Running            0          18m       172.30.0.3     yypdmaster
kube-system                  etcd-yypdmaster                         1/1       Running            17         23m       xx.xx.xx.xx   yypdmaster
kube-system                  fluentd-es-v1.22-rj871                  1/1       Running            0          23m       172.30.0.1     yypdmaster
kube-system                  fluentd-es-v1.22-z82rz                  1/1       Running            0          24m       172.30.192.5   yypdnode
kube-system                  kibana-logging-3746979809-dplzv         1/1       Running            0          18m       172.30.0.4     yypdmaster
kube-system                  kube-apiserver-yypdmaster               1/1       Running            19         23m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-controller-manager-yypdmaster      1/1       Running            21         23m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-dns-2924299975-g1kks               4/4       Running            0          18m       172.30.0.5     yypdmaster
kube-system                  kube-proxy-3z29k                        1/1       Running            0          24m       yy.yy.yy.yy    yypdnode
kube-system                  kube-proxy-kfzxv                        1/1       Running            0          23m       xx.xx.xx.xx   yypdmaster
kube-system                  kube-scheduler-yypdmaster               1/1       Running            22         23m       xx.xx.xx.xx   yypdmaster
kube-system                  kubernetes-dashboard-3109525988-cj74d   1/1       Running            0          18m       172.30.0.6     yypdmaster
mioss-namespace-s0fcvegcmw   console-sm7cg2-101699315-f3g55          1/1       Running            0          18m       172.30.0.7     yypdmaster

It seems the Kubernetes cluster had not really "crashed": judging from the NODE column, every pod that is still healthy belongs to either yypdmaster or yypdnode, while the pods stuck in Unknown or NodeLost all sit on iz25beglnhtz and iz2ze39jeyizepdxhwqci6z, the pre-rename hostnames, which evidently still linger in the cluster as stale Node entries.

Mounting CephFS Across Nodes in a Kubernetes Cluster

Running stateful services or applications in a Kubernetes cluster is never easy. For instance, I have been using Ceph RBD in a project; despite a few hiccups, it has generally run well. But I recently found that Ceph RBD cannot satisfy a cross-node mounting requirement, so I had to look elsewhere. Since CephFS comes from the same family as Ceph RBD, it naturally became my first candidate. This post records my evaluation of cross-node CephFS mounting, partly as a reminder to myself and partly as reference material for anyone with similar needs.

1. The Problem with Ceph RBD

First, a word about the Ceph RBD problem. The project recently had this requirement: pods in the cluster should share external distributed storage, i.e. multiple pods mounting the same volume, which greatly simplifies the system design. Until now each Ceph RBD volume had been mounted into a single pod. Does Ceph RBD support simultaneous mounting by multiple pods? The official documentation gives a negative answer: a Persistent Volume backed by Ceph RBD supports only two access modes, ReadWriteOnce and ReadOnlyMany, not ReadWriteMany. So for pods that need read-write access, a Ceph RBD PV can only be mounted by one node at a time.

Let's verify this unfortunate fact.

First we create a test image, foo1. Here I used the Ceph RBD API service written for the project; the image can also be created by hand with the rbd command:

# curl -v  -H "Content-type: application/json" -X POST -d '{"kind": "Images","apiVersion": "v1", "metadata": {"name": "foo1", "capacity": 512}}' http://192.168.3.22:8080/api/v1/pools/rbd/images
... ...
{
  "errcode": 0,
  "errmsg": "ok"
}

# curl http://192.168.3.22:8080/api/v1/pools/rbd/images
{
  "Kind": "ImagesList",
  "APIVersion": "v1",
  "Items": [
    {
      "name": "foo1"
    }
  ]
}
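
For reference, a sketch of doing the same thing directly with the rbd CLI on a host that has the ceph admin keyring (the size is in MB):

# create a 512MB image named foo1 in the rbd pool, then list images to confirm
rbd create foo1 --size 512 --pool rbd
rbd ls rbd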

Create a pv and pvc from the following files:

//ceph-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  capacity:
    storage: 512Mi
  accessModes:
    - ReadWriteMany
  rbd:
    monitors:
      - ceph_monitor_ip:port
    pool: rbd
    image: foo1
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle

//ceph-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: foo-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 512Mi

After creating them:

# kubectl get pv
NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                        REASON    AGE
foo-pv              512Mi      RWO           Recycle         Bound     default/foo-claim                      20h

# kubectl get pvc
NAME                 STATUS    VOLUME              CAPACITY   ACCESSMODES   AGE
foo-claim            Bound     foo-pv              512Mi      RWO           20h

Create a pod that mounts the image:

// ceph-pod2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod2
spec:
  containers:
  - name: ceph-ubuntu2
    image: ubuntu:14.04
    command: ["tail", "-f", "/var/log/bootstrap.log"]
    volumeMounts:
    - name: ceph-vol2
      mountPath: /mnt/cephrbd/data
      readOnly: false
  volumes:
  - name: ceph-vol2
    persistentVolumeClaim:
      claimName: foo-claim

Once the pod is up, we can look at the data in the mounted directory:

# kubectl exec ceph-pod2 ls /mnt/cephrbd/data
1.txt
lost+found

On the same kubernetes node we start another pod (just change the pod name in ceph-pod2.yaml above to ceph-pod3) that mounts the same pv:

NAMESPACE                    NAME                                    READY     STATUS    RESTARTS   AGE       IP             NODE
default                      ceph-pod2                               1/1       Running   0          3m        172.16.57.9    xx.xx.xx.xx
default                      ceph-pod3                               1/1       Running   0          0s        172.16.57.10    xx.xx.xx.xx
# kubectl exec ceph-pod3 ls /mnt/cephrbd/data
1.txt
lost+found

We write a file through ceph-pod2 and read it back from ceph-pod3:

# kubectl exec ceph-pod2 -- bash -c "for i in {1..10}; do sleep 1; echo 'pod2: Hello, World'>> /mnt/cephrbd/data/foo.txt ; done "
root@node1:~/k8stest/k8s-cephrbd/footest# kubectl exec ceph-pod3 cat /mnt/cephrbd/data/foo.txt
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World
pod2: Hello, World

So far, multiple pods on the same node can mount the same Ceph RBD image in ReadWrite mode.

Next we start a pod on the other node that tries to mount the same pv. The pod stays in Pending; kubectl describe shows:

Events:
  FirstSeen    LastSeen    Count    From            SubobjectPath    Type        Reason        Message
  ---------    --------    -----    ----            -------------    --------    ------        -------
.. ...
  2m        37s        2    {kubelet yy.yy.yy.yy}            Warning        FailedMount    Unable to mount volumes for pod "ceph-pod2-master_default(a45f62aa-2bc3-11e7-9baa-00163e1625a9)": timeout expired waiting for volumes to attach/mount for pod "ceph-pod2-master"/"default". list of unattached/unmounted volumes=[ceph-vol2]
  2m        37s        2    {kubelet yy.yy.yy.yy}            Warning        FailedSync    Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "ceph-pod2-master"/"default". list of unattached/unmounted volumes=[ceph-vol2]

The errors in kubelet.log:

I0428 11:39:15.737729    1241 reconciler.go:294] MountVolume operation started for volume "kubernetes.io/rbd/a45f62aa-2bc3-11e7-9baa-00163e1625a9-foo-pv" (spec.Name: "foo-pv") to pod "a45f62aa-2bc3-11e7-9baa-00163e1625a9" (UID: "a45f62aa-2bc3-11e7-9baa-00163e1625a9").
I0428 11:39:15.939183    1241 operation_executor.go:768] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/923700ff-12c2-11e7-9baa-00163e1625a9-default-token-40z0x" (spec.Name: "default-token-40z0x") pod "923700ff-12c2-11e7-9baa-00163e1625a9" (UID: "923700ff-12c2-11e7-9baa-00163e1625a9").
E0428 11:39:17.039656    1241 disk_manager.go:56] failed to attach disk
E0428 11:39:17.039722    1241 rbd.go:228] rbd: failed to setup mount /var/lib/kubelet/pods/a45f62aa-2bc3-11e7-9baa-00163e1625a9/volumes/kubernetes.io~rbd/foo-pv rbd: image foo1 is locked by other nodes
E0428 11:39:17.039857    1241 nestedpendingoperations.go:254] Operation for "\"kubernetes.io/rbd/a45f62aa-2bc3-11e7-9baa-00163e1625a9-foo-pv\" (\"a45f62aa-2bc3-11e7-9baa-00163e1625a9\")" failed. No retries permitted until 2017-04-28 11:41:17.039803969 +0800 CST (durationBeforeRetry 2m0s). Error: MountVolume.SetUp failed for volume "kubernetes.io/rbd/a45f62aa-2bc3-11e7-9baa-00163e1625a9-foo-pv" (spec.Name: "foo-pv") pod "a45f62aa-2bc3-11e7-9baa-00163e1625a9" (UID: "a45f62aa-2bc3-11e7-9baa-00163e1625a9") with: rbd: image foo1 is locked by other nodes

Note the line "rbd: image foo1 is locked by other nodes". The experiment confirms that, at present, a Ceph RBD image can only be mounted by a single k8s node.
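
The lock the message refers to is an rbd advisory lock taken by the node that mapped the image; it can presumably be inspected with the rbd CLI while the first pod is running (a sketch):

# list advisory locks held on the foo1 image; the node that mapped it should hold one
rbd lock list foo1 --pool rbd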

2. Installing mds in the Ceph Cluster to Support CephFS

This time I deployed a fresh Ceph cluster on two Ubuntu 16.04 VMs; the process was largely the same as my first Ceph deployment, so I won't repeat it here. For Ceph to support CephFS, the mds component must be installed. With the earlier groundwork in place, installing mds with ceph-deploy is simple:

# ceph-deploy mds create yypdmaster yypdnode
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.37): /usr/bin/ceph-deploy mds create yypdmaster yypdnode
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f60fb5e71b8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7f60fba4e140>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  mds                           : [('yypdmaster', 'yypdmaster'), ('yypdnode', 'yypdnode')]
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts yypdmaster:yypdmaster yypdnode:yypdnode
[yypdmaster][DEBUG ] connected to host: yypdmaster
[yypdmaster][DEBUG ] detect platform information from remote host
[yypdmaster][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to yypdmaster
[yypdmaster][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[yypdmaster][DEBUG ] create path if it doesn't exist
[yypdmaster][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.yypdmaster osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-yypdmaster/keyring
[yypdmaster][INFO  ] Running command: systemctl enable ceph-mds@yypdmaster
[yypdmaster][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@yypdmaster.service to /lib/systemd/system/ceph-mds@.service.
[yypdmaster][INFO  ] Running command: systemctl start ceph-mds@yypdmaster
[yypdmaster][INFO  ] Running command: systemctl enable ceph.target
[yypdnode][DEBUG ] connected to host: yypdnode
[yypdnode][DEBUG ] detect platform information from remote host
[yypdnode][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to yypdnode
[yypdnode][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[yypdnode][DEBUG ] create path if it doesn't exist
[yypdnode][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.yypdnode osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-yypdnode/keyring
[yypdnode][INFO  ] Running command: systemctl enable ceph-mds@yypdnode
[yypdnode][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@yypdnode.service to /lib/systemd/system/ceph-mds@.service.
[yypdnode][INFO  ] Running command: systemctl start ceph-mds@yypdnode
[yypdnode][INFO  ] Running command: systemctl enable ceph.target

Very smooth. After installation, mds can be seen running on either node:

# ps -ef|grep ceph
ceph      7967     1  0 17:23 ?        00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
ceph     15674     1  0 17:32 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id yypdnode --setuser ceph --setgroup ceph
ceph     18019     1  0 17:35 ?        00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id yypdnode --setuser ceph --setgroup ceph

mds stores the CephFS metadata. My Ceph is version 10.2.7:

# ceph -v
ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185)

Although running multiple active mds daemons in parallel is supported, the official documentation recommends keeping one active mds and leaving the others as standby (see the fsmap line in the cluster status below):

# ceph -s
    cluster ffac3489-d678-4caf-ada2-3dd0743158b6
    ... ...
      fsmap e6: 1/1/1 up {0=yypdnode=up:active}, 1 up:standby
     osdmap e19: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v192498: 576 pgs, 5 pools, 126 MB data, 238 objects
            44365 MB used, 31881 MB / 80374 MB avail
                 576 active+clean

3. Creating an fs and Testing the Mount

We create an fs on Ceph:

# ceph osd pool create cephfs_data 128
pool 'cephfs_data' created

# ceph osd pool create cephfs_metadata 128
pool 'cephfs_metadata' created

# ceph fs new test_fs cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1

# ceph fs ls
name: test_fs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

Note that the current stable Ceph release supports only one fs; support for multiple filesystems is still an experimental feature:

# ceph osd pool create cephfs1_data 128
# ceph osd pool create cephfs1_metadata 128
# ceph fs new test_fs1 cephfs1_metadata cephfs1_data
Error EINVAL: Creation of multiple filesystems is disabled.  To enable this experimental feature, use 'ceph fs flag set enable_multiple true'

On a physical host, CephFS can be mounted with the mount command, with mount.ceph (apt-get install ceph-fs-common), or with ceph-fuse (apt-get install ceph-fuse). Let's start with mount:

We mount the CephFS created above under /mnt on the host:

# mount -t ceph ceph_mon_host:6789:/ /mnt -o name=admin,secretfile=admin.secret

# cat admin.secret    // the key from ceph.client.admin.keyring
AQDITghZD+c/DhAArOiWWQqyMAkMJbWmHaxjgQ==
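
admin.secret contains nothing but the base64 key from the keyring; one way to produce it, assuming client.admin exists and the ceph CLI has admin access:

# write just the client.admin key (no section headers) into admin.secret
ceph auth get-key client.admin > admin.secret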

Check the CephFS info:

# df -h
ceph_mon_host:6789:/   79G   45G   35G  57% /mnt

As you can see, CephFS treats the full disk space of both physical nodes as its own capacity.

Mounting through ceph-fuse additionally lets us restrict access to the mounted path. Let's create a user foo that has only read access to the /ceph-volume1-test path:

# ceph auth get-or-create client.foo mon 'allow *' mds 'allow r path=/ceph-volume1-test' osd 'allow *'
# ceph-fuse -n client.foo -m 10.47.217.91:6789 /mnt -r /ceph-volume1-test
ceph-fuse[10565]: starting ceph client2017-05-03 16:07:25.958903 7f1a14fbff00 -1 init, newargv = 0x557e350defc0 newargc=11
ceph-fuse[10565]: starting fuse

Check the mounted path and try to create a file:

# cd /mnt
root@yypdnode:/mnt# ls
1.txt
root@yypdnode:/mnt# touch 2.txt
touch: cannot touch '2.txt': Permission denied

Because the foo user only has read access to /ceph-volume1-test, creating the file fails!

4. Mounting CephFS Across Nodes in Kubernetes

In k8s there are at least two ways to mount CephFS: directly from a pod, or through a pv and pvc. Let's look at each.

1. Mounting CephFS directly in a Pod

//ceph-pod2-with-secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod2-with-secret
spec:
  containers:
  - name: ceph-ubuntu2
    image: ubuntu:14.04
    command: ["tail", "-f", "/var/log/bootstrap.log"]
    volumeMounts:
    - name: ceph-vol2
      mountPath: /mnt/cephfs/data
      readOnly: false
  volumes:
  - name: ceph-vol2
    cephfs:
      monitors:
      - ceph_mon_host:6789
      user: admin
      secretFile: "/etc/ceph/admin.secret"
      readOnly: false

Note: make sure the /etc/ceph/admin.secret file exists on every node.
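
If you would rather not copy admin.secret onto every node, the cephfs volume also accepts a secretRef that points at a Kubernetes Secret; that is what the ceph-secret referenced by the PVs in this post is. A hedged sketch of creating it, reusing the admin key shown earlier (the in-tree plugins look the key up under the "key" field):

# store the ceph client.admin key in a Secret named ceph-secret
kubectl create secret generic ceph-secret \
  --from-literal=key='AQDITghZD+c/DhAArOiWWQqyMAkMJbWmHaxjgQ=='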

Check what the pod has mounted:

# docker ps|grep pod
bc96431408c7        ubuntu:14.04                                                  "tail -f /var/log/boo"   About a minute ago   Up About a minute                                                                        k8s_ceph-ubuntu2.66c44128_ceph-pod2-with-secret_default_3d8a05f8-33c3-11e7-bcd9-6640d35a0e90_fc483b8a
bcc65ab82069        gcr.io/google_containers/pause-amd64:3.0                      "/pause"                 About a minute ago   Up About a minute                                                                        k8s_POD.d8dbe16c_ceph-pod2-with-secret_default_3d8a05f8-33c3-11e7-bcd9-6640d35a0e90_02381204

root@yypdnode:~# docker exec bc96431408c7 ls /mnt/cephfs/data
1.txt
apps
ceph-volume1-test
test1.txt

Now we start a pod on the other node that mounts the same CephFS, to see whether cross-node mounting works:

# kubectl get pods

NAMESPACE                    NAME                                    READY     STATUS    RESTARTS   AGE       IP             NODE
default                      ceph-pod2-with-secret                   1/1       Running   0          3m        172.30.192.2   iz2ze39jeyizepdxhwqci6z
default                      ceph-pod2-with-secret-on-master         1/1       Running   0          3s        172.30.0.51    iz25beglnhtz
... ...

# kubectl exec ceph-pod2-with-secret-on-master ls /mnt/cephfs/data
1.txt
apps
ceph-volume1-test
test1.txt

Different nodes can indeed mount the same CephFS. Let's write to the mounted CephFS from one of the pods:

# kubectl exec ceph-pod2-with-secret-on-master -- bash -c "for i in {1..10}; do sleep 1; echo 'pod2-with-secret-on-master: Hello, World'>> /mnt/cephfs/data/foo.txt ; done "
root@yypdmaster:~/k8stest/cephfstest/footest# kubectl exec ceph-pod2-with-secret-on-master cat /mnt/cephfs/data/foo.txt
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World

2. Mounting CephFS via PV and PVC

The pv and pvc for mounting CephFS are written much like the rbd ones above:

//ceph-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  capacity:
    storage: 512Mi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - ceph_mon_host:6789
    path: /
    user: admin
    secretRef:
      name: ceph-secret
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle

//ceph-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: foo-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 512Mi

A pod that uses the pvc:

//ceph-pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod2
spec:
  containers:
  - name: ceph-ubuntu2
    image: ubuntu:14.04
    command: ["tail", "-f", "/var/log/bootstrap.log"]
    volumeMounts:
    - name: ceph-vol2
      mountPath: /mnt/cephfs/data
      readOnly: false
  volumes:
  - name: ceph-vol2
    persistentVolumeClaim:
      claimName: foo-claim

Create the pv and pvc:

# kubectl create -f ceph-pv.yaml
persistentvolume "foo-pv" created
# kubectl create -f ceph-pvc.yaml
persistentvolumeclaim "foo-claim" created

# kubectl get pvc
NAME        STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
foo-claim   Bound     foo-pv    512Mi      RWX           4s
# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM               REASON    AGE
foo-pv    512Mi      RWX           Recycle         Bound     default/foo-claim             24s

Start the pod and check the mount with exec:

# docker ps|grep pod
a6895ec0274f        ubuntu:14.04                                                  "tail -f /var/log/boo"   About a minute ago   Up About a minute                                                                        k8s_ceph-ubuntu2.66c44128_ceph-pod2_default_4e4fc8d4-33c6-11e7-bcd9-6640d35a0e90_1b37ed76
52b6811a6584        gcr.io/google_containers/pause-amd64:3.0                      "/pause"                 About a minute ago   Up About a minute                                                                        k8s_POD.d8dbe16c_ceph-pod2_default_4e4fc8d4-33c6-11e7-bcd9-6640d35a0e90_27e5f988
55b96edbf4bf        ubuntu:14.04                                                  "tail -f /var/log/boo"   14 minutes ago       Up 14 minutes                                                                            k8s_ceph-ubuntu2.66c44128_ceph-pod2-with-secret_default_9d383b0c-33c4-11e7-bcd9-6640d35a0e90_1656e5e0
f8b699bc0459        gcr.io/google_containers/pause-amd64:3.0                      "/pause"                 14 minutes ago       Up 14 minutes                                                                            k8s_POD.d8dbe16c_ceph-pod2-with-secret_default_9d383b0c-33c4-11e7-bcd9-6640d35a0e90_effdfae7
root@yypdnode:~# docker exec a6895ec0274f ls /mnt/cephfs/data
1.txt
apps
ceph-volume1-test
foo.txt
test1.txt

# docker exec a6895ec0274f cat /mnt/cephfs/data/foo.txt
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World
pod2-with-secret-on-master: Hello, World

5. The PV's Status

As long as you don't delete the pvc, everything stays fine:

# kubectl get pv
NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                        REASON    AGE
foo-pv              512Mi      RWX           Recycle         Bound     default/foo-claim                      1h

# kubectl get pvc
NAME                 STATUS    VOLUME              CAPACITY   ACCESSMODES   AGE
foo-claim            Bound     foo-pv              512Mi      RWX           1h

But if you delete the pvc, the pv's status becomes Failed. After deleting the pvc:

# kubectl get pv
NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                        REASON    AGE
foo-pv              512Mi      RWX           Recycle         Failed    default/foo-claim                      2h

# kubectl describe pv/foo-pv
Name:        foo-pv
Labels:        <none>
Status:        Failed
Claim:        default/foo-claim
Reclaim Policy:    Recycle
Access Modes:    RWX
Capacity:    512Mi
Message:    No recycler plugin found for the volume!
Source:
    Type:        RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:    [xx.xx.xx.xx:6789]
    RBDImage:        foo1
    FSType:        ext4
    RBDPool:        rbd
    RadosUser:        admin
    Keyring:        /etc/ceph/keyring
    SecretRef:        &{ceph-secret}
    ReadOnly:        false
Events:
  FirstSeen    LastSeen    Count    From                SubobjectPath    Type        Reason            Message
  ---------    --------    -----    ----                -------------    --------    ------            -------
  29s        29s        1    {persistentvolume-controller }            Warning        VolumeFailedRecycle    No recycler plugin found for the volume!

The persistentVolumeReclaimPolicy specified in the pv is Recycle, but neither Ceph RBD nor CephFS has a corresponding recycler plugin, so the pv's status changed to Failed and it can only be deleted and recreated by hand.
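
Recovering from the Failed state is therefore a manual step along these lines (only the PV object is recreated; the data on CephFS is untouched):

# a Failed PV cannot be reused; drop it and recreate it from the manifest
kubectl delete pv foo-pv
kubectl create -f ceph-pv.yaml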
