
Installing Kubernetes with kubeadm

In 《当Docker遇到systemd》 (When Docker Meets systemd), I mentioned the task I have been working on over the past couple of days: using kubeadm to install and deploy the latest Kubernetes release, k8s 1.5.1, on Ubuntu 16.04.

In the middle of the year, Docker announced that the swarmkit toolkit would be integrated into the Docker engine, an announcement that caused quite a stir in the lightweight container world. Developers are lazy ^0^: with docker swarmkit built in, what motivation is left to install another container orchestration tool, even if docker engine is not exactly the heavily used IE browser of its day? In response to this market move by Docker Inc., Kubernetes, the leader in container cluster management and service orchestration, released Kubernetes 1.4.0 three months later. That release added the kubeadm tool. kubeadm is used somewhat like the swarm kit tooling integrated into docker engine: it aims to improve the developer experience of installing, debugging, and using k8s and to lower the barrier to entry. In theory, two commands, init and join, are enough to stand up a complete Kubernetes cluster.

However, like swarmkit when it first entered the docker engine, kubeadm is still under active development and not all that stable. Even in the current k8s 1.5.1 it remains in Alpha, and the official recommendation is not to use it for production clusters. Every run of kubeadm init prints the following reminder:

[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.

Still, the k8s 1.3.7 cluster we deployed earlier has been running well, which gives us the confidence to keep going down the k8s road and to do it well. But the deployment and management experience of k8s really is cumbersome, so we decided to test whether kubeadm could deliver a better-than-expected experience. The lessons learned from installing kubernetes 1.3.7 on aliyun ubuntu 14.04 gave me a tiny bit of confidence, but the actual installation was still full of twists and turns. That is due both to kubeadm being unstable and to the quality of cni and the third-party network add-ons; trouble on either side will make the install a rough ride.

I. Environment and Constraints

Of the three operating systems kubeadm supports, Ubuntu 16.04+, CentOS 7, and HypriotOS v1.0.1+, we chose Ubuntu 16.04. Since Aliyun does not yet offer an official 16.04 image, we created two new Ubuntu 14.04 ECS instances and manually upgraded them to Ubuntu 16.04.1 with apt-get; the exact version is Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-58-generic x86_64).

Ubuntu 16.04 uses systemd as its init system; for installing and configuring Docker, you can refer to my post "When Docker Meets systemd". For the Docker version I chose the latest stable release available: 1.12.5.
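For reference, a minimal install sketch (assuming the apt.dockerproject.org repository that shows up in the apt output below; the exact pinned package version string may differ on your mirror):

# apt-get update
# apt-get install -y docker-engine=1.12.5-0~ubuntu-xenial
# systemctl enable docker && systemctl start docker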

# docker version
Client:
 Version:      1.12.5
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   7392c3b
 Built:        Fri Dec 16 02:42:17 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.5
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   7392c3b
 Built:        Fri Dec 16 02:42:17 2016
 OS/Arch:      linux/amd64

As for the Kubernetes version, as mentioned above, we use the freshly released Kubernetes 1.5.1. 1.5.1 is an urgent fix on top of 1.5.0, mainly "to address default flag values which in isolation were not problematic, but in concert could result in an insecure cluster". The official advice is to skip 1.5.0 and use 1.5.1 directly.

Let me stress it again: installing, configuring, and fully wiring up Kubernetes is hard, doing it on Aliyun is even harder, and sometimes you need a bit of luck. Kubernetes, Docker, cni, and the various network add-ons are all under active development; a step, tip, or trick that works today may be outdated tomorrow. Keep that in mind when borrowing the steps from this post ^0^.

II. Preparing the Installation Packages

For this install we created two new ECS instances, one as the master node and one as the minion node. With a default kubeadm install, the master node does not take part in Pod scheduling and carries no workload, i.e. no non-core Pods get created on it. That restriction can be lifted with the kubectl taint command, but more on that later.

Cluster topology:

master node: 10.47.217.91, hostname: iZ25beglnhtZ
minion node: 10.28.61.30, hostname: iZ2ze39jeyizepdxhwqci6Z

The primary reference for this install is the official Kubernetes guide "Installing Kubernetes on Linux with kubeadm".

In this section we prepare the packages, i.e. download kubeadm and the k8s core components needed for this install onto both nodes. Note: if you have an accelerator (proxy), the steps below will go especially smoothly; otherwise, ... :( . All of the following commands must be run on both nodes.

1. Add the apt-key

# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK

2. Add the Kubernetes apt source and update the package index

Add the Kubernetes source under the sources.list.d directory:

# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
  deb http://apt.kubernetes.io/ kubernetes-xenial main
  EOF

# cat /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main

Update the package index:

# apt-get update
... ...
Hit:2 http://mirrors.aliyun.com/ubuntu xenial InRelease
Hit:3 https://apt.dockerproject.org/repo ubuntu-xenial InRelease
Get:4 http://mirrors.aliyun.com/ubuntu xenial-security InRelease [102 kB]
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [6,299 B]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [1,739 B]
Get:6 http://mirrors.aliyun.com/ubuntu xenial-updates InRelease [102 kB]
Get:7 http://mirrors.aliyun.com/ubuntu xenial-proposed InRelease [253 kB]
Get:8 http://mirrors.aliyun.com/ubuntu xenial-backports InRelease [102 kB]
Fetched 568 kB in 19s (28.4 kB/s)
Reading package lists... Done

3. Download the Kubernetes core components

For this install, the Kubernetes core components, including kubelet, kubeadm, kubectl, and kubernetes-cni, can all be downloaded with apt-get.

# apt-get install -y kubelet kubeadm kubectl kubernetes-cni
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
  libtimedate-perl
Use 'apt autoremove' to remove it.
The following additional packages will be installed:
  ebtables ethtool socat
The following NEW packages will be installed:
  ebtables ethtool kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 37.6 MB of archives.
After this operation, 261 MB of additional disk space will be used.
Get:2 http://mirrors.aliyun.com/ubuntu xenial/main amd64 ebtables amd64 2.0.10.4-3.4ubuntu1 [79.6 kB]
Get:6 http://mirrors.aliyun.com/ubuntu xenial/main amd64 ethtool amd64 1:4.5-1 [97.5 kB]
Get:7 http://mirrors.aliyun.com/ubuntu xenial/universe amd64 socat amd64 1.7.3.1-1 [321 kB]
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.3.0.1-07a8a2-00 [6,877 kB]
Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.5.1-00 [15.1 MB]
Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.5.1-00 [7,954 kB]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.6.0-alpha.0-2074-a092d8e0f95f52-00 [7,120 kB]
Fetched 37.6 MB in 36s (1,026 kB/s)
... ...
Unpacking kubeadm (1.6.0-alpha.0-2074-a092d8e0f95f52-00) ...
Processing triggers for systemd (229-4ubuntu13) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up ebtables (2.0.10.4-3.4ubuntu1) ...
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Setting up ethtool (1:4.5-1) ...
Setting up kubernetes-cni (0.3.0.1-07a8a2-00) ...
Setting up socat (1.7.3.1-1) ...
Setting up kubelet (1.5.1-00) ...
Setting up kubectl (1.5.1-00) ...
Setting up kubeadm (1.6.0-alpha.0-2074-a092d8e0f95f52-00) ...
Processing triggers for systemd (229-4ubuntu13) ...
Processing triggers for ureadahead (0.100.0-19) ...
... ...

The kube components are not started automatically after download. Under /lib/systemd/system we can see kubelet.service:

# ls /lib/systemd/system|grep kube
kubelet.service

//kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
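If you want to check or start the kubelet service yourself before running kubeadm init (not strictly necessary, since kubeadm init starts it, as its preflight output shows later), the usual systemd commands apply:

# systemctl status kubelet
# systemctl enable kubelet && systemctl start kubelet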

The kubelet version:

# kubelet --version
Kubernetes v1.5.1

All the k8s core components are now in place; next we bootstrap the kubernetes cluster. That is also where the problems begin, and those problems and their solutions are the real subject of this post.

III. Initializing the Cluster

As mentioned above, in theory kubeadm can build a cluster with just the init and join commands, where init initializes the cluster on the master node. Unlike the pre-1.4 deployment methods, the k8s core components installed by kubeadm all run as containers on the master node. So before kubeadm init, it is best to put an accelerator/proxy in front of the docker engine on the master node (see the sketch after the image list below), because kubeadm needs to pull quite a few core component images from the gcr.io/google_containers repository, roughly these:

gcr.io/google_containers/kube-controller-manager-amd64   v1.5.1                     cd5684031720        2 weeks ago         102.4 MB
gcr.io/google_containers/kube-apiserver-amd64            v1.5.1                     8c12509df629        2 weeks ago         124.1 MB
gcr.io/google_containers/kube-proxy-amd64                v1.5.1                     71d2b27b03f6        2 weeks ago         175.6 MB
gcr.io/google_containers/kube-scheduler-amd64            v1.5.1                     6506e7b74dac        2 weeks ago         53.97 MB
gcr.io/google_containers/etcd-amd64                      3.0.14-kubeadm             856e39ac7be3        5 weeks ago         174.9 MB
gcr.io/google_containers/kubedns-amd64                   1.9                        26cf1ed9b144        5 weeks ago         47 MB
gcr.io/google_containers/dnsmasq-metrics-amd64           1.0                        5271aabced07        7 weeks ago         14 MB
gcr.io/google_containers/kube-dnsmasq-amd64              1.4                        3ec65756a89b        3 months ago        5.13 MB
gcr.io/google_containers/kube-discovery-amd64            1.0                        c5e0c9a457fc        3 months ago        134.2 MB
gcr.io/google_containers/exechealthz-amd64               1.2                        93a43bfb39bf        3 months ago        8.375 MB
gcr.io/google_containers/pause-amd64                     3.0                        99e59f495ffa        7 months ago        746.9 kB
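One way to do that, as described in "When Docker Meets systemd", is a systemd drop-in that points the Docker daemon at a proxy (a sketch; the proxy address below is hypothetical):

// /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTPS_PROXY=http://127.0.0.1:8118" "NO_PROXY=localhost,127.0.0.1"

# systemctl daemon-reload
# systemctl restart docker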

In the kubeadm docs, installing the Pod network is a separate step; kubeadm init does not pick and install a default Pod network for you. Our first choice is Flannel as the Pod network, not only because our previous cluster used flannel and it behaved stably, but also because Flannel is the overlay network add-on coreos built specifically for k8s; the flannel repository's readme.md even states: "flannel is a network fabric for containers, designed for Kubernetes". To use Flannel, the kubeadm docs require that init be run with the option --pod-network-cidr=10.244.0.0/16.

1. Run kubeadm init

Run the kubeadm init command:

# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "2e7da9.7fc5668ff26430c7"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready //without an accelerator/proxy, init may hang here
[apiclient] All control plane components are healthy after 54.789750 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.003053 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 62.503441 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=2e7da9.7fc5668ff26430c7 123.56.200.187

What has changed on the master node after a successful init? The k8s core components are all up and running:

# ps -ef|grep kube
root      2477  2461  1 16:36 ?        00:00:04 kube-proxy --kubeconfig=/run/kubeconfig
root     30860     1 12 16:33 ?        00:01:09 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local
root     30952 30933  0 16:33 ?        00:00:01 kube-scheduler --address=127.0.0.1 --leader-elect --master=127.0.0.1:8080
root     31128 31103  2 16:33 ?        00:00:11 kube-controller-manager --address=127.0.0.1 --leader-elect --master=127.0.0.1:8080 --cluster-name=kubernetes --root-ca-file=/etc/kubernetes/pki/ca.pem --service-account-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem --insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16
root     31223 31207  2 16:34 ?        00:00:10 kube-apiserver --insecure-bind-address=127.0.0.1 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota --service-cluster-ip-range=10.96.0.0/12 --service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem --client-ca-file=/etc/kubernetes/pki/ca.pem --tls-cert-file=/etc/kubernetes/pki/apiserver.pem --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --token-auth-file=/etc/kubernetes/pki/tokens.csv --secure-port=6443 --allow-privileged --advertise-address=123.56.200.187 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --anonymous-auth=false --etcd-servers=http://127.0.0.1:2379
root     31491 31475  0 16:35 ?        00:00:00 /usr/local/bin/kube-discovery

And most of them run as containers:

# docker ps
CONTAINER ID        IMAGE                                                           COMMAND                  CREATED                  STATUS                  PORTS               NAMES
c16c442b7eca        gcr.io/google_containers/kube-proxy-amd64:v1.5.1                "kube-proxy --kubecon"   6 minutes ago            Up 6 minutes                                k8s_kube-proxy.36dab4e8_kube-proxy-sb4sm_kube-system_43fb1a2c-cb46-11e6-ad8f-00163e1001d7_2ba1648e
9f73998e01d7        gcr.io/google_containers/kube-discovery-amd64:1.0               "/usr/local/bin/kube-"   8 minutes ago            Up 8 minutes                                k8s_kube-discovery.7130cb0a_kube-discovery-1769846148-6z5pw_kube-system_1eb97044-cb46-11e6-ad8f-00163e1001d7_fd49c2e3
dd5412e5e15c        gcr.io/google_containers/kube-apiserver-amd64:v1.5.1            "kube-apiserver --ins"   9 minutes ago            Up 9 minutes                                k8s_kube-apiserver.1c5a91d9_kube-apiserver-iz25beglnhtz_kube-system_eea8df1717e9fea18d266103f9edfac3_8cae8485
60017f8819b2        gcr.io/google_containers/etcd-amd64:3.0.14-kubeadm              "etcd --listen-client"   9 minutes ago            Up 9 minutes                                k8s_etcd.c323986f_etcd-iz25beglnhtz_kube-system_3a26566bb004c61cd05382212e3f978f_06d517eb
03c2463aba9c        gcr.io/google_containers/kube-controller-manager-amd64:v1.5.1   "kube-controller-mana"   9 minutes ago            Up 9 minutes                                k8s_kube-controller-manager.d30350e1_kube-controller-manager-iz25beglnhtz_kube-system_9a40791dd1642ea35c8d95c9e610e6c1_3b05cb8a
fb9a724540a7        gcr.io/google_containers/kube-scheduler-amd64:v1.5.1            "kube-scheduler --add"   9 minutes ago            Up 9 minutes                                k8s_kube-scheduler.ef325714_kube-scheduler-iz25beglnhtz_kube-system_dc58861a0991f940b0834f8a110815cb_9b3ccda2
.... ...

Note, however, that these core components are not running on the pod network (indeed, the pod network has not been created yet); they use the host network. Take the kube-apiserver pod as an example:

kube-system   kube-apiserver-iz25beglnhtz            1/1       Running   0          1h        10.47.217.91   iz25beglnhtz

kube-apiserver's IP is the host IP, which suggests the container uses the host network; the network attributes of its corresponding pause container confirm this:

# docker ps |grep apiserver
a5a76bc59e38        gcr.io/google_containers/kube-apiserver-amd64:v1.5.1            "kube-apiserver --ins"   About an hour ago   Up About an hour                        k8s_kube-apiserver.2529402_kube-apiserver-iz25beglnhtz_kube-system_25d646be9a0092138dc6088fae6f1656_ec0079fc
ef4d3bf057a6        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 About an hour ago   Up About an hour                        k8s_POD.d8dbe16c_kube-apiserver-iz25beglnhtz_kube-system_25d646be9a0092138dc6088fae6f1656_bbfd8a31

Inspecting the pause container shows the value of its NetworkMode:

"NetworkMode": "host",

If something goes wrong partway through kubeadm init, for example it hangs because you forgot the accelerator, you will probably hit ctrl+c to abort it. After fixing the configuration and re-running kubeadm init, you may then see output like this:

# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Some fatal errors occurred:
    Port 10250 is in use
    /etc/kubernetes/manifests is not empty
    /etc/kubernetes/pki is not empty
    /var/lib/kubelet is not empty
    /etc/kubernetes/admin.conf already exists
    /etc/kubernetes/kubelet.conf already exists
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`

kubeadm automatically checks whether the environment contains "leftovers" from a previous run. If it does, they must be cleaned up before init can be run again. The environment can be cleaned with "kubeadm reset" so we can start over.

# kubeadm reset
[preflight] Running pre-flight checks
[reset] Draining node: "iz25beglnhtz"
[reset] Removing node: "iz25beglnhtz"
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf]

2. Install the flannel pod network

After kubeadm init, if you poke around in the cluster state or the logs of the core components, you will notice some "anomalies"; for example, the kubelet log keeps printing this error:

Dec 26 16:36:48 iZ25beglnhtZ kubelet[30860]: E1226 16:36:48.365885   30860 docker_manager.go:2201] Failed to setup network for pod "kube-dns-2924299975-pddz5_kube-system(43fd7264-cb46-11e6-ad8f-00163e1001d7)" using network plugins "cni": cni config unintialized; Skipping pod

With kubectl get pod --all-namespaces -o wide, you will also find the kube-dns pod stuck in the ContainerCreating state.

None of this matters yet, because we have not installed a Pod network for the cluster. As mentioned earlier, we want to use the Flannel network, so we run the following install command:

#kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created

A moment later, let's look at the cluster on the master node again:

# ps -ef|grep kube|grep flannel
root      6517  6501  0 17:20 ?        00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root      6573  6546  0 17:20 ?        00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-s0c5g                 1/1       Running   0          50m
kube-system   etcd-iz25beglnhtz                      1/1       Running   0          50m
kube-system   kube-apiserver-iz25beglnhtz            1/1       Running   0          50m
kube-system   kube-controller-manager-iz25beglnhtz   1/1       Running   0          50m
kube-system   kube-discovery-1769846148-6z5pw        1/1       Running   0          50m
kube-system   kube-dns-2924299975-pddz5              4/4       Running   0          49m
kube-system   kube-flannel-ds-5ww9k                  2/2       Running   0          4m
kube-system   kube-proxy-sb4sm                       1/1       Running   0          49m
kube-system   kube-scheduler-iz25beglnhtz            1/1       Running   0          49m

At least all the core cluster components are now running. It looks like a success.

3. Minion node: join the cluster

Next it is the minion node's turn to join the cluster, using kubeadm's second command: kubeadm join.

Run on the minion node (note: make sure port 9898 on the master node is open in the firewall):

# kubeadm join --token=2e7da9.7fc5668ff26430c7 123.56.200.187
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://123.56.200.187:9898/cluster-info/v1/?token-id=2e7da9"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://123.56.200.187:6443]
[bootstrap] Trying to connect to endpoint https://123.56.200.187:6443
[bootstrap] Detected server version: v1.5.1
[bootstrap] Successfully established connection with endpoint "https://123.56.200.187:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:iZ2ze39jeyizepdxhwqci6Z | CA: false
Not before: 2016-12-26 09:31:00 +0000 UTC Not After: 2017-12-26 09:31:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

That went smoothly as well. The k8s components we see on the minion node:

d85cf36c18ed        gcr.io/google_containers/kube-proxy-amd64:v1.5.1      "kube-proxy --kubecon"   About an hour ago   Up About an hour                        k8s_kube-proxy.36dab4e8_kube-proxy-lsn0t_kube-system_b8eddf1c-cb4e-11e6-ad8f-00163e1001d7_5826f32b
a60e373b48b8        gcr.io/google_containers/pause-amd64:3.0              "/pause"                 About an hour ago   Up About an hour                        k8s_POD.d8dbe16c_kube-proxy-lsn0t_kube-system_b8eddf1c-cb4e-11e6-ad8f-00163e1001d7_46bfcf67
a665145eb2b5        quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64   "/bin/sh -c 'set -e -"   About an hour ago   Up About an hour                        k8s_install-cni.17d8cf2_kube-flannel-ds-tr8zr_kube-system_06eca729-cb72-11e6-ad8f-00163e1001d7_01e12f61
5b46f2cb0ccf        gcr.io/google_containers/pause-amd64:3.0              "/pause"                 About an hour ago   Up About an hour                        k8s_POD.d8dbe16c_kube-flannel-ds-tr8zr_kube-system_06eca729-cb72-11e6-ad8f-00163e1001d7_ac880d20

On the master node, check the current cluster state:

# kubectl get nodes
NAME                      STATUS         AGE
iz25beglnhtz              Ready,master   1h
iz2ze39jeyizepdxhwqci6z   Ready          21s

The k8s cluster has been created "successfully"! But has it really? The "fun" is only just beginning :(!

IV. Flannel Pod Network Issues

The warm glow of the successful join had barely faded when I discovered problems with the Flannel pod network, and the troubleshooting officially began :(.

1. flannel on the minion node errors out from time to time

Right after the join everything looked fine, but before long an error showed up in kubectl get pod --all-namespaces:

kube-system   kube-flannel-ds-tr8zr                  1/2       CrashLoopBackOff   189        16h

It was caused by one of the containers in the flannel pod on the minion node; the specific error we tracked down was:

# docker logs bc0058a15969
E1227 06:17:50.605110       1 main.go:127] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-tr8zr': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-tr8zr: dial tcp 10.96.0.1:443: i/o timeout

10.96.0.1 is the cluster IP of the apiserver service on the pod network, and the flannel component on the minion node cannot reach that cluster IP! What makes this even stranger is that sometimes, after the Pod has been restarted many times by the scheduler or deleted and recreated, it suddenly turns Running again; the behavior is quite erratic.
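A couple of quick checks from the minion node can help narrow this down (a sketch; 10.96.0.1 is the kubernetes service cluster IP, and the KUBE-SERVICES chain is created by kube-proxy in iptables mode). A timeout on the curl matches the flannel error above, while any HTTP response means the VIP is at least reachable:

# curl -k https://10.96.0.1:443/healthz
# iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1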

In the flannel github.com issues, at least two open issues are closely related to this problem:

https://github.com/coreos/flannel/issues/545

https://github.com/coreos/flannel/issues/535

There is no clear solution to this yet. Once the flannel pod on the minion node recovered to Running on its own, we could continue.

2. A workaround for the flannel pod failing to start on the minion node

In the issue below, many developers discuss one possible cause of the flannel pod failing to start on the minion node, along with a temporary workaround:

https://github.com/kubernetes/kubernetes/issues/34101

Roughly, the claim is that kube-proxy on the minion node picks the wrong interface, and the following command can fix it. Run on the minion node:

#  kubectl -n kube-system get ds -l 'component=kube-proxy' -o json | jq '.items[0].spec.template.spec.containers[0].command |= .+ ["--cluster-cidr=10.244.0.0/16"]' | kubectl apply -f - && kubectl -n kube-system delete pods -l 'component=kube-proxy'
daemonset "kube-proxy" configured
pod "kube-proxy-lsn0t" deleted
pod "kube-proxy-sb4sm" deleted

After running it, the flannel pod status:

kube-system   kube-flannel-ds-qw291                  2/2       Running   8          17h
kube-system   kube-flannel-ds-x818z                  2/2       Running   17         1h

After 17 restarts, the flannel pod on the minion node finally came up ok. The startup log of its flannel container:

# docker logs 1f64bd9c0386
I1227 07:43:26.670620       1 main.go:132] Installing signal handlers
I1227 07:43:26.671006       1 manager.go:133] Determining IP address of default interface
I1227 07:43:26.670825       1 kube.go:233] starting kube subnet manager
I1227 07:43:26.671514       1 manager.go:163] Using 59.110.67.15 as external interface
I1227 07:43:26.671575       1 manager.go:164] Using 59.110.67.15 as external endpoint
I1227 07:43:26.746811       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
I1227 07:43:26.749785       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I1227 07:43:26.752343       1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
I1227 07:43:26.755126       1 manager.go:246] Lease acquired: 10.244.1.0/24
I1227 07:43:26.755444       1 network.go:58] Watching for L3 misses
I1227 07:43:26.755475       1 network.go:66] Watching for new subnet leases
I1227 07:43:27.755830       1 network.go:153] Handling initial subnet events
I1227 07:43:27.755905       1 device.go:163] calling GetL2List() dev.link.Index: 10
I1227 07:43:27.756099       1 device.go:168] calling NeighAdd: 123.56.200.187, ca:68:7c:9b:cc:67

The issue also says that explicitly specifying --api-advertise-addresses at kubeadm init time avoids this problem. For now, though, do not put more than one IP after --api-advertise-addresses: the docs say multiple IPs are supported, but in practice, when you specify two or more, like this:

#kubeadm init --api-advertise-addresses=10.47.217.91,123.56.200.187 --pod-network-cidr=10.244.0.0/16

the master initializes successfully, but when the minion node runs the join command, it panics:

# kubeadm join --token=92e977.f1d4d090906fc06a 10.47.217.91
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
... ...
[bootstrap] Successfully established connection with endpoint "https://10.47.217.91:6443"
[bootstrap] Successfully established connection with endpoint "https://123.56.200.187:6443"
E1228 10:14:05.405294   28378 runtime.go:64] Observed a panic: "close of closed channel" (close of closed channel)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:70
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:63
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:49
/usr/local/go/src/runtime/asm_amd64.s:479
/usr/local/go/src/runtime/panic.go:458
/usr/local/go/src/runtime/chan.go:311
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:85
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:96
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:97
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:52
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:93
/usr/local/go/src/runtime/asm_amd64.s:2086
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
panic: close of closed channel [recovered]
    panic: close of closed channel

goroutine 29 [running]:
panic(0x1342de0, 0xc4203eebf0)
    /usr/local/go/src/runtime/panic.go:500 +0x1a1
k8s.io/kubernetes/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:56 +0x126
panic(0x1342de0, 0xc4203eebf0)
    /usr/local/go/src/runtime/panic.go:458 +0x243
k8s.io/kubernetes/cmd/kubeadm/app/node.EstablishMasterConnection.func1.1()
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:85 +0x29d
k8s.io/kubernetes/pkg/util/wait.JitterUntil.func1(0xc420563ee0)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:96 +0x5e
k8s.io/kubernetes/pkg/util/wait.JitterUntil(0xc420563ee0, 0x12a05f200, 0x0, 0xc420022e01, 0xc4202c2060)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:97 +0xad
k8s.io/kubernetes/pkg/util/wait.Until(0xc420563ee0, 0x12a05f200, 0xc4202c2060)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:52 +0x4d
k8s.io/kubernetes/cmd/kubeadm/app/node.EstablishMasterConnection.func1(0xc4203a82f0, 0xc420269b90, 0xc4202c2060, 0xc4202c20c0, 0xc4203d8d80, 0x401, 0x480, 0xc4201e75e0, 0x17, 0xc4201e7560, ...)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:93 +0x100
created by k8s.io/kubernetes/cmd/kubeadm/app/node.EstablishMasterConnection
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:94 +0x3ed

The join panic is discussed in detail in this issue: https://github.com/kubernetes/kubernetes/issues/36988

3. open /run/flannel/subnet.env: no such file or directory

As mentioned earlier, for security reasons the master node carries no workload by default and does not take part in pod scheduling. We have few machines here, so the master node has to pitch in. The following command lets the master node participate in pod scheduling:

# kubectl taint nodes --all dedicated-
node "iz25beglnhtz" tainted

Next, we create a deployment; the manifest is as follows:

//run-my-nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.10.1
        ports:
        - containerPort: 80
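A minimal way to create the deployment and see where its pods land (assuming the manifest above is saved as run-my-nginx.yaml):

# kubectl create -f run-my-nginx.yaml
# kubectl get pods -o wide -l run=my-nginx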

After the create, the my-nginx pod scheduled onto the master started fine, but the pod on the minion node kept failing, for the following reason:

Events:
  FirstSeen    LastSeen    Count    From                    SubObjectPath    Type        Reason        Message
  ---------    --------    -----    ----                    -------------    --------    ------        -------
  28s        28s        1    {default-scheduler }                    Normal        Scheduled    Successfully assigned my-nginx-2560993602-0440x to iz2ze39jeyizepdxhwqci6z
  27s        1s        26    {kubelet iz2ze39jeyizepdxhwqci6z}            Warning        FailedSync    Error syncing pod, skipping: failed to "SetupNetwork" for "my-nginx-2560993602-0440x_default" with SetupNetworkError: "Failed to setup network for pod \"my-nginx-2560993602-0440x_default(ba5ce554-cbf1-11e6-8c42-00163e1001d7)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"

There is indeed no /run/flannel/subnet.env on the minion node, but the master node has one:

// /run/flannel/subnet.env

FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

So I manually created /run/flannel/subnet.env on the minion node, copied in the contents of the master's file of the same name, and saved it. A short while later, the my-nginx pod on the minion node went from error to running.
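One way to do the copy (a sketch; the minion IP is the one from our cluster topology above):

# scp /run/flannel/subnet.env root@10.28.61.30:/run/flannel/subnet.env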

4. no IP addresses available in network: cbr0

Change the replicas of the earlier my-nginx deployment to 3, and create a my-nginx service in front of that deployment's pods:

//my-nginx-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30062
    protocol: TCP
  selector:
    run: my-nginx
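One way to apply these changes (a sketch; the deployment could also be edited and re-applied instead of scaled):

# kubectl scale deployment my-nginx --replicas=3
# kubectl create -f my-nginx-svc.yaml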

With that in place, we test connectivity with curl localhost:30062. Requests load-balanced through the VIP to the my-nginx pods on the master node all got responses, but requests balanced to the pod on the minion node blocked until they timed out. Looking at the pod info revealed that the my-nginx pod newly scheduled onto the minion node never started correctly; the error was:

Events:
  FirstSeen    LastSeen    Count    From                    SubObjectPath    Type        Reason        Message
  ---------    --------    -----    ----                    -------------    --------    ------        -------
  2m        2m        1    {default-scheduler }                    Normal        Scheduled    Successfully assigned my-nginx-1948696469-ph11m to iz2ze39jeyizepdxhwqci6z
  2m        0s        177    {kubelet iz2ze39jeyizepdxhwqci6z}            Warning        FailedSync    Error syncing pod, skipping: failed to "SetupNetwork" for "my-nginx-1948696469-ph11m_default" with SetupNetworkError: "Failed to setup network for pod \"my-nginx-1948696469-ph11m_default(3700d74a-cc12-11e6-8c42-00163e1001d7)\" using network plugins \"cni\": no IP addresses available in network: cbr0; Skipping pod"

Looking at /var/lib/cni/networks/cbr0 on the minion node, the directory contained the following files:

10.244.1.10   10.244.1.12   10.244.1.14   10.244.1.16   10.244.1.18   10.244.1.2    10.244.1.219  10.244.1.239  10.244.1.3   10.244.1.5   10.244.1.7   10.244.1.9
10.244.1.100  10.244.1.120  10.244.1.140  10.244.1.160  10.244.1.180  10.244.1.20   10.244.1.22   10.244.1.24   10.244.1.30  10.244.1.50  10.244.1.70  10.244.1.90
10.244.1.101  10.244.1.121  10.244.1.141  10.244.1.161  10.244.1.187  10.244.1.200  10.244.1.220  10.244.1.240  10.244.1.31  10.244.1.51  10.244.1.71  10.244.1.91
10.244.1.102  10.244.1.122  10.244.1.142  10.244.1.162  10.244.1.182  10.244.1.201  10.244.1.221  10.244.1.241  10.244.1.32  10.244.1.52  10.244.1.72  10.244.1.92
10.244.1.103  10.244.1.123  10.244.1.143  10.244.1.163  10.244.1.183  10.244.1.202  10.244.1.222  10.244.1.242  10.244.1.33  10.244.1.53  10.244.1.73  10.244.1.93
10.244.1.104  10.244.1.124  10.244.1.144  10.244.1.164  10.244.1.184  10.244.1.203  10.244.1.223  10.244.1.243  10.244.1.34  10.244.1.54  10.244.1.74  10.244.1.94
10.244.1.105  10.244.1.125  10.244.1.145  10.244.1.165  10.244.1.185  10.244.1.204  10.244.1.224  10.244.1.244  10.244.1.35  10.244.1.55  10.244.1.75  10.244.1.95
10.244.1.106  10.244.1.126  10.244.1.146  10.244.1.166  10.244.1.186  10.244.1.205  10.244.1.225  10.244.1.245  10.244.1.36  10.244.1.56  10.244.1.76  10.244.1.96
10.244.1.107  10.244.1.127  10.244.1.147  10.244.1.167  10.244.1.187  10.244.1.206  10.244.1.226  10.244.1.246  10.244.1.37  10.244.1.57  10.244.1.77  10.244.1.97
10.244.1.108  10.244.1.128  10.244.1.148  10.244.1.168  10.244.1.188  10.244.1.207  10.244.1.227  10.244.1.247  10.244.1.38  10.244.1.58  10.244.1.78  10.244.1.98
10.244.1.109  10.244.1.129  10.244.1.149  10.244.1.169  10.244.1.189  10.244.1.208  10.244.1.228  10.244.1.248  10.244.1.39  10.244.1.59  10.244.1.79  10.244.1.99
10.244.1.11   10.244.1.13   10.244.1.15   10.244.1.17   10.244.1.19   10.244.1.209  10.244.1.229  10.244.1.249  10.244.1.4   10.244.1.6   10.244.1.8   last_reserved_ip
10.244.1.110  10.244.1.130  10.244.1.150  10.244.1.170  10.244.1.190  10.244.1.21   10.244.1.23   10.244.1.25   10.244.1.40  10.244.1.60  10.244.1.80
10.244.1.111  10.244.1.131  10.244.1.151  10.244.1.171  10.244.1.191  10.244.1.210  10.244.1.230  10.244.1.250  10.244.1.41  10.244.1.61  10.244.1.81
10.244.1.112  10.244.1.132  10.244.1.152  10.244.1.172  10.244.1.192  10.244.1.211  10.244.1.231  10.244.1.251  10.244.1.42  10.244.1.62  10.244.1.82
10.244.1.113  10.244.1.133  10.244.1.153  10.244.1.173  10.244.1.193  10.244.1.212  10.244.1.232  10.244.1.252  10.244.1.43  10.244.1.63  10.244.1.83
10.244.1.114  10.244.1.134  10.244.1.154  10.244.1.174  10.244.1.194  10.244.1.213  10.244.1.233  10.244.1.253  10.244.1.44  10.244.1.64  10.244.1.84
10.244.1.115  10.244.1.135  10.244.1.155  10.244.1.175  10.244.1.195  10.244.1.214  10.244.1.234  10.244.1.254  10.244.1.45  10.244.1.65  10.244.1.85
10.244.1.116  10.244.1.136  10.244.1.156  10.244.1.176  10.244.1.196  10.244.1.215  10.244.1.235  10.244.1.26   10.244.1.46  10.244.1.66  10.244.1.86
10.244.1.117  10.244.1.137  10.244.1.157  10.244.1.177  10.244.1.197  10.244.1.216  10.244.1.236  10.244.1.27   10.244.1.47  10.244.1.67  10.244.1.87
10.244.1.118  10.244.1.138  10.244.1.158  10.244.1.178  10.244.1.198  10.244.1.217  10.244.1.237  10.244.1.28   10.244.1.48  10.244.1.68  10.244.1.88
10.244.1.119  10.244.1.139  10.244.1.159  10.244.1.179  10.244.1.199  10.244.1.218  10.244.1.238  10.244.1.29   10.244.1.49  10.244.1.69  10.244.1.89

Every IP in the 10.244.1.x range is already taken, so naturally there are no available IPs left for new pods. Why the range got exhausted is not yet clear. The two open issues below are related to this problem:

https://github.com/containernetworking/cni/issues/306

https://github.com/kubernetes/kubernetes/issues/21656

Inside /var/lib/cni/networks/cbr0, the following command releases the IP entries that the kubelet has presumably leaked:

for hash in $(tail -n +1 * | grep '^[A-Za-z0-9]*$' | cut -c 1-8); do if [ -z $(docker ps -a | grep $hash | awk '{print $1}') ]; then grep -irl $hash ./; fi; done | xargs rm
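For readability, the same cleanup expanded with comments (a sketch; run it inside /var/lib/cni/networks/cbr0):

for hash in $(tail -n +1 * | grep '^[A-Za-z0-9]*$' | cut -c 1-8); do
  # if no container (running or exited) matches this ID prefix, the IP file is stale
  if [ -z "$(docker ps -a | grep $hash | awk '{print $1}')" ]; then
    grep -irl $hash ./
  fi
done | xargs rm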

Afterwards, the directory listing became:

ls -l
total 32
drw-r--r-- 2 root root 12288 Dec 27 17:11 ./
drw-r--r-- 3 root root  4096 Dec 27 13:52 ../
-rw-r--r-- 1 root root    64 Dec 27 17:11 10.244.1.2
-rw-r--r-- 1 root root    64 Dec 27 17:11 10.244.1.3
-rw-r--r-- 1 root root    64 Dec 27 17:11 10.244.1.4
-rw-r--r-- 1 root root    10 Dec 27 17:11 last_reserved_ip

The pod still failed, but this time for a different reason:

Events:
  FirstSeen    LastSeen    Count    From                    SubObjectPath    Type        Reason        Message
  ---------    --------    -----    ----                    -------------    --------    ------        -------
  23s        23s        1    {default-scheduler }                    Normal        Scheduled    Successfully assigned my-nginx-1948696469-7p4nn to iz2ze39jeyizepdxhwqci6z
  22s        1s        22    {kubelet iz2ze39jeyizepdxhwqci6z}            Warning        FailedSync    Error syncing pod, skipping: failed to "SetupNetwork" for "my-nginx-1948696469-7p4nn_default" with SetupNetworkError: "Failed to setup network for pod \"my-nginx-1948696469-7p4nn_default(a40fe652-cc14-11e6-8c42-00163e1001d7)\" using network plugins \"cni\": \"cni0\" already has an IP address different from 10.244.1.1/24; Skipping pod"

And the files under /var/lib/cni/networks/cbr0 started multiplying rapidly again! The problem was at a deadlock.

5. flannel vxlan does not work; switching the backend to udp does not help either

By this point I was nearly exhausted, so I ran kubeadm reset on both nodes and prepared to start over.

After kubeadm reset, the bridge device cni0 and the flannel.1 interface created earlier by flannel are still there. To bring the environment fully back to its initial state, we can delete both devices with:

# ifconfig  cni0 down
# brctl delbr cni0
# ip link delete flannel.1

Tempered by the earlier problems, the fresh init and join went remarkably smoothly; this time the minion node showed no anomalies.

#  kubectl get nodes -o wide
NAME                      STATUS         AGE       EXTERNAL-IP
iz25beglnhtz              Ready,master   5m        <none>
iz2ze39jeyizepdxhwqci6z   Ready          51s       <none>

# kubectl get pod --all-namespaces
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
default       my-nginx-1948696469-71h1l              1/1       Running   0          3m
default       my-nginx-1948696469-zwt5g              1/1       Running   0          3m
default       my-ubuntu-2560993602-ftdm6             1/1       Running   0          3m
kube-system   dummy-2088944543-lmlbh                 1/1       Running   0          5m
kube-system   etcd-iz25beglnhtz                      1/1       Running   0          6m
kube-system   kube-apiserver-iz25beglnhtz            1/1       Running   0          6m
kube-system   kube-controller-manager-iz25beglnhtz   1/1       Running   0          6m
kube-system   kube-discovery-1769846148-l5lfw        1/1       Running   0          5m
kube-system   kube-dns-2924299975-mdq5r              4/4       Running   0          5m
kube-system   kube-flannel-ds-9zwr1                  2/2       Running   0          5m
kube-system   kube-flannel-ds-p7xh2                  2/2       Running   0          1m
kube-system   kube-proxy-dwt5f                       1/1       Running   0          5m
kube-system   kube-proxy-vm6v2                       1/1       Running   0          1m
kube-system   kube-scheduler-iz25beglnhtz            1/1       Running   0          6m

Next we created the my-nginx deployment and service again to test connectivity over the flannel network. Curling the my-nginx service's nodeport reached the two nginx pods on the master, but the pod on the minion node was still unreachable.

The flannel docker log on the master:

I1228 02:52:22.097083       1 network.go:225] L3 miss: 10.244.1.2
I1228 02:52:22.097169       1 device.go:191] calling NeighSet: 10.244.1.2, 46:6c:7a:a6:06:60
I1228 02:52:22.097335       1 network.go:236] AddL3 succeeded
I1228 02:52:55.169952       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:00.801901       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:03.801923       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:04.801764       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:05.801848       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:06.888269       1 network.go:225] L3 miss: 10.244.1.2
I1228 02:53:06.888340       1 device.go:191] calling NeighSet: 10.244.1.2, 46:6c:7a:a6:06:60
I1228 02:53:06.888507       1 network.go:236] AddL3 succeeded
I1228 02:53:39.969791       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:45.153770       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:48.154822       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:49.153774       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:50.153734       1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2
I1228 02:53:52.154056       1 network.go:225] L3 miss: 10.244.1.2
I1228 02:53:52.154110       1 device.go:191] calling NeighSet: 10.244.1.2, 46:6c:7a:a6:06:60
I1228 02:53:52.154256       1 network.go:236] AddL3 succeeded

The log is full of "Ignoring not a miss" lines; the vxlan network seems to have a problem. It looks very close to the issue described here:

https://github.com/coreos/flannel/issues/427

Flannel uses vxlan as its default backend, on the kernel vxlan default udp port 8472. Flannel also supports a udp backend on udp port 8285, so I tried switching the backend. The steps to change the flannel backend are:

  • Download https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml to a local file;
  • Edit kube-flannel.yml: in the net-conf.json property, add a "Backend" field:
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "udp",
        "Port": 8285
      }
    }
---
... ...
  • Uninstall and reinstall the pod network
# kubectl delete -f kube-flannel.yml
configmap "kube-flannel-cfg" deleted
daemonset "kube-flannel-ds" deleted

# kubectl apply -f kube-flannel.yml
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created

# netstat -an|grep 8285
udp        0      0 123.56.200.187:8285     0.0.0.0:*

Testing showed that the udp port is reachable: tcpdump -i flannel0 on both nodes shows udp packets being sent and received. But the pod network between the two nodes is still broken.

6. failed to register network: failed to acquire lease: node "iz25beglnhtz" not found

Under normal circumstances, the flannel pod startup logs on the master node and minion node look like this:

flannel running on the master node:

I1227 04:56:16.577828       1 main.go:132] Installing signal handlers
I1227 04:56:16.578060       1 kube.go:233] starting kube subnet manager
I1227 04:56:16.578064       1 manager.go:133] Determining IP address of default interface
I1227 04:56:16.578576       1 manager.go:163] Using 123.56.200.187 as external interface
I1227 04:56:16.578616       1 manager.go:164] Using 123.56.200.187 as external endpoint
E1227 04:56:16.579079       1 network.go:106] failed to register network: failed to acquire lease: node "iz25beglnhtz" not found
I1227 04:56:17.583744       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
I1227 04:56:17.585367       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I1227 04:56:17.587765       1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
I1227 04:56:17.589943       1 manager.go:246] Lease acquired: 10.244.0.0/24
I1227 04:56:17.590203       1 network.go:58] Watching for L3 misses
I1227 04:56:17.590255       1 network.go:66] Watching for new subnet leases
I1227 07:43:27.164103       1 network.go:153] Handling initial subnet events
I1227 07:43:27.164211       1 device.go:163] calling GetL2List() dev.link.Index: 5
I1227 07:43:27.164350       1 device.go:168] calling NeighAdd: 59.110.67.15, ca:50:97:1f:c2:ea

flannel running on the minion node:

# docker logs 1f64bd9c0386
I1227 07:43:26.670620       1 main.go:132] Installing signal handlers
I1227 07:43:26.671006       1 manager.go:133] Determining IP address of default interface
I1227 07:43:26.670825       1 kube.go:233] starting kube subnet manager
I1227 07:43:26.671514       1 manager.go:163] Using 59.110.67.15 as external interface
I1227 07:43:26.671575       1 manager.go:164] Using 59.110.67.15 as external endpoint
I1227 07:43:26.746811       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
I1227 07:43:26.749785       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I1227 07:43:26.752343       1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
I1227 07:43:26.755126       1 manager.go:246] Lease acquired: 10.244.1.0/24
I1227 07:43:26.755444       1 network.go:58] Watching for L3 misses
I1227 07:43:26.755475       1 network.go:66] Watching for new subnet leases
I1227 07:43:27.755830       1 network.go:153] Handling initial subnet events
I1227 07:43:27.755905       1 device.go:163] calling GetL2List() dev.link.Index: 10
I1227 07:43:27.756099       1 device.go:168] calling NeighAdd: 123.56.200.187, ca:68:7c:9b:cc:67

But during the tests for problem 5 above, we found the following errors in the flannel container startup logs:

master node:

# docker logs c2d1cee3df3d
I1228 06:53:52.502571       1 main.go:132] Installing signal handlers
I1228 06:53:52.502735       1 manager.go:133] Determining IP address of default interface
I1228 06:53:52.503031       1 manager.go:163] Using 123.56.200.187 as external interface
I1228 06:53:52.503054       1 manager.go:164] Using 123.56.200.187 as external endpoint
E1228 06:53:52.503869       1 network.go:106] failed to register network: failed to acquire lease: node "iz25beglnhtz" not found
I1228 06:53:52.503899       1 kube.go:233] starting kube subnet manager
I1228 06:53:53.522892       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
I1228 06:53:53.524325       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I1228 06:53:53.526622       1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
I1228 06:53:53.528438       1 manager.go:246] Lease acquired: 10.244.0.0/24
I1228 06:53:53.528744       1 network.go:58] Watching for L3 misses
I1228 06:53:53.528777       1 network.go:66] Watching for new subnet leases

minion node:

# docker logs dcbfef45308b
I1228 05:28:05.012530       1 main.go:132] Installing signal handlers
I1228 05:28:05.012747       1 manager.go:133] Determining IP address of default interface
I1228 05:28:05.013011       1 manager.go:163] Using 59.110.67.15 as external interface
I1228 05:28:05.013031       1 manager.go:164] Using 59.110.67.15 as external endpoint
E1228 05:28:05.013204       1 network.go:106] failed to register network: failed to acquire lease: node "iz2ze39jeyizepdxhwqci6z" not found
I1228 05:28:05.013237       1 kube.go:233] starting kube subnet manager
I1228 05:28:06.041602       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
I1228 05:28:06.042863       1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I1228 05:28:06.044896       1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
I1228 05:28:06.046497       1 manager.go:246] Lease acquired: 10.244.1.0/24
I1228 05:28:06.046780       1 network.go:98] Watching for new subnet leases
I1228 05:28:07.047052       1 network.go:191] Subnet added: 10.244.0.0/24

Both nodes show the "register network" failure: failed to register network: failed to acquire lease: node "xxxx" not found. It is hard to say whether these errors are what break the network between the two nodes; throughout the tests the problem came and went. A similar discussion can be found in this flannel issue:

https://github.com/coreos/flannel/issues/435

The many problems with the Flannel pod network led me to set Flannel aside, for now, in kubeadm-created kubernetes clusters.

V. Calico Pod Network

Among the pod network add-ons Kubernetes supports, besides Flannel there are also Calico, Weave Net, and others. Here we try Calico, a pod network built on BGP, the border gateway protocol. The Calico project has dedicated documentation for installing its Pod network on a kubeadm-built K8s cluster. We meet all the requirements and constraints described there, for example:

the master node carries the kubeadm.alpha.kubernetes.io/role: master label:

# kubectl get nodes -o wide --show-labels
NAME           STATUS         AGE       EXTERNAL-IP   LABELS
iz25beglnhtz   Ready,master   3m        <none>        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubeadm.alpha.kubernetes.io/role=master,kubernetes.io/hostname=iz25beglnhtz

Before installing calico, we again run kubeadm reset to reset the environment and delete the network devices created by flannel; see the commands in the sections above.

1. Initialize the cluster

With calico, kubeadm init no longer needs the --pod-network-cidr=10.244.0.0/16 option:

# kubeadm init --api-advertise-addresses=10.47.217.91
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "531b3f.3bd900d61b78d6c9"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 13.527323 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 0.503814 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 1.503644 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=531b3f.3bd900d61b78d6c9 10.47.217.91

2. Create the calico network

# kubectl apply -f http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml
configmap "calico-config" created
daemonset "calico-etcd" created
service "calico-etcd" created
daemonset "calico-node" created
deployment "calico-policy-controller" created
job "configure-calico" created

The actual creation takes a while, because calico needs to pull some images:

# docker images
REPOSITORY                                               TAG                        IMAGE ID            CREATED             SIZE
quay.io/calico/node                                      v1.0.0                     74bff066bc6a        7 days ago          256.4 MB
calico/ctl                                               v1.0.0                     069830246cf3        8 days ago          43.35 MB
calico/cni                                               v1.5.5                     ada87b3276f3        12 days ago         67.13 MB
gcr.io/google_containers/etcd                            2.2.1                      a6cd91debed1        14 months ago       28.19 MB

calico created two network devices locally on the master node:

# ip a
... ...
47: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 192.168.91.0/32 scope global tunl0
       valid_lft forever preferred_lft forever
48: califa32a09679f@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 62:39:10:55:44:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 0

3. Minion node join

Run the following to add the minion node to the cluster:

# kubeadm join --token=531b3f.3bd900d61b78d6c9 10.47.217.91

calico also created a network device on the minion node:

57988: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 192.168.136.192/32 scope global tunl0
       valid_lft forever preferred_lft forever

After a successful join, check the cluster status:

# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE       IP             NODE
kube-system   calico-etcd-488qd                          1/1       Running   0          18m       10.47.217.91   iz25beglnhtz
kube-system   calico-node-jcb3c                          2/2       Running   0          18m       10.47.217.91   iz25beglnhtz
kube-system   calico-node-zthzp                          2/2       Running   0          4m        10.28.61.30    iz2ze39jeyizepdxhwqci6z
kube-system   calico-policy-controller-807063459-f21q4   1/1       Running   0          18m       10.47.217.91   iz25beglnhtz
kube-system   dummy-2088944543-rtsfk                     1/1       Running   0          23m       10.47.217.91   iz25beglnhtz
kube-system   etcd-iz25beglnhtz                          1/1       Running   0          23m       10.47.217.91   iz25beglnhtz
kube-system   kube-apiserver-iz25beglnhtz                1/1       Running   0          23m       10.47.217.91   iz25beglnhtz
kube-system   kube-controller-manager-iz25beglnhtz       1/1       Running   0          23m       10.47.217.91   iz25beglnhtz
kube-system   kube-discovery-1769846148-51wdk            1/1       Running   0          23m       10.47.217.91   iz25beglnhtz
kube-system   kube-dns-2924299975-fhf5f                  4/4       Running   0          23m       192.168.91.1   iz25beglnhtz
kube-system   kube-proxy-2s7qc                           1/1       Running   0          4m        10.28.61.30    iz2ze39jeyizepdxhwqci6z
kube-system   kube-proxy-h2qds                           1/1       Running   0          23m       10.47.217.91   iz25beglnhtz
kube-system   kube-scheduler-iz25beglnhtz                1/1       Running   0          23m       10.47.217.91   iz25beglnhtz

All components are ok; that seems like a good omen! But whether the cross-node pod network actually works needs further investigation.

4. Probing cross-node pod network connectivity

We reuse the my-nginx-svc.yaml and run-my-nginx.yaml from the flannel tests above to create the my-nginx service and deployment. Note: before this, run "kubectl taint nodes --all dedicated-" on the master node so that it also carries workload.

Unfortunately, the result was very similar to flannel: http requests assigned to the master node got nginx responses, while the pod on the minion node was still unreachable.

This time I did not want to linger on calico; I wanted to move on quickly and see whether the next candidate, weave net, meets our needs.

Due to inexplicable wordpress problems, this article could not be published in one piece, so it has been split into two parts. This is part one; please continue to part two here.

Installing the Kubernetes Cluster DNS Add-on

In the previous article on installing a Kubernetes cluster, we built a minimal usable k8s cluster. Unlike Docker, which has had cluster management built into the engine since version 1.12, k8s provides its service as a set of loosely coupled components. Beyond the core components, everything else ships as an Add-on, for example the in-cluster kube-dns and the K8s Dashboard. kube-dns is an important k8s add-on: it handles the registration and discovery of services inside the cluster. As the k8s install and management experience keeps improving, the DNS add-on is bound to become part of the default installation. Building on the earlier post 《一篇文章带你了解Kubernetes安装》, this article digs into the installation "routine" of the DNS component ^_^ and the troubleshooting of the problems that came up.

I. Prerequisites and How It Works

As noted above, K8s installation differs from Provider to Provider; here we assume provider=ubuntu and use the installation scripts maintained by the Zhejiang University team. If your provider is something else, what this article describes may not apply, but understanding how the DNS component is installed under provider=ubuntu should still be of some help with other installation methods.

On the deployment machine, under cluster/ubuntu in the k8s installation working directory, besides download-release.sh and util.sh used to install the core components, there is another script, deployAddons.sh. It is short and clearly structured, and roughly performs these steps:

init
deploy_dns
deploy_dashboard

So this script deploys the two common k8s add-ons: dns and dashboard. Looking closer, deployAddons.sh also runs off the configuration in ./cluster/ubuntu/config-default.sh; the relevant settings are:

# Optional: Install cluster DNS.
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
# DNS_SERVER_IP must be a IP in SERVICE_CLUSTER_IP_RANGE
DNS_SERVER_IP=${DNS_SERVER_IP:-"192.168.3.10"}
DNS_DOMAIN=${DNS_DOMAIN:-"cluster.local"}
DNS_REPLICAS=${DNS_REPLICAS:-1}

deployAddons.sh first generates the two k8s manifests, skydns-rc.yaml and skydns-svc.yaml, from the configuration above, and then creates the dns service with kubectl create.
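Roughly, the deploy_dns step boils down to something like the following (a sketch, not the literal script; the .sed template paths are the ones referenced later in this post):

sed -e "s/\$DNS_REPLICAS/${DNS_REPLICAS}/g;s/\$DNS_DOMAIN/${DNS_DOMAIN}/g" \
    "${KUBE_ROOT}/cluster/saltbase/salt/kube-dns/skydns-rc.yaml.sed" > skydns-rc.yaml
sed -e "s/\$DNS_SERVER_IP/${DNS_SERVER_IP}/g" \
    "${KUBE_ROOT}/cluster/saltbase/salt/kube-dns/skydns-svc.yaml.sed" > skydns-svc.yaml
kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml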

II. Installing the k8s DNS Add-on

1. First attempt

To make deployAddons.sh install only the DNS component, first set an environment variable:

export KUBE_ENABLE_CLUSTER_UI=false

Run the install script:

# KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh
Creating kube-system namespace...
The namespace 'kube-system' is successfully created.

Deploying DNS on Kubernetes
replicationcontroller "kube-dns-v17.1" created
service "kube-dns" created
Kube-dns rc and service is successfully deployed.

Seemingly smooth. Let's verify with kubectl (note: since the DNS service is created in a namespace called kube-system, kubectl must be told that namespace name, otherwise the dns service will not show up):

# kubectl --namespace=kube-system get services
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               192.168.3.10    <none>        53/UDP,53/TCP   1m

root@iZ25cn4xxnvZ:~/k8stest/1.3.7/kubernetes/cluster/ubuntu# kubectl --namespace=kube-system get pods
NAME                                    READY     STATUS              RESTARTS   AGE
kube-dns-v17.1-n4tnj                    0/3       ErrImagePull        0          4m

Looking at the Pod behind the DNS component, Ready is 0/3 and STATUS is "ErrImagePull"; the DNS service never really came up.

2. Fixing skydns-rc.yaml

Let's fix the problem above. Under cluster/ubuntu there are now two extra files, skydns-rc.yaml and skydns-svc.yaml, the two k8s service manifests that deployAddons.sh generated from the settings in config-default.sh; the problem lies in skydns-rc.yaml. In it we see the image names for the three containers of the dns service's pod:

gcr.io/google_containers/kubedns-amd64:1.5
gcr.io/google_containers/kube-dnsmasq-amd64:1.3
gcr.io/google_containers/exechealthz-amd64:1.1

For this install I had not configured an accelerator (vpn), so pulling the images from gcr.io failed. Without an accelerator, substitutes are easy to find on docker hub (connections to docker hub from inside China are slow and frequently fail, so it is advisable to manually pull these three substitute images first):

gcr.io/google_containers/kubedns-amd64:1.5
=> chasontang/kubedns-amd64:1.5

gcr.io/google_containers/kube-dnsmasq-amd64:1.3
=> chasontang/kube-dnsmasq-amd64:1.3

gcr.io/google_containers/exechealthz-amd64:1.1
=> chasontang/exechealthz-amd64:1.1

We need to manually replace the three image names in skydns-rc.yaml (see the sed sketch below). And to prevent deployAddons.sh from regenerating skydns-rc.yaml, we need to comment out the following two lines in deployAddons.sh:

#sed -e "s/\\\$DNS_REPLICAS/${DNS_REPLICAS}/g;s/\\\$DNS_DOMAIN/${DNS_DOMAIN}/g;" "${KUBE_ROOT}/cluster/saltbase/salt/kube-dns/skydns-rc.yaml.sed" > skydns-rc.yaml
#sed -e "s/\\\$DNS_SERVER_IP/${DNS_SERVER_IP}/g" "${KUBE_ROOT}/cluster/saltbase/salt/kube-dns/skydns-svc.yaml.sed" > skydns-svc.yaml

Delete the dns service:

# kubectl --namespace=kube-system delete rc/kube-dns-v17.1 svc/kube-dns
replicationcontroller "kube-dns-v17.1" deleted
service "kube-dns" deleted

Run deployAddons.sh again to redeploy the DNS component (not repeated here). After the install, let's check again whether everything is ok; this time we use docker ps directly to see whether the three containers in the pod are up:

# docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
e8dc52cba2c7        chasontang/exechealthz-amd64:1.1           "/exechealthz '-cmd=n"   7 minutes ago       Up 7 minutes                            k8s_healthz.1a0d495a_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_b42e68fc
f1b83b442b15        chasontang/kube-dnsmasq-amd64:1.3          "/usr/sbin/dnsmasq --"   7 minutes ago       Up 7 minutes                            k8s_dnsmasq.f16970b7_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_da111cd4
d9f09b440c6e        gcr.io/google_containers/pause-amd64:3.0   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD.a6b39ba7_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_b198b4a8

It seems the container based on the kube-dns image did not start successfully, and docker ps -a confirms this:

# docker ps -a
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS                       PORTS               NAMES
24387772a2a9        chasontang/kubedns-amd64:1.5               "/kube-dns --domain=c"   3 minutes ago       Exited (255) 2 minutes ago                       k8s_kubedns.cdbc8a07_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_473144a6
3b8bb401ac6f        chasontang/kubedns-amd64:1.5               "/kube-dns --domain=c"   5 minutes ago       Exited (255) 4 minutes ago                       k8s_kubedns.cdbc8a07_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_cdd57b87

Check the logs of the stopped kube-dns container:

# docker logs 24387772a2a9
I1021 05:18:00.982731       1 server.go:91] Using https://192.168.3.1:443 for kubernetes master
I1021 05:18:00.982898       1 server.go:92] Using kubernetes API <nil>
I1021 05:18:00.983810       1 server.go:132] Starting SkyDNS server. Listening on port:10053
I1021 05:18:00.984030       1 server.go:139] skydns: metrics enabled on :/metrics
I1021 05:18:00.984152       1 dns.go:166] Waiting for service: default/kubernetes
I1021 05:18:00.984672       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I1021 05:18:00.984697       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I1021 05:18:01.292557       1 dns.go:172] Ignoring error while waiting for service default/kubernetes: the server has asked for the client to provide credentials (get services kubernetes). Sleeping 1s before retrying.
E1021 05:18:01.293232       1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: the server has asked for the client to provide credentials (get services)
E1021 05:18:01.293361       1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: the server has asked for the client to provide credentials (get endpoints)
I1021 05:18:01.483325       1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I1021 05:18:01.483390       1 dns.go:539] records:[], retval:[], path:[local cluster svc default kubernetes]
I1021 05:18:01.582598       1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
... ...

I1021 05:19:07.458786       1 dns.go:172] Ignoring error while waiting for service default/kubernetes: the server has asked for the client to provide credentials (get services kubernetes). Sleeping 1s before retrying.
E1021 05:19:07.460465       1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: the server has asked for the client to provide credentials (get endpoints)
E1021 05:19:07.462793       1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: the server has asked for the client to provide credentials (get services)
F1021 05:19:07.867746       1 server.go:127] Received signal: terminated

Judging from the logs, kube-dns failed to connect to the apiserver and exited after a number of retries. Also from the logs, the kubernetes api server address as seen by kube-dns is:

I1021 05:18:00.982731       1 server.go:91] Using https://192.168.3.1:443 for kubernetes master

In reality, our k8s apiserver listens on insecure port 8080 and secure port 6443 (6443 is the default in the source code, since we never configured it explicitly), so accessing the apiserver via https on port 443 is doomed to fail. The problem is identified; what remains is how to fix it.
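
Before touching the manifest, it is easy to confirm from a node which port the apiserver actually answers on (the master IP below is the one used later in this post; adjust it to your environment):

# the insecure port answers plain HTTP without credentials
curl http://10.47.136.60:8080/version

# the secure port demands client credentials, which is exactly what kube-dns lacks here
curl -k https://10.47.136.60:6443/version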

3、Specifying --kube-master-url

Let's see which command-line flags the kube-dns command accepts:

# docker run -it chasontang/kubedns-amd64:1.5 kube-dns --help
Usage of /kube-dns:
      --alsologtostderr[=false]: log to standard error as well as files
      --dns-port=53: port on which to serve DNS requests.
      --domain="cluster.local.": domain under which to create names
      --federations=: a comma separated list of the federation names and their corresponding domain names to which this cluster belongs. Example: "myfederation1=example.com,myfederation2=example2.com,myfederation3=example.com"
      --healthz-port=8081: port on which to serve a kube-dns HTTP readiness probe.
      --kube-master-url="": URL to reach kubernetes master. Env variables in this flag will be expanded.
      --kubecfg-file="": Location of kubecfg file for access to kubernetes master service; --kube-master-url overrides the URL part of this; if neither this nor --kube-master-url are provided, defaults to service account tokens
      --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace
      --log-dir="": If non-empty, write log files in this directory
      --log-flush-frequency=5s: Maximum number of seconds between log flushes
      --logtostderr[=true]: log to standard error instead of files
      --stderrthreshold=2: logs at or above this threshold go to stderr
      --v=0: log level for V logs
      --version[=false]: Print version information and quit
      --vmodule=: comma-separated list of pattern=N settings for file-filtered logging

As we can see, the --kube-master-url flag does exactly what we need. We modify skydns-rc.yaml once more:

        args:
        # command = "/kube-dns"
        - --domain=cluster.local.
        - --dns-port=10053
        - --kube-master-url=http://10.47.136.60:8080   # newly added line
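
With skydns-rc.yaml edited, redeploying amounts to deleting the old rc/svc and recreating them from the edited files (a sketch; since the sed lines in deployAddons.sh are now commented out, simply re-running that script achieves the same thing):

kubectl --namespace=kube-system delete rc/kube-dns-v17.1 svc/kube-dns
kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml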

Redeploy the DNS add-on as sketched above; once deployed, check the kube-dns service information:

# kubectl --namespace=kube-system describe service/kube-dns
Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=KubeDNS
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP:                192.168.3.10
Port:              dns      53/UDP
Endpoints:         172.16.99.3:53
Port:              dns-tcp  53/TCP
Endpoints:         172.16.99.3:53
Session Affinity:  None
No events

Then look directly at the kube-dns container logs via docker logs:

docker logs 2f4905510cd2
I1023 11:44:12.997606       1 server.go:91] Using http://10.47.136.60:8080 for kubernetes master
I1023 11:44:13.090820       1 server.go:92] Using kubernetes API v1
I1023 11:44:13.091707       1 server.go:132] Starting SkyDNS server. Listening on port:10053
I1023 11:44:13.091828       1 server.go:139] skydns: metrics enabled on :/metrics
I1023 11:44:13.091952       1 dns.go:166] Waiting for service: default/kubernetes
I1023 11:44:13.094592       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I1023 11:44:13.094606       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I1023 11:44:13.104789       1 server.go:101] Setting up Healthz Handler(/readiness, /cache) on port :8081
I1023 11:44:13.105912       1 dns.go:660] DNS Record:&{192.168.3.182 0 10 10  false 30 0  }, hash:6a8187e0
I1023 11:44:13.106033       1 dns.go:660] DNS Record:&{kubernetes-dashboard.kube-system.svc.cluster.local. 0 10 10  false 30 0  }, hash:529066a8
I1023 11:44:13.106120       1 dns.go:660] DNS Record:&{192.168.3.10 0 10 10  false 30 0  }, hash:bdfe50f8
I1023 11:44:13.106193       1 dns.go:660] DNS Record:&{kube-dns.kube-system.svc.cluster.local. 53 10 10  false 30 0  }, hash:fdbb4e78
I1023 11:44:13.106268       1 dns.go:660] DNS Record:&{kube-dns.kube-system.svc.cluster.local. 53 10 10  false 30 0  }, hash:fdbb4e78
I1023 11:44:13.106306       1 dns.go:660] DNS Record:&{kube-dns.kube-system.svc.cluster.local. 0 10 10  false 30 0  }, hash:d1247c4e
I1023 11:44:13.106329       1 dns.go:660] DNS Record:&{192.168.3.1 0 10 10  false 30 0  }, hash:2b11f462
I1023 11:44:13.106350       1 dns.go:660] DNS Record:&{kubernetes.default.svc.cluster.local. 443 10 10  false 30 0  }, hash:c3f6ae26
I1023 11:44:13.106377       1 dns.go:660] DNS Record:&{kubernetes.default.svc.cluster.local. 0 10 10  false 30 0  }, hash:b9b7d845
I1023 11:44:13.106398       1 dns.go:660] DNS Record:&{192.168.3.179 0 10 10  false 30 0  }, hash:d7e0b1e
I1023 11:44:13.106422       1 dns.go:660] DNS Record:&{my-nginx.default.svc.cluster.local. 0 10 10  false 30 0  }, hash:b0f41a92
I1023 11:44:16.083653       1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I1023 11:44:16.083950       1 dns.go:539] records:[0xc8202c39d0], retval:[{192.168.3.1 0 10 10  false 30 0  /skydns/local/cluster/svc/default/kubernetes/3262313166343632}], path:[local cluster svc default kubernetes]
I1023 11:44:16.084474       1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I1023 11:44:16.084517       1 dns.go:539] records:[0xc8202c39d0], retval:[{192.168.3.1 0 10 10  false 30 0  /skydns/local/cluster/svc/default/kubernetes/3262313166343632}], path:[local cluster svc default kubernetes]
I1023 11:44:16.085024       1 dns.go:583] Received ReverseRecord Request:1.3.168.192.in-addr.arpa.

The logs show that the apiserver URL is now correct and kube-dns no longer emits errors. The installation appears to have succeeded, but it still needs to be verified with a test.

三、Verifying k8s DNS

As expected, the k8s dns add-on should resolve dns names for services inside the k8s cluster. The services currently deployed in the cluster's default namespace are:

# kubectl get services
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   192.168.3.1     <none>        443/TCP   10d
my-nginx     192.168.3.179   <nodes>       80/TCP    6d

From a myclient container inside the k8s cluster, let's try to ping and curl the my-nginx service:

ping my-nginx resolves successfully (it finds my-nginx's cluster IP, 192.168.3.179):

root@my-nginx-2395715568-gpljv:/# ping my-nginx
PING my-nginx.default.svc.cluster.local (192.168.3.179): 56 data bytes

curl against the my-nginx service also returns a successful result:

# curl -v my-nginx
* Rebuilt URL to: my-nginx/
* Hostname was NOT found in DNS cache
*   Trying 192.168.3.179...
* Connected to my-nginx (192.168.3.179) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: my-nginx
> Accept: */*
>
< HTTP/1.1 200 OK
* Server nginx/1.10.1 is not blacklisted
< Server: nginx/1.10.1
< Date: Sun, 23 Oct 2016 12:14:01 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Tue, 31 May 2016 14:17:02 GMT
< Connection: keep-alive
< ETag: "574d9cde-264"
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host my-nginx left intact

The client container's dns configuration, which should be the default applied at k8s installation time (it is derived from config-default.sh):

# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 192.168.3.10
options timeout:1 attempts:1 rotate
options ndots:5
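
Given the search list above, a short name like my-nginx is expanded to my-nginx.default.svc.cluster.local and resolved by nameserver 192.168.3.10. If the client image ships nslookup, the resolution can also be checked explicitly (a hypothetical check; output omitted):

# inside the client container
nslookup my-nginx 192.168.3.10
nslookup kube-dns.kube-system.svc.cluster.local 192.168.3.10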

With that, the k8s dns add-on is installed and working.
