
Installing the Dashboard Add-on for a Kubernetes Cluster

The first time I installed a Kubernetes 1.3.7 cluster with the kube-up.sh script, I also got the kubernetes dashboard addon installed without trouble, and it has been running stably in that environment ever since. But it is a test environment after all, and some of its configuration cannot meet production requirements, security being the obvious example. Today I found time to adjust the Dashboard configuration, and along the way I am writing down the original installation and configuration process for reference.

I. The default Dashboard installation

1. Installing with the default configuration

Installing the dashboard on Ubuntu via kube-up.sh works much the same way as installing the DNS add-on. The script files and configuration items involved are:

//  kubernetes/cluster/config-default.sh
... ...
# Optional: Install Kubernetes UI
ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"
... ...

// kubernetes/cluster/ubuntu/deployAddons.sh
... ...
function deploy_dashboard {
    if ${KUBECTL} get rc -l k8s-app=kubernetes-dashboard --namespace=kube-system | grep kubernetes-dashboard-v &> /dev/null; then
        echo "Kubernetes Dashboard replicationController already exists"
    else
        echo "Creating Kubernetes Dashboard replicationController"
        ${KUBECTL} create -f ${KUBE_ROOT}/cluster/addons/dashboard/dashboard-controller.yaml
    fi

    if ${KUBECTL} get service/kubernetes-dashboard --namespace=kube-system &> /dev/null; then
        echo "Kubernetes Dashboard service already exists"
    else
        echo "Creating Kubernetes Dashboard service"
        ${KUBECTL} create -f ${KUBE_ROOT}/cluster/addons/dashboard/dashboard-service.yaml
    fi

  echo
}

init

... ...

if [ "${ENABLE_CLUSTER_UI}" == true ]; then
  deploy_dashboard
fi

kube-up.sh attempts to create the "kube-system" namespace and then executes the following commands:

kubectl create -f kubernetes/cluster/addons/dashboard/dashboard-controller.yaml
kubectl create -f kubernetes/cluster/addons/dashboard/dashboard-service.yaml

This is not much different from creating an rc and a service in the cluster by hand.

The installation above happens as part of the k8s cluster installation. If you want to install the Dashboard on its own, the method given on the Dashboard homepage is simpler still:

kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
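Either way, you can verify the result afterwards (a quick check, assuming the add-on lands in the kube-system namespace with the standard k8s-app label used by the manifests below):

# kubectl get pods --namespace=kube-system -l k8s-app=kubernetes-dashboard
# kubectl get services --namespace=kube-system kubernetes-dashboard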

2. Adjusting the Dashboard container's startup parameters

The contents of dashboard-controller.yaml and dashboard-service.yaml are as follows:

//dashboard-controller.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: kubernetes-dashboard-v1.1.1
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    version: v1.1.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: v1.1.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30

// dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

The contents of these two files are slightly dated; they still use the ReplicationController, which is no longer the recommended approach.

After a default installation like this, however, you may still run into the following problems:

(1) The Dashboard pod fails to be created. This is because the kubernetes-dashboard-amd64:v1.1.1 image is hosted outside the firewall, so pulling the image fails.

You can work around this with a registry accelerator or by using a substitute image, for example mritd/kubernetes-dashboard-amd64:v1.4.0. Just modify the image line in dashboard-controller.yaml.
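A sketch of the change; only the image line differs from the original manifest:

// dashboard-controller.yaml (image replaced)
      containers:
      - name: kubernetes-dashboard
        image: mritd/kubernetes-dashboard-amd64:v1.4.0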

(2) The Dashboard cannot connect to the api server on the master node

If the only dashboard pod (replicas=1) gets scheduled onto a minion node, it may well fail to reach the api server on the master node (the dashboard auto-detects the api server within the cluster, but this sometimes fails), so the page cannot render properly. We therefore need to specify the api server URL explicitly, for example by adding an --apiserver-host startup argument for the container in dashboard-controller.yaml:

// dashboard-controller.yaml
... ...
spec:
      containers:
      - name: kubernetes-dashboard
        image: mritd/kubernetes-dashboard-amd64:v1.4.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
           - --apiserver-host=http://{api server host}:{api server insecure-port}
... ...

(3) Adding a nodePort to provide external access

The dashboard runs in the cluster in the role of a cluster service. We can reach the service from a Node, or hit the pod directly, but reaching the dashboard from an external network requires extra setup, such as a nodePort.

Modify the configuration in dashboard-service.yaml as follows:

spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 12345

Now you can reach the dashboard via a node's public IP plus the nodePort.

At this point, though, your dashboard is running naked, with no security whatsoever:
- the dashboard UI has no access control; anyone with access can take full control of the dashboard;
- behind the scenes, the dashboard talks to the apiserver over the insecure port, with no encryption.

II. Having the dashboard access the apiserver via a kubeconfig file

Let's first establish a secure communication channel between the dashboard and the apiserver.

The current kube-apiserver startup parameters on the master are:

// /etc/default/kube-apiserver

KUBE_APISERVER_OPTS=" --insecure-bind-address=0.0.0.0 --insecure-port=8080 --etcd-servers=http://127.0.0.1:4001 --logtostderr=true --service-cluster-ip-range=192.168.3.0/24 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota --service-node-port-range=80-32767 --advertise-address={master node local ip} --basic-auth-file=/srv/kubernetes/basic_auth_file --client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key"

For the dashboard to communicate securely with the apiserver, it must not use the insecure port. The kubernetes apiserver has its secure port open by default as well, on port 6443, and it also has basic auth enabled (--basic-auth-file=/srv/kubernetes/basic_auth_file). As a result, the --apiserver-host parameter alone will not get the dashboard through the apiserver's secure port and past basic auth. We need to find another option.

Let's see what other cmdline options the dashboard supports:

# docker run mritd/kubernetes-dashboard-amd64:v1.4.0 /dashboard -help
Usage of /dashboard:
      --alsologtostderr value          log to standard error as well as files
      --apiserver-host string          The address of the Kubernetes Apiserver to connect to in the format of protocol://address:port, e.g., http://localhost:8080. If not specified, the assumption is that the binary runs inside a Kubernetes cluster and local discovery is attempted.
      --heapster-host string           The address of the Heapster Apiserver to connect to in the format of protocol://address:port, e.g., http://localhost:8082. If not specified, the assumption is that the binary runs inside a Kubernetes cluster and service proxy will be used.
      --kubeconfig string              Path to kubeconfig file with authorization and master location information.
      --log-flush-frequency duration   Maximum number of seconds between log flushes (default 5s)
      --log_backtrace_at value         when logging hits line file:N, emit a stack trace (default :0)
      --log_dir value                  If non-empty, write log files in this directory
      --logtostderr value              log to standard error instead of files (default true)
      --port int                       The port to listen to for incoming HTTP requests (default 9090)
      --stderrthreshold value          logs at or above this threshold go to stderr (default 2)
  -v, --v value                        log level for V logs
      --vmodule value                  comma-separated list of pattern=N settings for file-filtered logging

Of the options listed, only --kubeconfig fits our needs.

1. An introduction to the kubeconfig file

After a default kubernetes installation with the kube-up.sh script, the script creates a ~/.kube/config file on every cluster node. This kubeconfig file provides a cluster-wide authentication mechanism for k8s components (such as kubectl) and addons (such as the dashboard).

Here is the kubeconfig file on my minion node:

# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /srv/kubernetes/ca.crt
    server: https://{master node local ip}:6443
  name: ubuntu
contexts:
- context:
    cluster: ubuntu
    namespace: default
    user: admin
  name: ubuntu
current-context: ubuntu
kind: Config
preferences: {}
users:
- name: admin
  user:
    password: {apiserver_password}
    username: {apiserver_username}
    client-certificate: /srv/kubernetes/kubecfg.crt
    client-key: /srv/kubernetes/kubecfg.key

A kubeconfig stores clusters, users, and contexts, plus a few odds and ends, and designates the active context via current-context. With this file, cluster tools like kubectl can easily switch contexts between clusters. A context is a triple {cluster, user, namespace}, and current-context names the currently selected one. With the kubeconfig above, when we run kubectl it reads this file and uses the context named by current-context to look up the user and cluster. Here current-context is ubuntu.

The information in the ubuntu context triple is:

{
    cluster = ubuntu
    namespace = default
    user = admin
}

kubectl then finds the cluster named ubuntu under clusters, discovering that its server is https://{master node local ip}:6443 along with its CA information, and finds the user named admin under users, using that user's credentials:

    password: {apiserver_password}
    username: {apiserver_username}
    client-certificate: /srv/kubernetes/kubecfg.crt
    client-key: /srv/kubernetes/kubecfg.key

A kubeconfig file can be managed with the kubectl config command; see the kubectl config reference for the specific subcommands.
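For reference, a kubeconfig equivalent to the one above could be assembled with kubectl config subcommands along these lines (a sketch using the same placeholders and file paths as the file shown earlier):

# kubectl config set-cluster ubuntu --server=https://{master node local ip}:6443 --certificate-authority=/srv/kubernetes/ca.crt
# kubectl config set-credentials admin --username={apiserver_username} --password={apiserver_password} --client-certificate=/srv/kubernetes/kubecfg.crt --client-key=/srv/kubernetes/kubecfg.key
# kubectl config set-context ubuntu --cluster=ubuntu --namespace=default --user=admin
# kubectl config use-context ubuntu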

Also, /srv/kubernetes/ca.crt, /srv/kubernetes/kubecfg.crt, and /srv/kubernetes/kubecfg.key above were all created on each node by kube-up.sh when installing k8s 1.3.7, and can be passed directly as apiserver-access parameters to kubectl or to any other component or addon that needs to reach the apiserver.

2. Modifying the dashboard startup parameters to use the kubeconfig file

Now we want the dashboard to use the kubeconfig file, so we modify the containers section of dashboard-controller.yaml:

spec:
      containers:
      - name: kubernetes-dashboard
        image: mritd/kubernetes-dashboard-amd64:v1.4.0
        volumeMounts:
          - mountPath: /srv/kubernetes
            name: auth
          - mountPath: /root/.kube
            name: config
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
           - --kubeconfig=/root/.kube/config
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: auth
        hostPath:
          path: /srv/kubernetes
      - name: config
        hostPath:
          path: /root/.kube

Since the various certificates and the kubeconfig are needed, we mount the host paths /root/.kube and /srv/kubernetes into the pod.

After redeploying the dashboard, the dashboard and kube-apiserver communicate with security guarantees (https + basic auth).
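Redeploying is the usual delete-and-recreate cycle (a sketch; deleting the rc by label, since its name carries a version suffix):

# kubectl --namespace=kube-system delete rc -l k8s-app=kubernetes-dashboard
# kubectl --namespace=kube-system delete svc kubernetes-dashboard
# kubectl create -f dashboard-controller.yaml
# kubectl create -f dashboard-service.yaml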

III. Implementing dashboard UI login

Although we have now secured the channel between the dashboard and the apiserver with basic auth, anyone reaching the dashboard via the nodePort can still take full control of it; the dashboard itself still has no access control mechanism. The reality is that the dashboard does not yet support identity and access management, though support is planned for the near future.

So how do we implement a rudimentary login flow with the current version? Besides the nodePort access described earlier, the official troubleshooting guide offers two other ways to reach the dashboard. Let's see whether either can satisfy our most basic requirement ^0^.

1. The kubectl proxy approach

By default, kubectl proxy only allows local network access, but it offers several flag options we can set. Let's give it a try.

On the minion node we run:

# kubectl proxy --address='0.0.0.0' --port=30099
Starting to serve on [::]:30099

This serves the outside world on port 30099 of the minion node. Open a browser and visit http://{minion node public ip}:30099/ui, and you get the following:

<h3>Unauthorized</h3>

So what exactly is unauthorized? Going through kubectl proxy's flag options, one looks suspicious:

--accept-hosts='^localhost$,^127\.0\.0\.1$,^\[::1\]$': Regular expression for hosts that the proxy should accept.

Clearly the host patterns that --accept-hosts accepts by default are what restrict our access. Adjust the invocation and run it again:

# kubectl proxy --address='0.0.0.0' --port=30099 --accept-hosts='^*$'
Starting to serve on [::]:30099

Open the browser again and visit http://{minion node public ip}:30099/ui

The browser redirects to the following address:

http://{minion node public ip}:30099/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=default

The dashboard UI is reachable! However, this approach still requires no user/password, which does not meet our requirement.

2. Accessing the apiserver directly

The last access method in the troubleshooting document is direct access to the apiserver:

Open a browser and visit:

https://{master node public ip}:6443

The browser will warn you about a certificate problem. Ignore it (the apiserver uses a self-signed private certificate, so the browser cannot verify the apiserver's server.crt) and continue; the browser pops up a login dialog asking for a username and password. Enter the username and password from the apiserver's --basic-auth-file, and you are logged in to the apiserver, with the following shown in the page:

{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/apps",
    "/apis/apps/v1alpha1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v2alpha1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/policy",
    "/apis/policy/v1alpha1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1alpha1",
    "/healthz",
    "/healthz/ping",
    "/logs/",
    "/metrics",
    "/swaggerapi/",
    "/ui/",
    "/version"
  ]
}

Next, we visit the following address:

https://{master node public ip}:6443/ui

You will see the page redirect to:

https://101.201.78.51:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/

We are in the dashboard UI! This access method clearly satisfies our minimal requirement of login-gated access to the dashboard UI.

IV. Summary

At this point the dashboard is usable. It still lacks metrics and dashboard-style graphs, which require installing Heapster separately, but the basic feature set is enough for managing a k8s cluster.

Installing the DNS Add-on for a Kubernetes Cluster

In the previous article on installing a Kubernetes cluster, we built a minimal working k8s cluster. Unlike Docker, which has had cluster management built in since version 1.12, k8s is a set of loosely coupled components assembled to provide its services. Beyond the core components, everything else ships as an Add-on, such as the in-cluster kube-dns and the k8s Dashboard. kube-dns is an important k8s add-on; it handles registration and discovery of services inside the cluster. As the k8s installation and management experience matures, the DNS add-on is bound to become part of the default installation. Building on 《一篇文章带你了解Kubernetes安装》, this post digs into the installation "routine" of the DNS component ^_^ as well as troubleshooting the problems that come up.

I. Prerequisites and how the installation works

As noted previously, K8s installation differs by Provider. Here we assume provider=ubuntu, using the installation scripts maintained by the Zhejiang University team, so if your provider is something else, what this article covers may not apply. Still, understanding how the DNS component is installed under provider=ubuntu is generally helpful for other installation methods too.

Under cluster/ubuntu in the k8s installation working directory on the deployment machine, alongside the download-release.sh and util.sh scripts used for the core components, there is another script, deployAddons.sh. It is short and clearly structured; roughly, it executes:

init
deploy_dns
deploy_dashboard

So this script deploys k8s's two common add-ons: dns and dashboard. Digging further, deployAddons.sh also works off the configuration in ./cluster/ubuntu/config-default.sh; the relevant items are:

# Optional: Install cluster DNS.
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
# DNS_SERVER_IP must be a IP in SERVICE_CLUSTER_IP_RANGE
DNS_SERVER_IP=${DNS_SERVER_IP:-"192.168.3.10"}
DNS_DOMAIN=${DNS_DOMAIN:-"cluster.local"}
DNS_REPLICAS=${DNS_REPLICAS:-1}

deployAddons.sh first generates two k8s manifests, skydns-rc.yaml and skydns-svc.yaml, from the configuration above, and then creates the dns service via kubectl create.
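Put differently, once the sed substitution has produced the manifests under cluster/ubuntu, the deployment is equivalent to these manual steps (a sketch; the namespace step is skipped if kube-system already exists):

# kubectl create namespace kube-system
# kubectl create -f cluster/ubuntu/skydns-rc.yaml
# kubectl create -f cluster/ubuntu/skydns-svc.yaml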

II. Installing k8s DNS

1. A trial installation

To make deployAddons.sh install only the DNS component, first set an environment variable:

export KUBE_ENABLE_CLUSTER_UI=false

Run the installation script:

# KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh
Creating kube-system namespace...
The namespace 'kube-system' is successfully created.

Deploying DNS on Kubernetes
replicationcontroller "kube-dns-v17.1" created
service "kube-dns" created
Kube-dns rc and service is successfully deployed.

That seemed smooth. Let's check with kubectl (note: since the DNS service is created in a namespace named kube-system, you must pass that namespace name to kubectl, otherwise the dns service will not show up):

# kubectl --namespace=kube-system get services
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               192.168.3.10    <none>        53/UDP,53/TCP   1m

root@iZ25cn4xxnvZ:~/k8stest/1.3.7/kubernetes/cluster/ubuntu# kubectl --namespace=kube-system get pods
NAME                                    READY     STATUS              RESTARTS   AGE
kube-dns-v17.1-n4tnj                    0/3       ErrImagePull        0          4m

Checking the DNS component's Pod, Ready is 0/3 and STATUS is "ErrImagePull": the DNS service never actually came up.
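When a pod is stuck in ErrImagePull like this, kubectl describe surfaces the failing pulls in the pod's event list (using the pod name reported above):

# kubectl --namespace=kube-system describe pod kube-dns-v17.1-n4tnj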

2. Modifying skydns-rc.yaml

Let's fix the problem above. Under cluster/ubuntu we find two new files, skydns-rc.yaml and skydns-svc.yaml. These are the two k8s manifests that deployAddons.sh generated from the settings in config-default.sh, and the problem lies in skydns-rc.yaml. In that file we see the image names for the three containers in the dns service's pod:

gcr.io/google_containers/kubedns-amd64:1.5
gcr.io/google_containers/kube-dnsmasq-amd64:1.3
gcr.io/google_containers/exechealthz-amd64:1.1

For this installation I had not configured an accelerator (vpn), so pulling images from gcr.io failed. Without an accelerator, substitutes are easy to find on docker hub (since connections to docker hub from China are slow and often fail, it is advisable to manually pull the three substitute images first, as shown below):

gcr.io/google_containers/kubedns-amd64:1.5
=> chasontang/kubedns-amd64:1.5

gcr.io/google_containers/kube-dnsmasq-amd64:1.3
=> chasontang/kube-dnsmasq-amd64:1.3

gcr.io/google_containers/exechealthz-amd64:1.1
=> chasontang/exechealthz-amd64:1.1
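Pre-pulling the substitutes on each node is straightforward:

# docker pull chasontang/kubedns-amd64:1.5
# docker pull chasontang/kube-dnsmasq-amd64:1.3
# docker pull chasontang/exechealthz-amd64:1.1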

We need to manually replace the three image names in skydns-rc.yaml. And to prevent deployAddons.sh from regenerating skydns-rc.yaml, we need to comment out the following two lines in deployAddons.sh:

#sed -e "s/\\\$DNS_REPLICAS/${DNS_REPLICAS}/g;s/\\\$DNS_DOMAIN/${DNS_DOMAIN}/g;" "${KUBE_ROOT}/cluster/saltbase/salt/kube-dns/skydns-rc.yaml.sed" > skydns-rc.yaml
#sed -e "s/\\\$DNS_SERVER_IP/${DNS_SERVER_IP}/g" "${KUBE_ROOT}/cluster/saltbase/salt/kube-dns/skydns-svc.yaml.sed" > skydns-svc.yaml
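If you prefer not to edit skydns-rc.yaml by hand, a sed pass can rewrite the three image names (a sketch, run from cluster/ubuntu):

# sed -i 's#gcr.io/google_containers/kubedns-amd64:1.5#chasontang/kubedns-amd64:1.5#' skydns-rc.yaml
# sed -i 's#gcr.io/google_containers/kube-dnsmasq-amd64:1.3#chasontang/kube-dnsmasq-amd64:1.3#' skydns-rc.yaml
# sed -i 's#gcr.io/google_containers/exechealthz-amd64:1.1#chasontang/exechealthz-amd64:1.1#' skydns-rc.yaml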

Delete the dns service:

# kubectl --namespace=kube-system delete rc/kube-dns-v17.1 svc/kube-dns
replicationcontroller "kube-dns-v17.1" deleted
service "kube-dns" deleted

Run deployAddons.sh again to redeploy the DNS component (details omitted). After installation, let's check whether everything is ok; this time we use docker ps directly to see whether the pod's three containers are all up:

# docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
e8dc52cba2c7        chasontang/exechealthz-amd64:1.1           "/exechealthz '-cmd=n"   7 minutes ago       Up 7 minutes                            k8s_healthz.1a0d495a_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_b42e68fc
f1b83b442b15        chasontang/kube-dnsmasq-amd64:1.3          "/usr/sbin/dnsmasq --"   7 minutes ago       Up 7 minutes                            k8s_dnsmasq.f16970b7_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_da111cd4
d9f09b440c6e        gcr.io/google_containers/pause-amd64:3.0   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD.a6b39ba7_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_b198b4a8

It seems the container based on the kube-dns image did not start successfully. docker ps -a confirms this:

# docker ps -a
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS                       PORTS               NAMES
24387772a2a9        chasontang/kubedns-amd64:1.5               "/kube-dns --domain=c"   3 minutes ago       Exited (255) 2 minutes ago                       k8s_kubedns.cdbc8a07_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_473144a6
3b8bb401ac6f        chasontang/kubedns-amd64:1.5               "/kube-dns --domain=c"   5 minutes ago       Exited (255) 4 minutes ago                       k8s_kubedns.cdbc8a07_kube-dns-v17.1-0zhfp_kube-system_78728001-974c-11e6-ba01-00163e1625a9_cdd57b87

Check the logs of the stopped kube-dns container:

# docker logs 24387772a2a9
I1021 05:18:00.982731       1 server.go:91] Using https://192.168.3.1:443 for kubernetes master
I1021 05:18:00.982898       1 server.go:92] Using kubernetes API <nil>
I1021 05:18:00.983810       1 server.go:132] Starting SkyDNS server. Listening on port:10053
I1021 05:18:00.984030       1 server.go:139] skydns: metrics enabled on :/metrics
I1021 05:18:00.984152       1 dns.go:166] Waiting for service: default/kubernetes
I1021 05:18:00.984672       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I1021 05:18:00.984697       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I1021 05:18:01.292557       1 dns.go:172] Ignoring error while waiting for service default/kubernetes: the server has asked for the client to provide credentials (get services kubernetes). Sleeping 1s before retrying.
E1021 05:18:01.293232       1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: the server has asked for the client to provide credentials (get services)
E1021 05:18:01.293361       1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: the server has asked for the client to provide credentials (get endpoints)
I1021 05:18:01.483325       1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I1021 05:18:01.483390       1 dns.go:539] records:[], retval:[], path:[local cluster svc default kubernetes]
I1021 05:18:01.582598       1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
... ...

I1021 05:19:07.458786       1 dns.go:172] Ignoring error while waiting for service default/kubernetes: the server has asked for the client to provide credentials (get services kubernetes). Sleeping 1s before retrying.
E1021 05:19:07.460465       1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: the server has asked for the client to provide credentials (get endpoints)
E1021 05:19:07.462793       1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: the server has asked for the client to provide credentials (get services)
F1021 05:19:07.867746       1 server.go:127] Received signal: terminated

From the logs, kube-dns apparently failed to connect to the apiserver and exited after a number of retries. The logs also show that, from kube-dns's perspective, the kubernetes api server address was:

I1021 05:18:00.982731       1 server.go:91] Using https://192.168.3.1:443 for kubernetes master

In reality our k8s apiserver listens on insecure port 8080 and secure port 6443 (6443 is the default in the source code, since it was not configured explicitly), so accessing the apiserver via https on port 443 was bound to fail. Problem found; now for the fix.

3. Specifying --kube-master-url

Let's see what command-line arguments the kube-dns command accepts:

# docker run -it chasontang/kubedns-amd64:1.5 kube-dns --help
Usage of /kube-dns:
      --alsologtostderr[=false]: log to standard error as well as files
      --dns-port=53: port on which to serve DNS requests.
      --domain="cluster.local.": domain under which to create names
      --federations=: a comma separated list of the federation names and their corresponding domain names to which this cluster belongs. Example: "myfederation1=example.com,myfederation2=example2.com,myfederation3=example.com"
      --healthz-port=8081: port on which to serve a kube-dns HTTP readiness probe.
      --kube-master-url="": URL to reach kubernetes master. Env variables in this flag will be expanded.
      --kubecfg-file="": Location of kubecfg file for access to kubernetes master service; --kube-master-url overrides the URL part of this; if neither this nor --kube-master-url are provided, defaults to service account tokens
      --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace
      --log-dir="": If non-empty, write log files in this directory
      --log-flush-frequency=5s: Maximum number of seconds between log flushes
      --logtostderr[=true]: log to standard error instead of files
      --stderrthreshold=2: logs at or above this threshold go to stderr
      --v=0: log level for V logs
      --version[=false]: Print version information and quit
      --vmodule=: comma-separated list of pattern=N settings for file-filtered logging

As we can see, the --kube-master-url option does exactly what we need. We modify skydns-rc.yaml once more:

        args:
        # command = "/kube-dns"
        - --domain=cluster.local.
        - --dns-port=10053
        - --kube-master-url=http://10.47.136.60:8080   # newly added line
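With skydns-rc.yaml patched, redeploy via the same delete-and-recreate cycle as before (a sketch):

# kubectl --namespace=kube-system delete rc/kube-dns-v17.1 svc/kube-dns
# KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh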

After the redeployment, check the kube-dns service information:

# kubectl --namespace=kube-system  describe service/kube-dns
Name:            kube-dns
Namespace:        kube-system
Labels:            k8s-app=kube-dns
            kubernetes.io/cluster-service=true
            kubernetes.io/name=KubeDNS
Selector:        k8s-app=kube-dns
Type:            ClusterIP
IP:            192.168.3.10
Port:            dns    53/UDP
Endpoints:        172.16.99.3:53
Port:            dns-tcp    53/TCP
Endpoints:        172.16.99.3:53
Session Affinity:    None
No events

Then check the kube-dns container's logs directly with docker logs:

# docker logs 2f4905510cd2
I1023 11:44:12.997606       1 server.go:91] Using http://10.47.136.60:8080 for kubernetes master
I1023 11:44:13.090820       1 server.go:92] Using kubernetes API v1
I1023 11:44:13.091707       1 server.go:132] Starting SkyDNS server. Listening on port:10053
I1023 11:44:13.091828       1 server.go:139] skydns: metrics enabled on :/metrics
I1023 11:44:13.091952       1 dns.go:166] Waiting for service: default/kubernetes
I1023 11:44:13.094592       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I1023 11:44:13.094606       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I1023 11:44:13.104789       1 server.go:101] Setting up Healthz Handler(/readiness, /cache) on port :8081
I1023 11:44:13.105912       1 dns.go:660] DNS Record:&{192.168.3.182 0 10 10  false 30 0  }, hash:6a8187e0
I1023 11:44:13.106033       1 dns.go:660] DNS Record:&{kubernetes-dashboard.kube-system.svc.cluster.local. 0 10 10  false 30 0  }, hash:529066a8
I1023 11:44:13.106120       1 dns.go:660] DNS Record:&{192.168.3.10 0 10 10  false 30 0  }, hash:bdfe50f8
I1023 11:44:13.106193       1 dns.go:660] DNS Record:&{kube-dns.kube-system.svc.cluster.local. 53 10 10  false 30 0  }, hash:fdbb4e78
I1023 11:44:13.106268       1 dns.go:660] DNS Record:&{kube-dns.kube-system.svc.cluster.local. 53 10 10  false 30 0  }, hash:fdbb4e78
I1023 11:44:13.106306       1 dns.go:660] DNS Record:&{kube-dns.kube-system.svc.cluster.local. 0 10 10  false 30 0  }, hash:d1247c4e
I1023 11:44:13.106329       1 dns.go:660] DNS Record:&{192.168.3.1 0 10 10  false 30 0  }, hash:2b11f462
I1023 11:44:13.106350       1 dns.go:660] DNS Record:&{kubernetes.default.svc.cluster.local. 443 10 10  false 30 0  }, hash:c3f6ae26
I1023 11:44:13.106377       1 dns.go:660] DNS Record:&{kubernetes.default.svc.cluster.local. 0 10 10  false 30 0  }, hash:b9b7d845
I1023 11:44:13.106398       1 dns.go:660] DNS Record:&{192.168.3.179 0 10 10  false 30 0  }, hash:d7e0b1e
I1023 11:44:13.106422       1 dns.go:660] DNS Record:&{my-nginx.default.svc.cluster.local. 0 10 10  false 30 0  }, hash:b0f41a92
I1023 11:44:16.083653       1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I1023 11:44:16.083950       1 dns.go:539] records:[0xc8202c39d0], retval:[{192.168.3.1 0 10 10  false 30 0  /skydns/local/cluster/svc/default/kubernetes/3262313166343632}], path:[local cluster svc default kubernetes]
I1023 11:44:16.084474       1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I1023 11:44:16.084517       1 dns.go:539] records:[0xc8202c39d0], retval:[{192.168.3.1 0 10 10  false 30 0  /skydns/local/cluster/svc/default/kubernetes/3262313166343632}], path:[local cluster svc default kubernetes]
I1023 11:44:16.085024       1 dns.go:583] Received ReverseRecord Request:1.3.168.192.in-addr.arpa.

The logs show the apiserver url is now correct and the kube-dns component no longer reports errors. The installation appears to have succeeded, but it still needs to be tested and verified.

III. Testing and verifying k8s DNS

As designed, the k8s dns component provides dns resolution for services inside the k8s cluster. The services currently deployed in the cluster's default namespace are:

# kubectl get services
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   192.168.3.1     <none>        443/TCP   10d
my-nginx     192.168.3.179   <nodes>       80/TCP    6d

From a myclient container in the k8s cluster, we try to ping and curl the my-nginx service:

ping my-nginx resolves successfully (finding my-nginx's clusterip, 192.168.3.179):

root@my-nginx-2395715568-gpljv:/# ping my-nginx
PING my-nginx.default.svc.cluster.local (192.168.3.179): 56 data bytes

curl against the my-nginx service also succeeds:

# curl -v my-nginx
* Rebuilt URL to: my-nginx/
* Hostname was NOT found in DNS cache
*   Trying 192.168.3.179...
* Connected to my-nginx (192.168.3.179) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: my-nginx
> Accept: */*
>
< HTTP/1.1 200 OK
* Server nginx/1.10.1 is not blacklisted
< Server: nginx/1.10.1
< Date: Sun, 23 Oct 2016 12:14:01 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Tue, 31 May 2016 14:17:02 GMT
< Connection: keep-alive
< ETag: "574d9cde-264"
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host my-nginx left intact

Here is the client container's dns configuration, which should be the default applied during k8s installation (driven by config-default.sh):

# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 192.168.3.10
options timeout:1 attempts:1 rotate
options ndots:5
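Given this resolv.conf, the short name my-nginx is expanded through the search domains and resolved by the kube-dns service at 192.168.3.10. You can confirm the resolution directly (a sketch; assumes nslookup is available in the client container):

# nslookup my-nginx 192.168.3.10

This should return my-nginx.default.svc.cluster.local resolving to the cluster IP 192.168.3.179.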

With that, the k8s dns component is installed and working.
