
Rolling Updates of a Service in a Kubernetes Cluster

In the mobile Internet era, consumer behavior has become "around-the-clock", so businesses' systems must provide service 24×7 without interruption to meet demand. It is hard to imagine a service today being upgraded on the premise of "taking the business offline". If WeChat officially announced "routine system maintenance every Saturday from 23:00 to 02:00 the next morning, service unavailable", what would you, as a user, think and do? Platforms therefore need to take service upgrades into account from the very beginning of their design, and Services deployed in a Kubernetes cluster are no exception.

I. Prerequisites

1. Rolling update (rolling-update)

The traditional way to upgrade is to take the whole service offline, stop the business, update the version and configuration, then restart and serve again. That model no longer meets today's needs. Now that highly concurrent, highly available systems are the norm, a service upgrade must at the very least keep the business running. Rolling update (rolling-update) is precisely an upgrade scheme that satisfies this requirement.

Simply put, a rolling update is a way to upgrade a multi-instance service without interrupting it. For a multi-instance service, a rolling update generally updates the instances one by one, rather than updating all of them at the same moment. What makes the "rolling update" an advance is the notion of "rolling" itself, which in my view carries at least two implications:

a) "Rolling" conveys the impression of a circle: continuous, uninterrupted. The "rolling" idea is a trend; "rolling release" and "continuous delivery" are both applications of it. Compared with traditional periodic big-version releases, rolling releases let users get new features faster and shorten the market feedback cycle, while keeping the impact of each release or update on the user experience to a minimum.

b) "Rolling" works both forward and backward. If a problem with an update is spotted while it is in progress, we can "roll backward" and revert it, which greatly reduces the risk of each upgrade.

For a Service deployed in a Kubernetes cluster, a rolling update means updating one Pod at a time, proceeding Pod by Pod, instead of shutting down all of the Service's Pods at the same moment, thereby avoiding an awkward business interruption.

2. The relationship between Service, Deployment, Replica Set, Replication Controller and Pod

The application we deploy is generally composed of several abstract Services. In Kubernetes, a Service selects a set of Pods via its label selector; these Pods serve as the Service's endpoints and are the entities that actually carry the business. The deployment, scheduling and replica count of Pods in the cluster are managed by higher-level abstractions such as Deployment and ReplicationController. The diagram below illustrates this:

[Figure: relationship between Service, Deployment/ReplicationController, Replica Set and Pods]

Newer versions of Kubernetes recommend using Deployment in place of ReplicationController; under the Deployment abstraction, the object that actually maintains the Pod replica count is the Replica Set hidden behind it.

So a rolling update of a Kubernetes Service is, in essence, a rolling update of the set of Pods the Service selects, while it is the Deployment or the replication controller that controls Pod deployment, scheduling and replica count. Those two are therefore the real objects a Kubernetes Service rolling update has to deal with.
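A quick way to see this mapping on a live cluster is to compare the Pods matched by a Service's label selector with the endpoints registered for the Service. A minimal sketch, assuming a hypothetical Service named my-svc whose selector is app=my-app:

# kubectl get pods -l app=my-app -o wide
# kubectl get endpoints my-svc

The first command lists the Pods the selector matches; the second shows the Pod IP:port pairs the Service load-balances across, which should line up with those Pods.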

II. The kubectl rolling-update subcommand

Kubernetes exposes rolling-update support in the kubectl CLI only for Replication Controllers. With kubectl --help we can see the following usage description:

# kubectl --help
... ...
Deploy Commands:
  rollout        Manage a deployment rollout
  rolling-update Perform a rolling update of the given ReplicationController
  scale          Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController
... ...

# kubectl help rolling-update
... ...
Usage:
  kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f
NEW_CONTROLLER_SPEC) [options]
... ...

Let's walk through an example to see how kubectl rolling-update performs a rolling update on a Service's Pods. Our Kubernetes cluster has two versions of the nginx image:

# docker images|grep nginx
nginx                                                    1.11.9                     cc1b61406712        2 weeks ago         181.8 MB
nginx                                                    1.10.1                     bf2b4c2d7bf5        4 months ago        180.7 MB

In this example we will roll the Service's Pods from nginx 1.10.1 up to 1.11.9.

Our rc-demo-v0.1.yaml file looks like this:

apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-demo-nginx-v0.1
spec:
  replicas: 4
  selector:
    app: rc-demo-nginx
    ver: v0.1
  template:
    metadata:
      labels:
        app: rc-demo-nginx
        ver: v0.1
    spec:
      containers:
        - name: rc-demo-nginx
          image: nginx:1.10.1
          ports:
            - containerPort: 80
              protocol: TCP
          env:
            - name: RC_DEMO_VER
              value: v0.1

Create this replication controller:

# kubectl create -f rc-demo-v0.1.yaml
replicationcontroller "rc-demo-nginx-v0.1" created

# kubectl get pods -o wide
NAME                       READY     STATUS    RESTARTS   AGE       IP             NODE
rc-demo-nginx-v0.1-2p7v0   1/1       Running   0          1m        172.30.192.9   iz2ze39jeyizepdxhwqci6z
rc-demo-nginx-v0.1-9pk3t   1/1       Running   0          1m        172.30.192.8   iz2ze39jeyizepdxhwqci6z
rc-demo-nginx-v0.1-hm6b9   1/1       Running   0          1m        172.30.0.9     iz25beglnhtz
rc-demo-nginx-v0.1-vbxpl   1/1       Running   0          1m        172.30.0.10    iz25beglnhtz

The Service manifest file rc-demo-svc.yaml looks like this:

apiVersion: v1
kind: Service
metadata:
  name: rc-demo-svc
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: rc-demo-nginx

Create this Service:

# kubectl create -f rc-demo-svc.yaml
service "rc-demo-svc" created

# kubectl describe svc/rc-demo-svc
Name:            rc-demo-svc
Namespace:        default
Labels:            <none>
Selector:        app=rc-demo-nginx
Type:            ClusterIP
IP:            10.96.172.246
Port:            <unset>    80/TCP
Endpoints:        172.30.0.10:80,172.30.0.9:80,172.30.192.8:80 + 1 more...
Session Affinity:    None
No events.

We can see that the four Pods created earlier by the replication controller are now all behind the rc-demo-svc Service. Let's access the service:

# curl -I http://10.96.172.246:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
Date: Wed, 08 Feb 2017 08:45:19 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 31 May 2016 14:17:02 GMT
Connection: keep-alive
ETag: "574d9cde-264"
Accept-Ranges: bytes

# kubectl exec rc-demo-nginx-v0.1-2p7v0  env
... ...
RC_DEMO_VER=v0.1
... ...

From the Server field in the response header we can see that the nginx version in the Service's Pods is currently 1.10.1; printing the Pod's environment variables shows RC_DEMO_VER=v0.1.

Next we rolling-update the rc-demo-nginx-v0.1 RC. Our new RC manifest file rc-demo-v0.2.yaml looks like this:

apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-demo-nginx-v0.2
spec:
  replicas: 4
  selector:
    app: rc-demo-nginx
    ver: v0.2
  template:
    metadata:
      labels:
        app: rc-demo-nginx
        ver: v0.2
    spec:
      containers:
        - name: rc-demo-nginx
          image: nginx:1.11.9
          ports:
            - containerPort: 80
              protocol: TCP
          env:
            - name: RC_DEMO_VER
              value: v0.2

rc-demo-v0.2.yaml differs from rc-demo-v0.1.yaml in a few places: the RC's name, the image version, and the value of the RC_DEMO_VER environment variable:

# diff rc-demo-v0.2.yaml rc-demo-v0.1.yaml
4c4
<   name: rc-demo-nginx-v0.2
---
>   name: rc-demo-nginx-v0.1
9c9
<     ver: v0.2
---
>     ver: v0.1
14c14
<         ver: v0.2
---
>         ver: v0.1
18c18
<           image: nginx:1.11.9
---
>           image: nginx:1.10.1
24c24
<               value: v0.2
---
>               value: v0.1

Now we start the rolling-update. To make the process easier to follow, --update-period is set to 10s here, i.e. one Pod is updated every 10 seconds:

#  kubectl rolling-update rc-demo-nginx-v0.1 --update-period=10s -f rc-demo-v0.2.yaml
Created rc-demo-nginx-v0.2
Scaling up rc-demo-nginx-v0.2 from 0 to 4, scaling down rc-demo-nginx-v0.1 from 4 to 0 (keep 4 pods available, don't exceed 5 pods)
Scaling rc-demo-nginx-v0.2 up to 1
Scaling rc-demo-nginx-v0.1 down to 3
Scaling rc-demo-nginx-v0.2 up to 2
Scaling rc-demo-nginx-v0.1 down to 2
Scaling rc-demo-nginx-v0.2 up to 3
Scaling rc-demo-nginx-v0.1 down to 1
Scaling rc-demo-nginx-v0.2 up to 4
Scaling rc-demo-nginx-v0.1 down to 0
Update succeeded. Deleting rc-demo-nginx-v0.1
replicationcontroller "rc-demo-nginx-v0.1" rolling updated to "rc-demo-nginx-v0.2"

The log shows that kubectl rolling-update gradually scales rc-demo-nginx-v0.2 up while scaling rc-demo-nginx-v0.1 down, until the latter reaches 0.

While the upgrade is in progress we keep hitting rc-demo-svc; old and new Pod versions coexist, and the service is never interrupted:

# curl -I http://10.96.172.246:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
... ...

# curl -I http://10.96.172.246:80
HTTP/1.1 200 OK
Server: nginx/1.11.9
... ...

# curl -I http://10.96.172.246:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
... ...

Some status after the update:

# kubectl get rc
NAME                 DESIRED   CURRENT   READY     AGE
rc-demo-nginx-v0.2   4         4         4         5m

# kubectl get pods
NAME                       READY     STATUS    RESTARTS   AGE
rc-demo-nginx-v0.2-25b15   1/1       Running   0          5m
rc-demo-nginx-v0.2-3jlpk   1/1       Running   0          5m
rc-demo-nginx-v0.2-lcnf9   1/1       Running   0          6m
rc-demo-nginx-v0.2-s7pkc   1/1       Running   0          5m

# kubectl exec rc-demo-nginx-v0.2-25b15  env
... ...
RC_DEMO_VER=v0.2
... ...

The official documentation notes that kubectl rolling-update is a client-side rolling update: the rolling-update logic is carried out by kubectl issuing a series of requests to the API server. We can see this in the kubectl source:

//https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/rollingupdate.go
... ...
func RunRollingUpdate(f cmdutil.Factory, out io.Writer, cmd *cobra.Command, args []string, options *resource.FilenameOptions) error {
    ... ...
    err = updater.Update(config)
    if err != nil {
        return err
    }
    ... ...
}

//https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/rolling_updater.go
func (r *RollingUpdater) Update(config *RollingUpdaterConfig) error {
    ... ...
    // Scale newRc and oldRc until newRc has the desired number of replicas and
    // oldRc has 0 replicas.
    progressDeadline := time.Now().UnixNano() + config.Timeout.Nanoseconds()
    for newRc.Spec.Replicas != desired || oldRc.Spec.Replicas != 0 {
        // Store the existing replica counts for progress timeout tracking.
        newReplicas := newRc.Spec.Replicas
        oldReplicas := oldRc.Spec.Replicas

        // Scale up as much as possible.
        scaledRc, err := r.scaleUp(newRc, oldRc, desired, maxSurge, maxUnavailable, scaleRetryParams, config)
        if err != nil {
            return err
        }
        newRc = scaledRc
    ... ...
}

In rolling_updater.go, the Update method uses a for loop to gradually decrease the old RC's replicas and increase the new RC's replicas, until the new RC reaches the desired count and the old RC's replicas drop to 0.

A rolling update performed via kubectl rolling-update has quite a few shortcomings:
- it is driven by kubectl on the client, so a network problem can easily interrupt the update;
- it requires creating a new RC whose name must differ from the RC being updated; not a big deal, but awkward in practice;
- rolling back also means running rolling-update again, just with the old version's RC manifest file (as sketched below);
- the rolling update performed on the Service leaves no record in the cluster, so there is no way to trace the rolling-update history afterwards.
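A minimal sketch of that rollback path, assuming the v0.1 manifest is still at hand: simply run rolling-update again, this time from the running rc-demo-nginx-v0.2 back to the old spec.

# kubectl rolling-update rc-demo-nginx-v0.2 --update-period=10s -f rc-demo-v0.1.yaml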

However, since Replication Controller is gradually being replaced by the Deployment abstraction, let's look next at how to do a rolling update with a Deployment and what advantages that brings.

III. Rolling update with a Deployment

A Kubernetes Deployment is a higher-level abstraction. As in the diagram at the beginning of this article, a Deployment creates a Replica Set that maintains the replica count of the Deployment's Pods. Since kubectl rolling-update only supports replication controllers, to roll-update the Pods of a Deployment you modify the Deployment's own manifest and apply it. The change creates a new Replica Set, scales up the number of Pods in that Replica Set and, at the same time, scales the old Replica Set down to zero. All of this happens on the server side; kubectl is not involved.
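The pace of this server-side rollout is governed by the Deployment's update strategy. A minimal sketch of the relevant manifest fields (the values mirror the defaults we will see later in the kubectl describe output, 1 max unavailable and 1 max surge; the field names come from the extensions/v1beta1 Deployment API):

spec:
  minReadySeconds: 10        # wait 10s after a Pod is Ready before counting it as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most one Pod below the desired replica count during the update
      maxSurge: 1            # at most one Pod above the desired replica count during the update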

Let's look at an example. We first create the version-one deployment manifest file deployment-demo-v0.1.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-demo
spec:
  replicas: 4
  selector:
    matchLabels:
      app: deployment-demo-nginx
  minReadySeconds: 10
  template:
    metadata:
      labels:
        app: deployment-demo-nginx
        version: v0.1
    spec:
      containers:
        - name: deployment-demo
          image: nginx:1.10.1
          ports:
            - containerPort: 80
              protocol: TCP
          env:
            - name: DEPLOYMENT_DEMO_VER
              value: v0.1

Create the deployment:

# kubectl create -f deployment-demo-v0.1.yaml --record
deployment "deployment-demo" created

# kubectl get deployments
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment-demo   4         4         4            0           10s

# kubectl get rs
NAME                         DESIRED   CURRENT   READY     AGE
deployment-demo-1818355944   4         4         4         13s

# kubectl get pods -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP             NODE
deployment-demo-1818355944-78spp   1/1       Running   0          24s       172.30.0.10    iz25beglnhtz
deployment-demo-1818355944-7wvxk   1/1       Running   0          24s       172.30.0.9     iz25beglnhtz
deployment-demo-1818355944-hb8tt   1/1       Running   0          24s       172.30.192.9   iz2ze39jeyizepdxhwqci6z
deployment-demo-1818355944-jtxs2   1/1       Running   0          24s       172.30.192.8   iz2ze39jeyizepdxhwqci6z

# kubectl exec deployment-demo-1818355944-78spp env
... ...
DEPLOYMENT_DEMO_VER=v0.1
... ...

deployment-demo created the ReplicaSet deployment-demo-1818355944, which maintains the Pod replica count.

We then create a Service that uses the Pods of this deployment:

# kubectl create -f deployment-demo-svc.yaml
service "deployment-demo-svc" created

# kubectl get service
NAME                  CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
deployment-demo-svc   10.109.173.225   <none>        80/TCP    5s
kubernetes            10.96.0.1        <none>        443/TCP   42d

# kubectl describe service/deployment-demo-svc
Name:            deployment-demo-svc
Namespace:        default
Labels:            <none>
Selector:        app=deployment-demo-nginx
Type:            ClusterIP
IP:            10.109.173.225
Port:            <unset>    80/TCP
Endpoints:        172.30.0.10:80,172.30.0.9:80,172.30.192.8:80 + 1 more...
Session Affinity:    None
No events.

# curl -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
... ...

Good: the Service has four Pods behind it, and the service it provides is working normally.

Next we update the Service. For clarity we created a deployment-demo-v0.2.yaml file, although a separate file is not strictly necessary; editing the original deployment-demo-v0.1.yaml directly works just as well:

# diff deployment-demo-v0.2.yaml deployment-demo-v0.1.yaml
15c15
<         version: v0.2
---
>         version: v0.1
19c19
<           image: nginx:1.11.9
---
>           image: nginx:1.10.1
25c25
<               value: v0.2
---
>               value: v0.1

We use deployment-demo-v0.2.yaml to update the Pods of the deployment created earlier:

# kubectl apply -f deployment-demo-v0.2.yaml --record
deployment "deployment-demo" configured

The apply command returns the moment it receives the API server's response, but the Deployment's rolling update is still in progress:

# kubectl describe deployment deployment-demo
Name:            deployment-demo
... ...
Replicas:        2 updated | 4 total | 3 available | 2 unavailable
StrategyType:        RollingUpdate
MinReadySeconds:    10
RollingUpdateStrategy:    1 max unavailable, 1 max surge
Conditions:
  Type        Status    Reason
  ----        ------    ------
  Available     True    MinimumReplicasAvailable
OldReplicaSets:    deployment-demo-1818355944 (3/3 replicas created)
NewReplicaSet:    deployment-demo-2775967987 (2/2 replicas created)
Events:
  FirstSeen    LastSeen    Count    From                SubObjectPath    Type        Reason            Message
  ---------    --------    -----    ----                -------------    --------    ------            -------
  12m        12m        1    {deployment-controller }            Normal        ScalingReplicaSet    Scaled up replica set deployment-demo-1818355944 to 4
  11s        11s        1    {deployment-controller }            Normal        ScalingReplicaSet    Scaled up replica set deployment-demo-2775967987 to 1
  11s        11s        1    {deployment-controller }            Normal        ScalingReplicaSet    Scaled down replica set deployment-demo-1818355944 to 3
  11s        11s        1    {deployment-controller }            Normal        ScalingReplicaSet    Scaled up replica set deployment-demo-2775967987 to 2

# kubectl get pods
NAME                               READY     STATUS              RESTARTS   AGE
deployment-demo-1818355944-78spp   1/1       Terminating         0          12m
deployment-demo-1818355944-hb8tt   1/1       Terminating         0          12m
deployment-demo-1818355944-jtxs2   1/1       Running             0          12m
deployment-demo-2775967987-5s9qx   0/1       ContainerCreating   0          0s
deployment-demo-2775967987-lf5gw   1/1       Running             0          12s
deployment-demo-2775967987-lxbx8   1/1       Running             0          12s
deployment-demo-2775967987-pr0hl   0/1       ContainerCreating   0          0s

# kubectl get rs
NAME                         DESIRED   CURRENT   READY     AGE
deployment-demo-1818355944   1         1         1         12m
deployment-demo-2775967987   4         4         4         17s

We can see the ReplicaSets changing during the update; meanwhile the service is never interrupted, old and new versions simply serve side by side for a short while:

# curl -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.11.9
... ...

# curl -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
... ...

# curl -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
... ...

Eventually all Pods are replaced with the v0.2 version:

# kubectl exec deployment-demo-2775967987-5s9qx env
... ...
DEPLOYMENT_DEMO_VER=v0.2
... ...

# curl -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.11.9
... ...
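Instead of repeatedly polling kubectl describe and kubectl get, a hedged alternative for following the server-side rollout is kubectl rollout status, which blocks until the rollout finishes:

# kubectl rollout status deployment/deployment-demo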

Notice that both the create and apply commands for the deployment carried a --record flag, which tells the apiserver to record the update history. kubectl rollout history then shows the deployment's update history:

#  kubectl rollout history deployment deployment-demo
deployments "deployment-demo"
REVISION    CHANGE-CAUSE
1        kubectl create -f deployment-demo-v0.1.yaml --record
2        kubectl apply -f deployment-demo-v0.2.yaml --record

Without --record, the history you get looks more like this:

#  kubectl rollout history deployment deployment-demo
deployments "deployment-demo"
REVISION    CHANGE-CAUSE
1        <none>

We can also see that the old ReplicaSet has not been deleted:

# kubectl get rs
NAME                         DESIRED   CURRENT   READY     AGE
deployment-demo-1818355944   0         0         0         25m
deployment-demo-2775967987   4         4         4         13m

All of this is stored on the server side, which makes rolling back easy!

Rolling back the Pods of a Deployment is remarkably simple: rollout undo does it. rollout undo rolls the Deployment back to the previous recorded revision (see the REVISION column in the rollout history output above):

# kubectl rollout undo deployment deployment-demo
deployment "deployment-demo" rolled back

The ReplicaSet states flip back again:

# kubectl get rs
NAME                         DESIRED   CURRENT   READY     AGE
deployment-demo-1818355944   4         4         4         28m
deployment-demo-2775967987   0         0         0         15m

Check the update history again:

# kubectl rollout history deployment deployment-demo
deployments "deployment-demo"
REVISION    CHANGE-CAUSE
2        kubectl apply -f deployment-demo-v0.2.yaml --record
3        kubectl create -f deployment-demo-v0.1.yaml --record

We can see that the history here keeps at most two revision records (the number of retained revisions should be configurable; see the sketch below).
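A hedged note on both points: the Deployment spec has a revisionHistoryLimit field that controls how many old ReplicaSets (and therefore revisions) are retained, and kubectl rollout undo accepts a --to-revision flag, so a rollback can target a specific revision from the history output rather than just the previous one:

# kubectl rollout undo deployment deployment-demo --to-revision=2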

IV. Deployment rolling update via the API

Our ultimate goal is to perform the Service rolling update through the API. Kubernetes provides a RESTful API for deployments, including create, read, replace, delete, patch, rollback and so on. Going by their names, patch and rollback look most likely to meet our needs, so let's verify that.

We set the deployment back to v0.1, i.e. image: nginx:1.10.1 and DEPLOYMENT_DEMO_VER=v0.1. Then we try to upgrade the deployment to v0.2 through the patch API. Since the patch API only accepts a JSON body, we convert deployment-demo-v0.2.yaml to JSON: deployment-demo-v0.2.json. Patch is a partial update, but to keep things simple here we send the entire deployment manifest to the API server and let the server do the merge.

Run the following curl command:

# curl -H 'Content-Type:application/strategic-merge-patch+json' -X PATCH --data @deployment-demo-v0.2.json http://localhost:8080/apis/extensions/v1beta1/namespaces/default/deployments/deployment-demo

The command prints the merged Deployment JSON. It is too long to include here; see patch-api-output.txt for the content.
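Since a strategic merge patch only needs to carry the fields being changed, a more minimal request is also possible. A sketch (the container name must match the one in the manifest, here deployment-demo, so the merge targets the right container and changes only its image):

# curl -H 'Content-Type:application/strategic-merge-patch+json' -X PATCH --data '{"spec":{"template":{"spec":{"containers":[{"name":"deployment-demo","image":"nginx:1.11.9"}]}}}}' http://localhost:8080/apis/extensions/v1beta1/namespaces/default/deployments/deployment-demo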

Tracking the deployment's state while the command takes effect, we can see that it worked: the scale of the old and new ReplicaSets rises and falls in step, and the two Pod versions take turns serving requests:

# kubectl get rs
NAME                         DESIRED   CURRENT   READY     AGE
deployment-demo-1818355944   3         3         3         12h
deployment-demo-2775967987   2         2         2         12h

# curl  -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
... ...

# curl  -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.11.9
... ...

# curl  -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
... ...

After updating this way, however, the history shown by rollout history becomes somewhat "imprecise":

#kubectl rollout history deployment deployment-demo
deployments "deployment-demo"
REVISION    CHANGE-CAUSE
8       kubectl create -f deployment-demo-v0.1.yaml --record
9        kubectl create -f deployment-demo-v0.1.yaml --record

There is no good way around that for now, but the rolling update itself works fine.

The patch API supports three Content-Types: json-patch+json, strategic-merge-patch+json and merge-patch+json. The latter two behave identically in my tests, but json-patch+json kept returning an error:

# curl -H 'Content-Type:application/json-patch+json' -X PATCH --data @deployment-demo-v0.2.json http://localhost:8080/apis/extensions/v1beta1/namespaces/default/deployments/deployment-demo
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "json: cannot unmarshal object into Go value of type jsonpatch.Patch",
  "code": 500
}

The kubectl patch subcommand appears to use strategic-merge-patch+json. The source code does not say much about the differences between the three either:

//pkg/kubectl/cmd/patch.go
func getPatchedJSON(patchType api.PatchType, originalJS, patchJS []byte, obj runtime.Object) ([]byte, error) {
    switch patchType {
    case api.JSONPatchType:
        patchObj, err := jsonpatch.DecodePatch(patchJS)
        if err != nil {
            return nil, err
        }
        return patchObj.Apply(originalJS)

    case api.MergePatchType:
        return jsonpatch.MergePatch(originalJS, patchJS)

    case api.StrategicMergePatchType:
        return strategicpatch.StrategicMergePatchData(originalJS, patchJS, obj)

    default:
        // only here as a safety net - go-restful filters content-type
        return nil, fmt.Errorf("unknown Content-Type header for patch: %v", patchType)
    }
}

// DecodePatch decodes the passed JSON document as an RFC 6902 patch.

// MergePatch merges the patchData into the docData.

// StrategicMergePatch applies a strategic merge patch. The patch and the original document
// must be json encoded content. A patch can be created from an original and a modified document
// by calling CreateStrategicMergePatch.
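Judging from the DecodePatch comment above, application/json-patch+json expects an RFC 6902 patch, i.e. a JSON array of operations rather than a whole Deployment object, which would explain the unmarshal error. A hedged sketch of what such a request body might look like:

# curl -H 'Content-Type:application/json-patch+json' -X PATCH --data '[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"nginx:1.11.9"}]' http://localhost:8080/apis/extensions/v1beta1/namespaces/default/deployments/deployment-demo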

Next, we roll the deployment back using the deployment rollback API. We create a deployment-demo-rollback.json file as the request body:

//deployment-demo-rollback.json
{
        "name" : "deployment-demo",
        "rollbackTo" : {
                "revision" : 0
        }
}

revision: 0 means "roll back to the previous revision". Run the following command to perform the rollback:

# curl -H 'Content-Type:application/json' -X POST --data @deployment-demo-rollback.json http://localhost:8080/apis/extensions/v1beta1/namespaces/default/deployments/deployment-demo/rollback
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "rollback request for deployment \"deployment-demo\" succeeded",
  "code": 200
}

# kubectl describe deployment/deployment-demo
... ...
Events:
  FirstSeen    LastSeen    Count    From                SubObjectPath    Type        Reason            Message
  ---------    --------    -----    ----                -------------    --------    ------            -------
... ...
 27s        27s        1    {deployment-controller }            Normal        DeploymentRollback    Rolled back deployment "deployment-demo" to revision 1
... ...

The deployment's status shows that the rollback succeeded. The API's response looks a bit buggy though: the operation clearly succeeded (code: 200), yet status says "Failure".

If you run into other problems during patch or rollback, run kubectl describe deployment/deployment-demo and check whether the Events output contains any warnings.

V. Summary

From the experiments above, a rolling update of a Service's Pods can indeed be done through the API Kubernetes provides, but this fits stateless Services best. For stateful Services (implemented with PetSet, or with Stateful Set since 1.5), whether this approach still meets the requirements is uncertain; I have not tested it yet for lack of an environment.

The manifests used above can be downloaded here.

Exploring a Kubernetes Cluster Installed with kubeadm

I currently have two Kubernetes clusters on hand: one is k8s 1.3.7 installed with kube-up.sh, the other is k8s 1.5.1 installed with kubeadm. The 1.3.7 cluster was installed first and is the one carrying our PaaS platform, so I am fairly familiar with its installation layout, configuration, log locations and day-to-day operations. The kubeadm-installed 1.5.1 cluster differs considerably from 1.3.7 in component deployment, configuration, logging and more; when you first get your hands on it, you find that nothing you knew from 1.3.7 is where it used to be. I suspect many people who, like me, upgraded their cluster to 1.5.1 with kubeadm run into the same issues, so this post explores a kubeadm-installed Kubernetes cluster and lists the bigger changes for reference.

I. Environment

The environment is still the Kubernetes 1.5.1 cluster set up in the article "Installing Kubernetes with kubeadm": Alibaba Cloud ECS underneath, Ubuntu 16.04.1 as the operating system, and the weave network as the Pod network.

The test cluster has only two nodes: one master node and one minion node. The master node's taint has been removed, however, so it takes part in scheduling and carries workloads just like the minion node.

II. Core components run as Pods

Compared with a cluster installed by kube-up.sh, the biggest difference in a kubeadm-installed cluster is probably that the Kubernetes core components are Pod-ized: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kube-discovery, etcd and the rest all run in Pods inside the cluster, which is rather reminiscent of CoreOS. The one exception is the Kubelet, which is responsible for interacting with the local container engine on each node.

The core component Pods all live in the kube-system namespace and can be listed with kubectl (on the master node):

# kubectl get pods -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
etcd-iz25beglnhtz                       1/1       Running   2          26d
kube-apiserver-iz25beglnhtz             1/1       Running   3          26d
kube-controller-manager-iz25beglnhtz    1/1       Running   2          26d
kube-scheduler-iz25beglnhtz             1/1       Running   4          26d
... ...

A careful reader will also notice that these core component Pod names end with the hostname of the node they run on; for example, the "iz25beglnhtz" in kube-apiserver-iz25beglnhtz is the hostname of its node.

However, even though these core components run as containers in the cluster, the network they use is still the host network of the node they run on, not the Pod network:

# docker ps|grep apiserver
98ea64bbf6c8        gcr.io/google_containers/kube-apiserver-amd64:v1.5.1            "kube-apiserver --ins"   10 days ago         Up 10 days                              k8s_kube-apiserver.6c2e367b_kube-apiserver-iz25beglnhtz_kube-system_033de1afc0844729cff5e100eb700a81_557d1fb2
4f87d22b8334        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 10 days ago         Up 10 days                              k8s_POD.d8dbe16c_kube-apiserver-iz25beglnhtz_kube-system_033de1afc0844729cff5e100eb700a81_5931e490

# docker inspect 98ea64bbf6c8
... ...
"HostConfig": {
"NetworkMode": "container:4f87d22b833425082be55851d72268023d41b50649e46c738430d9dfd3abea11",
}
... ...

# docker inspect 4f87d22b833425082be55851d72268023d41b50649e46c738430d9dfd3abea11
... ...
"HostConfig": {
"NetworkMode": "host",
}
... ...

The docker inspect output above shows that the pause container in the kube-apiserver Pod uses the host network mode, and the kube-apiserver container, which is built on the pause container's network, naturally inherits that network namespace. So from the outside, accessing a component like kube-apiserver works just as before: on the master node via localhost:8080, and from outside the node via master_node_ip:6443.
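A quick, hedged sanity check of the insecure path from the master node (the secure 6443 port requires the credentials discussed in section IV below):

# curl http://localhost:8080/healthz
ok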

III. Adjusting core component startup configuration

Back when kube-apiserver and the other core components ran as local programs on the host, changing kube-apiserver's startup parameters, say adjusting the --service-node-port-range or adding a --basic-auth-file, simply meant editing /etc/default/kube-apiserver (on Ubuntu 14.04, for example) and restarting the kube-apiserver service (service kube-apiserver restart). The same went for kube-controller-manager, kube-proxy and kube-scheduler.

In the kubeadm era these configuration files no longer exist. In their place are manifest files similar to user Pod specs, all located under /etc/kubernetes/manifests:

/etc/kubernetes/manifests# ls
etcd.json  kube-apiserver.json  kube-controller-manager.json  kube-scheduler.json

As an example, let's add a startup parameter to kube-apiserver: "--service-node-port-range=80-32767".

Open /etc/kubernetes/manifests/kube-apiserver.json and add "--service-node-port-range=80-32767" to the value of the "command" field:

"containers": [
      {
        "name": "kube-apiserver",
        "image": "gcr.io/google_containers/kube-apiserver-amd64:v1.5.1",
        "command": [
          "kube-apiserver",
          "--insecure-bind-address=127.0.0.1",
          "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",
          "--service-cluster-ip-range=10.96.0.0/12",
          "--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem",
          "--client-ca-file=/etc/kubernetes/pki/ca.pem",
          "--tls-cert-file=/etc/kubernetes/pki/apiserver.pem",
          "--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
          "--token-auth-file=/etc/kubernetes/pki/tokens.csv",
          "--secure-port=6443",
          "--allow-privileged",
          "--advertise-address=10.47.217.91",
          "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
          "--anonymous-auth=false",
          "--etcd-servers=http://127.0.0.1:2379",
          "--service-node-port-range=80-32767"
        ],

Note: do not forget to add a comma at the end of the --etcd-servers line, otherwise kube-apiserver will fail to start.

After the edit you will find that kube-apiserver restarts automatically. That is the kubelet's doing: at startup the kubelet watches the files under /etc/kubernetes/manifests and handles changes accordingly:

# ps -ef|grep kubelet
root      1633     1  5  2016 ?        1-09:24:47 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local
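Once the kubelet has recreated the static Pod, a hedged way to confirm that the new flag is live (the Pod name follows the component-hostname pattern mentioned earlier):

# kubectl -n kube-system get pod kube-apiserver-iz25beglnhtz -o json | grep service-node-port-range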

The kubelet itself is a systemd service; its startup configuration can be changed through this file:

# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_EXTRA_ARGS

IV. kubectl configuration

A k8s cluster installed by kube-up.sh creates a config file under ~/.kube/ on every node, which kubectl uses to access the apiserver and operate the cluster. With kubeadm, however, ~/.kube/ now contains:

~/.kube# ls
cache/  schema/

This raises question 1: where did the config file go?

The reason kubectl still works on the master node is that, by default, kubectl talks to localhost:8080 to reach kube-apiserver. As long as kube-apiserver has not closed its --insecure-port, kubectl can interact with it normally, because the insecure port performs no authentication at all.

Which leads to question 2: how should kubectl on another node, or kubectl on the master node going through the secure port, be configured to talk to kube-apiserver?

Let's answer both questions together. When creating the cluster, kubeadm places two files under /etc/kubernetes on the master node:

/etc/kubernetes# ls -l
total 32
-rw------- 1 root root 9188 Dec 28 17:32 admin.conf
-rw------- 1 root root 9188 Dec 28 17:32 kubelet.conf
... ...

The two files have exactly the same content; only the file name tells you which component uses it. kubelet.conf, for example, appears in the kubelet's startup parameters: --kubeconfig=/etc/kubernetes/kubelet.conf

# ps -ef|grep kubelet
root      1633     1  5  2016 ?        1-09:26:41 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local

Open the file and you will find it is just a kubeconfig file. The content is rather long, so let's examine its structure with kubectl config view:

# kubectl --kubeconfig /etc/kubernetes/kubelet.conf config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://{master node local ip}:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: admin@kubernetes
- context:
    cluster: kubernetes
    user: kubelet
  name: kubelet@kubernetes
current-context: admin@kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubelet
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

This is no different from the kubeconfig file described in the article "Installing the Kubernetes Dashboard add-on". The only difference is the values shown as "REDACTED": in kubelet.conf, each REDACTED corresponds to a block of data converted (base64-encoded) from the corresponding certificate or key, which is used when accessing the apiserver.

Let's test this on the minion node:

# kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?

# kubectl --kubeconfig /etc/kubernetes/kubelet.conf get pods
NAME                         READY     STATUS    RESTARTS   AGE
my-nginx-1948696469-359d6    1/1       Running   2          26d
my-nginx-1948696469-3g0n7    1/1       Running   3          26d
my-nginx-1948696469-xkzsh    1/1       Running   2          26d
my-ubuntu-2560993602-5q7q5   1/1       Running   2          26d
my-ubuntu-2560993602-lrrh0   1/1       Running   2          26d
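To avoid passing --kubeconfig on every invocation, a common approach (a sketch; kubeadm does not set this up for you in this version) is to copy the kubeconfig, admin.conf on the master or the kubelet.conf used above on the minion, to kubectl's default location, or to point the KUBECONFIG environment variable at it:

# mkdir -p ~/.kube && cp /etc/kubernetes/kubelet.conf ~/.kube/config

or

# export KUBECONFIG=/etc/kubernetes/kubelet.conf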

When kubeadm creates the k8s cluster, it generates on the master node a set of certificates, keys and token files used for access between components; the "REDACTED" content in the kubeconfig above is derived from these files:

/etc/kubernetes/pki# ls
apiserver-key.pem  apiserver.pem  apiserver-pub.pem  ca-key.pem  ca.pem  ca-pub.pem  sa-key.pem  sa-pub.pem  tokens.csv
  • apiserver-key.pem: kube-apiserver's private key
  • apiserver.pem: kube-apiserver's certificate
  • apiserver-pub.pem: kube-apiserver's public key
  • ca-key.pem: the CA's private key
  • ca.pem: the CA's certificate
  • ca-pub.pem: the CA's public key
  • sa-key.pem: the service account private key
  • sa-pub.pem: the service account public key
  • tokens.csv: the token file kube-apiserver uses for authentication

These files show up in the startup parameters of the core components, for example:

 kube-apiserver --insecure-bind-address=127.0.0.1 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota --service-cluster-ip-range=10.96.0.0/12 --service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem --client-ca-file=/etc/kubernetes/pki/ca.pem --tls-cert-file=/etc/kubernetes/pki/apiserver.pem --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --token-auth-file=/etc/kubernetes/pki/tokens.csv --secure-port=6443 --allow-privileged --advertise-address={master node local ip} --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --anonymous-auth=false --etcd-servers=http://127.0.0.1:2379 --service-node-port-range=80-32767

We can also use curl on the minion node to manually test access to kube-apiserver on the master node over the secure channel. As mentioned in "Securing a Kubernetes Cluster", k8s authentication (client certificates, basic auth, static tokens, etc.) only requires passing one of the enabled mechanisms. kube-apiserver currently has client certificate authentication (--client-ca-file) and static token authentication (--token-auth-file) enabled; passing either one is enough, so we use the static token. A line in the static token file has the format:

token,user,uid,"group1,group2,group3"

which maps to tokens.csv on the master node:

# cat /etc/kubernetes/pki/tokens.csv
{token},{user},812ffe41-cce0-11e6-9bd3-00163e1001d7,system:kubelet-bootstrap

We use this token to talk to the apiserver with curl:

# curl --cacert /etc/kubernetes/pki/ca.pem -H "Authorization: Bearer {token}"  https://{master node local ip}:6443
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/apps",
    "/apis/apps/v1beta1",
    "/apis/authentication.k8s.io",
    "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io",
    "/apis/authorization.k8s.io/v1beta1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v2alpha1",
    "/apis/certificates.k8s.io",
    "/apis/certificates.k8s.io/v1alpha1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/policy",
    "/apis/policy/v1beta1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1alpha1",
    "/apis/storage.k8s.io",
    "/apis/storage.k8s.io/v1beta1",
    "/healthz",
    "/healthz/poststarthook/bootstrap-controller",
    "/healthz/poststarthook/extensions/third-party-resources",
    "/healthz/poststarthook/rbac/bootstrap-roles",
    "/logs",
    "/metrics",
    "/swaggerapi/",
    "/ui/",
    "/version"
  ]
}

The interaction succeeded!
