
Go Unit Testing with Kafka Dependencies: Worked Examples

Permanent link to this article – https://tonybai.com/2024/01/08/go-unit-testing-deps-on-kafka

Kafka is a distributed event streaming platform open-sourced under the Apache Foundation and a killer application from the Java camp (originally written in Scala). Its highly reliable, high-throughput, low-latency data transport has kept it an important piece of middleware in modern enterprise and cloud-native systems to this day.

In day-to-day Go development we frequently run into code that depends on Kafka. How to test that code, and in particular how to unit test it, is a very practical problem facing Go developers.

Some will say: use mocks. That is one way. But readers of my earlier article "Prefer Fake Objects in Unit Tests" are probably already out looking for a Kafka fake object. Kafka is great, but it is a heavyweight and not exactly nimble, so finding a suitable fake object is not easy. In this article we will talk about how to test code that depends on Kafka or, getting closer to the essence, go looking together for suitable Kafka fake objects.

1. Strategies for finding a fake object

In "Prefer Fake Objects in Unit Tests" I mentioned that if a test dependency ships a tiny or otherwise simplified edition, we can use that edition directly as a fake object candidate, just as etcd provides a simplified, embeddable implementation of itself (embed) for testing.

Kafka offers no such tiny edition, so we fall back on the other strategy from that article: use a container as the fake object. Today this is the easiest way to obtain a fake object for just about any dependency. Perhaps once WASI (WebAssembly System Interface) matures and wasm can run fast on the local system outside the browser, switching to wasm will be an option, but there is no hurry.

Below, following the container strategy, let's look for a suitable Kafka container.

2. testcontainers-go

Our first stop is testcontainers-go. testcontainers-go is an open-source Go project dedicated to simplifying the creation and cleanup of container-based dependencies; it is commonly used in unit tests, automated integration tests, and smoke tests of Go projects. Through its easy-to-use API, developers can programmatically define containers that run as part of a test and have those resources cleaned up when the test finishes.

Note: testcontainers is not limited to a Go API; it covers the mainstream programming languages, including Java, .NET, Python, Node.js, Rust, and more.

Until a few months ago, testcontainers-go had no direct support for Kafka, and we had to use testcontainers.GenericContainer to define and start a Kafka container ourselves. It was only in September 2023 that a Kafka container running in KRaft mode was first introduced into the testcontainers-go project.

The Kafka image testcontainers-go currently uses is confluentinc/confluent-local:7.5.0. Confluent is the company behind Kafka, providing commercial support on top of it. At the beginning of 2023, Confluent also acquired Immerok, bringing Apache's other star project, Flink, under its wing.

confluent-local is not a widely used Kafka image. It is simply a zero-configuration Apache Kafka running in KRaft mode bundled with the Confluent Community RestProxy. The image is experimental, intended only for local development workflows, and should not be used for production workloads.

The open-source Kafka image most commonly used in production is confluentinc/cp-kafka. It is built on the open-source Kafka project but adds extra features and tooling on top, offering richer functionality and an easier deployment and management experience. Note that the cp-kafka image version is not the Kafka version; the mapping between the two has to be looked up on the cp-kafka image's official page.

Another Kafka image commonly used in development comes from Bitnami, a company that provides pre-packaged images and application stacks for a wide range of open-source software. The Bitnami Kafka image is built on the open-source Kafka project and can be used to deploy and run Kafka quickly; its tags track the version of the Kafka inside the image.

Let's now see how to use the testcontainers-go kafka module as the fake object for Go unit tests that depend on Kafka.

This first test example is adapted from example_test.go in the testcontainers-go/kafka module:

// testcontainers/kafka_setup/kafka_test.go

package main

import (
    "context"
    "fmt"
    "testing"

    "github.com/testcontainers/testcontainers-go/modules/kafka"
)

func TestKafkaSetup(t *testing.T) {
    ctx := context.Background()

    kafkaContainer, err := kafka.RunContainer(ctx, kafka.WithClusterID("test-cluster"))
    if err != nil {
        panic(err)
    }

    // Clean up the container
    defer func() {
        if err := kafkaContainer.Terminate(ctx); err != nil {
            panic(err)
        }
    }()

    state, err := kafkaContainer.State(ctx)
    if err != nil {
        panic(err)
    }

    if kafkaContainer.ClusterID != "test-cluster" {
        t.Errorf("want test-cluster, actual %s", kafkaContainer.ClusterID)
    }
    if state.Running != true {
        t.Errorf("want true, actual %t", state.Running)
    }
    brokers, _ := kafkaContainer.Brokers(ctx)
    fmt.Printf("%q\n", brokers)
}

In this example we call kafka.RunContainer directly to create a Kafka instance with the cluster ID test-cluster. If no custom image is passed to RunContainer via WithImage, a confluentinc/confluent-local:7.5.0 container is started by default (note: the default image version will change over time).

From the kafka.KafkaContainer returned by RunContainer we can obtain all kinds of information about the Kafka container, such as the ClusterID and the Kafka broker addresses used in the code above. With this information we can later connect to the containerized Kafka and write and read data.

Let's first look at the result of running this test, which matches expectations:

$ go test
2023/12/16 21:45:52 github.com/testcontainers/testcontainers-go - Connected to docker:
  ... ...
  Resolved Docker Host: unix:///var/run/docker.sock
  Resolved Docker Socket Path: /var/run/docker.sock
  Test SessionID: 19e47867b733f4da4f430d78961771ae3a1cc66c5deca083b4f6359c6d4b2468
  Test ProcessID: 41b9ef62-2617-4189-b23a-1bfa4c06dfec
2023/12/16 21:45:52 Creating container for image docker.io/testcontainers/ryuk:0.5.1
2023/12/16 21:45:53 Container created: 8f2240042c27
2023/12/16 21:45:53 Starting container: 8f2240042c27
2023/12/16 21:45:53 Container started: 8f2240042c27
2023/12/16 21:45:53 Waiting for container id 8f2240042c27 image: docker.io/testcontainers/ryuk:0.5.1. Waiting for: &{Port:8080/tcp timeout:<nil> PollInterval:100ms}
2023/12/16 21:45:53 Creating container for image confluentinc/confluent-local:7.5.0
2023/12/16 21:45:53 Container created: a39a495aed0b
2023/12/16 21:45:53 Starting container: a39a495aed0b
2023/12/16 21:45:53 Container started: a39a495aed0b
["localhost:1037"]
2023/12/16 21:45:58 Terminating container: a39a495aed0b
2023/12/16 21:45:58 Container terminated: a39a495aed0b
PASS
ok      demo    6.236s

Next, building on the test case above, let's add a Kafka connection plus a data read/write test:

// testcontainers/kafka_consumer_and_producer/kafka_test.go

package main

import (
    "bytes"
    "context"
    "errors"
    "net"
    "strconv"
    "testing"
    "time"

    "github.com/testcontainers/testcontainers-go/modules/kafka"

    kc "github.com/segmentio/kafka-go" // kafka client
)

func createTopics(brokers []string, topics ...string) error {
    // to create topics when auto.create.topics.enable='false'
    conn, err := kc.Dial("tcp", brokers[0])
    if err != nil {
        return err
    }
    defer conn.Close()

    controller, err := conn.Controller()
    if err != nil {
        return err
    }
    var controllerConn *kc.Conn
    controllerConn, err = kc.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
    if err != nil {
        return err
    }
    defer controllerConn.Close()

    var topicConfigs []kc.TopicConfig
    for _, topic := range topics {
        topicConfig := kc.TopicConfig{
            Topic:             topic,
            NumPartitions:     1,
            ReplicationFactor: 1,
        }
        topicConfigs = append(topicConfigs, topicConfig)
    }

    err = controllerConn.CreateTopics(topicConfigs...)
    if err != nil {
        return err
    }

    return nil
}

func newWriter(brokers []string, topic string) *kc.Writer {
    return &kc.Writer{
        Addr:                   kc.TCP(brokers...),
        Topic:                  topic,
        Balancer:               &kc.LeastBytes{},
        AllowAutoTopicCreation: true,
        RequiredAcks:           0,
    }
}

func newReader(brokers []string, topic string) *kc.Reader {
    return kc.NewReader(kc.ReaderConfig{
        Brokers:  brokers,
        Topic:    topic,
        GroupID:  "test-group",
        MaxBytes: 10e6, // 10MB
    })
}

func TestProducerAndConsumer(t *testing.T) {
    ctx := context.Background()

    kafkaContainer, err := kafka.RunContainer(ctx, kafka.WithClusterID("test-cluster"))
    if err != nil {
        t.Fatalf("want nil, actual %v\n", err)
    }

    // Clean up the container
    defer func() {
        if err := kafkaContainer.Terminate(ctx); err != nil {
            t.Fatalf("want nil, actual %v\n", err)
        }
    }()

    state, err := kafkaContainer.State(ctx)
    if err != nil {
        t.Fatalf("want nil, actual %v\n", err)
    }

    if state.Running != true {
        t.Errorf("want true, actual %t", state.Running)
    }

    brokers, err := kafkaContainer.Brokers(ctx)
    if err != nil {
        t.Fatalf("want nil, actual %v\n", err)
    }

    topic := "test-topic"
    w := newWriter(brokers, topic)
    defer w.Close()
    r := newReader(brokers, topic)
    defer r.Close()

    err = createTopics(brokers, topic)
    if err != nil {
        t.Fatalf("want nil, actual %v\n", err)
    }
    time.Sleep(5 * time.Second)

    messages := []kc.Message{
        {
            Key:   []byte("Key-A"),
            Value: []byte("Value-A"),
        },
        {
            Key:   []byte("Key-B"),
            Value: []byte("Value-B"),
        },
        {
            Key:   []byte("Key-C"),
            Value: []byte("Value-C"),
        },
        {
            Key:   []byte("Key-D"),
            Value: []byte("Value-D!"),
        },
    }

    const retries = 3
    for i := 0; i < retries; i++ {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // attempt to create topic prior to publishing the message
        err = w.WriteMessages(ctx, messages...)
        if errors.Is(err, kc.LeaderNotAvailable) || errors.Is(err, context.DeadlineExceeded) {
            time.Sleep(time.Millisecond * 250)
            continue
        }

        if err != nil {
            t.Fatalf("want nil, actual %v\n", err)
        }
        break
    }

    var getMessages []kc.Message
    for i := 0; i < len(messages); i++ {
        m, err := r.ReadMessage(context.Background())
        if err != nil {
            t.Fatalf("want nil, actual %v\n", err)
        }
        getMessages = append(getMessages, m)
    }

    for i := 0; i < len(messages); i++ {
        if !bytes.Equal(getMessages[i].Key, messages[i].Key) {
            t.Errorf("want %s, actual %s\n", string(messages[i].Key), string(getMessages[i].Key))
        }
        if !bytes.Equal(getMessages[i].Value, messages[i].Value) {
            t.Errorf("want %s, actual %s\n", string(messages[i].Value), string(getMessages[i].Value))
        }
    }
}

We use the segmentio/kafka-go client to read from and write to Kafka. For how to use segmentio/kafka-go, see my earlier article "A Brief Comparison of Mainstream Kafka Clients in the Go Community".

In the TestProducerAndConsumer case we first start a Kafka instance via testcontainers-go's kafka.RunContainer and then create a topic, "test-topic". Strictly speaking we could skip creating "test-topic" before writing: Kafka enables automatic topic creation by default, and segmentio/kafka-go's high-level Writer API also supports the AllowAutoTopicCreation setting. Topic creation takes a little time, though, so if the topic is to be created on the first write, that write may fail and need to be retried.

After writing a message to the topic (actually a batch of Messages containing four key-value pairs), we call ReadMessage to read from the topic and compare what we read with what we wrote.

Note: I recently found an issue in kafka-go that can cause memory to balloon; it may be triggered when Kafka ack latency grows.

Here is the output of running this test case:

$ go test
2023/12/17 17:43:54 github.com/testcontainers/testcontainers-go - Connected to docker:
  Server Version: 24.0.7
  API Version: 1.43
  Operating System: CentOS Linux 7 (Core)
  Total Memory: 30984 MB
  Resolved Docker Host: unix:///var/run/docker.sock
  Resolved Docker Socket Path: /var/run/docker.sock
  Test SessionID: f76fe611c753aa4ef1456285503b0935a29795e7c0fab2ea2588029929215a08
  Test ProcessID: 27f531ee-9b5f-4e4f-b5f0-468143871004
2023/12/17 17:43:54 Creating container for image docker.io/testcontainers/ryuk:0.5.1
2023/12/17 17:43:54 Container created: 577309098f4c
2023/12/17 17:43:54 Starting container: 577309098f4c
2023/12/17 17:43:54 Container started: 577309098f4c
2023/12/17 17:43:54 Waiting for container id 577309098f4c image: docker.io/testcontainers/ryuk:0.5.1. Waiting for: &{Port:8080/tcp timeout:<nil> PollInterval:100ms}
2023/12/17 17:43:54 Creating container for image confluentinc/confluent-local:7.5.0
2023/12/17 17:43:55 Container created: 1ee11e11742b
2023/12/17 17:43:55 Starting container: 1ee11e11742b
2023/12/17 17:43:55 Container started: 1ee11e11742b
2023/12/17 17:44:15 Terminating container: 1ee11e11742b
2023/12/17 17:44:15 Container terminated: 1ee11e11742b
PASS
ok      demo    21.505s

We can see that, out of the box, testcontainers-go meets the basic needs of interacting with Kafka, and it provides a series of Options (WithXXX) for customizing the container to satisfy more advanced requirements, though that does require a fuller understanding of the testcontainers-go API.
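As a small illustration, here is a minimal sketch of passing extra customizers to RunContainer. It assumes the testcontainers.WithImage customizer is available in the testcontainers-go version you use and that the pinned image tag can be pulled; treat it as a sketch rather than the module's definitive API.

// testcontainers/kafka_custom/kafka_custom_test.go (hypothetical path)

package main

import (
    "context"
    "testing"

    "github.com/testcontainers/testcontainers-go"
    "github.com/testcontainers/testcontainers-go/modules/kafka"
)

func TestKafkaWithCustomImage(t *testing.T) {
    ctx := context.Background()

    // Pin the image explicitly instead of relying on the module's default.
    kafkaContainer, err := kafka.RunContainer(ctx,
        testcontainers.WithImage("confluentinc/confluent-local:7.5.0"),
        kafka.WithClusterID("test-cluster"),
    )
    if err != nil {
        t.Fatal(err)
    }

    // Clean up the container when the test ends.
    defer func() {
        if err := kafkaContainer.Terminate(ctx); err != nil {
            t.Log(err)
        }
    }()
}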

Besides the ready-to-use testcontainers-go, there is another convenient container-based technique: docker-compose, which lets us customize, start, and stop the Kafka image we need. Next, let's see how to build a fake Kafka object with docker-compose.

3. Building a fake Kafka with docker-compose

3.1 A basic docker-compose template for a fake Kafka instance

This time we use the Kafka image provided by Bitnami. First let's build a Kafka instance "equivalent" to the one the testcontainers-go kafka module gave us above. Here is the docker-compose.yml:

// docker-compose/bitnami/plaintext/docker-compose.yml

version: "2"

services:
  kafka:
    image: docker.io/bitnami/kafka:3.6
    network_mode: "host"
    volumes:
      - "kafka_data:/bitnami"
    environment:
      # KRaft settings
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@localhost:9093
      # Listeners
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      # borrow from testcontainer
      - KAFKA_CFG_BROKER_ID=0
      - KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=1
      - KAFKA_CFG_OFFSETS_TOPIC_NUM_PARTITIONS=1
      - KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR=1
      - KAFKA_CFG_GROUP_INITIAL_REBALANCE_DELAY_MS=0
      - KAFKA_CFG_LOG_FLUSH_INTERVAL_MESSAGES=9223372036854775807
volumes:
  kafka_data:
    driver: local

Some of the settings here are "borrowed" from the testcontainers-go kafka module. Let's start the container:

$ docker-compose up -d
[+] Running 2/2
 ✔ Volume "plaintext_kafka_data"  Created                                                                                    0.0s
 ✔ Container plaintext-kafka-1    Started                                                                                    0.1s

The Go test code that depends on this container is almost the same as the earlier TestProducerAndConsumer, except that the container-creation step at the beginning is removed:

// docker-compose/bitnami/plaintext/kafka_test.go

func TestProducerAndConsumer(t *testing.T) {
    brokers := []string{"localhost:9092"}
    topic := "test-topic"
    w := newWriter(brokers, topic)
    defer w.Close()
    r := newReader(brokers, topic)
    defer r.Close()

    err := createTopics(brokers, topic)
    if err != nil {
        t.Fatalf("want nil, actual %v\n", err)
    }
    time.Sleep(5 * time.Second)
    ... ...
}

Running the test case gives the expected result:

go test
write message ok  Value-A
write message ok  Value-B
write message ok  Value-C
write message ok  Value-D!
PASS
ok      demo    15.143s

For unit tests, though, we obviously cannot start and stop the Kafka container by hand. We need setup and teardown for each test case, which also keeps the cases isolated from one another. So we add a docker_compose_helper.go file containing a few helper functions that let a test case start and stop Kafka:

// docker-compose/bitnami/plaintext/docker_compose_helper.go

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

// helper functions for operating docker containers through the docker-compose command

const (
    defaultCmd     = "docker-compose"
    defaultCfgFile = "docker-compose.yml"
)

func execCliCommand(cmd string, opts ...string) ([]byte, error) {
    cmds := cmd + " " + strings.Join(opts, " ")
    fmt.Println("exec command:", cmds)
    return exec.Command(cmd, opts...).CombinedOutput()
}

func execDockerComposeCommand(cmd string, cfgFile string, opts ...string) ([]byte, error) {
    var allOpts = []string{"-f", cfgFile}
    allOpts = append(allOpts, opts...)
    return execCliCommand(cmd, allOpts...)
}

func UpKakfa(composeCfgFile string) ([]byte, error) {
    b, err := execDockerComposeCommand(defaultCmd, composeCfgFile, "up", "-d")
    if err != nil {
        return nil, err
    }
    time.Sleep(10 * time.Second)
    return b, nil
}

func UpDefaultKakfa() ([]byte, error) {
    return UpKakfa(defaultCfgFile)
}

func DownKakfa(composeCfgFile string) ([]byte, error) {
    b, err := execDockerComposeCommand(defaultCmd, composeCfgFile, "down", "-v")
    if err != nil {
        return nil, err
    }
    time.Sleep(10 * time.Second)
    return b, nil
}

func DownDefaultKakfa() ([]byte, error) {
    return DownKakfa(defaultCfgFile)
}

Sharp-eyed readers will have noticed that UpKakfa and DownKakfa use a hard-coded "time.Sleep" to wait 10 seconds. This usually works once the image has already been pulled locally, but it is not the most precise way to wait. testcontainers-go/wait provides multiple strategies for waiting until the program inside a container has finished starting; if you want more precise waiting, take a look at the wait package.
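If you prefer not to pull in the wait package while driving docker-compose by hand, a simple alternative is to poll the broker's TCP port until it accepts connections. The sketch below is an assumption-laden illustration (the file name, broker address, and timeout are made up for the example), not part of the article's repository; note that an open port only proves the listener is up, not that the broker is fully ready, so a short grace period may still be needed.

// docker-compose/bitnami/plaintext/wait_sketch.go (hypothetical file)

package main

import (
    "fmt"
    "net"
    "time"
)

// waitForBroker polls addr until a TCP connection succeeds or timeout elapses.
func waitForBroker(addr string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        conn, err := net.DialTimeout("tcp", addr, time.Second)
        if err == nil {
            conn.Close()
            return nil // the listener is up
        }
        time.Sleep(250 * time.Millisecond)
    }
    return fmt.Errorf("kafka broker %s not reachable within %s", addr, timeout)
}

UpKakfa could then call waitForBroker("localhost:9092", 30*time.Second) in place of the fixed time.Sleep.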

With these helper functions in place, let's rework the TestProducerAndConsumer case:

// docker-compose/bitnami/plaintext/kafka_test.go
func TestProducerAndConsumer(t *testing.T) {
    _, err := UpDefaultKakfa()
    if err != nil {
        t.Fatalf("want nil, actual %v\n", err)
    }

    t.Cleanup(func() {
        DownDefaultKakfa()
    })
    ... ...
}

At the start of the test case we use UpDefaultKakfa to bring up the Kafka instance with docker-compose, then register a Cleanup function to tear the instance down after the test case finishes.

Here is the output of the new version of the test case:

$ go test
exec command: docker-compose -f docker-compose.yml up -d
write message ok  Value-A
write message ok  Value-B
write message ok  Value-C
write message ok  Value-D!
exec command: docker-compose -f docker-compose.yml down -v
PASS
ok      demo    36.402s

The biggest benefit of docker-compose is that the docker-compose.yml file gives you flexible control over the object you want to fake; the difference from testcontainers-go is that you do not have to study the testcontainers-go API.

Next is an example that connects to Kafka over TLS and performs reads and writes.

3.2 Building a fake Kafka instance that uses TLS connections

Kafka's configuration is notoriously complex. Getting a TLS-based setup working took me quite a few "experiments", especially the listeners and the certificate configuration; without putting real effort into the documentation you simply will not get it right.

Here is a Kafka instance serving over a TLS channel, configured on top of the bitnami/kafka image:

// docker-compose/bitnami/tls/docker-compose.yml

# config doc:  https://github.com/bitnami/containers/blob/main/bitnami/kafka/README.md

version: "2"

services:
  kafka:
    image: docker.io/bitnami/kafka:3.6
    network_mode: "host"
    #ports:
      #- "9092:9092"
    environment:
      # KRaft settings
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@localhost:9094
      # Listeners
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,SECURED://:9093,CONTROLLER://:9094
      - KAFKA_CFG_ADVERTISED_LISTENERS=SECURED://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,SECURED:SSL,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=SECURED
      # SSL settings
      - KAFKA_TLS_TYPE=PEM
      - KAFKA_TLS_CLIENT_AUTH=none
      - KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=
      # borrow from testcontainer
      - KAFKA_CFG_BROKER_ID=0
      - KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=1
      - KAFKA_CFG_OFFSETS_TOPIC_NUM_PARTITIONS=1
      - KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR=1
      - KAFKA_CFG_GROUP_INITIAL_REBALANCE_DELAY_MS=0
      - KAFKA_CFG_LOG_FLUSH_INTERVAL_MESSAGES=9223372036854775807
    volumes:
      # server.cert, server.key and ca.crt
      - "kafka_data:/bitnami"
      - "./kafka.keystore.pem:/opt/bitnami/kafka/config/certs/kafka.keystore.pem:ro"
      - "./kafka.keystore.key:/opt/bitnami/kafka/config/certs/kafka.keystore.key:ro"
      - "./kafka.truststore.pem:/opt/bitnami/kafka/config/certs/kafka.truststore.pem:ro"
volumes:
  kafka_data:
    driver: local

We use PEM-format certificates and keys here. In the configuration above, the kafka.keystore.pem, kafka.keystore.key, and kafka.truststore.pem mounted under volumes correspond to the names long familiar in Go: server-cert.pem (the server certificate), server-key.pem (the server private key), and ca-cert.pem (the CA certificate).

I put together a one-shot script, docker-compose/bitnami/tls/kafka-generate-cert.sh. Running it generates all the required certificates and places them in the right locations (just press Enter at every prompt):

$bash kafka-generate-cert.sh
.........++++++
.............................++++++
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Signature ok
subject=/C=XX/L=Default City/O=Default Company Ltd
Getting Private key
.....................++++++
.........++++++
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Signature ok
subject=/C=XX/L=Default City/O=Default Company Ltd
Getting CA Private Key

Next, let's adapt the test case so that it connects to Kafka over TLS:

//docker-compose/bitnami/tls/kafka_test.go

func createTopics(brokers []string, tlsConfig *tls.Config, topics ...string) error {
    dialer := &kc.Dialer{
        Timeout:   10 * time.Second,
        DualStack: true,
        TLS:       tlsConfig,
    }

    conn, err := dialer.DialContext(context.Background(), "tcp", brokers[0])
    if err != nil {
        fmt.Println("creating topic: dialer dial error:", err)
        return err
    }
    defer conn.Close()
    fmt.Println("creating topic: dialer dial ok")
    ... ...
}

func newWriter(brokers []string, tlsConfig *tls.Config, topic string) *kc.Writer {
    w := &kc.Writer{
        Addr:                   kc.TCP(brokers...),
        Topic:                  topic,
        Balancer:               &kc.LeastBytes{},
        AllowAutoTopicCreation: true,
        Async:                  true,
        //RequiredAcks:           0,
        Completion: func(messages []kc.Message, err error) {
            for _, message := range messages {
                if err != nil {
                    fmt.Println("write message fail", err)
                } else {
                    fmt.Println("write message ok", string(message.Topic), string(message.Value))
                }
            }
        },
    }

    if tlsConfig != nil {
        w.Transport = &kc.Transport{
            TLS: tlsConfig,
        }
    }
    return w
}

func newReader(brokers []string, tlsConfig *tls.Config, topic string) *kc.Reader {
    dialer := &kc.Dialer{
        Timeout:   10 * time.Second,
        DualStack: true,
        TLS:       tlsConfig,
    }

    return kc.NewReader(kc.ReaderConfig{
        Dialer:   dialer,
        Brokers:  brokers,
        Topic:    topic,
        GroupID:  "test-group",
        MaxBytes: 10e6, // 10MB
    })
}

func TestProducerAndConsumer(t *testing.T) {
    var err error
    _, err = UpDefaultKakfa()
    if err != nil {
        t.Fatalf("want nil, actual %v\n", err)
    }

    t.Cleanup(func() {
        DownDefaultKakfa()
    })

    brokers := []string{"localhost:9093"}
    topic := "test-topic"

    tlsConfig, _ := newTLSConfig()
    w := newWriter(brokers, tlsConfig, topic)
    defer w.Close()
    r := newReader(brokers, tlsConfig, topic)
    defer r.Close()
    err = createTopics(brokers, tlsConfig, topic)
    if err != nil {
        fmt.Printf("create topic error: %v, but it may not affect the later action, just ignore it\n", err)
    }
    time.Sleep(5 * time.Second)
    ... ...
}

func newTLSConfig() (*tls.Config, error) {
    /*
       // Load the CA certificate
       caCert, err := ioutil.ReadFile("/path/to/ca.crt")
       if err != nil {
               return nil, err
       }

       // Load the client certificate and private key
       cert, err := tls.LoadX509KeyPair("/path/to/client.crt", "/path/to/client.key")
       if err != nil {
               return nil, err
       }

       // Create a CertPool and add the CA certificate
       caCertPool := x509.NewCertPool()
       caCertPool.AppendCertsFromPEM(caCert)
    */
    // Create and return the TLS config
    return &tls.Config{
        //RootCAs:      caCertPool,
        //Certificates: []tls.Certificate{cert},
        InsecureSkipVerify: true,
    }, nil
}

In the code above, following segmentio/kafka-go, we add a tls.Config parameter to createTopics, newWriter, and newReader. In the test case we use newTLSConfig to create a tls.Config instance. Everything is simplified here: we handshake with the Kafka broker with InsecureSkipVerify=true, neither verifying the server certificate nor doing mutual TLS.

Here is the result of running the modified test case:

$ go test
exec command: docker-compose -f docker-compose.yml up -d
creating topic: dialer dial ok
creating topic: get controller ok
creating topic: dial control listener ok
create topic error: EOF, but it may not affect the later action, just ignore it
write message error: [3] Unknown Topic Or Partition: the request is for a topic or partition that does not exist on this broker
write message ok  Value-A
write message ok  Value-B
write message ok  Value-C
write message ok  Value-D!
exec command: docker-compose -f docker-compose.yml down -v
PASS
ok      demo    38.473s

We can see that createTopics connects fine to each Kafka listener, but the topic-creation call returns EOF. This genuinely does not affect the subsequent actions; I am not sure whether it is a problem in segmentio/kafka-go or in the Kafka instance. Also, the first message write fails because the topic or partition has not been created yet; after a retry the messages are written normally.

This example shows that building a fake object on docker-compose offers broader flexibility. With a precise wait mechanism for container startup and shutdown, I would probably choose this approach more often.

4. Summary

This article looked at how to write unit tests for Go code that depends on Kafka and explored strategies for finding a suitable Kafka fake object.

For a complex system like Kafka, finding a suitable fake object is not easy, so this article recommends using containers as fake objects and introduced two tools for simplifying the creation and cleanup of container-based dependencies: the testcontainers-go project and docker-compose. Compared with the kafka module that only recently joined testcontainers-go, building a custom fake object with docker-compose is more flexible. Either way, developers need a fairly complete and deep understanding of Kafka configuration.

The article focused on the process of building a fake Kafka with testcontainers-go and docker-compose; the test cases did not set up an explicit SUT (system under test), such as a white-box unit test of a specific function.

The source code for this article can be downloaded here.



Developing Kubernetes Operators in Go: Basic Structure

Permanent link to this article – https://tonybai.com/2022/08/15/developing-kubernetes-operators-in-go-part1

Note: the lead image of this article is adapted from "Kubernetes Operators Explained".

A few years ago I was still calling Kubernetes the de facto standard for service orchestration and container scheduling; today K8s is the undisputed ruler of that domain. Yet although Kubernetes has grown very complex, its original data model, application patterns, and extension mechanisms remain effective, and patterns and extensions like the Operator are increasingly popular with developers and operators.

Our platform runs stateful backend services internally, and deploying and operating stateful services is exactly what K8s Operators are good at, so it is time to look into Operators.

I. The advantages of Operators

The concept of a Kubernetes Operator originally came from CoreOS, a container technology company acquired by Red Hat.

When CoreOS introduced the Operator concept, it also shipped the first reference implementations: the etcd operator and the prometheus operator.

Note: etcd was open-sourced by CoreOS in 2013; Prometheus, the first time-series storage and monitoring system aimed at cloud-native services, was open-sourced by SoundCloud in 2012.

Here is how CoreOS explained the concept: an Operator represents human operational knowledge in software, so that an application can be managed reliably.

Figure: CoreOS's explanation of an operator (screenshot from the CoreOS blog archive)

Operators were created to take work off operations engineers' shoulders, and today they are increasingly popular with cloud-native operations and development teams.

So what exactly are the benefits of an Operator? The diagram below compares running with and without an Operator:

Even without knowing much about Operators, this picture should give you a rough feel for their advantages.

With an Operator, scaling a stateful application (scaling is just the example here; it could equally be a version upgrade or another operation that is "complex" for stateful applications) takes the operations engineer a single, simple command, and they do not need to know how K8s implements scaling of stateful applications internally.

Without an Operator, the operations engineer needs a deep understanding of the scaling procedure for that stateful application, must run the commands of a command sequence one by one in order, check each response, and retry on failure until the scale-out succeeds.

An Operator is like an experienced operations engineer built into K8s: it constantly monitors the state of its target objects, keeps the complexity to itself, and exposes a simple interface to the human operator; it also reduces the chance of operational mistakes caused by human error.

Operators are great, but the barrier to developing them is not low. It shows up in at least the following areas:

  • Understanding the Operator concept rests on understanding K8s itself, and K8s has grown ever more complex since it was open-sourced in 2014, so getting up to speed takes real time;
  • Hand-rolling an operator from scratch is very verbose and almost nobody does it; most developers learn a development framework or toolkit such as kubebuilder or the operator framework sdk;
  • Operators also differ in capability: the operator framework defines a five-level operator capability model (CAPABILITY MODEL), shown below. Writing a high-capability-level operator in Go requires a deep understanding of the APIs in client-go, the official Kubernetes Go client library.

Figure: the operator capability model (screenshot from the operator framework website)

Among these prerequisites, understanding the Operator concept is both the foundation and the precondition, and understanding Operators in turn requires a deep understanding of several Kubernetes concepts, especially resource, resource type, API, controller, and the relationships among them. Let's run through them quickly.

II. Kubernetes resources, resource types, APIs, and controllers

By now the essence of Kubernetes has become clear:

  • Kubernetes is a "database" (the data is actually persisted in etcd);
  • its API is the "SQL";
  • the API follows a resource-based RESTful style, and resource types are the API endpoints;
  • each kind of resource (i.e., each Resource Type) is a "table", and the Resource Type's spec is the "table schema";
  • each "row" in a table is a resource, i.e., an instance of that table's Resource Type;
  • the Kubernetes "database" ships with many built-in "tables", such as Pod, Deployment, DaemonSet, and ReplicaSet.

Here is a diagram of the relationship between the Kubernetes API and resources:

There are two kinds of resource types. The first is namespace-scoped; we operate on instances of such resource types through APIs of the following form:

VERB /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE - operate on the collection of resource instances of the resource type under a specific namespace
VERB /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE/NAME - operate on a specific resource instance of the resource type under a specific namespace

The other kind is namespace-independent, i.e., cluster-scoped; we operate on instances of these resource types through APIs of the following form:

VERB /apis/GROUP/VERSION/RESOURCETYPE - operate on the collection of resource instances of the resource type
VERB /apis/GROUP/VERSION/RESOURCETYPE/NAME - operate on a specific resource instance of the resource type
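For example, listing and fetching Deployments (a namespace-scoped resource type in the apps/v1 group) uses paths of exactly the first shape; the namespace and name below are just placeholders:

GET /apis/apps/v1/namespaces/default/deployments - list the Deployments in the "default" namespace
GET /apis/apps/v1/namespaces/default/deployments/nginx - get the Deployment named "nginx"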

Of course Kubernetes is not really just a "database"; it is the platform standard for service orchestration and container scheduling, and its basic scheduling unit is the Pod (also a resource type), a group of containers. So how do Pods get created, updated, and deleted? That is the job of controllers. Each resource type has its own controller. Take the pod resource type, for example: its controller is a ReplicaSet instance.

The logic of a controller is shown in the following diagram:

Figure: controller logic (from "Kubernetes Operators Explained")

Once started, a controller keeps trying to obtain the resource's current state and compares it with the desired state (the spec) stored in K8s. If they differ, the controller calls the relevant APIs to adjust things, doing its best to drive the current state toward the desired state. This convergence process is called reconciliation; its pseudocode looks like this:

for {
    desired := getDesiredState()
    current := getCurrentState()
    makeChanges(desired, current)
}

Note: K8s also has the notion of an object. What is an object? It is similar to Java's Object base class or Ruby's Object superclass. Not only is a resource (an instance of a resource type) an object, the resource type itself is also an object: it is an instance of a Kubernetes concept.

With this initial understanding of the K8s concepts in place, let's figure out what an Operator really is!

III. The Operator pattern = operand (CRD) + control logic (controller)

If operations engineers had to face the built-in resource types (deployment, pod, and so on) directly, which is the second situation in the earlier "with operator vs. without operator" comparison, their job would be complicated and error-prone.

If we are not going to work with the built-in resource types directly, how do we define our own? Kubernetes provides the Custom Resource Definition: a CRD (whose predecessor, back when CoreOS first proposed the operator concept, was the Third Party Resource, TPR) can be used to define a custom resource type.

Given our earlier reading of resource types, defining a CRD amounts to creating a new "table" (resource type). Once the CRD is created, K8s automatically generates the corresponding API endpoints for it, and we can operate on this "table" through YAML or the API. We can "insert" data into the table, i.e., create Custom Resources (CRs) based on the CRD, just as creating a Deployment instance inserts a row into the Deployment "table".

As with native resource types, a CR that merely stores object state is not enough. Native resource types have controllers responsible for reconciling instance creation, scaling, and deletion; CRs need such a "reconciler" too. We also have to define a controller that watches CR state and manages CR creation, scaling, and deletion, keeping the desired state (spec) consistent with the current state. This controller no longer targets instances of native resource types; it is a controller for the CRs, the instances of our CRD.

With a custom operand type (the CRD) and a controller for instances of that type, we package the two into one concept: the "Operator pattern". The controller in the operator pattern is also called the operator; it is the body that maintains CRs in the cluster.

IV. Developing a webserver operator with kubebuilder

Assumption: your local development environment is already fully configured to access an experimental K8s cluster, and you can operate it freely with kubectl.

No amount of concept explanation beats a hands-on exercise for understanding, so let's develop a simple Operator.

As mentioned, operator development is very verbose, so the community provides tools and frameworks to simplify it; the mainstream ones are the operator framework sdk and kubebuilder. The former is a toolset open-sourced and maintained by Red Hat that supports operator development in Go, Ansible, and Helm (only Go can reach capability level 5; the other two cannot). kubebuilder is an operator development tool maintained by an official Kubernetes SIG (special interest group). When developing operators in Go with the operator framework sdk, the sdk uses kubebuilder under the hood anyway, so here we use kubebuilder directly.

On the operator capability model, our operator sits at roughly level 2. We define a WebServer resource type representing an nginx-based webserver cluster; our operator supports creating a WebServer instance (an nginx cluster), scaling the cluster, and upgrading the nginx version in the cluster.

Let's implement this operator with kubebuilder!

1. Install kubebuilder

We install from source here; the steps are as follows:

$git clone git@github.com:kubernetes-sigs/kubebuilder.git
$cd kubebuilder
$make
$cd bin
$./kubebuilder version
Version: main.version{KubeBuilderVersion:"v3.5.0-101-g5c949c2e",
KubernetesVendor:"unknown",
GitCommit:"5c949c2e50ca8eec80d64878b88e1b2ee30bf0bc",
BuildDate:"2022-08-06T09:12:50Z", GoOs:"linux", GoArch:"amd64"}

Then copy bin/kubebuilder to a directory on your PATH.

2. Create the webserver-operator project

Next we use kubebuilder to create the webserver-operator project:

$mkdir webserver-operator
$cd webserver-operator
$kubebuilder init  --repo github.com/bigwhite/webserver-operator --project-name webserver-operator

Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.12.2
go: downloading k8s.io/client-go v0.24.2
go: downloading k8s.io/component-base v0.24.2
Update dependencies:
$ go mod tidy
Next: define a resource with:
kubebuilder create api

Note: --repo specifies the module root path in go.mod; you can choose your own module root path.

3. Create the API and generate the initial CRD

An Operator consists of a CRD and a controller. Here we create our own CRD, i.e., the custom resource type, which is also the API endpoint. We use the kubebuilder create command for this step:

$kubebuilder create api --version v1 --kind WebServer
Create Resource [y/n]
y
Create Controller [y/n]
y
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
api/v1/webserver_types.go
controllers/webserver_controller.go
Update dependencies:
$ go mod tidy
Running make:
$ make generate
mkdir -p /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin
test -s /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen || GOBIN=/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.9.2
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
Next: implement your new API and generate the manifests (e.g. CRDs,CRs) with:
$ make manifests

After that, we run make manifests to generate the final CRD YAML file:

$make manifests
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases

At this point the project's directory layout looks like this:

$tree -F .
.
├── api/
│   └── v1/
│       ├── groupversion_info.go
│       ├── webserver_types.go
│       └── zz_generated.deepcopy.go
├── bin/
│   └── controller-gen*
├── config/
│   ├── crd/
│   │   ├── bases/
│   │   │   └── my.domain_webservers.yaml
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches/
│   │       ├── cainjection_in_webservers.yaml
│   │       └── webhook_in_webservers.yaml
│   ├── default/
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager/
│   │   ├── controller_manager_config.yaml
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus/
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac/
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── role_binding.yaml
│   │   ├── role.yaml
│   │   ├── service_account.yaml
│   │   ├── webserver_editor_role.yaml
│   │   └── webserver_viewer_role.yaml
│   └── samples/
│       └── _v1_webserver.yaml
├── controllers/
│   ├── suite_test.go
│   └── webserver_controller.go
├── Dockerfile
├── go.mod
├── go.sum
├── hack/
│   └── boilerplate.go.txt
├── main.go
├── Makefile
├── PROJECT
└── README.md

14 directories, 40 files

4. The basic structure of webserver-operator

Ignoring the pieces we do not care about here, such as leader election and auth_proxy, I have organized the main parts of this operator example into the following diagram:

The parts in the diagram form the basic structure of an operator generated with kubebuilder.

The webserver operator consists mainly of the CRD and the controller:

  • CRD

The box at the lower left of the diagram is the CRD YAML file generated above: config/crd/bases/my.domain_webservers.yaml. The CRD is closely tied to api/v1/webserver_types.go: we define the CRD's spec fields in api/v1/webserver_types.go, and the make manifests command then picks up changes in webserver_types.go and updates the CRD's YAML file.

  • controller

As the right side of the diagram shows, the controller itself runs in the K8s cluster as a deployment. It watches the running state of the CRs (the CRD's instances) and, in its Reconcile method, checks whether the desired state matches the current state; if not, it takes the appropriate actions.

  • Others

The upper left of the diagram concerns the controller's permissions: the controller accesses the K8s API server through a serviceaccount, and role.yaml and role_binding.yaml set the controller's role and permissions.

5. Add fields to the CRD spec

To meet the webserver operator's functional goals, we need to add some fields to the CRD spec. As noted above, the CRD stays in sync with webserver_types.go under the api directory, so we only need to modify that file. We add two fields, Replicas and Image, to the WebServerSpec struct, representing the webserver's replica count and container image respectively:

// api/v1/webserver_types.go

// WebServerSpec defines the desired state of WebServer
type WebServerSpec struct {
    // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
    // Important: Run "make" to regenerate code after modifying this file

    // The number of replicas that the webserver should have
    Replicas int `json:"replicas,omitempty"`

    // The container image of the webserver
    Image string `json:"image,omitempty"`

    // Foo is an example field of WebServer. Edit webserver_types.go to remove/update
    Foo string `json:"foo,omitempty"`
}

After saving the changes, run make manifests to regenerate config/crd/bases/my.domain_webservers.yaml:

$cat my.domain_webservers.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.9.2
  creationTimestamp: null
  name: webservers.my.domain
spec:
  group: my.domain
  names:
    kind: WebServer
    listKind: WebServerList
    plural: webservers
    singular: webserver
  scope: Namespaced
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        description: WebServer is the Schema for the webservers API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: WebServerSpec defines the desired state of WebServer
            properties:
              foo:
                description: Foo is an example field of WebServer. Edit webserver_types.go
                  to remove/update
                type: string
              image:
                description: The container image of the webserver
                type: string
              replicas:
                description: The number of replicas that the webserver should have
                type: integer
            type: object
          status:
            description: WebServerStatus defines the observed state of WebServer
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}

Once the CRD is defined, we can install it into K8s:

$make install
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
test -s /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/kustomize || { curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash -s -- 3.8.7 /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin; }
{Version:kustomize/v3.8.7 GitCommit:ad092cc7a91c07fdf63a2e4b7f13fa588a39af4f BuildDate:2020-11-11T23:14:14Z GoOs:linux GoArch:amd64}
kustomize installed to /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/kustomize
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/webservers.my.domain created

Check the installation:

$kubectl get crd|grep webservers
webservers.my.domain                                             2022-08-06T21:55:45Z

6. Modify role.yaml

Before starting on the controller, let's "pave the road" for it by setting up the permissions it will need at runtime.

In the controller we will create a deployment and a service for each CRD instance, so the controller needs permission to operate on deployments and services. That means modifying role.yaml to grant the controller-manager service account those permissions:

// config/rbac/role.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: manager-role
rules:
- apiGroups:
  - my.domain
  resources:
  - webservers
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - my.domain
  resources:
  - webservers/finalizers
  verbs:
  - update
- apiGroups:
  - my.domain
  resources:
  - webservers/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - apps
  - ""
  resources:
  - services
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch

We park the modified role.yaml here for now; it will be deployed to K8s later together with the controller.

7. Implement the controller's Reconcile logic

kubebuilder has already scaffolded the controller code for us; all we need to do is implement the Reconcile method of WebServerReconciler in controllers/webserver_controller.go. Below is a simplified flow of Reconcile; the code is much easier to follow with this picture in mind:

Here is the corresponding code of the Reconcile method:

// controllers/webserver_controller.go

func (r *WebServerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log := r.Log.WithValues("Webserver", req.NamespacedName)

    instance := &mydomainv1.WebServer{}
    err := r.Get(ctx, req.NamespacedName, instance)
    if err != nil {
        if errors.IsNotFound(err) {
            // Request object not found, could have been deleted after reconcile request.
            // Return and don't requeue
            log.Info("Webserver resource not found. Ignoring since object must be deleted")
            return ctrl.Result{}, nil
        }

        // Error reading the object - requeue the request.
        log.Error(err, "Failed to get Webserver")
        return ctrl.Result{RequeueAfter: time.Second * 5}, err
    }

    // Check if the webserver deployment already exists, if not, create a new one
    found := &appsv1.Deployment{}
    err = r.Get(ctx, types.NamespacedName{Name: instance.Name, Namespace: instance.Namespace}, found)
    if err != nil && errors.IsNotFound(err) {
        // Define a new deployment
        dep := r.deploymentForWebserver(instance)
        log.Info("Creating a new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
        err = r.Create(ctx, dep)
        if err != nil {
            log.Error(err, "Failed to create new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
            return ctrl.Result{RequeueAfter: time.Second * 5}, err
        }
        // Deployment created successfully - return and requeue
        return ctrl.Result{Requeue: true}, nil
    } else if err != nil {
        log.Error(err, "Failed to get Deployment")
        return ctrl.Result{RequeueAfter: time.Second * 5}, err
    }

    // Ensure the deployment replicas and image are the same as the spec
    var replicas int32 = int32(instance.Spec.Replicas)
    image := instance.Spec.Image

    var needUpd bool
    if *found.Spec.Replicas != replicas {
        log.Info("Deployment spec.replicas change", "from", *found.Spec.Replicas, "to", replicas)
        found.Spec.Replicas = &replicas
        needUpd = true
    }

    if (*found).Spec.Template.Spec.Containers[0].Image != image {
        log.Info("Deployment spec.template.spec.container[0].image change", "from", (*found).Spec.Template.Spec.Containers[0].Image, "to", image)
        found.Spec.Template.Spec.Containers[0].Image = image
        needUpd = true
    }

    if needUpd {
        err = r.Update(ctx, found)
        if err != nil {
            log.Error(err, "Failed to update Deployment", "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
            return ctrl.Result{RequeueAfter: time.Second * 5}, err
        }
        // Spec updated - return and requeue
        return ctrl.Result{Requeue: true}, nil
    }

    // Check if the webserver service already exists, if not, create a new one
    foundService := &corev1.Service{}
    err = r.Get(ctx, types.NamespacedName{Name: instance.Name + "-service", Namespace: instance.Namespace}, foundService)
    if err != nil && errors.IsNotFound(err) {
        // Define a new service
        srv := r.serviceForWebserver(instance)
        log.Info("Creating a new Service", "Service.Namespace", srv.Namespace, "Service.Name", srv.Name)
        err = r.Create(ctx, srv)
        if err != nil {
            log.Error(err, "Failed to create new Servie", "Service.Namespace", srv.Namespace, "Service.Name", srv.Name)
            return ctrl.Result{RequeueAfter: time.Second * 5}, err
        }
        // Service created successfully - return and requeue
        return ctrl.Result{Requeue: true}, nil
    } else if err != nil {
        log.Error(err, "Failed to get Service")
        return ctrl.Result{RequeueAfter: time.Second * 5}, err
    }

    // Tbd: Ensure the service state is the same as the spec, your homework

    // reconcile webserver operator in again 10 seconds
    return ctrl.Result{RequeueAfter: time.Second * 10}, nil
}

You may have noticed by now that the CRD's controller ultimately translates the CR into native K8s resources such as services and deployments. Changes in the CR's state (replicas, image, and so on here) are ultimately turned into update operations on native resources such as deployments. That is the essence of an operator! Once you see this, operators stop being an unreachable concept.
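The Reconcile code above calls two helpers that are not shown in the excerpt: deploymentForWebserver and serviceForWebserver. For orientation, here is a minimal sketch of what deploymentForWebserver might look like. The label scheme and container port are assumptions made for illustration, and it relies on the appsv1 (k8s.io/api/apps/v1), corev1 (k8s.io/api/core/v1), metav1 (k8s.io/apimachinery/pkg/apis/meta/v1), and ctrl (sigs.k8s.io/controller-runtime) imports already present in the controller package; the real helper in the article's repository may differ.

// controllers/webserver_controller.go (sketch only, not the article's exact code)

// deploymentForWebserver builds the Deployment that backs a WebServer CR.
func (r *WebServerReconciler) deploymentForWebserver(ws *mydomainv1.WebServer) *appsv1.Deployment {
    labels := map[string]string{"app": ws.Name} // hypothetical label scheme
    replicas := int32(ws.Spec.Replicas)

    dep := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{
            Name:      ws.Name,
            Namespace: ws.Namespace,
        },
        Spec: appsv1.DeploymentSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: ws.Spec.Image,
                        Ports: []corev1.ContainerPort{{ContainerPort: 80}}, // assumed port
                    }},
                },
            },
        },
    }

    // Make the WebServer CR the owner so the Deployment is garbage-collected with
    // the CR and changes to it are reconciled (error ignored in this sketch).
    _ = ctrl.SetControllerReference(ws, dep, r.Scheme)
    return dep
}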

Some readers may also notice that the flow above does not handle deleting the deployment and service when the CR instance is deleted. True. But for a service that runs 24/7 in the background, we care far more about changes, scaling, and upgrades; deletion is the lowest-priority requirement.

8. Build the controller image

With the controller code written, let's build the controller image. As explained earlier, the controller is just a pod under a deployment running in K8s, so we need to build its image and deploy it to K8s via a deployment.

The project kubebuilder created includes a Makefile, and make docker-build builds the controller image. docker-build compiles the controller source inside a golang builder image, but without tweaking the Dockerfile you will have a hard time getting it to compile, because the default GOPROXY is unreachable from inside China. The simplest change is to build with vendoring; here is the modified Dockerfile:

# Build the manager binary
FROM golang:1.18 as builder

ENV GOPROXY https://goproxy.cn
WORKDIR /workspace
# Copy the Go Modules manifests
COPY go.mod go.mod
COPY go.sum go.sum
COPY vendor/ vendor/
# cache deps before building and copying source so that we don't need to re-download as much
# and so that source changes don't invalidate our downloaded layer
#RUN go mod download

# Copy the go source
COPY main.go main.go
COPY api/ api/
COPY controllers/ controllers/

# Build
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -mod=vendor -a -o manager main.go

# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
#FROM gcr.io/distroless/static:nonroot
FROM katanomi/distroless-static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
USER 65532:65532

ENTRYPOINT ["/manager"]

Here are the build steps:

$go mod vendor
$make docker-build

test -s /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen || GOBIN=/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.9.2
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
KUBEBUILDER_ASSETS="/home/tonybai/.local/share/kubebuilder-envtest/k8s/1.24.2-linux-amd64" go test ./... -coverprofile cover.out
?       github.com/bigwhite/webserver-operator    [no test files]
?       github.com/bigwhite/webserver-operator/api/v1    [no test files]
ok      github.com/bigwhite/webserver-operator/controllers    4.530s    coverage: 0.0% of statements
docker build -t bigwhite/webserver-controller:latest .
Sending build context to Docker daemon  47.51MB
Step 1/15 : FROM golang:1.18 as builder
 ---> 2d952adaec1e
Step 2/15 : ENV GOPROXY https://goproxy.cn
 ---> Using cache
 ---> db2b06a078e3
Step 3/15 : WORKDIR /workspace
 ---> Using cache
 ---> cc3c613c19c6
Step 4/15 : COPY go.mod go.mod
 ---> Using cache
 ---> 5fa5c0d89350
Step 5/15 : COPY go.sum go.sum
 ---> Using cache
 ---> 71669cd0fe8e
Step 6/15 : COPY vendor/ vendor/
 ---> Using cache
 ---> 502b280a0e67
Step 7/15 : COPY main.go main.go
 ---> Using cache
 ---> 0c59a69091bb
Step 8/15 : COPY api/ api/
 ---> Using cache
 ---> 2b81131c681f
Step 9/15 : COPY controllers/ controllers/
 ---> Using cache
 ---> e3fd48c88ccb
Step 10/15 : RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -mod=vendor -a -o manager main.go
 ---> Using cache
 ---> 548ac10321a2
Step 11/15 : FROM katanomi/distroless-static:nonroot
 ---> 421f180b71d8
Step 12/15 : WORKDIR /
 ---> Running in ea7cb03027c0
Removing intermediate container ea7cb03027c0
 ---> 9d3c0ea19c3b
Step 13/15 : COPY --from=builder /workspace/manager .
 ---> a4387fe33ab7
Step 14/15 : USER 65532:65532
 ---> Running in 739a32d251b6
Removing intermediate container 739a32d251b6
 ---> 52ae8742f9c5
Step 15/15 : ENTRYPOINT ["/manager"]
 ---> Running in 897893b0c9df
Removing intermediate container 897893b0c9df
 ---> e375cc2adb08
Successfully built e375cc2adb08
Successfully tagged bigwhite/webserver-controller:latest

Note: before running make, change the initial value of the IMG variable in the Makefile to IMG ?= bigwhite/webserver-controller:latest.

After a successful build, run make docker-push to push the image to an image registry (here, Docker's public registry).

9. Deploy the controller

We already installed the CRD into K8s with make install. Once we deploy the controller into K8s as well, our operator is fully deployed. make deploy does the job:

$make deploy
test -s /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen || GOBIN=/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.9.2
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
test -s /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/kustomize || { curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash -s -- 3.8.7 /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin; }
cd config/manager && /home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/kustomize edit set image controller=bigwhite/webserver-controller:latest
/home/tonybai/test/go/operator/kubebuilder/webserver-operator/bin/kustomize build config/default | kubectl apply -f -
namespace/webserver-operator-system created
customresourcedefinition.apiextensions.k8s.io/webservers.my.domain unchanged
serviceaccount/webserver-operator-controller-manager created
role.rbac.authorization.k8s.io/webserver-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/webserver-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/webserver-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/webserver-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/webserver-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/webserver-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/webserver-operator-proxy-rolebinding created
configmap/webserver-operator-manager-config created
service/webserver-operator-controller-manager-metrics-service created
deployment.apps/webserver-operator-controller-manager created

We can see that deploy installs not only the controller, serviceaccount, role, and rolebinding; it also creates the namespace and installs the CRD again. In other words, deploy is a complete operator installation command.

Note: make undeploy fully uninstalls the operator-related resources.

Let's check the controller's logs with kubectl logs:

$kubectl logs -f deployment.apps/webserver-operator-controller-manager -n webserver-operator-system
1.6600280818476188e+09    INFO    controller-runtime.metrics    Metrics server is starting to listen    {"addr": "127.0.0.1:8080"}
1.6600280818478029e+09    INFO    setup    starting manager
1.6600280818480284e+09    INFO    Starting server    {"path": "/metrics", "kind": "metrics", "addr": "127.0.0.1:8080"}
1.660028081848097e+09    INFO    Starting server    {"kind": "health probe", "addr": "[::]:8081"}
I0809 06:54:41.848093       1 leaderelection.go:248] attempting to acquire leader lease webserver-operator-system/63e5a746.my.domain...
I0809 06:54:57.072336       1 leaderelection.go:258] successfully acquired lease webserver-operator-system/63e5a746.my.domain
1.6600280970724037e+09    DEBUG    events    Normal    {"object": {"kind":"Lease","namespace":"webserver-operator-system","name":"63e5a746.my.domain","uid":"e05aaeb5-4a3a-4272-b036-80d61f0b6788","apiVersion":"coordination.k8s.io/v1","resourceVersion":"5238800"}, "reason": "LeaderElection", "message": "webserver-operator-controller-manager-6f45bc88f7-ptxlc_0e960015-9fbe-466d-a6b1-ff31af63a797 became leader"}
1.6600280970724993e+09    INFO    Starting EventSource    {"controller": "webserver", "controllerGroup": "my.domain", "controllerKind": "WebServer", "source": "kind source: *v1.WebServer"}
1.6600280970725305e+09    INFO    Starting Controller    {"controller": "webserver", "controllerGroup": "my.domain", "controllerKind": "WebServer"}
1.660028097173026e+09    INFO    Starting workers    {"controller": "webserver", "controllerGroup": "my.domain", "controllerKind": "WebServer", "worker count": 1}

The controller has started successfully and is waiting for events related to a WebServer CR (such as creation). Let's create one!

10. Create a WebServer CR

The webserver-operator project contains a sample CR under config/samples. We adapt it, filling in the fields we added to the spec:

// config/samples/_v1_webserver.yaml 

apiVersion: my.domain/v1
kind: WebServer
metadata:
  name: webserver-sample
spec:
  # TODO(user): Add fields here
  image: nginx:1.23.1
  replicas: 3

We create the WebServer CR with kubectl:

$cd config/samples
$kubectl apply -f _v1_webserver.yaml
webserver.my.domain/webserver-sample created

Watching the controller's logs:

1.6602084232243123e+09  INFO    controllers.WebServer   Creating a new Deployment   {"Webserver": "default/webserver-sample", "Deployment.Namespace": "default", "Deployment.Name": "webserver-sample"}
1.6602084233446114e+09  INFO    controllers.WebServer   Creating a new Service  {"Webserver": "default/webserver-sample", "Service.Namespace": "default", "Service.Name": "webserver-sample-service"}

We can see that once the CR was created, the controller picked up the event and created the corresponding Deployment and Service. Let's look at the Deployment, the three Pods, and the Service created for the CR:

$kubectl get service
NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes                 ClusterIP   172.26.0.1     <none>        443/TCP        22d
webserver-sample-service   NodePort    172.26.173.0   <none>        80:30010/TCP   2m58s

$kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
webserver-sample   3/3     3            3           4m44s

$kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
webserver-sample-bc698b9fb-8gq2h   1/1     Running   0          4m52s
webserver-sample-bc698b9fb-vk6gw   1/1     Running   0          4m52s
webserver-sample-bc698b9fb-xgrgb   1/1     Running   0          4m52s

Let's hit the service:

$curl http://192.168.10.182:30010
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

The service responds as expected!

11. Scaling, changing the image version, and Service self-healing

Next let's perform some common operational actions on the CR.

  • Scale replicas from 3 to 4

We change the CR's replicas from 3 to 4 to scale out the container instances:

// config/samples/_v1_webserver.yaml 

apiVersion: my.domain/v1
kind: WebServer
metadata:
  name: webserver-sample
spec:
  # TODO(user): Add fields here
  image: nginx:1.23.1
  replicas: 4

Then apply it with kubectl apply:

$kubectl apply -f _v1_webserver.yaml
webserver.my.domain/webserver-sample configured

After the command runs, we see the following in the operator's controller log:

1.660208962767797e+09   INFO    controllers.WebServer   Deployment spec.replicas change {"Webserver": "default/webserver-sample", "from": 3, "to": 4}

A little later, check the number of pods:

$kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
webserver-sample-bc698b9fb-8gq2h   1/1     Running   0          9m41s
webserver-sample-bc698b9fb-v9gvg   1/1     Running   0          42s
webserver-sample-bc698b9fb-vk6gw   1/1     Running   0          9m41s
webserver-sample-bc698b9fb-xgrgb   1/1     Running   0          9m41s

The number of webserver pod replicas has successfully grown from 3 to 4.

  • Change the webserver image version

We change the CR's image from nginx:1.23.1 to nginx:1.23.0 and run kubectl apply to make it take effect.

The controller's log responds as follows:

1.6602090494113188e+09  INFO    controllers.WebServer   Deployment spec.template.spec.container[0].image change {"Webserver": "default/webserver-sample", "from": "nginx:1.23.1", "to": "nginx:1.23.0"}

The controller updates the deployment, which triggers a rolling upgrade of the pods it owns:

$kubectl get pods
NAME                               READY   STATUS              RESTARTS   AGE
webserver-sample-bc698b9fb-8gq2h   1/1     Running             0          10m
webserver-sample-bc698b9fb-vk6gw   1/1     Running             0          10m
webserver-sample-bc698b9fb-xgrgb   1/1     Running             0          10m
webserver-sample-ffcf549ff-g6whk   0/1     ContainerCreating   0          12s
webserver-sample-ffcf549ff-ngjz6   0/1     ContainerCreating   0          12s

Wait a little while, and the final pod list is:

$kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
webserver-sample-ffcf549ff-g6whk   1/1     Running   0          6m22s
webserver-sample-ffcf549ff-m6z24   1/1     Running   0          3m12s
webserver-sample-ffcf549ff-ngjz6   1/1     Running   0          6m22s
webserver-sample-ffcf549ff-t7gvc   1/1     Running   0          4m16s
  • Service self-healing: recovering a mistakenly deleted Service

Let's simulate a "fat-finger" operation by deleting webserver-sample-service and see whether the controller can heal the service:

$kubectl delete service/webserver-sample-service
service "webserver-sample-service" deleted

Check the controller's log:

1.6602096994710526e+09  INFO    controllers.WebServer   Creating a new Service  {"Webserver": "default/webserver-sample", "Service.Namespace": "default", "Service.Name": "webserver-sample-service"}

The controller detected that the service had been deleted and recreated a new one!

Access the newly created service:

$curl http://192.168.10.182:30010
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

The service healed itself with the controller's help!

V. Summary

This article gave an initial introduction to the concept and advantages of Kubernetes Operators and used kubebuilder to develop an operator with roughly level-2 capability. The operator is of course far from complete; its main purpose is to help you understand the operator concept and the implementation pattern.

After reading this article you should have a fairly clear picture of operators, especially their basic structure, and be able to develop a simple operator yourself!

The source code for this article can be downloaded here – https://github.com/bigwhite/experiments/tree/master/webserver-operator.

VI. References

  • kubernetes operator 101, Part 1: Overview and key features – https://developers.redhat.com/articles/2021/06/11/kubernetes-operators-101-part-1-overview-and-key-features
  • Kubernetes Operators 101, Part 2: How operators work – https://developers.redhat.com/articles/2021/06/22/kubernetes-operators-101-part-2-how-operators-work
  • Operator SDK: Build Kubernetes Operators – https://developers.redhat.com/blog/2020/04/28/operator-sdk-build-kubernetes-operators-and-deploy-them-on-openshift
  • kubernetes doc: Custom Resources – https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
  • kubernetes doc: Operator pattern – https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
  • kubernetes doc: API concepts – https://kubernetes.io/docs/reference/using-api/api-concepts/
  • Introducing Operators: Putting Operational Knowledge into Software (the first article on operators, by CoreOS) – https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html
  • CNCF Operator White Paper v1.0 – https://github.com/cncf/tag-app-delivery/blob/main/operator-whitepaper/v1/Operator-WhitePaper_v1-0.md
  • Best practices for building Kubernetes Operators and stateful apps – https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps
  • A deep dive into Kubernetes controllers – https://docs.bitnami.com/tutorials/a-deep-dive-into-kubernetes-controllers
  • Kubernetes Operators Explained – https://blog.container-solutions.com/kubernetes-operators-explained
  • Book: Kubernetes Operators – https://book.douban.com/subject/34796009/
  • Book: Programming Kubernetes – https://book.douban.com/subject/35498478/
  • Operator SDK Reaches v1.0 – https://cloud.redhat.com/blog/operator-sdk-reaches-v1.0
  • What is the difference between kubebuilder and operator-sdk – https://github.com/operator-framework/operator-sdk/issues/1758
  • Kubernetes Operators in Depth – https://www.infoq.com/articles/kubernetes-operators-in-depth/
  • Get started using Kubernetes Operators – https://developer.ibm.com/learningpaths/kubernetes-operators/
  • Use Kubernetes operators to extend Kubernetes’ functionality – https://developer.ibm.com/learningpaths/kubernetes-operators/operators-extend-kubernetes/
  • memcached operator – https://github.com/operator-framework/operator-sdk-samples/tree/master/go/memcached-operator
