
Those Things About gRPC Clients

Permanent link to this post – https://tonybai.com/2021/09/17/those-things-about-grpc-client

In an era when cloud-native and microservices dominate architecture, the protocols used for internal service-to-service communication fall into essentially two camps: HTTP APIs (RESTful APIs) and RPC. With today's hardware and network conditions, a modern RPC implementation generally outperforms an HTTP API. Comparing json over http with gRPC (insecure), and load-testing the two implementations with ghz and hey respectively, gRPC (Requests/sec: 59924.34) came out roughly 20% ahead of the HTTP API (Requests/sec: 49969.9234). In my measurements, the protobuf encoding/decoding that gRPC uses is 2-3x faster than the fastest json codec and more than 10x faster than the Go standard library's json package (see the appendix of this post for the numbers).

For systems that are performance-sensitive and whose internal protocols rarely change, RPC is what most people will pick for internal services. gRPC is not the fastest RPC implementation around, but with Google behind it and as the only RPC project in the CNCF, it naturally gets the broadest attention and use from developers.

This post is also about gRPC, but with the focus on the client side: the things you end up thinking about when using a gRPC client (all code in this post is based on gRPC v1.40.0 and Go 1.17).

1. The default gRPC client

gRPC supports four communication patterns (the four diagrams below are taken from the book gRPC: Up and Running):

  • Simple RPC: the simplest and most commonly used gRPC pattern; in short, one request, one response

  • Server-streaming RPC: one request, multiple responses

  • Client-streaming RPC: multiple requests, one response

  • Bidirectional-streaming RPC: multiple requests, multiple responses

Let's use the most common pattern, Simple RPC (also called Unary RPC), to see how to implement a gRPC version of helloworld.

We don't have to write helloworld.proto from scratch and generate the gRPC code ourselves: gRPC ships an official helloworld example, and a slight modification of it is all we need.

The example's IDL file, helloworld.proto, looks like this:

// https://github.com/grpc/grpc-go/tree/master/examples/helloworld/helloworld/helloworld.proto

syntax = "proto3";

option go_package = "google.golang.org/grpc/examples/helloworld/helloworld";
option java_multiple_files = true;
option java_package = "io.grpc.examples.helloworld";
option java_outer_classname = "HelloWorldProto";

package helloworld;

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings
message HelloReply {
  string message = 1;
}

For a proper walkthrough of .proto syntax, see the official gRPC documentation; I won't repeat it here. The IDL above is obviously about as simple as it gets: it defines one service, Greeter, with a single method SayHello, whose request and response are each a message containing a single string field.

We don't need to run protoc by hand to generate the Greeter service stubs or the protobuf codecs for HelloRequest and HelloReply, because gRPC already ships the generated Go sources under examples; we just import them. One thing to note: the latest grpc-go repository is managed as multiple Go modules, and examples is its own module, so it has to be imported as a separate module by any project that uses it. Taking the gRPC client greeter_client as an example, its go.mod looks like this:

// https://github.com/bigwhite/experiments/tree/master/grpc-client/demo1/greeter_client/go.mod
module github.com/bigwhite/grpc-client/demo1

go 1.17

require (
    google.golang.org/grpc v1.40.0
    google.golang.org/grpc/examples v1.40.0
)

require (
    github.com/golang/protobuf v1.4.3 // indirect
    golang.org/x/net v0.0.0-20201021035429-f5854403a974 // indirect
    golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f // indirect
    golang.org/x/text v0.3.3 // indirect
    google.golang.org/genproto v0.0.0-20200806141610-86f49bd18e98 // indirect
    google.golang.org/protobuf v1.25.0 // indirect
)

replace google.golang.org/grpc v1.40.0 => /Users/tonybai/Go/src/github.com/grpc/grpc-go

replace google.golang.org/grpc/examples v1.40.0 => /Users/tonybai/Go/src/github.com/grpc/grpc-go/examples

Note: the grpc-go repository's tags seem to be off here. Because there is no grpc/examples/v1.40.0 tag, the go command cannot find examples at the v1.40.0 tag, which is why the go.mod above uses a replace trick (the examples module's v1.40.0 version is made up) to point the examples module at a local checkout.

Both ends of the gRPC conversation get a small makeover too. The original greeter_client sends a single request and exits; here we change it to send a request every 2 seconds (to make later observation easier), as shown below:

// https://github.com/bigwhite/experiments/tree/master/grpc-client/demo1/greeter_client/main.go
... ...
func main() {
    // Set up a connection to the server.
    ctx, cf1 := context.WithTimeout(context.Background(), time.Second*3)
    defer cf1()
    conn, err := grpc.DialContext(ctx, address, grpc.WithInsecure(), grpc.WithBlock())
    if err != nil {
        log.Fatalf("did not connect: %v", err)
    }
    defer conn.Close()
    c := pb.NewGreeterClient(conn)

    // Contact the server and print out its response.
    name := defaultName
    if len(os.Args) > 1 {
        name = os.Args[1]
    }

    for i := 0; ; i++ {
        ctx, _ := context.WithTimeout(context.Background(), time.Second)
        r, err := c.SayHello(ctx, &pb.HelloRequest{Name: fmt.Sprintf("%s-%d", name, i+1)})
        if err != nil {
            log.Fatalf("could not greet: %v", err)
        }
        log.Printf("Greeting: %s", r.GetMessage())
        time.Sleep(2 * time.Second)
    }
}

greeter_server gains a -port command-line flag and shuts the gRPC server down when it receives a signal:

// https://github.com/bigwhite/experiments/tree/master/grpc-client/demo1/greeter_server/main.go
... ...

var port int

func init() {
    flag.IntVar(&port, "port", 50051, "listen port")
}

func main() {
    flag.Parse()
    lis, err := net.Listen("tcp", fmt.Sprintf("localhost:%d", port))
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    s := grpc.NewServer()
    pb.RegisterGreeterServer(s, &server{})

    go func() {
        if err := s.Serve(lis); err != nil {
            log.Fatalf("failed to serve: %v", err)
        }
    }()

    var c = make(chan os.Signal)
    signal.Notify(c, os.Interrupt, os.Kill)
    <-c
    s.Stop()
    fmt.Println("exit")
}
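
One note on the shutdown path: s.Stop() aborts in-flight RPCs immediately, so the exit is clean but not strictly graceful. If draining in-flight RPCs matters, grpc-go's Server.GracefulStop can be used instead. A minimal sketch of an alternative signal-handling tail for main (assuming the same s as above; syscall would need to be imported):

var c = make(chan os.Signal, 1) // buffered so a signal arriving early isn't lost
signal.Notify(c, os.Interrupt, syscall.SIGTERM)
<-c
s.GracefulStop() // stop accepting new RPCs and wait for pending ones to finish
fmt.Println("exit")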

With go.mod sorted out and both client and server modified, we can build and run greeter_client and greeter_server:

Build and start the server:

$cd grpc-client/demo1/greeter_server
$make
$./demo1-server -port 50051
2021/09/11 12:10:33 Received: world-1
2021/09/11 12:10:35 Received: world-2
2021/09/11 12:10:37 Received: world-3
... ...

Build and start the client:
$cd grpc-client/demo1/greeter_client
$make
$./demo1-client
2021/09/11 12:10:33 Greeting: Hello world-1
2021/09/11 12:10:35 Greeting: Hello world-2
2021/09/11 12:10:37 Greeting: Hello world-3
... ...

As we can see, once started, greeter_client and greeter_server communicate just fine. Now let's zoom in on greeter_client.

When dialing the server, greeter_client passes a static service address as the target argument of DialContext:

const (
      address     = "localhost:50051"
)

A target in this form, after being parsed by google.golang.org/grpc/internal/grpcutil.ParseTarget, yields a resolver.Target whose scheme matches no registered resolver, so gRPC falls back to the default scheme, "passthrough" (github.com/grpc/grpc-go/resolver/resolver.go). Under the default "passthrough" scheme, gRPC uses the built-in passthrough resolver (google.golang.org/grpc/internal/resolver/passthrough). How does this default passthrough resolver decide which service address to connect to? Here is an excerpt of its code:

// github.com/grpc/grpc-go/internal/resolver/passthrough/passthrough.go

func (r *passthroughResolver) start() {
    r.cc.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: r.target.Endpoint}}})
}

As you can see, it hands target.Endpoint, i.e. localhost:50051, straight to the ClientConnection (r.cc in the code above), which then establishes a TCP connection to that address. Exactly what the resolver's name says: passthrough.
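
In other words, for this demo, dialing "localhost:50051" and dialing the same endpoint with an explicit passthrough scheme behave identically; a quick sketch:

// both target forms end up in the passthrough resolver, which hands the endpoint
// to the ClientConn as-is, without any real name resolution
conn, err := grpc.DialContext(ctx, "localhost:50051", grpc.WithInsecure(), grpc.WithBlock())
// ... is equivalent to ...
conn, err = grpc.DialContext(ctx, "passthrough:///localhost:50051", grpc.WithInsecure(), grpc.WithBlock())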

The greeter_client above connects to just one instance of the service. What if we start three instances of the service at the same time, for example with goreman driving a Procfile:

// https://github.com/bigwhite/experiments/tree/master/grpc-client/demo1/greeter_server/Procfile

# Use goreman to run `go get github.com/mattn/goreman`
demo1-server1: ./demo1-server -port 50051
demo1-server2: ./demo1-server -port 50052
demo1-server3: ./demo1-server -port 50053

Start all the instances at once:

$goreman start
15:22:12 demo1-server3 | Starting demo1-server3 on port 5200
15:22:12 demo1-server2 | Starting demo1-server2 on port 5100
15:22:12 demo1-server1 | Starting demo1-server1 on port 5000

How do we tell greeter_client to connect to all three instances? Can we simply change address to something like this?

const (
    address     = "localhost:50051,localhost:50052,localhost:50053"
    defaultName = "world"
)

Let's try it. After modifying and rebuilding greeter_client and starting it, we see the following:

$./demo1-client
2021/09/11 15:26:32 did not connect: context deadline exceeded

greeter_client times out connecting to the server! In other words, naively passing in multiple instance addresses like this does not work. So the question becomes: how do we get greeter_client to connect to multiple instances of one service? Read on.

2. Connecting to multiple instances of a service

The target parameter of grpc.Dial/grpc.DialContext is not just a service instance address: the form of the argument you pass determines which resolver the gRPC client will use to work out the set of service instance addresses.

Let's start with a StaticResolver that returns a static set of instance addresses (i.e. the number of instances and their addresses are fixed) and see how to make a gRPC client connect to multiple instances of one service.

1) StaticResolver

First, let's design the form of the target passed to grpc.DialContext. gRPC name resolution has its own documentation. Here we create a new scheme, static, and pass the addresses of the service instances as a comma-separated string, as in the code below:

// https://github.com/bigwhite/experiments/tree/master/grpc-client/demo2/greeter_client/main.go

const (
      address = "static:///localhost:50051,localhost:50052,localhost:50053"
)

When address is passed to grpc.DialContext as the target argument, grpcutil.ParseTarget parses it into a resolver.Target struct with three fields:

// github.com/grpc/grpc-go/resolver/resolver.go
type Target struct {
    Scheme    string
    Authority string
    Endpoint  string
}

Here Scheme is "static", Authority is empty, and Endpoint is "localhost:50051,localhost:50052,localhost:50053".

Next, gRPC uses Target.Scheme to look up a matching resolver Builder in the resolver package's builder map. None of gRPC's built-in resolver builders match this scheme, so it's time to write a Builder for our StaticResolver.

gRPC's resolver package defines the interface a Builder instance must implement:

// github.com/grpc/grpc-go/resolver/resolver.go 

// Builder creates a resolver that will be used to watch name resolution updates.
type Builder interface {
    // Build creates a new resolver for the given target.
    //
    // gRPC dial calls Build synchronously, and fails if the returned error is
    // not nil.
    Build(target Target, cc ClientConn, opts BuildOptions) (Resolver, error)
    // Scheme returns the scheme supported by this resolver.
    // Scheme is defined at https://github.com/grpc/grpc/blob/master/doc/naming.md.
    Scheme() string
}

The Scheme method returns the scheme this Builder handles, and Build is the method that actually constructs a Resolver instance. Here is the StaticBuilder implementation:

// https://github.com/bigwhite/experiments/tree/master/grpc-client/demo2/greeter_client/builder.go

func init() {
    resolver.Register(&StaticBuilder{}) // register the StaticBuilder instance with the resolver package's builder map in init
}

type StaticBuilder struct{}

func (sb *StaticBuilder) Build(target resolver.Target, cc resolver.ClientConn,
    opts resolver.BuildOptions) (resolver.Resolver, error) {

    // parse target.Endpoint (e.g. localhost:50051,localhost:50052,localhost:50053)
    endpoints := strings.Split(target.Endpoint, ",")

    r := &StaticResolver{
        endpoints: endpoints,
        cc:        cc,
    }
    r.ResolveNow(resolver.ResolveNowOptions{})
    return r, nil
}

func (sb *StaticBuilder) Scheme() string {
    return "static" // 返回StaticBuilder对应的scheme字符串
}

In this StaticBuilder implementation, the init function registers a StaticBuilder instance with the resolver package's builder map when the package is initialized, so that gRPC can find the builder by the target's scheme at Dial time. Build is the heart of StaticBuilder: it parses the incoming target.Endpoint into the three instance addresses, stores them in a freshly created StaticResolver, and calls the StaticResolver's ResolveNow method to establish the set of service instances to connect to.

Just like Builder, gRPC's resolver package also defines the Resolver interface every resolver must implement:

// github.com/grpc/grpc-go/resolver/resolver.go 

// Resolver watches for the updates on the specified target.
// Updates include address updates and service config updates.
type Resolver interface {
    // ResolveNow will be called by gRPC to try to resolve the target name
    // again. It's just a hint, resolver can ignore this if it's not necessary.
    //
    // It could be called multiple times concurrently.
    ResolveNow(ResolveNowOptions)
    // Close closes the resolver.
    Close()
}

As the interface comments suggest, a Resolver implementation is responsible for watching the server-side addresses and configuration for changes and pushing updates to gRPC's ClientConn. Here is our StaticResolver implementation:

// https://github.com/bigwhite/experiments/tree/master/grpc-client/demo2/greeter_client/resolver.go

type StaticResolver struct {
    endpoints []string
    cc        resolver.ClientConn
    sync.Mutex
}

func (r *StaticResolver) ResolveNow(opts resolver.ResolveNowOptions) {
    r.Lock()
    r.doResolve()
    r.Unlock()
}

func (r *StaticResolver) Close() {
}

func (r *StaticResolver) doResolve() {
    var addrs []resolver.Address
    for i, addr := range r.endpoints {
        addrs = append(addrs, resolver.Address{
            Addr:       addr,
            ServerName: fmt.Sprintf("instance-%d", i+1),
        })
    }

    newState := resolver.State{
        Addresses: addrs,
    }

    r.cc.UpdateState(newState)
}

Note: the resolver.Resolver interface comments require ResolveNow to be safe for concurrent use, so we synchronize with a sync.Mutex here.

Since the number and addresses of the server-side instances never change, there is no watch-and-update loop here; we only implement ResolveNow (which is also invoked from the Builder's Build method), and inside ResolveNow we push the set of instance addresses to the ClientConnection (r.cc).
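
With the builder and resolver in place, the client only has to make sure the package containing the init() registration is compiled in and then dial with the static scheme. A minimal sketch of the demo2 greeter_client dial path, assuming builder.go and resolver.go above live in the same main package:

const address = "static:///localhost:50051,localhost:50052,localhost:50053"

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    defer cancel()

    // the "static" scheme selects our StaticBuilder, registered in init()
    conn, err := grpc.DialContext(ctx, address, grpc.WithInsecure(), grpc.WithBlock())
    if err != nil {
        log.Fatalf("did not connect: %v", err)
    }
    defer conn.Close()

    c := pb.NewGreeterClient(conn)
    // ... issue SayHello calls in a loop, as in demo1 ...
    _ = c
}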

Now let's build and run the demo2 client and server:

$cd grpc-client/demo2/greeter_server
$make
$goreman start
22:58:21 demo2-server1 | Starting demo2-server1 on port 5000
22:58:21 demo2-server2 | Starting demo2-server2 on port 5100
22:58:21 demo2-server3 | Starting demo2-server3 on port 5200

$cd grpc-client/demo2/greeter_client
$make
$./demo2-client

After it runs for a while, you'll notice something odd in the server-side logs:

22:57:16 demo2-server1 | 2021/09/11 22:57:16 Received: world-1
22:57:18 demo2-server1 | 2021/09/11 22:57:18 Received: world-2
22:57:20 demo2-server1 | 2021/09/11 22:57:20 Received: world-3
22:57:22 demo2-server1 | 2021/09/11 22:57:22 Received: world-4
22:57:24 demo2-server1 | 2021/09/11 22:57:24 Received: world-5
22:57:26 demo2-server1 | 2021/09/11 22:57:26 Received: world-6
22:57:28 demo2-server1 | 2021/09/11 22:57:28 Received: world-7
22:57:30 demo2-server1 | 2021/09/11 22:57:30 Received: world-8
22:57:32 demo2-server1 | 2021/09/11 22:57:32 Received: world-9

Our instance set clearly contains three addresses, so why does only server1 receive RPC requests while the other two sit idle? It's the client-side load-balancing policy at work. By default, gRPC selects the built-in "pick_first" policy for the client, which simply picks the first instance in the set for every request. With pick_first in effect, gRPC always sends the RPCs to demo2-server1 in this example.

To spread requests across all the servers, we can switch to another built-in policy, round_robin, like this:

// https://github.com/bigwhite/experiments/tree/master/grpc-client/demo2/greeter_client/main.go

conn, err := grpc.DialContext(ctx, address, grpc.WithInsecure(), grpc.WithBlock(), grpc.WithBalancerName("round_robin"))

After rebuilding and rerunning greeter_client, on the server side we can see the RPC requests being delivered to each server instance in turn.
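
As an aside, newer grpc-go releases mark grpc.WithBalancerName as deprecated; the same policy can be selected through the default service config instead. A sketch of the equivalent dial options (the service-config JSON below is the form grpc-go accepts for round_robin; verify it against the grpc-go version in use):

conn, err := grpc.DialContext(ctx, address,
    grpc.WithInsecure(), grpc.WithBlock(),
    grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`))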

2) How resolvers work

Let's use a diagram to lay out how Builder and Resolver fit together:

SchemeResolver in the diagram stands for any resolver implementing a particular scheme. As shown, resolving the set of service instances roughly follows these steps:

    1. SchemeBuilder registers an instance of itself with the resolver package's map;
    2. grpc.Dial/DialContext is called with a target in the scheme's particular form;
    3. after the target is parsed, the Builder for target.Scheme is looked up in the resolver package's map;
    4. the Builder's Build method is called;
    5. Build constructs a SchemeResolver instance;
    6. from then on the SchemeResolver watches for service instance changes and updates the ClientConnection whenever they occur.

3) NacosResolver

In production, with high availability and scalability in mind, we rarely use a fixed number of instances at fixed addresses; far more often the instance set is kept up to date automatically through service registration and discovery. So let's implement a nacos-based NacosResolver that lets the gRPC client adjust automatically as service instances change (see the appendix for a single-node local nacos install), which gives the example a bit more real-world flavor ^_^.

Since the resolver mechanics were covered above, some of the description below is abbreviated.

First, as with StaticResolver, we design the target form. nacos has the concepts of namespace and group, so we shape the target like this:

nacos://[authority]/host:port/namespace/group/serviceName

Concretely, in our greeter_client the address is:

// https://github.com/bigwhite/experiments/tree/master/grpc-client/demo3/greeter_client/main.go

const (
      address = "nacos:///localhost:8848/public/group-a/demo3-service" //no authority
)

Next, the NacosBuilder:

// https://github.com/bigwhite/experiments/tree/master/grpc-client/demo3/greeter_client/builder.go

func (nb *NacosBuilder) Build(target resolver.Target,
    cc resolver.ClientConn,
    opts resolver.BuildOptions) (resolver.Resolver, error) {

    // use info in target to access naming service
    // parse the target.endpoint
    // target.Endpoint - localhost:8848/public/DEFAULT_GROUP/serviceName, the addr of naming service :nacos endpoint
    sl := strings.Split(target.Endpoint, "/")
    nacosAddr := sl[0]
    namespace := sl[1]
    group := sl[2]
    serviceName := sl[3]
    sl1 := strings.Split(nacosAddr, ":")
    host := sl1[0]
    port := sl1[1]
    namingClient, err := initNamingClient(host, port, namespace, group)
    if err != nil {
        return nil, err
    }

    r := &NacosResolver{
        namingClient: namingClient,
        cc:           cc,
        namespace:    namespace,
        group:        group,
        serviceName:  serviceName,
    }

    // initialize the cc's states
    r.ResolveNow(resolver.ResolveNowOptions{})

    // subscribe and watch
    r.watch()
    return r, nil
}

func (nb *NacosBuilder) Scheme() string {
    return "nacos"
}

NacosBuilder's Build method follows the same outline as StaticBuilder's: first we parse the incoming target's Endpoint, i.e. "localhost:8848/public/group-a/demo3-service", and stash the parsed pieces in a newly created NacosResolver for later use. The NacosResolver needs one more thing: a connection to nacos, which initNamingClient creates as a nacos client instance (using the Go SDK provided by nacos).
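
initNamingClient itself isn't shown here. A rough sketch of what it might look like with nacos-sdk-go's v1-style API (the constructor and config types used below — clients.NewNamingClient, constant.ClientConfig, constant.ServerConfig — are assumptions to check against the sdk version pinned in go.mod):

func initNamingClient(host, port, namespace, group string) (naming_client.INamingClient, error) {
    p, err := strconv.ParseUint(port, 10, 64)
    if err != nil {
        return nil, err
    }
    _ = group // group isn't needed to build the client; it is passed per request later

    sc := []constant.ServerConfig{
        {IpAddr: host, Port: p}, // the nacos server address parsed out of the target
    }
    cc := constant.ClientConfig{
        NamespaceId:         namespace, // note: the "public" namespace often maps to an empty id
        TimeoutMs:           5000,
        NotLoadCacheAtStart: true,
    }
    return clients.NewNamingClient(vo.NacosClientParam{
        ClientConfig:  &cc,
        ServerConfigs: sc,
    })
}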

Then we call the NacosResolver's ResolveNow once to fetch the current instance list of demo3-service from nacos and initialize the ClientConn, and finally we call the NacosResolver's watch method to subscribe to and watch for changes to demo3-service's instances. Here is part of the NacosResolver implementation:

// https://github.com/bigwhite/experiments/tree/master/grpc-client/demo3/greeter_client/resolver.go

func (r *NacosResolver) doResolve(opts resolver.ResolveNowOptions) {
    instances, err := r.namingClient.SelectAllInstances(vo.SelectAllInstancesParam{
        ServiceName: r.serviceName,
        GroupName:   r.group,
    })
    if err != nil {
        fmt.Println(err)
        return
    }

    if len(instances) == 0 {
        fmt.Printf("service %s has zero instance\n", r.serviceName)
        return
    }

    // update cc.States
    var addrs []resolver.Address
    for i, inst := range instances {
        if (!inst.Enable) || (inst.Weight == 0) {
            continue
        }

        addrs = append(addrs, resolver.Address{
            Addr:       fmt.Sprintf("%s:%d", inst.Ip, inst.Port),
            ServerName: fmt.Sprintf("instance-%d", i+1),
        })
    }

    if len(addrs) == 0 {
        fmt.Printf("service %s has zero valid instance\n", r.serviceName)
    }

    newState := resolver.State{
        Addresses: addrs,
    }

    r.Lock()
    r.cc.UpdateState(newState)
    r.Unlock()
}

func (r *NacosResolver) ResolveNow(opts resolver.ResolveNowOptions) {
    r.doResolve(opts)
}

func (r *NacosResolver) Close() {
    r.namingClient.Unsubscribe(&vo.SubscribeParam{
        ServiceName: r.serviceName,
        GroupName:   r.group,
    })
}

func (r *NacosResolver) watch() {
    r.namingClient.Subscribe(&vo.SubscribeParam{
        ServiceName: r.serviceName,
        GroupName:   r.group,
        SubscribeCallback: func(services []model.SubscribeService, err error) {
            fmt.Printf("subcallback: %#v\n", services)
            r.doResolve(resolver.ResolveNowOptions{})
        },
    })
}

The key piece here is the doResolve method, called from both ResolveNow and watch. It fetches all instances of demo3-service via SelectAllInstances from the nacos Go SDK, keeps the valid ones (enabled and with non-zero weight), and pushes that set to the ClientConn (r.cc.UpdateState).

In the NacosResolver's watch method we subscribe to demo3-service with the nacos Go SDK's Subscribe method and supply a callback, so the callback fires whenever demo3-service's instances change. Inside the callback we could update the ClientConn from the pushed, up-to-date instance set (services []model.SubscribeService), but here we simply reuse doResolve, i.e. we query nacos for demo3-service's instances once more.
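
If the extra round trip to nacos is undesirable, the callback payload itself carries enough information to update the connection. A sketch of such a helper (assuming model.SubscribeService exposes Ip/Port/Enable/Weight fields as in nacos-sdk-go v1; this is an alternative to calling doResolve, not what the demo code does):

func (r *NacosResolver) updateFromPush(services []model.SubscribeService) {
    var addrs []resolver.Address
    for i, s := range services {
        if !s.Enable || s.Weight == 0 {
            continue // skip disabled or zero-weight instances, as doResolve does
        }
        addrs = append(addrs, resolver.Address{
            Addr:       fmt.Sprintf("%s:%d", s.Ip, s.Port),
            ServerName: fmt.Sprintf("instance-%d", i+1),
        })
    }

    r.Lock()
    r.cc.UpdateState(resolver.State{Addresses: addrs})
    r.Unlock()
}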

Build and run the demo3 greeter_server:

$cd grpc-client/demo3/greeter_server
$make
$goreman start
06:06:02 demo3-server3 | Starting demo3-server3 on port 5200
06:06:02 demo3-server1 | Starting demo3-server1 on port 5000
06:06:02 demo3-server2 | Starting demo3-server2 on port 5100
06:06:02 demo3-server3 | 2021-09-12T06:06:02.913+0800   INFO    nacos_client/nacos_client.go:87 logDir:</tmp/nacos/log/50053>   cacheDir:</tmp/nacos/cache/50053>
06:06:02 demo3-server2 | 2021-09-12T06:06:02.913+0800   INFO    nacos_client/nacos_client.go:87 logDir:</tmp/nacos/log/50052>   cacheDir:</tmp/nacos/cache/50052>
06:06:02 demo3-server1 | 2021-09-12T06:06:02.913+0800   INFO    nacos_client/nacos_client.go:87 logDir:</tmp/nacos/log/50051>   cacheDir:</tmp/nacos/cache/50051>

Once greeter_server is running, the nacos dashboard shows all the instances of demo3-service:


Build and run the demo3 greeter_client:

$cd grpc-client/demo3/greeter_client
$make
$./demo3-client
2021-09-12T06:08:25.551+0800    INFO    nacos_client/nacos_client.go:87 logDir:</Users/tonybai/go/src/github.com/bigwhite/experiments/grpc-client/demo3/greeter_client/log>   cacheDir:</Users/tonybai/go/src/github.com/bigwhite/experiments/grpc-client/demo3/greeter_client/cache>
2021/09/12 06:08:25 Greeting: Hello world-1
2021/09/12 06:08:27 Greeting: Hello world-2
2021/09/12 06:08:29 Greeting: Hello world-3
2021/09/12 06:08:31 Greeting: Hello world-4
2021/09/12 06:08:33 Greeting: Hello world-5
2021/09/12 06:08:35 Greeting: Hello world-6
... ...

Because the round-robin policy is in effect, every server on the greeter_server side (all with weight 1) receives an equal share of the RPC requests:

06:06:36 demo3-server1 | 2021/09/12 06:06:36 Received: world-1
06:06:38 demo3-server3 | 2021/09/12 06:06:38 Received: world-2
06:06:40 demo3-server2 | 2021/09/12 06:06:40 Received: world-3
06:06:42 demo3-server1 | 2021/09/12 06:06:42 Received: world-4
06:06:44 demo3-server3 | 2021/09/12 06:06:44 Received: world-5
06:06:46 demo3-server2 | 2021/09/12 06:06:46 Received: world-6
... ...

At this point we can use the nacos dashboard to adjust demo3-service instance weights or take an instance offline. For example, after taking service instance-2 (port 50052) offline, we see the greeter_client callback fire, and afterwards only instance 1 and instance 3 receive RPC requests on the greeter_server side. Bringing instance-2 back online restores the original behavior.

3. A custom client-side balancer

In the real world, the hosts (VMs/containers) behind the service instances may have different amounts of compute. Giving every instance the same weight of 1 is clearly suboptimal and wastes capacity. But grpc-go's built-in balancers are limited, and when they don't meet our needs we have to write a balancer of our own.

Let's walk through the steps by building a custom Weighted Round Robin (wrr) balancer, modeled on grpc-go's built-in round_robin implementation.

Much like in the resolver package, a balancer is also instantiated through a Builder (a creational pattern), and the balancer.Builder interface looks a lot like resolver.Builder:

// github.com/grpc/grpc-go/balancer/balancer.go 

// Builder creates a balancer.
type Builder interface {
    // Build creates a new balancer with the ClientConn.
    Build(cc ClientConn, opts BuildOptions) Balancer
    // Name returns the name of balancers built by this builder.
    // It will be used to pick balancers (for example in service config).
    Name() string
}

Builder.Build constructs an implementation of the Balancer interface, which is defined as follows:

// github.com/grpc/grpc-go/balancer/balancer.go 

type Balancer interface {
    // UpdateClientConnState is called by gRPC when the state of the ClientConn
    // changes.  If the error returned is ErrBadResolverState, the ClientConn
    // will begin calling ResolveNow on the active name resolver with
    // exponential backoff until a subsequent call to UpdateClientConnState
    // returns a nil error.  Any other errors are currently ignored.
    UpdateClientConnState(ClientConnState) error
    // ResolverError is called by gRPC when the name resolver reports an error.
    ResolverError(error)
    // UpdateSubConnState is called by gRPC when the state of a SubConn
    // changes.
    UpdateSubConnState(SubConn, SubConnState)
    // Close closes the balancer. The balancer is not required to call
    // ClientConn.RemoveSubConn for its existing SubConns.
    Close()
}

As you can see, Balancer is quite a bit more involved than Resolver. The gRPC core developers noticed this too, so they provide a package that greatly simplifies writing a custom balancer: google.golang.org/grpc/balancer/base. gRPC's built-in round_robin balancer is itself built on the base package.

The base package's NewBalancerBuilder quickly returns an implementation of balancer.Builder:

// github.com/grpc/grpc-go/balancer/base/base.go 

// NewBalancerBuilder returns a base balancer builder configured by the provided config.
func NewBalancerBuilder(name string, pb PickerBuilder, config Config) balancer.Builder {
    return &baseBuilder{
        name:          name,
        pickerBuilder: pb,
        config:        config,
    }
}

Notice the pb parameter: its type, PickerBuilder, is a much simpler interface:

// github.com/grpc/grpc-go/balancer/base/base.go 

// PickerBuilder creates balancer.Picker.
type PickerBuilder interface {
    // Build returns a picker that will be used by gRPC to pick a SubConn.
    Build(info PickerBuildInfo) balancer.Picker
}

So all we need to supply is a PickerBuilder implementation plus a balancer.Picker implementation, and Picker is an interface with a single method:

// github.com/grpc/grpc-go/balancer/balancer.go 

type Picker interface {
    Pick(info PickInfo) (PickResult, error)
}

That's several layers of nesting, so here's a diagram to visualize how a balancer is created and used:

To summarize the flow:

  • First, register a balancer Builder named "my_weighted_round_robin": wrrBuilder, constructed via the base package's NewBalancerBuilder (see the registration sketch right after this list);
  • base.NewBalancerBuilder requires a PickerBuilder implementation, so we define a PickerBuilder that returns our own Picker implementation;
  • grpc.Dial is called with WithBalancerName("my_weighted_round_robin"); gRPC uses the balancer name to select our wrrBuilder from the registered balancer builders and has it create the Picker, wrrPicker;
  • when gRPC executes an RPC call such as SayHello, wrrPicker's Pick method is invoked to pick a connection, and the RPC request is sent on that connection.
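
The registration step in the first bullet would look roughly like this (a sketch; the name string is simply whatever you later pass to WithBalancerName, and wrrPickerBuilder is the type shown below):

func init() {
    balancer.Register(base.NewBalancerBuilder(
        "my_weighted_round_robin", // the name used to select this balancer at Dial time
        &wrrPickerBuilder{},       // our PickerBuilder implementation
        base.Config{HealthCheck: true},
    ))
}

On the dial side, the only change from demo2/demo3 is passing grpc.WithBalancerName("my_weighted_round_robin") instead of "round_robin".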

Because weights now come into play, our resolver needs a small change: in doResolve, each service instance's weight is attached to its address via Attributes so that it reaches the ClientConnection:

// https://github.com/bigwhite/experiments/tree/master/grpc-client/demo4/greeter_client/resolver.go

func (r *NacosResolver) doResolve(opts resolver.ResolveNowOptions) {
    instances, err := r.namingClient.SelectAllInstances(vo.SelectAllInstancesParam{
        ServiceName: r.serviceName,
        GroupName:   r.group,
    })
    if err != nil {
        fmt.Println(err)
        return
    }

    if len(instances) == 0 {
        fmt.Printf("service %s has zero instance\n", r.serviceName)
        return
    }

    // update cc.States
    var addrs []resolver.Address
    for i, inst := range instances {
        if (!inst.Enable) || (inst.Weight == 0) {
            continue
        }

        addr := resolver.Address{
            Addr:       fmt.Sprintf("%s:%d", inst.Ip, inst.Port),
            ServerName: fmt.Sprintf("instance-%d", i+1),
        }
        addr.Attributes = addr.Attributes.WithValues("weight", int(inst.Weight)) // carry the instance weight into the ClientConn state
        addrs = append(addrs, addr)
    }

    if len(addrs) == 0 {
        fmt.Printf("service %s has zero valid instance\n", r.serviceName)
    }

    newState := resolver.State{
        Addresses: addrs,
    }

    r.Lock()
    r.cc.UpdateState(newState)
    r.Unlock()
}

Now for the interesting part: the wrrPickerBuilder and wrrPicker implementations in greeter_client:

// https://github.com/bigwhite/experiments/tree/master/grpc-client/demo4/greeter_client/balancer.go

type wrrPickerBuilder struct{}

func (*wrrPickerBuilder) Build(info base.PickerBuildInfo) balancer.Picker {
    if len(info.ReadySCs) == 0 {
        return base.NewErrPicker(balancer.ErrNoSubConnAvailable)
    }

    var scs []balancer.SubConn
    // collect the weight of each ready connection as input to the Picker instance
    for subConn, addr := range info.ReadySCs {
        weight := addr.Address.Attributes.Value("weight").(int)
        if weight <= 0 {
            weight = 1
        }
        for i := 0; i < weight; i++ {
            scs = append(scs, subConn)
        }
    }

    return &wrrPicker{
        subConns: scs,
        // Start at a random index, as the same RR balancer rebuilds a new
        // picker when SubConn states change, and we don't want to apply excess
        // load to the first server in the list.
        next: rand.Intn(len(scs)),
    }
}

type wrrPicker struct {
    // subConns is the snapshot of the roundrobin balancer when this picker was
    // created. The slice is immutable. Each Get() will do a round robin
    // selection from it and return the selected SubConn.
    subConns []balancer.SubConn

    mu   sync.Mutex
    next int
}

// pick one connection
func (p *wrrPicker) Pick(info balancer.PickInfo) (balancer.PickResult, error) {
    p.mu.Lock()
    sc := p.subConns[p.next]
    p.next = (p.next + 1) % len(p.subConns)
    p.mu.Unlock()
    return balancer.PickResult{SubConn: sc}, nil
}

This is a simple Weighted Round Robin. The weighting scheme is as basic as it gets: a connection with weight n is added to the weighted slice n times, so Pick doesn't have to think about weights at all and just walks the slice the way plain round robin does. For example, with instance-1 at weight 5 and the other two at weight 1, the slice holds 7 entries and instance-1 receives 5 out of every 7 requests.

After starting the demo4 greeter_server and then changing instance-1's weight to 5 in nacos, we see output like this:

$goreman start
09:20:18 demo4-server3 | Starting demo4-server3 on port 5200
09:20:18 demo4-server2 | Starting demo4-server2 on port 5100
09:20:18 demo4-server1 | Starting demo4-server1 on port 5000
09:20:18 demo4-server2 | 2021-09-12T09:20:18.633+0800   INFO    nacos_client/nacos_client.go:87 logDir:</tmp/nacos/log/50052>   cacheDir:</tmp/nacos/cache/50052>
09:20:18 demo4-server1 | 2021-09-12T09:20:18.633+0800   INFO    nacos_client/nacos_client.go:87 logDir:</tmp/nacos/log/50051>   cacheDir:</tmp/nacos/cache/50051>
09:20:18 demo4-server3 | 2021-09-12T09:20:18.633+0800   INFO    nacos_client/nacos_client.go:87 logDir:</tmp/nacos/log/50053>   cacheDir:</tmp/nacos/cache/50053>
09:20:23 demo4-server2 | 2021/09/12 09:20:23 Received: world-1
09:20:25 demo4-server3 | 2021/09/12 09:20:25 Received: world-2
09:20:27 demo4-server1 | 2021/09/12 09:20:27 Received: world-3
09:20:29 demo4-server2 | 2021/09/12 09:20:29 Received: world-4
09:20:31 demo4-server3 | 2021/09/12 09:20:31 Received: world-5
09:20:33 demo4-server1 | 2021/09/12 09:20:33 Received: world-6
09:20:35 demo4-server2 | 2021/09/12 09:20:35 Received: world-7
09:20:37 demo4-server3 | 2021/09/12 09:20:37 Received: world-8
09:20:39 demo4-server1 | 2021/09/12 09:20:39 Received: world-9
09:20:41 demo4-server2 | 2021/09/12 09:20:41 Received: world-10
09:20:43 demo4-server1 | 2021/09/12 09:20:43 Received: world-11
09:20:45 demo4-server2 | 2021/09/12 09:20:45 Received: world-12
09:20:47 demo4-server3 | 2021/09/12 09:20:47 Received: world-13
// the weight was changed to 5 at this point
09:20:49 demo4-server1 | 2021/09/12 09:20:49 Received: world-14
09:20:51 demo4-server1 | 2021/09/12 09:20:51 Received: world-15
09:20:53 demo4-server1 | 2021/09/12 09:20:53 Received: world-16
09:20:55 demo4-server1 | 2021/09/12 09:20:55 Received: world-17
09:20:57 demo4-server1 | 2021/09/12 09:20:57 Received: world-18
09:20:59 demo4-server2 | 2021/09/12 09:20:59 Received: world-19
09:21:01 demo4-server3 | 2021/09/12 09:21:01 Received: world-20
09:21:03 demo4-server1 | 2021/09/12 09:21:03 Received: world-21

Note: every time the nacos service instances change, the balancer rebuilds a new Picker instance, and subsequent Picks of a connection are made against the new Picker's connection set.

4. Summary

In this post we looked at gRPC's four communication patterns and focused on what the gRPC client side has to think about under the most common one, simple RPC (unary RPC), including:

  • how to implement one-to-one helloworld communication
  • how to implement a custom Resolver so a client can talk to a static set of service instances
  • how to implement a custom Resolver so a client can talk to a dynamic set of service instances
  • how to write a custom client-side Balancer

The code in this post is for illustration only and doesn't do much error handling.

All of the code in this post can be downloaded from: https://github.com/bigwhite/experiments/tree/master/grpc-client

5. References

  • gRPC Name Resolution – https://github.com/grpc/grpc/blob/master/doc/naming.md
  • Load Balancing in gRPC – https://github.com/grpc/grpc/blob/master/doc/load-balancing.md
  • Service discovery and load balancing with gRPC (basics) – https://pandaychen.github.io/2019/07/11/GRPC-SERVICE-DISCOVERY/
  • Comparing gRPC services with HTTP APIs – https://docs.microsoft.com/zh-cn/aspnet/core/grpc/comparison

6. Appendix

1) json vs. protobuf codec benchmark results

The benchmark source is here: https://github.com/bigwhite/experiments/tree/master/grpc-client/grpc-vs-httpjson/codec

We compared protobuf against the Go standard library json codec, ByteDance's open-source sonic json package, and minio's open-source high-performance simdjson-go parser; the results:
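
For reference, a minimal sketch of what one standard-library case in such a benchmark might look like (the struct and payload here are made up for illustration and are not the ones in the linked repo):

package codec

import (
    "encoding/json"
    "testing"
)

type person struct {
    Name    string   `json:"name"`
    Age     int      `json:"age"`
    Hobbies []string `json:"hobbies"`
}

var sample = []byte(`{"name":"tony","age":20,"hobbies":["golf","reading"]}`)

func BenchmarkJsonUnmarshal(b *testing.B) {
    b.ReportAllocs()
    for i := 0; i < b.N; i++ {
        var p person
        if err := json.Unmarshal(sample, &p); err != nil {
            b.Fatal(err)
        }
    }
}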

$go test -bench .
goos: darwin
goarch: amd64
pkg: github.com/bigwhite/codec
cpu: Intel(R) Core(TM) i5-8257U CPU @ 1.40GHz
BenchmarkSimdJsonUnmarshal-8           43304         28177 ns/op      113209 B/op         19 allocs/op
BenchmarkJsonUnmarshal-8              153214          7187 ns/op        1024 B/op          6 allocs/op
BenchmarkJsonMarshal-8                601590          2057 ns/op        2688 B/op          2 allocs/op
BenchmarkSonicJsonUnmarshal-8        1394211           861.1 ns/op      2342 B/op          2 allocs/op
BenchmarkSonicJsonMarshal-8          1592898           765.2 ns/op      2239 B/op          4 allocs/op
BenchmarkProtobufUnmarshal-8         3823441           317.0 ns/op      1208 B/op          3 allocs/op
BenchmarkProtobufMarshal-8           4461583           274.8 ns/op      1152 B/op          1 allocs/op
PASS
ok      github.com/bigwhite/codec   10.901s

The benchmark confirms that protobuf encoding/decoding is far faster than json. One result did surprise me though: simdjson-go, which bills itself as high performance, posts numbers that are bad to the point of absurdity. Does anyone know why? Are the SIMD instructions not kicking in? ByteDance's sonic really does perform well: only a 2-3x gap to protobuf, not an order of magnitude.

2) gRPC(insecure) vs. json over http

The benchmark source is here: https://github.com/bigwhite/experiments/tree/master/grpc-client/grpc-vs-httpjson/protocol

Load-testing the gRPC server with ghz gives:

$ghz --insecure -n 100000 -c 500 --proto publish.proto --call proto.PublishService.Publish -D data.json localhost:10000

Summary:
  Count:    100000
  Total:    1.67 s
  Slowest:    48.49 ms
  Fastest:    0.13 ms
  Average:    6.34 ms
  Requests/sec:    59924.34

Response time histogram:
  0.133  [1]     |
  4.968  [40143] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  9.803  [47335] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  14.639 [11306] |∎∎∎∎∎∎∎∎∎∎
  19.474 [510]   |
  24.309 [84]    |
  29.144 [89]    |
  33.980 [29]    |
  38.815 [3]     |
  43.650 [8]     |
  48.485 [492]   |

Latency distribution:
  10 % in 3.07 ms
  25 % in 4.12 ms
  50 % in 5.49 ms
  75 % in 7.94 ms
  90 % in 10.24 ms
  95 % in 11.28 ms
  99 % in 15.52 ms

Status code distribution:
  [OK]   100000 responses

Load-testing the http server built with fasthttp and sonic using hey gives:

$hey -n 100000 -c 500  -m POST -D ./data.json http://127.0.0.1:10001/

Summary:
  Total:    2.0012 secs
  Slowest:    0.1028 secs
  Fastest:    0.0001 secs
  Average:    0.0038 secs
  Requests/sec:    49969.9234

Response time histogram:
  0.000 [1]     |
  0.010 [96287] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.021 [2639]  |■
  0.031 [261]   |
  0.041 [136]   |
  0.051 [146]   |
  0.062 [128]   |
  0.072 [43]    |
  0.082 [24]    |
  0.093 [10]    |
  0.103 [4]     |

Latency distribution:
  10% in 0.0013 secs
  25% in 0.0020 secs
  50% in 0.0031 secs
  75% in 0.0040 secs
  90% in 0.0062 secs
  95% in 0.0089 secs
  99% in 0.0179 secs

Details (average, fastest, slowest):
  DNS+dialup:    0.0000 secs, 0.0001 secs, 0.1028 secs
  DNS-lookup:    0.0000 secs, 0.0000 secs, 0.0000 secs
  req write:    0.0000 secs, 0.0000 secs, 0.0202 secs
  resp wait:    0.0031 secs, 0.0000 secs, 0.0972 secs
  resp read:    0.0005 secs, 0.0000 secs, 0.0575 secs

Status code distribution:
  [200]    99679 responses

As we can see, gRPC (Requests/sec: 59924.34) comes out roughly 20% ahead of the HTTP API (Requests/sec: 49969.9234).

3) Installing nacos with docker

Installing the single-node containerized nacos goes like this:

$git clone https://github.com/nacos-group/nacos-docker.git
$cd nacos-docker
$docker-compose -f example/standalone-derby.yaml up

Once the nacos containers are up, open http://localhost:8848/nacos in a browser to reach the nacos dashboard login page; log in with nacos/nacos to enter the nacos web console.



A bug in minikube v1.20.0


Permanent link to this post – https://tonybai.com/2021/05/14/a-bug-of-minikube-1-20

I've recently been looking into dapr (a distributed application runtime). It's a simple yet excellent idea; big companies such as Alibaba and Tencent have strong engineers studying the project and have even landed parts of their applications on dapr. I'll cover dapr in detail in a separate post.

dapr can be deployed not only on k8s but also locally, and it integrates with services from the well-known public cloud providers such as AWS, Azure, and Alibaba Cloud. To try out dapr's support for cloud-native applications, I chose to deploy it into k8s, using minikube to build a local k8s development environment. This post is about a problem I ran into while installing dapr into minikube.

1. Installing minikube

Kubernetes released version 1.21 in April, but the latest minikube release is still 1.20.

minikube is a local k8s development environment project maintained by the k8s project itself. It is compatible with the k8s API, so we can quickly stand up a minikube cluster for learning and experimenting with k8s. The minikube site has thorough documentation on installing, using, and maintaining it.

Here I'm installing minikube v1.20 on an Ubuntu 18.04 Tencent Cloud host (1 vCPU, 2 GB RAM). minikube is a single binary, so we first download it:

# curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 60.9M  100 60.9M    0     0  7764k      0  0:00:08  0:00:08 --:--:-- 11.5M
# install minikube-linux-amd64 /usr/local/bin/minikube

Verify the download:

# minikube version
minikube version: v1.20.0
commit: c61663e942ec43b20e8e70839dcca52e44cd85ae

Next we use minikube to start a k8s cluster as the local development environment. minikube's default minimum requirement is 2 CPU cores, but my VM has only one, so we need to pass a few flags to let minikube bring up a cluster on a single core. minikube also pulls some control-plane container images from gcr.io, which is hard to reach from inside China, so to let that proceed we also tell minikube which gcr.io mirror to pull images from:

# minikube start --extra-config=kubeadm.ignore-preflight-errors=NumCPU --force --cpus 1 --memory=1024mb --image-mirror-country='cn'
  minikube v1.20.0 on Ubuntu 18.04 (amd64)
  minikube skips various validations when --force is supplied; this may lead to unexpected behavior
  Automatically selected the docker driver. Other choices: ssh, none
  Requested cpu count 1 is less than the minimum allowed of 2
   has less than 2 CPUs available, but Kubernetes requires at least 2 to be available

  Your cgroup does not allow setting memory.
    ▪ More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities

  Requested memory allocation 1024MiB is less than the usable minimum of 1800MB
  Requested memory allocation (1024MB) is less than the recommended minimum 1900MB. Deployments may fail.

  The requested memory allocation of 1024MiB does not leave room for system overhead (total system memory: 1833MiB). You may face stability issues.
  Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1833mb'

  The "docker" driver should not be used with root privileges.
  If you are running minikube within a VM, consider using --driver=none:

https://minikube.sigs.k8s.io/docs/reference/drivers/none/

  Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
  Starting control plane node minikube in cluster minikube
  Pulling base image ...
    > registry.cn-hangzhou.aliyun...: 20.48 MiB / 358.10 MiB  5.72% 2.89 MiB p/
> registry.cn-hangzhou.aliyun...: 358.10 MiB / 358.10 MiB  100.00% 3.50 MiB
    > registry.cn-hangzhou.aliyun...: 358.10 MiB / 358.10 MiB  100.00% 3.50 MiB
    > registry.cn-hangzhou.aliyun...: 358.10 MiB / 358.10 MiB  100.00% 3.50 MiB
    > registry.cn-hangzhou.aliyun...: 358.10 MiB / 358.10 MiB  100.00% 6.83 MiB
  Creating docker container (CPUs=1, Memory=1024MB) ...
  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
    ▪ kubeadm.ignore-preflight-errors=NumCPU
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
  Verifying Kubernetes components...
    ▪ Using image registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5 (global image repository)
  Enabled addons: default-storageclass, storage-provisioner

  /usr/local/bin/kubectl is version 1.17.9, which may have incompatibilites with Kubernetes 1.20.2.
    ▪ Want kubectl v1.20.2? Try 'minikube kubectl -- get pods -A'
  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Check the state of the cluster we just started:

# minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

It looks like minikube successfully started a k8s cluster.

2. The storage-provisioner pod fails to pull its image

Later, when using helm to install redis as the dapr state store component, I found the installed redis pods stuck in this state:

# kubectl get pod
NAME               READY   STATUS    RESTARTS   AGE
redis-master-0     0/1     Pending   0          7m48s
redis-replicas-0   0/1     Pending   0          7m48s

Looking at the redis-master-0 pod in detail with kubectl describe:

# kubectl describe pod redis-master-0
Name:           redis-master-0
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/component=master
                app.kubernetes.io/instance=redis
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=redis
                controller-revision-hash=redis-master-694655df77
                helm.sh/chart=redis-14.1.1
                statefulset.kubernetes.io/pod-name=redis-master-0
Annotations:    checksum/configmap: 0898a3adcb5d0cdd6cc60108d941d105cc240250ba6c7f84ed8b5337f1edd470
                checksum/health: 1b44d34c6c39698be89b2127b9fcec4395a221cff84aeab4fbd93ff4a636c210
                checksum/scripts: 465f195e1bffa9700282b017abc50056099e107d7ce8927fb2b97eb348907484
                checksum/secret: cd7ff82a84f998f50b11463c299c1200585036defc7cbbd9c141cc992ad80963
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/redis-master
Containers:
  redis:
    Image:      docker.io/bitnami/redis:6.2.3-debian-10-r0
    Port:       6379/TCP
    Host Port:  0/TCP
    Command:
      /bin/bash
    Args:
      -c
      /opt/bitnami/scripts/start-scripts/start-master.sh
    Liveness:   exec [sh -c /health/ping_liveness_local.sh 5] delay=5s timeout=6s period=5s #success=1 #failure=5
    Readiness:  exec [sh -c /health/ping_readiness_local.sh 1] delay=5s timeout=2s period=5s #success=1 #failure=5
    Environment:
      BITNAMI_DEBUG:           false
      REDIS_REPLICATION_MODE:  master
      ALLOW_EMPTY_PASSWORD:    no
      REDIS_PASSWORD:          <set to the key 'redis-password' in secret 'redis'>  Optional: false
      REDIS_TLS_ENABLED:       no
      REDIS_PORT:              6379
    Mounts:
      /data from redis-data (rw)
      /health from health (rw)
      /opt/bitnami/redis/etc/ from redis-tmp-conf (rw)
      /opt/bitnami/redis/mounted-etc from config (rw)
      /opt/bitnami/scripts/start-scripts from start-scripts (rw)
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from redis-token-rtxk2 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  redis-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  redis-data-redis-master-0
    ReadOnly:   false
  start-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      redis-scripts
    Optional:  false
  health:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      redis-health
    Optional:  false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      redis-configuration
    Optional:  false
  redis-tmp-conf:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  redis-token-rtxk2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  redis-token-rtxk2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  18s (x6 over 5m7s)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.

We find that the pod's PersistentVolumeClaims are unsatisfied: they aren't bound to a suitable PV (persistent volume). The PVCs themselves are also pending:

# kubectl get pvc
NAME                          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-data-redis-master-0     Pending                                      standard       35m
redis-data-redis-replicas-0   Pending                                      standard       35m

Looking at one of the PVCs in detail:

# kubectl describe  pvc redis-data-redis-master-0
Name:          redis-data-redis-master-0
Namespace:     default
StorageClass:  standard
Status:        Pending
Volume:
Labels:        app.kubernetes.io/component=master
               app.kubernetes.io/instance=redis
               app.kubernetes.io/name=redis
Annotations:   volume.beta.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    redis-master-0
Events:
  Type    Reason                Age                  From                         Message
  ----    ------                ----                 ----                         -------
  Normal  ExternalProvisioning  55s (x143 over 35m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator

The PVC is waiting for a volume to be bound, and the cluster currently has no PV resources at all. So where is the problem?

Back to minikube itself. According to the minikube docs, the component responsible for automatically creating HostPath PVs is the storage-provisioner addon:

Figure: minikube addon enablement status
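
(The dashboard screenshot isn't reproduced here; the addon status can also be checked from the CLI, where storage-provisioner should show up as enabled:)

# minikube addons list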

The storage-provisioner addon shows as enabled, so why didn't it provide the PV that redis needs? While I was at it, I checked how the cluster's control-plane components were doing:

# kubectl get po -n kube-system
NAMESPACE     NAME                                    READY   STATUS             RESTARTS   AGE
kube-system   coredns-54d67798b7-n6vw4                1/1     Running            0          20h
kube-system   etcd-minikube                           1/1     Running            0          20h
kube-system   kube-apiserver-minikube                 1/1     Running            0          20h
kube-system   kube-controller-manager-minikube        1/1     Running            0          20h
kube-system   kube-proxy-rtvvj                        1/1     Running            0          20h
kube-system   kube-scheduler-minikube                 1/1     Running            0          20h
kube-system   storage-provisioner                     0/1     ImagePullBackOff   0          20h

To my surprise, the storage-provisioner pod is in ImagePullBackOff: its image failed to pull!

3. Finding the root cause

Recall this line near the end of the minikube start output:

Using image registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5 (global image repository)

In other words, pulling storage-provisioner:v5 from registry.cn-hangzhou.aliyuncs.com fails! I ran the pull by hand locally:

# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5

Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

It really cannot be pulled!

So what's going on? From the error message, either the image doesn't exist or docker login was denied. Since registry.cn-hangzhou.aliyuncs.com is a public registry, login isn't the issue, which leaves one explanation: the image doesn't exist!

So I searched minikube's issues for anything related to registry.cn-hangzhou.aliyuncs.com as a mirror, and sure enough I found a clue.

In the PR https://github.com/kubernetes/minikube/pull/10770, someone points out that when --image-mirror-country is set to cn, minikube uses the wrong storage-provisioner image: the address should not be registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5 but rather registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5.

I tried registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5 locally, and it does pull successfully:

# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
v5: Pulling from google_containers/storage-provisioner
Digest: sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
Status: Image is up to date for registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5

4. Fixing the problem

So we've found the root cause: with --image-mirror-country=cn, minikube uses the wrong storage-provisioner image. How do we work around it?

Let's check the storage-provisioner pod's imagePullPolicy:

# kubectl get pod storage-provisioner  -n kube-system -o yaml
... ...
spec:
  containers:
  - command:
    - /storage-provisioner
    image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5
    imagePullPolicy: IfNotPresent
    name: storage-provisioner

storage-provisioner's imagePullPolicy is IfNotPresent, which means that if a storage-provisioner:v5 image is already present locally, minikube won't try to pull it from the remote registry again. So we can pull storage-provisioner:v5 locally first and re-tag it as registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5.

Let's do that:

# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5 registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5

Once the image is present, re-enabling the addon with the minikube addons subcommand restarts the storage-provisioner pod and brings it back to a normal state:

# minikube addons enable storage-provisioner

    ▪ Using image registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5 (global image repository)
  The 'storage-provisioner' addon is enabled

# kubectl get po -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-54d67798b7-n6vw4           1/1     Running   0          25h
etcd-minikube                      1/1     Running   0          25h
kube-apiserver-minikube            1/1     Running   0          25h
kube-controller-manager-minikube   1/1     Running   0          25h
kube-proxy-rtvvj                   1/1     Running   0          25h
kube-scheduler-minikube            1/1     Running   0          25h
storage-provisioner                1/1     Running   0          69m

Once storage-provisioner is healthy again, the previously installed dapr state component, redis, recovers on its own:

# kubectl get pod
NAME               READY   STATUS    RESTARTS   AGE
redis-master-0     1/1     Running   0          18h
redis-replicas-0   1/1     Running   1          18h
redis-replicas-1   1/1     Running   0          16h
redis-replicas-2   1/1     Running   0          16h
