
A Bug in minikube v1.20.0


Permalink to this post – https://tonybai.com/2021/05/14/a-bug-of-minikube-1-20

I've recently been digging into dapr (Distributed Application Runtime). It is a simple yet brilliant idea: engineers at big companies such as Alibaba and Tencent are studying the project, and some have even used dapr in real applications. I'll cover dapr in detail in a separate post.

dapr supports not only deployment on k8s but also self-hosted deployment, and it can integrate with services from well-known public cloud vendors such as AWS, Azure and Alibaba Cloud. To experience dapr's support for cloud-native applications, I chose to deploy it on k8s, using minikube to build a local k8s development environment. This post is about the problem I ran into while installing dapr on minikube.

1. Installing minikube

Kubernetes released its latest version, 1.21, in April, but the latest minikube release is still v1.20

minikube is a local k8s development environment project maintained by the Kubernetes project itself. It is compatible with the k8s API, so we can quickly stand up a minikube cluster for k8s learning and practice. The minikube website has thorough documentation on installing, using and maintaining it.

Here I'm installing minikube v1.20 on an Ubuntu 18.04 Tencent Cloud host (1 vCPU, 2 GB RAM). minikube ships as a single binary, so we first download it:

# curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 60.9M  100 60.9M    0     0  7764k      0  0:00:08  0:00:08 --:--:-- 11.5M
# install minikube-linux-amd64 /usr/local/bin/minikube

Verify the download:

# minikube version
minikube version: v1.20.0
commit: c61663e942ec43b20e8e70839dcca52e44cd85ae

Next we use minikube to start a k8s cluster as our local development environment. minikube's default minimum requirement is 2 CPU cores, while my VM has only one, so we need to pass a few command-line flags for it to bring up a cluster on a single core anyway. In addition, minikube pulls some control-plane container images from gcr.io, which is blocked in mainland China, so we also need to tell minikube which gcr.io mirror to pull images from:

# minikube start --extra-config=kubeadm.ignore-preflight-errors=NumCPU --force --cpus 1 --memory=1024mb --image-mirror-country='cn'
  minikube v1.20.0 on Ubuntu 18.04 (amd64)
  minikube skips various validations when --force is supplied; this may lead to unexpected behavior
  Automatically selected the docker driver. Other choices: ssh, none
  Requested cpu count 1 is less than the minimum allowed of 2
   has less than 2 CPUs available, but Kubernetes requires at least 2 to be available

  Your cgroup does not allow setting memory.
    ▪ More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities

  Requested memory allocation 1024MiB is less than the usable minimum of 1800MB
  Requested memory allocation (1024MB) is less than the recommended minimum 1900MB. Deployments may fail.

  The requested memory allocation of 1024MiB does not leave room for system overhead (total system memory: 1833MiB). You may face stability issues.
  Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1833mb'

  The "docker" driver should not be used with root privileges.
  If you are running minikube within a VM, consider using --driver=none:

https://minikube.sigs.k8s.io/docs/reference/drivers/none/

  Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
  Starting control plane node minikube in cluster minikube
  Pulling base image ...
    > registry.cn-hangzhou.aliyun...: 20.48 MiB / 358.10 MiB  5.72% 2.89 MiB p/
    > registry.cn-hangzhou.aliyun...: 358.10 MiB / 358.10 MiB  100.00% 6.83 MiB
  Creating docker container (CPUs=1, Memory=1024MB) ...
  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
    ▪ kubeadm.ignore-preflight-errors=NumCPU
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
  Verifying Kubernetes components...
    ▪ Using image registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5 (global image repository)
  Enabled addons: default-storageclass, storage-provisioner

  /usr/local/bin/kubectl is version 1.17.9, which may have incompatibilites with Kubernetes 1.20.2.
    ▪ Want kubectl v1.20.2? Try 'minikube kubectl -- get pods -A'
  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Check the status of the cluster we just started:

# minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

It looks like minikube started a k8s cluster successfully.

2. The storage-provisioner pod is stuck in ImagePullBackOff

Later, while installing redis via helm as the dapr state store component, I found the installed redis pods in the following state:

# kubectl get pod
NAME               READY   STATUS    RESTARTS   AGE
redis-master-0     0/1     Pending   0          7m48s
redis-replicas-0   0/1     Pending   0          7m48s

Inspect the redis-master-0 pod in detail with kubectl describe:

# kubectl describe pod redis-master-0
Name:           redis-master-0
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/component=master
                app.kubernetes.io/instance=redis
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=redis
                controller-revision-hash=redis-master-694655df77
                helm.sh/chart=redis-14.1.1
                statefulset.kubernetes.io/pod-name=redis-master-0
Annotations:    checksum/configmap: 0898a3adcb5d0cdd6cc60108d941d105cc240250ba6c7f84ed8b5337f1edd470
                checksum/health: 1b44d34c6c39698be89b2127b9fcec4395a221cff84aeab4fbd93ff4a636c210
                checksum/scripts: 465f195e1bffa9700282b017abc50056099e107d7ce8927fb2b97eb348907484
                checksum/secret: cd7ff82a84f998f50b11463c299c1200585036defc7cbbd9c141cc992ad80963
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/redis-master
Containers:
  redis:
    Image:      docker.io/bitnami/redis:6.2.3-debian-10-r0
    Port:       6379/TCP
    Host Port:  0/TCP
    Command:
      /bin/bash
    Args:
      -c
      /opt/bitnami/scripts/start-scripts/start-master.sh
    Liveness:   exec [sh -c /health/ping_liveness_local.sh 5] delay=5s timeout=6s period=5s #success=1 #failure=5
    Readiness:  exec [sh -c /health/ping_readiness_local.sh 1] delay=5s timeout=2s period=5s #success=1 #failure=5
    Environment:
      BITNAMI_DEBUG:           false
      REDIS_REPLICATION_MODE:  master
      ALLOW_EMPTY_PASSWORD:    no
      REDIS_PASSWORD:          <set to the key 'redis-password' in secret 'redis'>  Optional: false
      REDIS_TLS_ENABLED:       no
      REDIS_PORT:              6379
    Mounts:
      /data from redis-data (rw)
      /health from health (rw)
      /opt/bitnami/redis/etc/ from redis-tmp-conf (rw)
      /opt/bitnami/redis/mounted-etc from config (rw)
      /opt/bitnami/scripts/start-scripts from start-scripts (rw)
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from redis-token-rtxk2 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  redis-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  redis-data-redis-master-0
    ReadOnly:   false
  start-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      redis-scripts
    Optional:  false
  health:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      redis-health
    Optional:  false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      redis-configuration
    Optional:  false
  redis-tmp-conf:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  redis-token-rtxk2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  redis-token-rtxk2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  18s (x6 over 5m7s)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.

So the pod's PersistentVolumeClaim has not been satisfied: it is not bound to a suitable PV (persistent volume). The PVCs themselves are Pending too:

# kubectl get pvc
NAME                          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-data-redis-master-0     Pending                                      standard       35m
redis-data-redis-replicas-0   Pending                                      standard       35m

Describe one of the PVCs:

# kubectl describe  pvc redis-data-redis-master-0
Name:          redis-data-redis-master-0
Namespace:     default
StorageClass:  standard
Status:        Pending
Volume:
Labels:        app.kubernetes.io/component=master
               app.kubernetes.io/instance=redis
               app.kubernetes.io/name=redis
Annotations:   volume.beta.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    redis-master-0
Events:
  Type    Reason                Age                  From                         Message
  ----    ------                ----                 ----                         -------
  Normal  ExternalProvisioning  55s (x143 over 35m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator

The PVC is waiting for a volume to bind to, yet the cluster currently has no PV resources at all (PVs are cluster-scoped rather than namespaced). So where exactly is the problem?

Back to minikube itself. According to the minikube documentation, the component responsible for automatically creating HostPath-type PVs is the storage-provisioner addon:

Figure: minikube addons status

The storage-provisioner addon shows as enabled, so why didn't it provision the PV that redis needs? While at it, I checked how the cluster's control-plane components were doing:

# kubectl get po -n kube-system
NAME                               READY   STATUS             RESTARTS   AGE
coredns-54d67798b7-n6vw4           1/1     Running            0          20h
etcd-minikube                      1/1     Running            0          20h
kube-apiserver-minikube            1/1     Running            0          20h
kube-controller-manager-minikube   1/1     Running            0          20h
kube-proxy-rtvvj                   1/1     Running            0          20h
kube-scheduler-minikube            1/1     Running            0          20h
storage-provisioner                0/1     ImagePullBackOff   0          20h

To our surprise, the storage-provisioner pod is in ImagePullBackOff state, meaning its image failed to pull!

3. Finding the root cause

Remember the line near the end of the minikube start output:

Using image registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5 (global image repository)

In other words, pulling storage-provisioner:v5 from registry.cn-hangzhou.aliyuncs.com fails! I ran the pull manually on the host:

# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5

Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

It really cannot be pulled!

So what went wrong? Judging by the error, either the image does not exist or 'docker login' is required and denied. Since registry.cn-hangzhou.aliyuncs.com is a public registry, login is not the issue, which leaves a single explanation: the image does not exist!

I then searched minikube's issues for registry.cn-hangzhou.aliyuncs.com being used as a mirror, and found a lead.

In the PR https://github.com/kubernetes/minikube/pull/10770, someone mentions that when --image-mirror-country is set to cn, minikube uses a wrong storage-provisioner image: its path should not be registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5 but registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5.

I tried registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5 locally, and it does pull successfully:

# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
v5: Pulling from google_containers/storage-provisioner
Digest: sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
Status: Image is up to date for registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5

4. Fixing the problem

So the root cause is confirmed: when --image-mirror-country is cn, minikube uses the wrong storage-provisioner image path. How do we work around it?

Let's check the storage-provisioner pod's imagePullPolicy:

# kubectl get pod storage-provisioner  -n kube-system -o yaml
... ...
spec:
  containers:
  - command:
    - /storage-provisioner
    image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5
    imagePullPolicy: IfNotPresent
    name: storage-provisioner

The imagePullPolicy is IfNotPresent, which means that as long as the storage-provisioner:v5 image is already present locally, it will not be pulled from the remote registry again. So we can re-tag the correct image we pulled above as registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5, the (wrong) path the pod references.

Let's do that:

# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5 registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5

Once the image is in place, re-enabling the addon via the minikube addons subcommand restarts the storage-provisioner pod and brings it back to a normal state (note: I started minikube with the docker driver; if the re-tagged image is not visible inside the minikube node in your setup, you may first need to load it into the node, e.g. with minikube cache add):

# minikube addons enable storage-provisioner

    ▪ Using image registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5 (global image repository)
  The 'storage-provisioner' addon is enabled

# kubectl get po -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-54d67798b7-n6vw4           1/1     Running   0          25h
etcd-minikube                      1/1     Running   0          25h
kube-apiserver-minikube            1/1     Running   0          25h
kube-controller-manager-minikube   1/1     Running   0          25h
kube-proxy-rtvvj                   1/1     Running   0          25h
kube-scheduler-minikube            1/1     Running   0          25h
storage-provisioner                1/1     Running   0          69m

With storage-provisioner back to normal, the previously installed dapr state component, redis, recovered automatically as well:

# kubectl get pod
NAME               READY   STATUS    RESTARTS   AGE
redis-master-0     1/1     Running   0          18h
redis-replicas-0   1/1     Running   1          18h
redis-replicas-1   1/1     Running   0          16h
redis-replicas-2   1/1     Running   0          16h


Mutually Exclusive Access to Shared Resources with a Redis-Cluster-Based Distributed Lock


The technique in this post also has a project background. In a previous project we needed to process expired keys in a Redis cluster. This is a distributed system, and for high availability the expiry-handling service runs as multiple replicas, so we required that only one replica at a time processes the expired keys; if that replica dies, the system elects another replica to take over.

Obviously this involves leader election. Whenever leader election comes up, many people think of sophisticated distributed consistency/consensus algorithms such as raft or paxos. They would certainly work, but they add a lot of complexity to the system. Is there a simpler, more direct option? We already have a Redis cluster: can we use the cluster's own capabilities to get this done?

Redis does not provide a leader-election algorithm out of the box, but its author has published a distributed-lock algorithm, which means we can build a simple leader-election scheme on top of a distributed lock, as the figure below shows:

Figure: leader election built on a Redis distributed lock

As the figure shows, only the service instance holding the lock may operate on the data: the lock holder plays the leader role, while the other instances keep trying to acquire the lock, playing the follower role.

1. A distributed lock on a single Redis node

On the official Redis page describing the distributed-lock algorithm, recommended implementations are listed for many languages; for Go the sole recommendation is redsync. In this short post we'll use redsync to implement leader election on top of the Redis distributed lock.

In the Go ecosystem, the mainstream client libraries for connecting to and operating Redis are go-redis and redigo. The latest redsync supports both as its underlying Redis driver. I use go-redis the most in daily work, so that's what we'll use here.

The example on redsync's GitHub page is a distributed lock on a single Redis node. Let's likewise start with a single node and see how to implement our business logic on top of the Redis distributed lock:

// github.com/bigwhite/experiments/blob/master/redis-cluster-distributed-lock/standalone/main.go

     1  package main
     2
     3  import (
     4      "context"
     5      "log"
     6      "os"
     7      "os/signal"
     8      "sync"
     9      "sync/atomic"
    10      "syscall"
    11      "time"
    12
    13      goredislib "github.com/go-redis/redis/v8"
    14      "github.com/go-redsync/redsync/v4"
    15      "github.com/go-redsync/redsync/v4/redis/goredis/v8"
    16  )
    17
    18  const (
    19      redisKeyExpiredEventSubj = `__keyevent@0__:expired`
    20  )
    21
    22  var (
    23      isLeader  int64
    24      m         atomic.Value
    25      id        string
    26      mutexName = "the-year-of-the-ox-2021"
    27  )
    28
    29  func init() {
    30      if len(os.Args) < 2 {
    31          panic("args number is not correct")
    32      }
    33      id = os.Args[1]
    34  }
    35
    36  func tryToBecomeLeader() (bool, func() (bool, error), error) {
    37      client := goredislib.NewClient(&goredislib.Options{
    38          Addr: "localhost:6379",
    39      })
    40      pool := goredis.NewPool(client)
    41      rs := redsync.New(pool)
    42
    43      mutex := rs.NewMutex(mutexName)
    44
    45      if err := mutex.Lock(); err != nil {
    46          client.Close()
    47          return false, nil, err
    48      }
    49
    50      return true, func() (bool, error) {
    51          return mutex.Unlock()
    52      }, nil
    53  }
    54
    55  func doElectionAndMaintainTheStatus(quit <-chan struct{}) {
    56      ticker := time.NewTicker(time.Second * 5)
    57      var err error
    58      var ok bool
    59      var cf func() (bool, error)
    60
    61      c := goredislib.NewClient(&goredislib.Options{
    62          Addr: "localhost:6379",
    63      })
    64      defer c.Close()
    65      for {
    66          select {
    67          case <-ticker.C:
    68              if atomic.LoadInt64(&isLeader) == 0 {
    69                  ok, cf, err = tryToBecomeLeader()
    70                  if ok {
    71                      log.Printf("prog-%s become leader successfully\n", id)
    72                      atomic.StoreInt64(&isLeader, 1)
    73                      defer cf()
    74                  }
    75                  if !ok || err != nil {
    76                      log.Printf("prog-%s try to become leader failed: %s\n", id, err)
    77                  }
    78              } else {
    79                  log.Printf("prog-%s is the leader\n", id)
    80                  // update the lock live time and maintain the leader status
    81                  c.Expire(context.Background(), mutexName, 8*time.Second)
    82              }
    83          case <-quit:
    84              return
    85          }
    86      }
    87  }
    88
    89  func doExpire(quit <-chan struct{}) {
    90      // subscribe the expire event of redis
    91      c := goredislib.NewClient(&goredislib.Options{
    92          Addr: "localhost:6379"})
    93      defer c.Close()
    94
    95      ctx := context.Background()
    96      pubsub := c.Subscribe(ctx, redisKeyExpiredEventSubj)
    97      _, err := pubsub.Receive(ctx)
    98      if err != nil {
    99          log.Printf("prog-%s subscribe expire event failed: %s\n", id, err)
   100          return
   101      }
   102      log.Printf("prog-%s subscribe expire event ok\n", id)
   103
   104      // Go channel which receives messages from redis db
   105      ch := pubsub.Channel()
   106      for {
   107          select {
   108          case event := <-ch:
   109              key := event.Payload
   110              if atomic.LoadInt64(&isLeader) == 0 {
   111                  break
   112              }
   113              log.Printf("prog-%s received and handled an expired key event [key:%s]", id, key)
   114          case <-quit:
   115              return
   116          }
   117      }
   118  }
   119
   120  func main() {
   121      var wg sync.WaitGroup
   122      wg.Add(2)
   123      var quit = make(chan struct{})
   124
   125      go func() {
   126          doElectionAndMaintainTheStatus(quit)
   127          wg.Done()
   128      }()
   129      go func() {
   130          doExpire(quit)
   131          wg.Done()
   132      }()
   133
   134      c := make(chan os.Signal, 1)
   135      signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
   136      _ = <-c
   137      close(quit)
   138      log.Printf("recv exit signal...")
   139      wg.Wait()
   140      log.Printf("program exit ok")
   141  }

The example above is fairly long, but complete. Let's go through it bit by bit.

First, look at the structure of main in lines 120~141. It starts two new goroutines; the main goroutine waits for them to exit via sync.WaitGroup and uses the quit-channel pattern (for a deeper discussion of goroutine concurrency patterns, see my column article 《Go并发模型和常见并发模式》) to notify both goroutines to exit after receiving a system signal (on the signal package, see my column article 《小心被kill!不要忽略对系统信号的处理》).

Now the logic of the two goroutines, one at a time. The first runs doElectionAndMaintainTheStatus: it keeps trying to acquire the distributed lock (tryToBecomeLeader); once it holds the lock, it becomes the leader of the distributed system, and the leader replica then maintains that status (see line 81).

Trying to acquire the lock and become the leader is the main job of tryToBecomeLeader. The function uses the redsync package's algorithm directly: through a connection to the Redis node (NewClient) it tries to create and hold the distributed lock "the-year-of-the-ox-2021". We use the default lock properties; the source of redsync's NewMutex shows what those defaults are:

// github.com/go-redsync/redsync/redsync.go

// NewMutex returns a new distributed mutex with given name.
func (r *Redsync) NewMutex(name string, options ...Option) *Mutex {
        m := &Mutex{
                name:         name,
                expiry:       8 * time.Second,
                tries:        32,
                delayFunc:    func(tries int) time.Duration { return 500 * time.Millisecond },
                genValueFunc: genValue,
                factor:       0.01,
                quorum:       len(r.pools)/2 + 1,
                pools:        r.pools,
        }
        for _, o := range options {
                o.Apply(m)
        }
        return m
}

Notice the lock's expiry property: the default is only 8 seconds. Which raises the question: what happens once the lock expires? Once it does, even before the leader has unlocked, a follower can also acquire the lock, because the key backing the original lock has already been deleted on expiry. Over time the distributed system ends up with multiple processes each considering itself the leader, and the whole processing logic falls apart!
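
As an aside, these defaults can be overridden through the functional options that NewMutex accepts. A minimal sketch, reusing rs and mutexName from the example above (the option names WithExpiry, WithTries and WithRetryDelay are my reading of the redsync/v4 API):

mutex := rs.NewMutex(mutexName,
    redsync.WithExpiry(30*time.Second),           // lock TTL instead of the 8s default
    redsync.WithTries(1),                         // fail fast instead of up to 32 attempts
    redsync.WithRetryDelay(200*time.Millisecond), // pause between acquisition attempts
)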

There are at least three ways to address the expiry problem:

  • Option 1: set the lock's expiry very long, so that once a service holds the lock, expiry is no longer a concern;
  • Option 2: unlock before the lock's default expiry elapses, and let all services compete for the lock again;
  • Option 3: once a service holds the lock, periodically reset the lock's expiry so that it never lapses until the holder explicitly unlocks.

The problem with option 1: if the leader exits abnormally without unlocking, the extremely long TTL means no follower can ever acquire the lock and become the next leader; the system is left without a leader and the business logic stalls.

Option 2 is the conventional way to use a Redis distributed lock. But in my scenario the frequent lock/unlock churn is unnecessary: I only need to guarantee that one leader keeps processing expiry events; rotating the role among services is not a requirement. Still, it is a workable option with clear and simple logic.

Option 3 fits my scenario best: the holder periodically (at intervals shorter than 8s) refreshes the lock's expiry to keep it valid, which avoids frequent leader switches. That is the approach used here: see lines 78~82, where with the help of a ticker we periodically reset the lock's TTL to 8s. (An alternative sketch of this keep-alive follows below.)
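
Note that redsync's Mutex also exposes an Extend method, which refreshes the TTL only if we still hold the lock (it verifies the lock's random value), whereas a raw EXPIRE renews the key no matter who currently owns it. Below is a minimal, hedged single-node sketch of a keep-alive loop built on Extend; it reflects my reading of the redsync/v4 API and is not the code used in this post:

package main

import (
    "log"
    "time"

    goredislib "github.com/go-redis/redis/v8"
    "github.com/go-redsync/redsync/v4"
    "github.com/go-redsync/redsync/v4/redis/goredis/v8"
)

func main() {
    client := goredislib.NewClient(&goredislib.Options{Addr: "localhost:6379"})
    defer client.Close()
    rs := redsync.New(goredis.NewPool(client))

    mutex := rs.NewMutex("the-year-of-the-ox-2021")
    if err := mutex.Lock(); err != nil {
        log.Fatalf("failed to acquire lock: %v", err)
    }

    // refresh the lock's TTL well before the default 8s expiry elapses
    ticker := time.NewTicker(5 * time.Second)
    defer ticker.Stop()
    for range ticker.C {
        ok, err := mutex.Extend()
        if !ok || err != nil {
            // the lock expired or was taken over; step down here
            log.Printf("lost the lock: %v", err)
            return
        }
        log.Println("still the leader, lock TTL extended")
    }
}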

In the example we use the variable isLeader to mark whether this instance holds the lock. Since multiple goroutines read and write it, we access it through the atomic package to avoid data races.

Finally, the business logic this example carries, implemented in doExpire. It listens for key-expiry events on the keyspace of Redis database 0 and handles the target keys when they expire (the actual key handling is not shown here).

The subject string we subscribe to is __keyevent@0__:expired; the meaning of each part is described in the official Redis documentation on notifications. It says: listen for key events, on database 0, of type expired.

When a key in database 0 expires, our subscription channel (line 105) receives an event whose Payload is the key's name. We can then filter out keys we don't care about by name and act only on the ones we expect, as sketched below.
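
For instance, a tiny sketch of such filtering inside doExpire's event loop; the "order:" prefix and the handleExpiredOrder helper are made up purely for illustration (strings would also need to be imported):

case event := <-ch:
    key := event.Payload
    // only react to the keys this service owns
    if !strings.HasPrefix(key, "order:") {
        break
    }
    handleExpiredOrder(key)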

By default, Redis keyspace notifications are disabled. We enable them with a command or in redis.conf; in the flag string KEx, K enables keyspace notifications, E enables keyevent notifications, and x selects expired events.

$redis-cli
127.0.0.1:6379> config set notify-keyspace-events KEx
OK

Now that we understand how the example works, let's actually run it. Build the code and start three instances:

$go build main.go
$./main 1
$./main 2
$./main 3

Since ./main 1 starts first, the first instance usually becomes the leader:

$main 1
2021/02/11 05:43:15 prog-1 subscribe expire event ok
2021/02/11 05:43:20 prog-1 become leader successfully
2021/02/11 05:43:25 prog-1 is the leader
2021/02/11 05:43:30 prog-1 is the leader

The other two instances periodically try to acquire the lock:

$main 2
2021/02/11 05:43:17 prog-2 subscribe expire event ok
2021/02/11 05:43:37 prog-2 try to become leader failed: redsync: failed to acquire lock
2021/02/11 05:43:53 prog-2 try to become leader failed: redsync: failed to acquire lock

$main 3
2021/02/11 05:43:18 prog-3 subscribe expire event ok
2021/02/11 05:43:38 prog-3 try to become leader failed: redsync: failed to acquire lock
2021/02/11 05:43:54 prog-3 try to become leader failed: redsync: failed to acquire lock

Now create key1 in database 0 via redis-cli, with a 5s TTL:

$redis-cli
127.0.0.1:6379> setex key1 5 value1
OK

5 seconds later, prog-1's log shows:

2021/02/11 05:43:50 prog-1 is the leader
2021/02/11 05:43:53 prog-1 received and handled an expired key event [key:key1]
2021/02/11 05:43:55 prog-1 is the leader

Next, we stop prog-1:

2021/02/11 05:44:00 prog-1 is the leader
^C2021/02/11 05:44:01 recv exit signal...
redis: 2021/02/11 05:44:01 pubsub.go:168: redis: discarding bad PubSub connection: read tcp [::1]:56594->[::1]:6379: use of closed network connection
2021/02/11 05:44:01 program exit ok

The moment prog-1 stops, prog-2 acquires the lock and becomes the new leader:

2021/02/11 05:44:01 prog-2 become leader successfully
2021/02/11 05:44:01 prog-2 is the leader

We create key2 in database 0 via redis-cli, again with a 5s TTL:

$redis-cli
127.0.0.1:6379> setex key2 5 value2
OK

5 seconds later, prog-2's log shows:

2021/02/11 05:44:17 prog-2 is the leader
2021/02/11 05:44:19 prog-2 received and handled an expired key event [key:key2]
2021/02/11 05:44:22 prog-2 is the leader

Judging by this run, the distributed system behaves according to our design.

2. A distributed lock on a Redis cluster

Above we implemented leader election on a distributed lock backed by a single Redis node. In production we rarely use single-node Redis; we usually run a Redis cluster for high availability.

The latest redsync already supports Redis Cluster (via go-redis). The only difference from the single-node version is that the connection we hand to redsync's pool is created as a ClusterClient instead of a Client:

// github.com/bigwhite/experiments/blob/master/redis-cluster-distributed-lock/cluster/v1/main.go
const (
        redisClusterMasters      = "localhost:30001,localhost:30002,localhost:30003"
)

func main() {
    ... ...
        client := goredislib.NewClusterClient(&goredislib.ClusterOptions{
                Addrs: strings.Split(redisClusterMasters, ",")})
        defer client.Close()
    ... ...
}

Our locally started Redis cluster has three masters at localhost:30001, localhost:30002 and localhost:30003, joined into the comma-separated constant redisClusterMasters.

We also refactored the single-node code to create the Redis connection in main and pass the client into each goroutine's function as a parameter. Here is the complete cluster version (v1):

// github.com/bigwhite/experiments/blob/master/redis-cluster-distributed-lock/cluster/v1/main.go

     1  package main
     2
     3  import (
     4      "context"
     5      "log"
     6      "os"
     7      "os/signal"
     8      "strings"
     9      "sync"
    10      "sync/atomic"
    11      "syscall"
    12      "time"
    13
    14      goredislib "github.com/go-redis/redis/v8"
    15      "github.com/go-redsync/redsync/v4"
    16      "github.com/go-redsync/redsync/v4/redis/goredis/v8"
    17  )
    18
    19  const (
    20      redisKeyExpiredEventSubj = `__keyevent@0__:expired`
    21      redisClusterMasters      = "localhost:30001,localhost:30002,localhost:30003"
    22  )
    23
    24  var (
    25      isLeader  int64
    26      m         atomic.Value
    27      id        string
    28      mutexName = "the-year-of-the-ox-2021"
    29  )
    30
    31  func init() {
    32      if len(os.Args) < 2 {
    33          panic("args number is not correct")
    34      }
    35      id = os.Args[1]
    36  }
    37
    38  func tryToBecomeLeader(client *goredislib.ClusterClient) (bool, func() (bool, error), error) {
    39      pool := goredis.NewPool(client)
    40      rs := redsync.New(pool)
    41
    42      mutex := rs.NewMutex(mutexName)
    43
    44      if err := mutex.Lock(); err != nil {
    45          return false, nil, err
    46      }
    47
    48      return true, func() (bool, error) {
    49          return mutex.Unlock()
    50      }, nil
    51  }
    52
    53  func doElectionAndMaintainTheStatus(c *goredislib.ClusterClient, quit <-chan struct{}) {
    54      ticker := time.NewTicker(time.Second * 5)
    55      var err error
    56      var ok bool
    57      var cf func() (bool, error)
    58
    59      for {
    60          select {
    61          case <-ticker.C:
    62              if atomic.LoadInt64(&isLeader) == 0 {
    63                  ok, cf, err = tryToBecomeLeader(c)
    64                  if ok {
    65                      log.Printf("prog-%s become leader successfully\n", id)
    66                      atomic.StoreInt64(&isLeader, 1)
    67                      defer cf()
    68                  }
    69                  if !ok || err != nil {
    70                      log.Printf("prog-%s try to become leader failed: %s\n", id, err)
    71                  }
    72              } else {
    73                  log.Printf("prog-%s is the leader\n", id)
    74                  // update the lock live time and maintain the leader status
    75                  c.Expire(context.Background(), mutexName, 8*time.Second)
    76              }
    77          case <-quit:
    78              return
    79          }
    80      }
    81  }
    82
    83  func doExpire(c *goredislib.ClusterClient, quit <-chan struct{}) {
    84      // subscribe the expire event of redis
    85      ctx := context.Background()
    86      pubsub := c.Subscribe(ctx, redisKeyExpiredEventSubj)
    87      _, err := pubsub.Receive(ctx)
    88      if err != nil {
    89          log.Printf("prog-%s subscribe expire event failed: %s\n", id, err)
    90          return
    91      }
    92      log.Printf("prog-%s subscribe expire event ok\n", id)
    93
    94      // Go channel which receives messages from redis db
    95      ch := pubsub.Channel()
    96      for {
    97          select {
    98          case event := <-ch:
    99              key := event.Payload
   100              if atomic.LoadInt64(&isLeader) == 0 {
   101                  break
   102              }
   103              log.Printf("prog-%s received and handled an expired key event [key:%s]", id, key)
   104          case <-quit:
   105              return
   106          }
   107      }
   108  }
   109
   110  func main() {
   111      var wg sync.WaitGroup
   112      wg.Add(2)
   113      var quit = make(chan struct{})
   114      client := goredislib.NewClusterClient(&goredislib.ClusterOptions{
   115          Addrs: strings.Split(redisClusterMasters, ",")})
   116      defer client.Close()
   117
   118      go func() {
   119          doElectionAndMaintainTheStatus(client, quit)
   120          wg.Done()
   121      }()
   122      go func() {
   123          doExpire(client, quit)
   124          wg.Done()
   125      }()
   126
   127      c := make(chan os.Signal, 1)
   128      signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
   129      _ = <-c
   130      close(quit)
   131      log.Printf("recv exit signal...")
   132      wg.Wait()
   133      log.Printf("program exit ok")
   134  }

As with the single-node version, run three instances:

$go build main.go
$main 1
2021/02/11 09:49:16 prog-1 subscribe expire event ok
2021/02/11 09:49:22 prog-1 become leader successfully
2021/02/11 09:49:26 prog-1 is the leader
2021/02/11 09:49:31 prog-1 is the leader
2021/02/11 09:49:36 prog-1 is the leader
... ...

$main 2
2021/02/11 09:49:19 prog-2 subscribe expire event ok
2021/02/11 09:49:40 prog-2 try to become leader failed: redsync: failed to acquire lock
2021/02/11 09:49:55 prog-2 try to become leader failed: redsync: failed to acquire lock
... ...

$main 3
2021/02/11 09:49:31 prog-3 subscribe expire event ok
2021/02/11 09:49:52 prog-3 try to become leader failed: redsync: failed to acquire lock
2021/02/11 09:50:07 prog-3 try to become leader failed: redsync: failed to acquire lock
... ...

The cluster-based distributed lock works as well: prog-1 acquires the lock and becomes the leader! Next, let's look at handling expired-key events.

With the following commands we connect redis-cli to every node in the cluster and enable keyspace event notifications on each node:

The three masters:

$redis-cli -c -h localhost -p 30001
localhost:30001> config set notify-keyspace-events KEx
OK

$redis-cli -c -h localhost -p 30002
localhost:30002> config set notify-keyspace-events KEx
OK

$redis-cli -c -h localhost -p 30003
localhost:30003> config set notify-keyspace-events KEx
OK

The three replicas:

$redis-cli -c -h localhost -p 30004
localhost:30004> config set notify-keyspace-events KEx
OK

$redis-cli -c -h localhost -p 30005
localhost:30005> config set notify-keyspace-events KEx
OK

$redis-cli -c -h localhost -p 30006
localhost:30006> config set notify-keyspace-events KEx
OK
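
Alternatively, the same setting can be applied programmatically instead of connecting to each node with redis-cli. A minimal sketch using go-redis's ConfigSet wrapper around CONFIG SET, assuming the same six-node layout:

package main

import (
    "context"
    "log"

    goredislib "github.com/go-redis/redis/v8"
)

func main() {
    nodes := []string{
        "localhost:30001", "localhost:30002", "localhost:30003",
        "localhost:30004", "localhost:30005", "localhost:30006",
    }
    ctx := context.Background()
    for _, addr := range nodes {
        c := goredislib.NewClient(&goredislib.Options{Addr: addr})
        // equivalent to: config set notify-keyspace-events KEx
        if err := c.ConfigSet(ctx, "notify-keyspace-events", "KEx").Err(); err != nil {
            log.Printf("node %s: %v", addr, err)
        }
        c.Close()
    }
}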

On node1, we set key1 with a 5-second TTL:

localhost:30001> setex key1 5 value1
-> Redirected to slot [9189] located at 127.0.0.1:30002
OK

After waiting 5s, our leader prog-1 did not receive the expected expiry notification! What happened? Going back to the source, the official Redis documentation on notifications has the following in its final section:

Events in a cluster

Every node of a Redis cluster generates events about its own subset of the keyspace as described above. However, unlike regular Pub/Sub communication in a cluster, events' notifications are not broadcasted to all nodes. Put differently, keyspace events are node-specific. This means that to receive all keyspace events of a cluster, clients need to subscribe to each of the nodes.

Roughly, this says that each node of a Redis cluster generates events only about its own subset of the keyspace, and unlike regular cluster Pub/Sub, event notifications are not broadcast to all nodes: keyspace events are node-specific. To receive all keyspace events of a cluster, the client must subscribe to every node. Let's revise the code into a v2 (to save space, only the parts that differ from v1 are shown):

// github.com/bigwhite/experiments/blob/master/redis-cluster-distributed-lock/cluster/v2/main.go

... ...
    19  const (
    20      redisKeyExpiredEventSubj = `__keyevent@0__:expired`
    21      redisClusterMasters      = "localhost:30001,localhost:30002,localhost:30003,localhost:30004,localhost:30005,localhost:30006"
    22  )
... ...
    83  func doExpire(quit <-chan struct{}) {
    84      var ch = make(chan *goredislib.Message)
    85      nodes := strings.Split(redisClusterMasters, ",")
    86
    87      for _, node := range nodes {
    88          node := node
    89          go func(quit <-chan struct{}) {
    90              c := goredislib.NewClient(&goredislib.Options{
    91                  Addr: node})
    92              defer c.Close()
    93
    94              // subscribe the expire event of redis
    95              ctx := context.Background()
    96              pubsub := c.Subscribe(ctx, redisKeyExpiredEventSubj)
    97              _, err := pubsub.Receive(ctx)
    98              if err != nil {
    99                  log.Printf("prog-%s subscribe expire event of node[%s] failed: %s\n",
   100                      id, node, err)
   101                  return
   102              }
   103              log.Printf("prog-%s subscribe expire event of node[%s] ok\n", id, node)
   104
   105              // Go channel which receives messages from redis db
   106              pch := pubsub.Channel()
   107
   108              for {
   109                  select {
   110                  case event := <-pch:
   111                      ch <- event
   112                  case <-quit:
   113                      return
   114                  }
   115              }
   116          }(quit)
   117      }
   118      for {
   119          select {
   120          case event := <-ch:
   121              key := event.Payload
   122              if atomic.LoadInt64(&isLeader) == 0 {
   123                  break
   124              }
   125              log.Printf("prog-%s received and handled an expired key event [key:%s]", id, key)
   126          case <-quit:
   127              return
   128          }
   129      }
   130  }
   131
   132  func main() {
   133      var wg sync.WaitGroup
   134      wg.Add(2)
   135      var quit = make(chan struct{})
   136      client := goredislib.NewClusterClient(&goredislib.ClusterOptions{
   137          Addrs: strings.Split(redisClusterMasters, ",")})
   138      defer client.Close()
   139
   140      go func() {
   141          doElectionAndMaintainTheStatus(client, quit)
   142          wg.Done()
   143      }()
   144      go func() {
   145          doExpire(quit)
   146          wg.Done()
   147      }()
   148
   149      c := make(chan os.Signal, 1)
   150      signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
   151      _ = <-c
   152      close(quit)
   153      log.Printf("recv exit signal...")
   154      wg.Wait()
   155      log.Printf("program exit ok")
   156  }

In this new version, each Redis node gets its own goroutine that subscribes to it, and the received event notifications are funneled, via the fan-in pattern (for more on the fan-in concurrency pattern, see my Go column article 《Go并发模型和常见并发模式》), into the goroutine running doExpire for unified handling.

Let's run this example again, creating several keys at different moments to verify that notifications are received and handled:

$main 1
2021/02/11 10:29:21 prog-1 subscribe expire event of node[localhost:30004] ok
2021/02/11 10:29:21 prog-1 subscribe expire event of node[localhost:30001] ok
2021/02/11 10:29:21 prog-1 subscribe expire event of node[localhost:30006] ok
2021/02/11 10:29:21 prog-1 subscribe expire event of node[localhost:30002] ok
2021/02/11 10:29:21 prog-1 subscribe expire event of node[localhost:30003] ok
2021/02/11 10:29:21 prog-1 subscribe expire event of node[localhost:30005] ok
2021/02/11 10:29:26 prog-1 become leader successfully
2021/02/11 10:29:31 prog-1 is the leader
2021/02/11 10:29:36 prog-1 is the leader
2021/02/11 10:29:41 prog-1 is the leader
2021/02/11 10:29:46 prog-1 is the leader
2021/02/11 10:29:47 prog-1 received and handled an expired key event [key:key1]
2021/02/11 10:29:51 prog-1 is the leader
2021/02/11 10:29:51 prog-1 received and handled an expired key event [key:key2]
2021/02/11 10:29:56 prog-1 received and handled an expired key event [key:key3]
2021/02/11 10:29:56 prog-1 is the leader
2021/02/11 10:30:01 prog-1 is the leader
2021/02/11 10:30:06 prog-1 is the leader
^C2021/02/11 10:30:08 recv exit signal...

$main 3
2021/02/11 10:29:27 prog-3 subscribe expire event of node[localhost:30004] ok
2021/02/11 10:29:27 prog-3 subscribe expire event of node[localhost:30006] ok
2021/02/11 10:29:27 prog-3 subscribe expire event of node[localhost:30002] ok
2021/02/11 10:29:27 prog-3 subscribe expire event of node[localhost:30001] ok
2021/02/11 10:29:27 prog-3 subscribe expire event of node[localhost:30005] ok
2021/02/11 10:29:27 prog-3 subscribe expire event of node[localhost:30003] ok
2021/02/11 10:29:48 prog-3 try to become leader failed: redsync: failed to acquire lock
2021/02/11 10:30:03 prog-3 try to become leader failed: redsync: failed to acquire lock
2021/02/11 10:30:08 prog-3 become leader successfully
2021/02/11 10:30:08 prog-3 is the leader
2021/02/11 10:30:12 prog-3 is the leader
2021/02/11 10:30:17 prog-3 is the leader
2021/02/11 10:30:22 prog-3 is the leader
2021/02/11 10:30:23 prog-3 received and handled an expired key event [key:key4]
2021/02/11 10:30:27 prog-3 is the leader
^C2021/02/11 10:30:28 recv exit signal...

$main 2
2021/02/11 10:29:24 prog-2 subscribe expire event of node[localhost:30005] ok
2021/02/11 10:29:24 prog-2 subscribe expire event of node[localhost:30006] ok
2021/02/11 10:29:24 prog-2 subscribe expire event of node[localhost:30003] ok
2021/02/11 10:29:24 prog-2 subscribe expire event of node[localhost:30004] ok
2021/02/11 10:29:24 prog-2 subscribe expire event of node[localhost:30002] ok
2021/02/11 10:29:24 prog-2 subscribe expire event of node[localhost:30001] ok
2021/02/11 10:29:45 prog-2 try to become leader failed: redsync: failed to acquire lock
2021/02/11 10:30:01 prog-2 try to become leader failed: redsync: failed to acquire lock
2021/02/11 10:30:16 prog-2 try to become leader failed: redsync: failed to acquire lock
2021/02/11 10:30:28 prog-2 become leader successfully
2021/02/11 10:30:28 prog-2 is the leader
2021/02/11 10:30:29 prog-2 is the leader
2021/02/11 10:30:34 prog-2 is the leader
2021/02/11 10:30:39 prog-2 received and handled an expired key event [key:key5]
2021/02/11 10:30:39 prog-2 is the leader
^C2021/02/11 10:30:41 recv exit signal...

The run matches expectations!

That said, this solution is clearly not ideal, since we have to subscribe to every Redis node in the cluster separately. For now there is no better option, unless Redis Cluster one day supports broadcast event notifications.

The example code above can be downloaded at https://github.com/bigwhite/experiments/tree/master/redis-cluster-distributed-lock.

