Articles tagged "container"

Hello, Apollo

If there is one technology sector where investment is hottest right now, it has to be artificial intelligence. And within AI, autonomous driving is surely among the hottest topics. The concept of self-driving cars is nothing new, but with the AI boom of the last two years, autonomous driving has been pushed back into the spotlight. Major automakers and Internet companies are all eager to have a go, ready to give the car, that "golden platform with a century of history," a fresh round of "empowerment."

On July 5 this year, Baidu, China's No. 1 search engine company, released the Apollo open autonomous driving platform at its first Baidu AI Developer Conference, and at the same time announced that Baidu was officially transforming from an Internet company into an artificial intelligence company. As the poster child for companies that "missed the mobile Internet era," Baidu's bet on AI strikes me as a strategically unavoidable choice: an effort to capture the next cash cow while the current one, the search advertising business, still brings in plenty of profit. The Apollo autonomous driving platform is a key component of Baidu's AI strategy.

Apollo is the god of light in ancient Greek mythology, a name that "comes with its own halo" in Western culture. Many people will also associate Apollo with America's famous moon-landing program of more than half a century ago. I suspect Baidu named its autonomous driving platform Apollo to borrow some of that aura, hoping the project will enjoy a bright future among Baidu's many AI businesses.

As engineers, we should not settle for the broad introductions that general media pieces assemble from official talking points. We want to get up close to Apollo and see what it actually is and what it can actually do. So let's say hello to Apollo together.

I. The Self-Driving Car: Empowering a "Century-Old Golden Platform" for a New Era

Before formally diving into Apollo, allow me a brief digression. Before encountering Apollo I had never thought seriously about the car as a platform; this time something clicked, even if the insight is not especially deep. In my view, the car is a rare "golden platform." As a platform it has more than a hundred years of history, has witnessed the development of science and technology, and is a cross-disciplinary tour de force. Over this century, virtually every new, advanced civilian technology has found its way into the automotive industry. Take an ordinary family passenger car, less than 5 meters long and weighing no more than 2 tonnes: in it we can see advanced energy technology, materials science, chemical engineering, electronics, communications, and precision mechanical components and assembly. The car has given companies of every kind a stage on which to show their creativity.

In the daily life of ordinary people, the car is also an unprecedented example of high-frequency use, and the most direct and closest to everyday life, in ways that airplanes and trains cannot match (if you insist on picking a rival, only the smart device can compete with the car, especially in terms of integration). Even in a sci-fi future with flying vehicles filling the sky, the car may well remain the first choice for short-distance travel, though by then it will probably look very different from today's. As times progress the car keeps evolving, and ever-newer technologies, materials, and energy sources keep re-empowering it. The car therefore remains a sunrise industry, which is the fundamental reason international capital is still scrambling for a piece of it. Examples include Tesla, which empowers the car with new energy, and Google's Waymo, which empowers it with self-driving technology. Nor is the innovation only technical: there are classic business-model innovations built around the car as a platform as well, such as the efficient ride-hailing of Uber and Didi, and the recently warming car-sharing market.

In short, every major company is looking at how to empower this century-old golden platform from its own strengths. Seen from this angle, the emergence of Baidu Apollo makes sense: it is an important step by Baidu, leveraging its technology and data advantages, to embrace the automotive industry and give the car its new-era empowerment.

II. Apollo's Technical Architecture

Apollo is a complete autonomous driving technology stack. The screenshot of the official architecture diagram is rather blurry, so I drew a simple four-layer diagram of my own; the modules inside each layer are omitted, since they are not the focus of this primer:

img{512x368}

As shown above, the Apollo self-driving stack is divided into four layers, from bottom to top:

1. Reference Vehicle Platform

Autonomous driving ultimately has to land in an actual car, so Apollo abstracts a "Reference Vehicle Platform" layer that controls the vehicle's driving behavior electronically.

Note: at the developer conference, Baidu demonstrated a waypoint-following self-driving car converted on top of the Apollo 1.0 open platform by the American startup AutonomouStuff; the car was an American Lincoln MKZ. In other words, the current Apollo release is known to work with the Lincoln MKZ. But this mid-size sedan sets the bar rather high for ordinary developers. If Baidu could offer a Volkswagen, Toyota, or at least a Honda model, that would be a real boon for developers and enthusiasts in the autonomous driving space. By contrast, comma.ai, the self-driving startup founded by the famous hacker George Hotz, initially chose Honda's Civic and CR-V for its openpilot: ubiquitous models that are easy and cheap to obtain, and easy to retrofit.

2. Reference Hardware Platform

This layer provides the computing, sensing, and interaction hardware for the self-driving car, including the computing unit (in-vehicle processor), GPS/IMU (inertial measurement unit), cameras, LiDAR, ultrasonic radar, HMI (human-machine interface), and so on. In the released Apollo 1.0, the opened hardware capabilities are: the computing unit, GPS/IMU, and HMI.

3. Apollo Open Software Platform

This layer is the core of what Baidu opened in Apollo 1.0; see the diagram below (blue marks the capabilities already opened in Apollo 1.0.0):

img{512x368}

As the diagram shows, this layer can be further divided into three sublayers, from bottom to top:

  • the Apollo Kernel layer

This sublayer is the OS running on top of the hardware. For a domain with real-time requirements as hard as autonomous driving, only an RTOS (real-time operating system) will do. The Apollo 1.0 source release includes an "Apollo Kernel" project that collects OS kernels able to satisfy the real-time requirements. For now there is exactly one choice: a realtime Linux kernel, a dedicated kernel that Apollo customized from Linux Kernel 4.4.32 plus the realtime patch.
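
Once such a kernel is installed on the target machine, a quick look at uname should reveal the realtime patch. A minimal check; the version string shown below is hypothetical, merely based on the 4.4.32 base mentioned above:

# uname -r
4.4.32-apollo-rt    (hypothetical output; look for an "rt"/PREEMPT RT marker)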

  • the Apollo Platform layer

Above the kernel sits Apollo's runtime framework, which provides platform-level support. Apollo 1.0 likewise created a dedicated project, apollo-platform, to collect platforms that meet Apollo's platform-level needs. Currently this project, too, offers a single choice: Apollo ROS, a customized ROS derived from the Indigo release of ROS 1. Driven by the needs of autonomous driving, Apollo ROS makes three major improvements to ROS 1:

  • To cope with the large transmission bandwidth demanded by the many sensors used in autonomous driving, Apollo ROS replaces the socket-based network transport with shared-memory inter-node communication, reducing data copies in transit and significantly improving transmission efficiency, with especially visible gains in one-to-many scenarios;

  • For robustness, it uses the RTPS (Real-Time Publish Subscribe) discovery protocol to build a fully P2P network topology, avoiding the single point of failure in stock ROS, where the Master sits at the center of the topology;

  • It replaces the original ROS message format with protobuf, which provides good backward compatibility and avoids the problem of modules on different versions failing to interoperate after an interface upgrade.

The second improvement is, incidentally, exactly what ROS 2 is working on. For the full list of changes in Apollo ROS, see a recent talk by a Baidu engineer: "Apollo's Open Code Framework: ROS Exploration and Practice".

  • the Apollo Modules layer

This sublayer contains Apollo's functional modules, which currently still appear to be developed as ROS packages; in github.com/ApolloAuto/apollo/modules/common/apollo_app.cc you can roughly make out a ROS package development template. The layer provides functions such as planning, perception, control, prediction, decision, and localization. Apollo 1.0, however, opens only three modules, Control, Localization, and HMI, because these three are enough to form the closed-course waypoint-following system that Apollo 1.0 delivers.

4. Cloud Services

Apollo 1.0 also opens a cloud data platform, plus the "wake up everything" DuerOS capability. DuerOS is another important piece of Baidu's AI strategy, and it seems to be Baidu's most mature and most widely deployed AI product so far. This layer also includes services such as simulation and HD maps, which have not yet been opened.

III. Getting Started with Apollo

Those of us who cannot afford a Lincoln MKZ need not worry: Apollo 1.0 ships a local simulation tool that gives you a way to get close to Apollo and play with it freely on a PC. After all, Apollo 1.0 only provides closed-course waypoint following, which is relatively simple.

Our focus is the Apollo Open Software Platform layer, and within it we will ignore the Apollo kernel and look only at Apollo ROS and the three modules that have already been opened.

1. Downloading the release

As of this writing, Apollo has published exactly one release, apollo-v1.0.0, which we can download from GitHub:

# wget -c https://github.com/ApolloAuto/apollo/archive/v1.0.0.tar.gz
# tar zxvf v1.0.0.tar.gz
# cd apollo-1.0.0
# ls -F
apollo_docker.sh*  apollo.doxygen  apollo.sh*  AUTHORS.md  BUILD  CPPLINT.cfg
docker/  docs/  LICENSE  modules/  README.md  scripts/  third_party/  tools/  WORKSPACE

Note: my test environment is Ubuntu 16.04.1 amd64.

2. Building from source locally

For developers building on the Apollo framework, Apollo officially and strongly recommends using the pre-defined dedicated Docker environment (for dev). Being the tinkerer that I am, I had to try a local source build at least once, even if the experience turned out to be unpleasant, or ultimately a failure ^0^. The build command is a single line:

# cd apollo-1.0.0
# bash apollo.sh build

During this process I ran into two errors:

  • bazel is missing

Apollo's build depends on bazel, Google's build tool. I have not studied bazel in any depth; for now, let's just install it:

# echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" |  tee /etc/apt/sources.list.d/bazel.list
deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8

# curl https://bazel.build/bazel-release.pub.gpg | apt-key add -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3157  100  3157    0     0   3202      0 --:--:-- --:--:-- --:--:--  3201
OK

# apt-get update && apt-get install bazel
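
A quick check afterwards confirms the toolchain is in place (bazel version is a standard bazel subcommand; its output is omitted here):

# bazel version
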
  • third_party/ros/setup.bash: No such file or directory

Apollo's build depends on ROS, but Apollo does not bundle it. We need to download Apollo ROS from the apollo-platform project:

# wget -c https://github.com/ApolloAuto/apollo-platform/releases/download/1.0.0/ros-indigo-apollo-1.0.0.x86_64.tar.gz
# tar zxvf ros-indigo-apollo-1.0.0.x86_64.tar.gz
# cd ros
# ls -F
bin/  BUILD  env.sh*  etc/  include/  lib/  setup.bash  setup.sh  _setup_util.py*  setup.zsh  share/

Copy the downloaded ros directory into apollo-1.0.0/third_party and chmod +x third_party/ros/setup.bash.
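
Concretely, assuming the ros directory was unpacked next to the apollo-1.0.0 directory, that amounts to:

# cp -r ./ros apollo-1.0.0/third_party/
# chmod +x apollo-1.0.0/third_party/ros/setup.bash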

Run bash apollo.sh build again. This time the earlier errors and warnings are essentially gone, and the apollo.sh script starts downloading dependencies and compiling:

# bash apollo.sh build
ROS_DISTRO was set to 'kinetic' before. Please make sure that the environment does not mix paths from different distributions.
[WARNING] ESD CAN library supplied by ESD Electronics does not exit.
[WARNING] If you need ESD CAN, please refer to third_party/can_card_library/esd_can/README.md
.
____Loading package: modules/common/util/testing
____Loading package: @com_github_grpc_grpc//
____Loading package: @google_styleguide//
____Loading package: @glog//
____Loading package: @eigen//
____Loading package: @gtest//
____Loading package: @civetweb//
____Loading package: @com_github_google_protobuf//
____Loading package: @websocketpp//
____Loading package: @curlpp//
Building on x86_64, with targets:
//tools/platforms:x86_64
//tools/platforms:aarch64
//modules/prediction:prediction
//modules/prediction:prediction_lib
... ...
//modules/common:log
//modules/canbus/proto:canbus_proto.pb
//:x86_64
//:arm64
WARNING: Running Bazel server needs to be killed, because the startup options are different.
INFO: Downloading https://github.com/google/boringssl/archive/master-with-bazel.zip via codeload.github.com: 2,750,374 bytes
INFO: Cloning https://github.com/madler/zlib: Receiving objects (3309 / 5016)
INFO: Downloading https://github.com/google/boringssl/archive/master-with-bazel.zip via codeload.github.com: 2,773,664 bytes
INFO: Cloning https://github.com/madler/zlib: Receiving objects (3314 / 5016)
INFO: Downloading https://github.com/google/boringssl/archive/master-with-bazel.zip via codeload.github.com: 2,795,584 bytes
INFO: Downloading https://github.com/google/boringssl/archive/master-with-bazel.zip via codeload.github.com: 13,504,198 bytes

INFO: Downloading https://github.com/google/boringssl/archive/master-with-bazel.zip via codeload.github.com: 13,522,008 bytes
INFO: Found 190 targets...
[34 / 41] Compiling external/com_github_google_protobuf/src/google/protobuf/compiler/java/java_message_lite.cc [for host]
[41 / 48] Compiling external/com_github_google_protobuf/src/google/protobuf/compiler/command_line_interface.cc [for host]
[157 / 163] Compiling external/com_github_google_protobuf/src/google/protobuf/compiler/javanano/javanano_enum.cc [for host]
[752 / 756] Compiling external/com_github_grpc_grpc/src/core/ext/client_config/resolver_result.c

ERROR: /root/test/apolloauto/apollo-1.0.0/modules/canbus/BUILD:32:1: Linking of rule '//modules/canbus:canbus' failed: gcc failed: error executing command /usr/bin/gcc -o bazel-out/local-dbg/bin/modules/canbus/canbus '-Wl,-rpath,$ORIGIN/../../_solib_k8/_U_S_Sthird_Uparty_Sros_Cros_Ucommon___Uthird_Uparty_Sros_Slib' ... (remaining 8 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
modules/canbus/main.cc:21: error: undefined reference to 'ros::init(int&, char**, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int)'
third_party/ros/include/ros/publisher.h:107: error: undefined reference to 'ros::console::initializeLogLocation(ros::console::LogLocation*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, ros::console::levels::Level)'
... ...
collect2: error: ld returned 1 exit status
INFO: Elapsed time: 578.172s, Critical Path: 26.62s
============================
[ERROR] Build failed!
[INFO] Took 597.189 seconds
============================

After a long wait, the build still ended in failure. Worse, C++ error output is genuinely painful to analyze, so I set the local source build aside for the time being.

3. The pre-specified Docker dev environment

Since Apollo has already prepared a pre-specified Docker dev environment for us, we might as well use it. The following commands download and start that environment:

# cd apollo-1.0.0
# bash docker/scripts/dev_start.sh

The apolloauto/apollo:dev-latest image is enormous, roughly 7GB, so be prepared to wait a while. Once the container is running, we can enter it from another terminal window with:

# bash docker/scripts/dev_into.sh
root@myhost: /apollo#

Inside the dev container, we can build the Apollo source:

root@myhost:/apollo# bash apollo.sh build
... ...
Copyright (c) 2017 Various License Holders. All Rights Reserved
Apollo software is built on top of various other open source software packages,
a complete list of licenses are located at https://github.com/ApolloAuto/apollo/blob/master/third_party/ACKNOWLEDGEMENT.txt

You agree to the terms of all the License Agreements.

Type 'y' or 'Y' to agree to the license agreement above, or type any other key to exit
y[WARNING] ESD CAN library supplied by ESD Electronics does not exit.
[WARNING] If you need ESD CAN, please refer to third_party/can_card_library/esd_can/README.md
____Loading package: modules/monitor/common
____Loading package: modules/common/adapters
____Loading package: modules/dreamview/conf
____Loading package: modules/control/integration_tests
____Loading package: @google_styleguide//
____Loading package: @com_github_google_protobuf//
... ...
[502 / 1,099] Compiling external/com_github_grpc_grpc/src/core/ext/transport/chttp2/transport/hpack_encoder.c
[914 / 1,524] Compiling external/com_github_grpc_grpc/src/core/ext/census/tracing.c
[1,304 / 1,527] Linking modules/canbus/vehicle/libmessage_manager_base.a

INFO: Elapsed time: 371.151s, Critical Path: 260.93s
============================
[ OK ] Build passed!
[INFO] Took 401.521 seconds
============================

Because all the dependencies are already in place in the dev environment, no extra intervention is needed; after another long wait, the build succeeds.

4. Running the Apollo demo

In either the dev environment or the apollo:release-latest image we can run Apollo's waypoint-following demo. Taking the apollo:release-latest image as the example:

// Start an Apollo container based on the apollo:release-latest image (the image is about 3GB; be patient while it downloads):

# cd apollo-1.0.0/
# bash docker/scripts/release_start.sh

// Enter the container
# bash docker/scripts/release_into.sh
root@myhost:/apollo#

Inside the container, start the HMI (human-machine interface):

root@myhost:/apollo# bash scripts/hmi.sh
Start roscore...
HMI ros node service running at localhost:8887
HMI running at http://localhost:8887

root@myhost:/apollo# rosnode list
/hmi_ros_node_service
/rosout

As you can see, the hmi.sh script starts roscore (the ROS master node and related services) along with the HMI service. Open a browser at http://host_ip:8887 and you will see the following:

img{512x368}

Continue inside the container with the following command to replay the demo car's trajectory data:

# rosbag play -l ./docs/demo_guide/demo.bag

[ INFO] [1502809442.462789096]: Opening ./docs/demo_guide/demo.bag

Waiting 0.2 seconds after advertising topics... done.

Hit space to toggle paused, or 's' to step.
 [RUNNING]  Bag Time: 1497125289.756657   Duration: 20.614178 / 41.613536
 [RUNNING]  Bag Time: 1497125289.896669   Duration: 20.754189 / 41.613536
... ...
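
If you are curious what is inside the demo bag, rosbag info (a stock ROS tool; I am assuming it is available in the release container) lists its topics and duration:

# rosbag info ./docs/demo_guide/demo.bag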

Turn on the Debug switch on the HMI page and click the "Dreamview" button in the top-right corner; after a short wait, the newly opened page will show the simulated car following its track:

img{512x368}

In my first attempt, I had not opened port 8888 in the Alibaba Cloud firewall, so Dreamview's websocket connection failed and the Dreamview page never showed the car. It took an online debugging session with ycool from the Apollo team to track this down; the fix has since been added to Apollo's FAQ.
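
For reference, if it is the host's own firewall rather than a cloud security group blocking Dreamview, opening port 8888 looks roughly like this (a sketch using iptables; adapt it to your firewall tooling):

# iptables -A INPUT -p tcp --dport 8888 -j ACCEPT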

IV. Summary

Baidu has laid out a four-year plan for Apollo (see the roadmap below) and intends to achieve autonomous driving across the full road network by 2020. That phrasing seems to deliberately sidestep the level of autonomy: is the 2020 goal L4 or L5? Either way, the goal is extremely ambitious.

img{512x368}

My own view: future L4/L5 autonomous driving will surely not rely on the vehicle's own sensors and algorithms alone; it will need cooperation from road infrastructure, and perhaps even vehicle-to-vehicle communication, to achieve all-weather, all-road-condition autonomy. Apollo has taken the first step, but a long road lies ahead. Let's wait and see!


Weibo: @tonybai_cn
WeChat public account: iamtonybai
github.com: https://github.com/bigwhite

Implementing Cluster-Level Logging for Kubernetes with Fluentd and the ElasticSearch Stack

In this post, we continue with Kubernetes.

After a period of exploration, we have completed the Kubernetes cluster setup, the installation of add-ons such as DNS, Dashboard, and Heapster, cluster security configuration, setting up Ceph RBD as a Persistent Volume, and the exploration and implementation of service updates. Now the need for cluster-level logging in Kubernetes is gradually surfacing.

With a few small applications deployed and live on our Kubernetes cluster, operations are on track. But a problem follows: how do we locate and diagnose problems in the cluster itself and in the applications running inside Pods? Logs, of course! We can only rely on the logs emitted by the Kubernetes components and by the applications in the Pods. At the moment, however, we can only view logs through the kubectl logs command or the Kubernetes Dashboard. Without cluster-level logging, we have to inspect each Pod's logs separately, which is tedious and inefficient. We urgently need a centralized, cluster-wide log collection and analysis facility for our Kubernetes cluster.

For any infrastructure or backend service system, logs are critically important. For Kubernetes, a project inspired by Google's internal container management system Borg, logging support is naturally not missing. In "Logging Overview," the official docs outline the several levels of logging in Kubernetes and give a reference architecture for cluster-level logging:

img{512x368}

Kubernetes also provides a reference implementation:
– Logging backend: ElasticSearch stack (including Kibana)
– Logging agent: fluentd

One advantage of cluster-level logging implemented with the ElasticSearch stack is that it is non-invasive to the Pods in the Kubernetes cluster: Pods need no cooperating changes whatsoever. The EFK/ELK approach is also relatively mature and stable in the industry.

In this post I will install ElasticSearch, Fluentd, and Kibana on our Kubernetes 1.3.7 cluster. Since 1.3.7 is a somewhat old version, whether EFK can actually run on it is an open question even to me. Whether we get a happy ending like Resident Evil: The Final Chapter remains to be seen; we will have to fight our way up, level by level.

I. What Slipped Through the Net in the Kubernetes 1.3.7 Cluster

The Kubernetes 1.3.7 cluster was set up and initialized with kube-up.sh. Following the official K8s documentation on elasticsearch logging, I found the following options in kubernetes/cluster/ubuntu/config-default.sh:

// kubernetes/cluster/ubuntu/config-default.sh
# Optional: Enable node logging.
ENABLE_NODE_LOGGING=false
LOGGING_DESTINATION=${LOGGING_DESTINATION:-elasticsearch}

# Optional: When set to true, Elasticsearch and Kibana will be setup as part of the cluster bring up.
ENABLE_CLUSTER_LOGGING=false
ELASTICSEARCH_LOGGING_REPLICAS=${ELASTICSEARCH_LOGGING_REPLICAS:-1}

Clearly, had I known what these options meant when the cluster was first built, elastic logging might have been integrated into the cluster back then. Now it is too late: the cluster is already running many applications, and I cannot interrupt it with kube-up.sh to reinstall with elastic logging enabled. Manual installation it is!
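
For the record, had we known in advance, integrating elastic logging would have amounted to flipping these switches in config-default.sh before running kube-up.sh (a sketch based on the options shown above):

// kubernetes/cluster/ubuntu/config-default.sh (before running kube-up.sh)
ENABLE_NODE_LOGGING=true
LOGGING_DESTINATION=elasticsearch
ENABLE_CLUSTER_LOGGING=true
ELASTICSEARCH_LOGGING_REPLICAS=1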

II. Preparing the Images

The manifests under kubernetes/cluster/addons/fluentd-elasticsearch in the 1.3.7 source are rather dated, so we use the manifest files from the latest Kubernetes source instead:

k8s.io/kubernetes/cluster/addons/fluentd-elasticsearch$ ls *.yaml
es-controller.yaml  es-service.yaml  fluentd-es-ds.yaml  kibana-controller.yaml  kibana-service.yaml

Analyzing these yaml files, we need three images:

 gcr.io/google_containers/fluentd-elasticsearch:1.22
 gcr.io/google_containers/elasticsearch:v2.4.1-1
 gcr.io/google_containers/kibana:v4.6.1-1

All of these images live behind the Great Firewall. Since the Docker engines in our production environment are not configured with an accelerator proxy, we have to fetch the three images by hand. My approach: use a Docker engine on another machine that does have an accelerator configured to pull the three images, re-tag them, and push them to my account on hub.docker.com. Taking elasticsearch:v2.4.1-1 as the example:

# docker pull  gcr.io/google_containers/elasticsearch:v2.4.1-1
# docker tag gcr.io/google_containers/elasticsearch:v2.4.1-1 bigwhite/elasticsearch:v2.4.1-1
# docker push bigwhite/elasticsearch:v2.4.1-1
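
The other two images are handled the same way; a small loop saves some typing (assuming you are logged in to your own hub.docker.com account, bigwhite in my case):

# for img in fluentd-elasticsearch:1.22 elasticsearch:v2.4.1-1 kibana:v4.6.1-1; do
>   docker pull gcr.io/google_containers/$img
>   docker tag gcr.io/google_containers/$img bigwhite/$img
>   docker push bigwhite/$img
> done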

These are the images we will actually use in the installation that follows:

bigwhite/fluentd-elasticsearch:1.22
bigwhite/elasticsearch:v2.4.1-1
bigwhite/kibana:v4.6.1-1

III. Starting fluentd

fluentd runs on the K8s cluster as a DaemonSet, so k8s guarantees that one fluentd instance is started on every cluster node (note: change the image field in the manifest to the addresses above; if your engine has an accelerator configured, that is of course unnecessary).

# kubectl create -f fluentd-es-ds.yaml --record
daemonset "fluentd-es-v1.22" created

Checking the startup status of the Pods in the DaemonSet, we find:

kube-system                  fluentd-es-v1.22-as3s5                  0/1       CrashLoopBackOff   2          43s       172.16.99.6    10.47.136.60
kube-system                  fluentd-es-v1.22-qz193                  0/1       CrashLoopBackOff   2          43s       172.16.57.7    10.46.181.146

The fluentd Pods fail to start. fluentd's own log can be viewed at /var/log/fluentd.log:

# tail -100f /var/log/fluentd.log

2017-03-02 02:27:01 +0000 [info]: reading config file path="/etc/td-agent/td-agent.conf"
2017-03-02 02:27:01 +0000 [info]: starting fluentd-0.12.31
2017-03-02 02:27:01 +0000 [info]: gem 'fluent-mixin-config-placeholders' version '0.4.0'
2017-03-02 02:27:01 +0000 [info]: gem 'fluent-mixin-plaintextformatter' version '0.2.6'
2017-03-02 02:27:01 +0000 [info]: gem 'fluent-plugin-docker_metadata_filter' version '0.1.3'
2017-03-02 02:27:01 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.5.0'
2017-03-02 02:27:01 +0000 [info]: gem 'fluent-plugin-kafka' version '0.4.1'
2017-03-02 02:27:01 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '0.24.0'
2017-03-02 02:27:01 +0000 [info]: gem 'fluent-plugin-mongo' version '0.7.16'
2017-03-02 02:27:01 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.5'
2017-03-02 02:27:01 +0000 [info]: gem 'fluent-plugin-s3' version '0.8.0'
2017-03-02 02:27:01 +0000 [info]: gem 'fluent-plugin-scribe' version '0.10.14'
2017-03-02 02:27:01 +0000 [info]: gem 'fluent-plugin-td' version '0.10.29'
2017-03-02 02:27:01 +0000 [info]: gem 'fluent-plugin-td-monitoring' version '0.2.2'
2017-03-02 02:27:01 +0000 [info]: gem 'fluent-plugin-webhdfs' version '0.4.2'
2017-03-02 02:27:01 +0000 [info]: gem 'fluentd' version '0.12.31'
2017-03-02 02:27:01 +0000 [info]: adding match pattern="fluent.**" type="null"
2017-03-02 02:27:01 +0000 [info]: adding filter pattern="kubernetes.**" type="kubernetes_metadata"
2017-03-02 02:27:02 +0000 [error]: config error file="/etc/td-agent/td-agent.conf" error="Invalid Kubernetes API v1 endpoint https://192.168.3.1:443/api: 401 Unauthorized"
2017-03-02 02:27:02 +0000 [info]: process finished code=256
2017-03-02 02:27:02 +0000 [warn]: process died within 1 second. exit.

The error in the log shows that fluentd failed to access the apiserver's secure port (443): Unauthorized! Analyzing cluster/addons/fluentd-elasticsearch/fluentd-es-image/build.sh and td-agent.conf reveals that it is the fluent-plugin-kubernetes_metadata_filter in the fluentd image that contacts the API Server to fetch some Kubernetes metadata. Without any special configuration, my guess is that fluent-plugin-kubernetes_metadata_filter derives the API Server address from the environment variables Kubernetes injects into every Pod, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT. But the API Server enforces authentication on its secure port, so a direct access from fluentd is bound to fail.

On the GitHub page of the fluent-plugin-kubernetes_metadata_filter project we find the plugin's other supported options, including ca_file, client_cert, and client_key: familiar-looking names indeed. We need to modify td-agent.conf in the fluentd image and add a few options for fluent-plugin-kubernetes_metadata_filter, for example:

// td-agent.conf
... ...
<filter kubernetes.**>
  type kubernetes_metadata
  ca_file /srv/kubernetes/ca.crt
  client_cert /srv/kubernetes/kubecfg.crt
  client_key /srv/kubernetes/kubecfg.key
</filter>
... ...

I did not want to rebuild the image, so what to do? Kubernetes offers a powerful weapon, the ConfigMap: we can package the new td-agent.conf as a ConfigMap resource and mount it into the fluentd pod at the appropriate path, replacing the default td-agent.conf baked into the image.

Two points deserve attention:
* Before creating the configmap resource from td-agent.conf, delete all comment lines from td-agent.conf, or the content of the generated configmap may be incorrect (a one-liner for this follows below);
* The fluentd pod will be created in kube-system, so the configmap resource must also be created in the kube-system namespace; otherwise kubectl create will not find the corresponding configmap.
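
A sed one-liner handles the comment stripping mentioned in the first point (a sketch; adjust the file path to wherever your td-agent.conf lives):

# sed -i '/^[[:space:]]*#/d' ./td-agent.conf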

# kubectl create configmap td-agent-config --from-file=./td-agent.conf -n kube-system
configmap "td-agent-config" created

# kubectl get configmaps -n kube-system
NAME              DATA      AGE
td-agent-config   1         9s

# kubectl get configmaps td-agent-config -o yaml
apiVersion: v1
data:
  td-agent.conf: |
    <match fluent.**>
      type null
    </match>

    <source>
      type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      tag kubernetes.*
      format json
      read_from_head true
    </source>
... ...

fluentd-es-ds.yaml needs corresponding changes as well, chiefly two extra mounts: one for the td-agent-config configmap above, and one for the hostPath /srv/kubernetes, which provides the relevant client-side certificates:

  spec:
      containers:
      - name: fluentd-es
        image: bigwhite/fluentd-elasticsearch:1.22
        command:
          - '/bin/sh'
          - '-c'
          - '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
        resources:
          limits:
            memory: 200Mi
          #requests:
            #cpu: 100m
            #memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: td-agent-config
          mountPath: /etc/td-agent
        - name: tls-files
          mountPath: /srv/kubernetes
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: td-agent-config
        configMap:
          name: td-agent-config
      - name: tls-files
        hostPath:
          path: /srv/kubernetes

Next, we recreate the fluentd DaemonSet; concretely, a minimal sketch (the DaemonSet name is taken from the earlier kubectl output):
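
# kubectl delete daemonset fluentd-es-v1.22 -n kube-system
# kubectl create -f fluentd-es-ds.yaml --record

This time the creation succeeds: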

kube-system                  fluentd-es-v1.22-adsrx                  1/1       Running    0          1s        172.16.99.6    10.47.136.60
kube-system                  fluentd-es-v1.22-rpme3                  1/1       Running    0          1s        172.16.57.7    10.46.181.146

But looking at /var/log/fluentd.log, we can still see a "problem":

2017-03-02 03:57:58 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2017-03-02 03:57:59 +0000 error_class="Fluent::ElasticsearchOutput::ConnectionFailure" error="Can not reach Elasticsearch cluster ({:host=>\"elasticsearch-logging\", :port=>9200, :scheme=>\"http\"})!" plugin_id="object:3fd99fa857d8"
  2017-03-02 03:57:58 +0000 [warn]: suppressed same stacktrace
2017-03-02 03:58:00 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2017-03-02 03:58:03 +0000 error_class="Fluent::ElasticsearchOutput::ConnectionFailure" error="Can not reach Elasticsearch cluster ({:host=>\"elasticsearch-logging\", :port=>9200, :scheme=>\"http\"})!" plugin_id="object:3fd99fa857d8"
2017-03-02 03:58:00 +0000 [info]: process finished code=9
2017-03-02 03:58:00 +0000 [error]: fluentd main process died unexpectedly. restarting.

Since the ElasticSearch logging backend has not been created yet, this is simply fluentd failing to connect to elasticsearch.

IV. Starting ElasticSearch

Start elasticsearch:

# kubectl create -f es-controller.yaml
replicationcontroller "elasticsearch-logging-v1" created

# kubectl create -f es-service.yaml
service "elasticsearch-logging" created

Checking the pods:

kube-system                  elasticsearch-logging-v1-3bzt6          1/1       Running    0          7s        172.16.57.8    10.46.181.146
kube-system                  elasticsearch-logging-v1-nvbe1          1/1       Running    0          7s        172.16.99.10   10.47.136.60

Once elastic search logging started successfully, the fluentd failure logs above disappeared!

But is elastic search really running OK? Let's check the logs of the elasticsearch Pods:

# kubectl logs -f elasticsearch-logging-v1-3bzt6 -n kube-system
F0302 03:59:41.036697       8 elasticsearch_logging_discovery.go:60] kube-system namespace doesn't exist: the server has asked for the client to provide credentials (get namespaces kube-system)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/github.com/golang/glog.stacks(0x19a8100, 0xc400000000, 0xc2, 0x186)
... ...
main.main()
    elasticsearch_logging_discovery.go:60 +0xb53

[2017-03-02 03:59:42,587][INFO ][node                     ] [elasticsearch-logging-v1-3bzt6] version[2.4.1], pid[16], build[c67dc32/2016-09-27T18:57:55Z]
[2017-03-02 03:59:42,588][INFO ][node                     ] [elasticsearch-logging-v1-3bzt6] initializing ...
[2017-03-02 03:59:44,396][INFO ][plugins                  ] [elasticsearch-logging-v1-3bzt6] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
... ...
[2017-03-02 03:59:44,441][INFO ][env                      ] [elasticsearch-logging-v1-3bzt6] heap size [1007.3mb], compressed ordinary object pointers [true]
[2017-03-02 03:59:48,355][INFO ][node                     ] [elasticsearch-logging-v1-3bzt6] initialized
[2017-03-02 03:59:48,355][INFO ][node                     ] [elasticsearch-logging-v1-3bzt6] starting ...
[2017-03-02 03:59:48,507][INFO ][transport                ] [elasticsearch-logging-v1-3bzt6] publish_address {172.16.57.8:9300}, bound_addresses {[::]:9300}
[2017-03-02 03:59:48,547][INFO ][discovery                ] [elasticsearch-logging-v1-3bzt6] kubernetes-logging/7_f_M2TKRZWOw4NhBc4EqA
[2017-03-02 04:00:18,552][WARN ][discovery                ] [elasticsearch-logging-v1-3bzt6] waited for 30s and no initial state was set by the discovery
[2017-03-02 04:00:18,562][INFO ][http                     ] [elasticsearch-logging-v1-3bzt6] publish_address {172.16.57.8:9200}, bound_addresses {[::]:9200}
[2017-03-02 04:00:18,562][INFO ][node                     ] [elasticsearch-logging-v1-3bzt6] started
[2017-03-02 04:01:15,754][WARN ][discovery.zen.ping.unicast] [elasticsearch-logging-v1-3bzt6] failed to send ping to [{#zen_unicast_1#}{127.0.0.1}{127.0.0.1:9300}]
SendRequestTransportException[[][127.0.0.1:9300][internal:discovery/zen/unicast]]; nested: NodeNotConnectedException[[][127.0.0.1:9300] Node not connected];
... ...
Caused by: NodeNotConnectedException[[][127.0.0.1:9300] Node not connected]
    at org.elasticsearch.transport.netty.NettyTransport.nodeChannel(NettyTransport.java:1141)
    at org.elasticsearch.transport.netty.NettyTransport.sendRequest(NettyTransport.java:830)
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:329)
    ... 12 more

To summarize, the log contains two errors:
- the API Server cannot be reached, seemingly the same problem fluentd had at first;
- the two elasticsearch nodes fail to ping each other.

To find the causes of these two problems, we have to go back to the source and analyze how the elastic search image is put together.

From the contents of cluster/addons/fluentd-elasticsearch/es-image/run.sh:

/elasticsearch_logging_discovery >> /elasticsearch/config/elasticsearch.yml

chown -R elasticsearch:elasticsearch /data

/bin/su -c /elasticsearch/bin/elasticsearch elasticsearch

we learn that the image actually contains two programs. One is /elasticsearch_logging_discovery, which, when run, generates a config file, /elasticsearch/config/elasticsearch.yml; that file is then consumed by the other program, /elasticsearch/bin/elasticsearch.

Let's look at the elasticsearch.yml inside the running docker container:

# docker exec 3cad31f6eb08 cat /elasticsearch/config/elasticsearch.yml
cluster.name: kubernetes-logging

node.name: ${NODE_NAME}
node.master: ${NODE_MASTER}
node.data: ${NODE_DATA}

transport.tcp.port: ${TRANSPORT_PORT}
http.port: ${HTTP_PORT}

path.data: /data

network.host: 0.0.0.0

discovery.zen.minimum_master_nodes: ${MINIMUM_MASTER_NODES}
discovery.zen.ping.multicast.enabled: false

One entry is missing from this output:

discovery.zen.ping.unicast.hosts: ["172.30.0.11", "172.30.192.15"]

and that missing entry is the cause of the second problem. In sum, both elasticsearch logging errors stem from /elasticsearch_logging_discovery failing to reach the API Server, so that /elasticsearch/config/elasticsearch.yml is not generated correctly. Let's solve that problem.

I read the source of /elasticsearch_logging_discovery: it is a typical program that accesses the API Server through client-go using a service account. Clearly this is exactly the problem I described in "Using a Service Account to Access the API Server from a Kubernetes Pod": the default service account does not work.

The fix: create a new service account resource in the kube-system namespace and explicitly reference it in es-controller.yaml.

Create the new serviceaccount in the kube-system namespace:

//serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-efk

# kubectl create -f serviceaccount.yaml -n kube-system
serviceaccount "k8s-efk" created

# kubectl get serviceaccount -n kube-system
NAME      SECRETS   AGE
default   1         139d
k8s-efk   1         17s

In es-controller.yaml, use the service account "k8s-efk":

//es-controller.yaml
... ...
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: k8s-efk
      containers:
... ...
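
Recreating the controller is then just a delete-and-create cycle (a sketch; the rc name comes from the earlier output):

# kubectl delete rc elasticsearch-logging-v1 -n kube-system
# kubectl create -f es-controller.yaml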

After recreating the elasticsearch logging service, we check the elasticsearch-logging pod logs again:

# kubectl logs -f elasticsearch-logging-v1-dklui -n kube-system
[2017-03-02 08:26:46,500][INFO ][node                     ] [elasticsearch-logging-v1-dklui] version[2.4.1], pid[14], build[c67dc32/2016-09-27T18:57:55Z]
[2017-03-02 08:26:46,504][INFO ][node                     ] [elasticsearch-logging-v1-dklui] initializing ...
[2017-03-02 08:26:47,984][INFO ][plugins                  ] [elasticsearch-logging-v1-dklui] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2017-03-02 08:26:48,073][INFO ][env                      ] [elasticsearch-logging-v1-dklui] using [1] data paths, mounts [[/data (/dev/vda1)]], net usable_space [16.9gb], net total_space [39.2gb], spins? [possibly], types [ext4]
[2017-03-02 08:26:48,073][INFO ][env                      ] [elasticsearch-logging-v1-dklui] heap size [1007.3mb], compressed ordinary object pointers [true]
[2017-03-02 08:26:53,241][INFO ][node                     ] [elasticsearch-logging-v1-dklui] initialized
[2017-03-02 08:26:53,241][INFO ][node                     ] [elasticsearch-logging-v1-dklui] starting ...
[2017-03-02 08:26:53,593][INFO ][transport                ] [elasticsearch-logging-v1-dklui] publish_address {172.16.57.8:9300}, bound_addresses {[::]:9300}
[2017-03-02 08:26:53,651][INFO ][discovery                ] [elasticsearch-logging-v1-dklui] kubernetes-logging/Ky_OuYqMRkm_918aHRtuLg
[2017-03-02 08:26:56,736][INFO ][cluster.service          ] [elasticsearch-logging-v1-dklui] new_master {elasticsearch-logging-v1-dklui}{Ky_OuYqMRkm_918aHRtuLg}{172.16.57.8}{172.16.57.8:9300}{master=true}, added {{elasticsearch-logging-v1-vjxm3}{cbzgrfZATyWkHfQYHZhs7Q}{172.16.99.10}{172.16.99.10:9300}{master=true},}, reason: zen-disco-join(elected_as_master, [1] joins received)
[2017-03-02 08:26:56,955][INFO ][http                     ] [elasticsearch-logging-v1-dklui] publish_address {172.16.57.8:9200}, bound_addresses {[::]:9200}
[2017-03-02 08:26:56,956][INFO ][node                     ] [elasticsearch-logging-v1-dklui] started
[2017-03-02 08:26:57,157][INFO ][gateway                  ] [elasticsearch-logging-v1-dklui] recovered [0] indices into cluster_state
[2017-03-02 08:27:05,378][INFO ][cluster.metadata         ] [elasticsearch-logging-v1-dklui] [logstash-2017.03.02] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2017-03-02 08:27:06,360][INFO ][cluster.metadata         ] [elasticsearch-logging-v1-dklui] [logstash-2017.03.01] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2017-03-02 08:27:07,163][INFO ][cluster.routing.allocation] [elasticsearch-logging-v1-dklui] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-2017.03.01][3], [logstash-2017.03.01][3]] ...]).
[2017-03-02 08:27:07,354][INFO ][cluster.metadata         ] [elasticsearch-logging-v1-dklui] [logstash-2017.03.02] create_mapping [fluentd]
[2017-03-02 08:27:07,988][INFO ][cluster.metadata         ] [elasticsearch-logging-v1-dklui] [logstash-2017.03.01] create_mapping [fluentd]
[2017-03-02 08:27:09,578][INFO ][cluster.routing.allocation] [elasticsearch-logging-v1-dklui] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2017.03.02][4]] ...]).

elasticsearch logging is up and running!
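
As an extra check, ElasticSearch's cluster health API should now report green (a sketch; run it from a pod or node that can resolve the elasticsearch-logging service name):

# curl 'http://elasticsearch-logging:9200/_cluster/health?pretty'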

V. Starting Kibana

Having learned our lesson with elasticsearch logging, this time we also explicitly assign the newly created serviceaccount k8s-efk in kibana-controller.yaml:

//kibana-controller.yaml
... ...
spec:
      serviceAccount: k8s-efk
      containers:
      - name: kibana-logging
        image: bigwhite/kibana:v4.6.1-1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          #requests:
          #  cpu: 100m
        env:
          - name: "ELASTICSEARCH_URL"
            value: "http://elasticsearch-logging:9200"
          - name: "KIBANA_BASE_URL"
            value: "/api/v1/proxy/namespaces/kube-system/services/kibana-logging"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
... ...

Start kibana and watch the pod log:

# kubectl create -f kibana-controller.yaml
# kubectl create -f kibana-service.yaml
# kubectl logs -f kibana-logging-3604961973-jby53 -n kube-system
ELASTICSEARCH_URL=http://elasticsearch-logging:9200
server.basePath: /api/v1/proxy/namespaces/kube-system/services/kibana-logging
{"type":"log","@timestamp":"2017-03-02T08:30:15Z","tags":["info","optimize"],"pid":6,"message":"Optimizing and caching bundles for kibana and statusPage. This may take a few minutes"}

Kibana's bundle optimization and caching genuinely takes a while, possibly several minutes, so please be patient! Afterwards you will see logs like the following:

# kubectl logs -f kibana-logging-3604961973-jby53 -n kube-system
ELASTICSEARCH_URL=http://elasticsearch-logging:9200
server.basePath: /api/v1/proxy/namespaces/kube-system/services/kibana-logging
{"type":"log","@timestamp":"2017-03-02T08:30:15Z","tags":["info","optimize"],"pid":6,"message":"Optimizing and caching bundles for kibana and statusPage. This may take a few minutes"}
{"type":"log","@timestamp":"2017-03-02T08:40:04Z","tags":["info","optimize"],"pid":6,"message":"Optimization of bundles for kibana and statusPage complete in 588.60 seconds"}
{"type":"log","@timestamp":"2017-03-02T08:40:04Z","tags":["status","plugin:kibana@1.0.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-03-02T08:40:05Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":6,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-03-02T08:40:05Z","tags":["status","plugin:kbn_vislib_vis_types@1.0.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-03-02T08:40:05Z","tags":["status","plugin:markdown_vis@1.0.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-03-02T08:40:05Z","tags":["status","plugin:metric_vis@1.0.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-03-02T08:40:06Z","tags":["status","plugin:spyModes@1.0.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-03-02T08:40:06Z","tags":["status","plugin:statusPage@1.0.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-03-02T08:40:06Z","tags":["status","plugin:table_vis@1.0.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-03-02T08:40:06Z","tags":["listening","info"],"pid":6,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2017-03-02T08:40:11Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":6,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2017-03-02T08:40:14Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}

Next, you can reach Kibana's web UI in a browser at the address below. Note that the Kibana web page also takes a while to load.

https://{API Server external IP}:{API Server secure port}/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana#/settings/indices/

Below is the page for creating an index (roughly the equivalent of a database in MySQL):

img{512x368}

Untick "Index contains time-based events," then click "Create" to create an index.

Click "Settings" -> "Status" on the page to see the overall status of elasticsearch logging; if everything is fine, you will see a page like this:

img{512x368}

After creating the index, the logs aggregated in ElasticSearch logging show up under Discover:

img{512x368}

VI. Summary

That is the whole process of installing Fluentd and the ElasticSearch stack on a Kubernetes 1.3.7 cluster to achieve cluster-level logging. Installing the same stack on a Kubernetes 1.5.1 environment set up with kubeadm hits essentially none of the problems above.

Also note that ElasticSearch logging mounts an emptyDir volume by default, which is fine for experiments; for production it must be replaced with a Persistent Volume, for example Ceph RBD.
