<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Setting Up a Highly Available Kubernetes Cluster with Kubeadm, Part 1</title>
	<atom:link href="http://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part1/feed/" rel="self" type="application/rss+xml" />
	<link>https://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part1/</link>
	<description>A programmer's journey</description>
	<lastBuildDate>Wed, 25 Mar 2026 09:21:20 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.2.1</generator>
	<item>
		<title>By: weqiu</title>
		<link>https://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part1/#comment-7069</link>
		<dc:creator>weqiu</dc:creator>
		<pubDate>Thu, 04 Jan 2018 11:11:10 +0000</pubDate>
		<guid isPermaLink="false">http://tonybai.com/?p=2315#comment-7069</guid>
		<description>I have already worked through this problem.
v1.7.x turns on security admission controls such as NodeRestriction; just set admission-control back to the configuration recommended for v1.6.x:
vi /etc/kubernetes/manifests/kube-apiserver.yaml
#    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds</description>
		<content:encoded><![CDATA[<p>I have already worked through this problem.<br />
v1.7.x turns on security admission controls such as NodeRestriction; just set admission-control back to the configuration recommended for v1.6.x:<br />
vi /etc/kubernetes/manifests/kube-apiserver.yaml<br />
#    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota<br />
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: weqiu</title>
		<link>https://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part1/#comment-7068</link>
		<dc:creator>weqiu</dc:creator>
		<pubDate>Thu, 04 Jan 2018 11:08:36 +0000</pubDate>
		<guid isPermaLink="false">http://tonybai.com/?p=2315#comment-7068</guid>
		<description>[root@node131 net.d]# docker exec -it 98bc06acd262  ash
/ # etcdctl make-mirror --no-dest-prefix=true  127.0.0.1:2379  --endpoints=192.168.1.56:2379 --insecure-skip-tls-verify=true
No help topic for &#039;make-mirror&#039;
/ # etcdctl --version
etcdctl version: 3.0.17
API version: 2
/ # etcdctl help
NAME:
   etcdctl - A simple command line client for etcd.

USAGE:
   etcdctl [global options] command [command options] [arguments...]
   
VERSION:
   3.0.17
   
COMMANDS:
     backup          backup an etcd directory
     cluster-health  check the health of the etcd cluster
     mk              make a new key with a given value
     mkdir           make a new directory
     rm              remove a key or a directory
     rmdir           removes the key if it is an empty directory or a key-value pair
     get             retrieve the value of a key
     ls              retrieve a directory
     set             set the value of a key
     setdir          create a new directory or update an existing directory TTL
     update          update an existing key with a given value
     updatedir       update an existing directory
     watch           watch a key for changes
     exec-watch      watch a key for changes and exec an executable
     member          member add, remove and list subcommands
     import          import a snapshot to a cluster
     user            user add, grant and revoke subcommands
     role            role add, grant and revoke subcommands
     auth            overall auth controls
My etcdctl is on API v2. How can I switch it to v3 inside the container? Any advice would be appreciated.</description>
		<content:encoded><![CDATA[<p>[root@node131 net.d]# docker exec -it 98bc06acd262  ash<br />
/ # etcdctl make-mirror --no-dest-prefix=true  127.0.0.1:2379  --endpoints=192.168.1.56:2379 --insecure-skip-tls-verify=true<br />
No help topic for 'make-mirror'<br />
/ # etcdctl --version<br />
etcdctl version: 3.0.17<br />
API version: 2<br />
/ # etcdctl help<br />
NAME:<br />
   etcdctl - A simple command line client for etcd.</p>
<p>USAGE:<br />
   etcdctl [global options] command [command options] [arguments...]</p>
<p>VERSION:<br />
   3.0.17</p>
<p>COMMANDS:<br />
     backup          backup an etcd directory<br />
     cluster-health  check the health of the etcd cluster<br />
     mk              make a new key with a given value<br />
     mkdir           make a new directory<br />
     rm              remove a key or a directory<br />
     rmdir           removes the key if it is an empty directory or a key-value pair<br />
     get             retrieve the value of a key<br />
     ls              retrieve a directory<br />
     set             set the value of a key<br />
     setdir          create a new directory or update an existing directory TTL<br />
     update          update an existing key with a given value<br />
     updatedir       update an existing directory<br />
     watch           watch a key for changes<br />
     exec-watch      watch a key for changes and exec an executable<br />
     member          member add, remove and list subcommands<br />
     import          import a snapshot to a cluster<br />
     user            user add, grant and revoke subcommands<br />
     role            role add, grant and revoke subcommands<br />
     auth            overall auth controls<br />
My etcdctl is on API v2. How can I switch it to v3 inside the container? Any advice would be appreciated.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: bigwhite</title>
		<link>https://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part1/#comment-7039</link>
		<dc:creator>bigwhite</dc:creator>
		<pubDate>Thu, 14 Dec 2017 05:56:15 +0000</pubDate>
		<guid isPermaLink="false">http://tonybai.com/?p=2315#comment-7039</guid>
		<description>That only changed the IP address used for the connection, so in theory it shouldn't break anything. Note that this step exists so you can sync the etcd data; afterwards, kube-apiserver will also need to connect to the newly created etcd cluster.</description>
		<content:encoded><![CDATA[<p>That only changed the IP address used for the connection, so in theory it shouldn't break anything. Note that this step exists so you can sync the etcd data; afterwards, kube-apiserver will also need to connect to the newly created etcd cluster.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: 朱志扬</title>
		<link>https://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part1/#comment-7038</link>
		<dc:creator>朱志扬</dc:creator>
		<pubDate>Thu, 14 Dec 2017 05:40:20 +0000</pubDate>
		<guid isPermaLink="false">http://tonybai.com/?p=2315#comment-7038</guid>
		<description>With this change it does start, but the pods keep restarting:
[root@node1 manifests]# kubectl get pods -n kube-system
NAME                            READY     STATUS    RESTARTS   AGE
etcd-node1                      1/1       Running   5          5m
etcd-node2                      1/1       Running   6          15m
etcd-node3                      1/1       Running   0          12m
kube-apiserver-node1            1/1       Running   0          5m
kube-controller-manager-node1   1/1       Running   1          1h
kube-dns-3913472980-fb545       3/3       Running   0          1h
kube-proxy-1799f                1/1       Running   0          1h
kube-proxy-8413q                1/1       Running   0          16m
kube-proxy-nmjds                1/1       Running   0          12m
kube-scheduler-node1            1/1       Running   1          1h
weave-net-5nh94                 2/2       Running   0          1h
weave-net-bd2lj                 2/2       Running   0          16m
weave-net-q6ktt                 2/2       Running   0          12m
Also, this command takes a long time to return, about a minute. Very slow.</description>
		<content:encoded><![CDATA[<p>With this change it does start, but the pods keep restarting:<br />
[root@node1 manifests]# kubectl get pods -n kube-system<br />
NAME                            READY     STATUS    RESTARTS   AGE<br />
etcd-node1                      1/1       Running   5          5m<br />
etcd-node2                      1/1       Running   6          15m<br />
etcd-node3                      1/1       Running   0          12m<br />
kube-apiserver-node1            1/1       Running   0          5m<br />
kube-controller-manager-node1   1/1       Running   1          1h<br />
kube-dns-3913472980-fb545       3/3       Running   0          1h<br />
kube-proxy-1799f                1/1       Running   0          1h<br />
kube-proxy-8413q                1/1       Running   0          16m<br />
kube-proxy-nmjds                1/1       Running   0          12m<br />
kube-scheduler-node1            1/1       Running   1          1h<br />
weave-net-5nh94                 2/2       Running   0          1h<br />
weave-net-bd2lj                 2/2       Running   0          16m<br />
weave-net-q6ktt                 2/2       Running   0          12m<br />
Also, this command takes a long time to return, about a minute. Very slow.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: bigwhite</title>
		<link>https://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part1/#comment-7037</link>
		<dc:creator>bigwhite</dc:creator>
		<pubDate>Thu, 14 Dec 2017 04:42:26 +0000</pubDate>
		<guid isPermaLink="false">http://tonybai.com/?p=2315#comment-7037</guid>
		<description>My guess: kube-apiserver was accessing etcd at 127.0.0.1:2379, and after the change that connection fails. Try changing the apiserver's --etcd-servers setting to 192.168.186.111:2379.</description>
		<content:encoded><![CDATA[<p>My guess: kube-apiserver was accessing etcd at 127.0.0.1:2379, and after the change that connection fails. Try changing the apiserver's --etcd-servers setting to 192.168.186.111:2379.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: 朱志扬</title>
		<link>https://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part1/#comment-7036</link>
		<dc:creator>朱志扬</dc:creator>
		<pubDate>Thu, 14 Dec 2017 03:33:07 +0000</pubDate>
		<guid isPermaLink="false">http://tonybai.com/?p=2315#comment-7036</guid>
		<description>After I changed it to the local IP, the whole cluster fails to come up:
spec:
  containers:
  - command:
    - etcd
    - --listen-client-urls=http://192.168.186.111:2379
    - --advertise-client-urls=http://192.168.186.111:2379
    - --data-dir=/var/lib/etcd
    image: gcr.io/google_containers/etcd-amd64:3.0.17
    livenessProbe:
      failureThreshold: 8
      httpGet:
[root@node1 kubernetes]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Thu 2017-12-14 11:22:16 CST; 47s ago
     Docs: http://kubernetes.io/docs/
 Main PID: 117270 (kubelet)
   Memory: 27.2M
   CGroup: /system.slice/kubelet.service
           ├─117270 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-p...
           └─117293 journalctl -k -f

Dec 14 11:22:47 node1 kubelet[117270]: W1214 11:22:47.277864  117270 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 14 11:22:47 node1 kubelet[117270]: E1214 11:22:47.277953  117270 kubelet.go:2067] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady mes...ninitialized
Dec 14 11:22:47 node1 kubelet[117270]: I1214 11:22:47.281257  117270 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Dec 14 11:22:47 node1 kubelet[117270]: I1214 11:22:47.282871  117270 kubelet_node_status.go:77] Attempting to register node node1
Dec 14 11:22:52 node1 kubelet[117270]: W1214 11:22:52.279739  117270 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 14 11:22:52 node1 kubelet[117270]: E1214 11:22:52.280011  117270 kubelet.go:2067] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady mes...ninitialized
Dec 14 11:22:57 node1 kubelet[117270]: W1214 11:22:57.282005  117270 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 14 11:22:57 node1 kubelet[117270]: E1214 11:22:57.282151  117270 kubelet.go:2067] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady mes...ninitialized
Dec 14 11:23:02 node1 kubelet[117270]: W1214 11:23:02.283695  117270 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 14 11:23:02 node1 kubelet[117270]: E1214 11:23:02.283785  117270 kubelet.go:2067] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady mes...ninitialized
Hint: Some lines were ellipsized, use -l to show in full.
[root@node1 kubernetes]# kubectl get pods -n kube-system -o wide
Error from server (ServerTimeout): the server cannot complete the requested operation at this time, try again later (get pods)
I checked the etcd error log:
[root@node1 kubernetes]# docker logs 70fe
2017-12-14 03:27:54.511300 I &#124; etcdmain: etcd Version: 3.0.17
2017-12-14 03:27:54.511376 I &#124; etcdmain: Git SHA: cc198e2
2017-12-14 03:27:54.511380 I &#124; etcdmain: Go Version: go1.6.4
2017-12-14 03:27:54.511382 I &#124; etcdmain: Go OS/Arch: linux/amd64
2017-12-14 03:27:54.511386 I &#124; etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
2017-12-14 03:27:54.511676 I &#124; etcdmain: listening for peers on http://localhost:2380
2017-12-14 03:27:54.511714 I &#124; etcdmain: listening for client requests on 192.168.186.111:2379
2017-12-14 03:27:54.528431 I &#124; etcdserver: name = default
2017-12-14 03:27:54.528514 I &#124; etcdserver: data dir = /var/lib/etcd
2017-12-14 03:27:54.528527 I &#124; etcdserver: member dir = /var/lib/etcd/member
2017-12-14 03:27:54.528534 I &#124; etcdserver: heartbeat = 100ms
2017-12-14 03:27:54.528540 I &#124; etcdserver: election = 1000ms
2017-12-14 03:27:54.528546 I &#124; etcdserver: snapshot count = 10000
2017-12-14 03:27:54.528564 I &#124; etcdserver: advertise client URLs = http://192.168.186.111:2379
2017-12-14 03:27:54.528600 I &#124; etcdserver: initial advertise peer URLs = http://localhost:2380
2017-12-14 03:27:54.528610 I &#124; etcdserver: initial cluster = default=http://localhost:2380
2017-12-14 03:27:54.539365 I &#124; etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
2017-12-14 03:27:54.539408 I &#124; raft: 8e9e05c52164694d became follower at term 0
2017-12-14 03:27:54.539422 I &#124; raft: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2017-12-14 03:27:54.539425 I &#124; raft: 8e9e05c52164694d became follower at term 1
2017-12-14 03:27:54.602614 I &#124; etcdserver: starting server... [version: 3.0.17, cluster version: to_be_decided]
2017-12-14 03:27:54.603825 I &#124; membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
2017-12-14 03:27:54.741124 I &#124; raft: 8e9e05c52164694d is starting a new election at term 1
2017-12-14 03:27:54.741270 I &#124; raft: 8e9e05c52164694d became candidate at term 2
2017-12-14 03:27:54.741314 I &#124; raft: 8e9e05c52164694d received vote from 8e9e05c52164694d at term 2
2017-12-14 03:27:54.741407 I &#124; raft: 8e9e05c52164694d became leader at term 2
2017-12-14 03:27:54.741443 I &#124; raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
2017-12-14 03:27:54.743118 I &#124; etcdserver: setting up the initial cluster version to 3.0
2017-12-14 03:27:54.745342 N &#124; membership: set the initial cluster version to 3.0
2017-12-14 03:27:54.745447 I &#124; api: enabled capabilities for version 3.0
2017-12-14 03:27:54.745778 I &#124; etcdserver: published {Name:default ClientURLs:[http://192.168.186.111:2379]} to cluster cdf818194e3a8c32
2017-12-14 03:27:54.745808 I &#124; etcdmain: ready to serve client requests
2017-12-14 03:27:54.747113 N &#124; etcdmain: serving insecure client requests on 192.168.186.111:2379, this is strongly discouraged!
2017-12-14 03:29:25.151993 N &#124; osutil: received terminated signal, shutting down...
[root@node1 kubernetes]#</description>
		<content:encoded><![CDATA[<p>After I changed it to the local IP, the whole cluster fails to come up:<br />
spec:<br />
  containers:<br />
  - command:<br />
    - etcd<br />
    - --listen-client-urls=http://192.168.186.111:2379<br />
    - --advertise-client-urls=http://192.168.186.111:2379<br />
    - --data-dir=/var/lib/etcd<br />
    image: gcr.io/google_containers/etcd-amd64:3.0.17<br />
    livenessProbe:<br />
      failureThreshold: 8<br />
      httpGet:<br />
[root@node1 kubernetes]# systemctl status kubelet<br />
● kubelet.service - kubelet: The Kubernetes Node Agent<br />
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)<br />
  Drop-In: /etc/systemd/system/kubelet.service.d<br />
           └─10-kubeadm.conf<br />
   Active: active (running) since Thu 2017-12-14 11:22:16 CST; 47s ago<br />
     Docs: <a href="http://kubernetes.io/docs/" rel="nofollow">http://kubernetes.io/docs/</a><br />
 Main PID: 117270 (kubelet)<br />
   Memory: 27.2M<br />
   CGroup: /system.slice/kubelet.service<br />
           ├─117270 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-p...<br />
           └─117293 journalctl -k -f</p>
<p>Dec 14 11:22:47 node1 kubelet[117270]: W1214 11:22:47.277864  117270 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d<br />
Dec 14 11:22:47 node1 kubelet[117270]: E1214 11:22:47.277953  117270 kubelet.go:2067] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady mes&#8230;ninitialized<br />
Dec 14 11:22:47 node1 kubelet[117270]: I1214 11:22:47.281257  117270 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach<br />
Dec 14 11:22:47 node1 kubelet[117270]: I1214 11:22:47.282871  117270 kubelet_node_status.go:77] Attempting to register node node1<br />
Dec 14 11:22:52 node1 kubelet[117270]: W1214 11:22:52.279739  117270 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d<br />
Dec 14 11:22:52 node1 kubelet[117270]: E1214 11:22:52.280011  117270 kubelet.go:2067] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady mes&#8230;ninitialized<br />
Dec 14 11:22:57 node1 kubelet[117270]: W1214 11:22:57.282005  117270 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d<br />
Dec 14 11:22:57 node1 kubelet[117270]: E1214 11:22:57.282151  117270 kubelet.go:2067] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady mes&#8230;ninitialized<br />
Dec 14 11:23:02 node1 kubelet[117270]: W1214 11:23:02.283695  117270 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d<br />
Dec 14 11:23:02 node1 kubelet[117270]: E1214 11:23:02.283785  117270 kubelet.go:2067] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady mes&#8230;ninitialized<br />
Hint: Some lines were ellipsized, use -l to show in full.<br />
[root@node1 kubernetes]# kubectl get pods -n kube-system -o wide<br />
Error from server (ServerTimeout): the server cannot complete the requested operation at this time, try again later (get pods)<br />
I checked the etcd error log:<br />
[root@node1 kubernetes]# docker logs 70fe<br />
2017-12-14 03:27:54.511300 I | etcdmain: etcd Version: 3.0.17<br />
2017-12-14 03:27:54.511376 I | etcdmain: Git SHA: cc198e2<br />
2017-12-14 03:27:54.511380 I | etcdmain: Go Version: go1.6.4<br />
2017-12-14 03:27:54.511382 I | etcdmain: Go OS/Arch: linux/amd64<br />
2017-12-14 03:27:54.511386 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1<br />
2017-12-14 03:27:54.511676 I | etcdmain: listening for peers on <a href="http://localhost:2380" rel="nofollow">http://localhost:2380</a><br />
2017-12-14 03:27:54.511714 I | etcdmain: listening for client requests on 192.168.186.111:2379<br />
2017-12-14 03:27:54.528431 I | etcdserver: name = default<br />
2017-12-14 03:27:54.528514 I | etcdserver: data dir = /var/lib/etcd<br />
2017-12-14 03:27:54.528527 I | etcdserver: member dir = /var/lib/etcd/member<br />
2017-12-14 03:27:54.528534 I | etcdserver: heartbeat = 100ms<br />
2017-12-14 03:27:54.528540 I | etcdserver: election = 1000ms<br />
2017-12-14 03:27:54.528546 I | etcdserver: snapshot count = 10000<br />
2017-12-14 03:27:54.528564 I | etcdserver: advertise client URLs = <a href="http://192.168.186.111:2379" rel="nofollow">http://192.168.186.111:2379</a><br />
2017-12-14 03:27:54.528600 I | etcdserver: initial advertise peer URLs = <a href="http://localhost:2380" rel="nofollow">http://localhost:2380</a><br />
2017-12-14 03:27:54.528610 I | etcdserver: initial cluster = default=http://localhost:2380<br />
2017-12-14 03:27:54.539365 I | etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32<br />
2017-12-14 03:27:54.539408 I | raft: 8e9e05c52164694d became follower at term 0<br />
2017-12-14 03:27:54.539422 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]<br />
2017-12-14 03:27:54.539425 I | raft: 8e9e05c52164694d became follower at term 1<br />
2017-12-14 03:27:54.602614 I | etcdserver: starting server&#8230; [version: 3.0.17, cluster version: to_be_decided]<br />
2017-12-14 03:27:54.603825 I | membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32<br />
2017-12-14 03:27:54.741124 I | raft: 8e9e05c52164694d is starting a new election at term 1<br />
2017-12-14 03:27:54.741270 I | raft: 8e9e05c52164694d became candidate at term 2<br />
2017-12-14 03:27:54.741314 I | raft: 8e9e05c52164694d received vote from 8e9e05c52164694d at term 2<br />
2017-12-14 03:27:54.741407 I | raft: 8e9e05c52164694d became leader at term 2<br />
2017-12-14 03:27:54.741443 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2<br />
2017-12-14 03:27:54.743118 I | etcdserver: setting up the initial cluster version to 3.0<br />
2017-12-14 03:27:54.745342 N | membership: set the initial cluster version to 3.0<br />
2017-12-14 03:27:54.745447 I | api: enabled capabilities for version 3.0<br />
2017-12-14 03:27:54.745778 I | etcdserver: published {Name:default ClientURLs:[http://192.168.186.111:2379]} to cluster cdf818194e3a8c32<br />
2017-12-14 03:27:54.745808 I | etcdmain: ready to serve client requests<br />
2017-12-14 03:27:54.747113 N | etcdmain: serving insecure client requests on 192.168.186.111:2379, this is strongly discouraged!<br />
2017-12-14 03:29:25.151993 N | osutil: received terminated signal, shutting down&#8230;<br />
[root@node1 kubernetes]#</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: bigwhite</title>
		<link>https://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part1/#comment-7033</link>
		<dc:creator>bigwhite</dc:creator>
		<pubDate>Wed, 13 Dec 2017 10:36:54 +0000</pubDate>
		<guid isPermaLink="false">http://tonybai.com/?p=2315#comment-7033</guid>
		<description>timeout: clearly etcdctl cannot reach the etcd on the master. On the master, open /etc/kubernetes/manifests/etcd.yaml and check whether listen-client-urls and advertise-client-urls are both 127.0.0.1:2379. Change those two IPs to the master's local IP. Once you save the change, kubelet will restart etcd on the master. Then sync the master's etcd data from one of the slave nodes.</description>
		<content:encoded><![CDATA[<p>timeout: clearly etcdctl cannot reach the etcd on the master. On the master, open /etc/kubernetes/manifests/etcd.yaml and check whether listen-client-urls and advertise-client-urls are both 127.0.0.1:2379. Change those two IPs to the master's local IP. Once you save the change, kubelet will restart etcd on the master. Then sync the master's etcd data from one of the slave nodes.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: 朱志扬</title>
		<link>https://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part1/#comment-7030</link>
		<dc:creator>朱志扬</dc:creator>
		<pubDate>Wed, 13 Dec 2017 08:18:30 +0000</pubDate>
		<guid isPermaLink="false">http://tonybai.com/?p=2315#comment-7030</guid>
		<description>My master is 192.168.186.111 and the nodes are 192.168.186.112 and 192.168.186.113.
Running the following on 186.112 and 186.113 fails:
etcdctl make-mirror --no-dest-prefix=false  127.0.0.1:2379  --endpoints=192.168.186.111:2379 --insecure-skip-tls-verify=true
Error:  grpc: timed out when dialing
In other words, trying to sync the etcd data on 186.111 to the cluster errors out, yet running the command on 186.111 itself works, i.e. the cluster's data can be synced to 186.111.</description>
		<content:encoded><![CDATA[<p>My master is 192.168.186.111 and the nodes are 192.168.186.112 and 192.168.186.113.<br />
Running the following on 186.112 and 186.113 fails:<br />
etcdctl make-mirror --no-dest-prefix=false  127.0.0.1:2379  --endpoints=192.168.186.111:2379 --insecure-skip-tls-verify=true<br />
Error:  grpc: timed out when dialing<br />
In other words, trying to sync the etcd data on 186.111 to the cluster errors out, yet running the command on 186.111 itself works, i.e. the cluster's data can be synced to 186.111.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: 朱志扬</title>
		<link>https://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part1/#comment-7029</link>
		<dc:creator>朱志扬</dc:creator>
		<pubDate>Wed, 13 Dec 2017 08:14:45 +0000</pubDate>
		<guid isPermaLink="false">http://tonybai.com/?p=2315#comment-7029</guid>
		<description>etcdctl make-mirror --no-dest-prefix=false  127.0.0.1:2379  --endpoints=192.168.186.111:2379 --insecure-skip-tls-verify=true
Error:  grpc: timed out when dialing</description>
		<content:encoded><![CDATA[<p>etcdctl make-mirror --no-dest-prefix=false  127.0.0.1:2379  --endpoints=192.168.186.111:2379 --insecure-skip-tls-verify=true<br />
Error:  grpc: timed out when dialing</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: bigwhite</title>
		<link>https://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part1/#comment-7001</link>
		<dc:creator>bigwhite</dc:creator>
		<pubDate>Tue, 28 Nov 2017 09:05:15 +0000</pubDate>
		<guid isPermaLink="false">http://tonybai.com/?p=2315#comment-7001</guid>
		<description>I have never used a cloud provider.</description>
		<content:encoded><![CDATA[<p>I have never used a cloud provider.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
