
High-availability deployment of k8s 1.18.18

Kubernetes upgrade series



1: Introduction to k8s high availability

1.1 The multi-master k8s architecture

  1. High-availability architecture (scaling out to multiple Masters).
  2. As a container cluster system, Kubernetes already provides application-level high availability: health checks plus restart policies give Pods self-healing, the scheduler spreads Pods across Nodes while keeping the desired replica count, and Pods are automatically re-created on other Nodes when a Node fails.
  3. For the Kubernetes cluster itself, high availability involves two further layers: the Etcd database and the Kubernetes Master components. Etcd is already highly available because it runs as a three-node cluster; this article covers making the Master nodes highly available.
  4. The Master node is the control center of the cluster: it keeps the whole cluster healthy by constantly communicating with the Kubelet on every worker node. If the Master fails, no cluster management is possible through kubectl or the API.
  5. A Master node runs three main services: kube-apiserver, kube-controller-manager and kube-scheduler. kube-controller-manager and kube-scheduler already achieve high availability through their built-in leader election (see the sketch after this list), so Master high availability really comes down to kube-apiserver. Since kube-apiserver serves an HTTP API, making it highly available works much like any web service: put a load balancer in front of it, and it can also be scaled out horizontally.
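
A quick way to confirm that leader election is in effect is to look at where the lock is recorded. Depending on the --leader-elect-resource-lock setting, Kubernetes 1.18 stores the current holder in an annotation on an Endpoints object and/or in a Lease object in kube-system; the commands below are only a minimal sketch and assume kubectl already points at this cluster:

```bash
# Which master currently holds the kube-controller-manager / kube-scheduler lock?
# The holder appears as "holderIdentity" in the leader annotation or the Lease.
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
kubectl -n kube-system get lease kube-controller-manager kube-scheduler 2>/dev/null
```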

1.2 Multi-master architecture diagram

(Figure: multi-master k8s architecture)

2: Deployment steps

  1. Continuing from the previous article.
  2. Copy files (performed on Master1): copy all the K8s files and the etcd certificates from Master1 to Master2 (a loop that pushes them to both new masters is sketched after this block):

```bash
scp -P36022 -r /data/application/kubernetes root@192.168.3.172:/data/application/
scp -P36022 -r /opt/cni/ root@192.168.3.172:/opt
scp -P36022 /usr/lib/systemd/system/kube* root@192.168.3.172:/usr/lib/systemd/system
scp -P36022 /usr/bin/kubectl root@192.168.3.172:/usr/bin
```
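
Since Master3 needs exactly the same files (see below), the copies can also be done in one pass; this is a minimal sketch that assumes both new masters (192.168.3.172 and 192.168.3.173) accept root SSH on port 36022:

```bash
# Push Master1's binaries, configs and etcd certificates to both new masters.
for host in 192.168.3.172 192.168.3.173; do
  scp -P36022 -r /data/application/kubernetes root@${host}:/data/application/
  scp -P36022 -r /opt/cni/ root@${host}:/opt
  scp -P36022 /usr/lib/systemd/system/kube* root@${host}:/usr/lib/systemd/system
  scp -P36022 /usr/bin/kubectl root@${host}:/usr/bin
done
```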


  1. Modify the IP address and hostname in the configuration files (a scripted version of this edit is sketched after this block).
  2. Change the apiserver, kubelet and kube-proxy configuration files to use the local IP:

```bash
vim /data/application/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=192.168.3.172 \
--advertise-address=192.168.3.172 \
...
```
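
The same change can be made without opening an editor; this is a minimal sketch that assumes Master1's address is 192.168.3.171 (as used elsewhere in this article) and only touches kube-apiserver.conf:

```bash
# On Master2: bind/advertise the local address instead of Master1's.
sed -i 's#192.168.3.171#192.168.3.172#g' /data/application/kubernetes/cfg/kube-apiserver.conf
# If kubelet/kube-proxy carry a hostname override, it must name this host as well.
grep -rn 'hostname-override' /data/application/kubernetes/cfg/
```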

  1. Start the services (enabling them at boot and a quick local check are sketched after this block):

```bash
systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
```
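
It is also worth enabling the units at boot and confirming that the local apiserver answers; a minimal sketch (curl -k because the apiserver certificate is not in the system trust store):

```bash
# Enable the control-plane services at boot and check that they are running.
systemctl enable kube-apiserver kube-controller-manager kube-scheduler
systemctl is-active kube-apiserver kube-controller-manager kube-scheduler
# The local kube-apiserver should answer on 6443.
curl -k https://192.168.3.172:6443/version
```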



  1. Repeat the same configuration on Master3:

```bash
scp -P36022 -r /data/application/kubernetes root@192.168.3.173:/data/application/
scp -P36022 -r /opt/cni/ root@192.168.3.173:/opt
scp -P36022 /usr/lib/systemd/system/kube* root@192.168.3.173:/usr/lib/systemd/system
scp -P36022 /usr/bin/kubectl root@192.168.3.173:/usr/bin
```


  1. On Master3, change kube-apiserver.conf to the local address:

```bash
vim /data/application/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=192.168.3.173 \
--advertise-address=192.168.3.173 \
...
```

  1. Start the services (a cross-check of all three apiservers is sketched after this block):

```bash
systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
```
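
All three masters should now serve the API. A minimal sketch that queries each one directly (addresses as used throughout this article; -k because of the self-signed certificate):

```bash
# Every kube-apiserver should report the same gitVersion (v1.18.18 here).
for ip in 192.168.3.171 192.168.3.172 192.168.3.173; do
  echo "== ${ip} =="
  curl -sk "https://${ip}:6443/version" | grep gitVersion
done
```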



3: Deploying the nginx load balancer

  1. kube-apiserver high-availability architecture diagram:

(Figure: kube-apiserver high-availability architecture)


  1. Only a single nginx instance is used as the load balancer here (installing nginx with the stream module is sketched after this list).
  2. The nginx server's IP address is 192.168.3.201; keepalived is not configured, so there is no floating VIP.
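
The configuration below relies on nginx's stream module for layer-4 (TCP) proxying, so that module has to be present. A minimal sketch, assuming a CentOS/RHEL host where nginx and the nginx-mod-stream package come from the distribution/EPEL repositories (package names differ on other systems):

```bash
# Install nginx plus the dynamic stream module and confirm it is available.
yum install -y nginx nginx-mod-stream
nginx -V 2>&1 | grep -o with-stream   # built-in or dynamic stream support
ls /usr/share/nginx/modules/          # mod-stream.conf is picked up by the include below
```
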
  1. Write /etc/nginx/nginx.conf (the stream block does the TCP load balancing for the apiservers):

```bash
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the Master kube-apiserver instances
stream {

    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.3.171:6443;   # Master1 APISERVER IP:PORT
        server 192.168.3.172:6443;   # Master2 APISERVER IP:PORT
        server 192.168.3.173:6443;   # Master3 APISERVER IP:PORT
    }

    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
EOF
```

  1. Test the configuration and start nginx:

```bash
nginx -t
service nginx start
```
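
Before pointing any node at the load balancer, it is worth confirming the stream proxy is actually listening; a minimal sketch (ss from iproute2 is assumed to be installed):

```bash
systemctl enable nginx       # keep the load balancer across reboots
ss -lntp | grep 6443         # the apiserver stream proxy should be listening here
```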



  1. Verify from any k8s master node that the load balancer reaches the apiservers (a check of the nginx access log is sketched after this block):

```bash
curl -k https://192.168.3.201:6443/version
```
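
Each request through 192.168.3.201 should return the apiserver's version JSON and leave a line in k8s-access.log naming the upstream that served it; a minimal sketch, run on the nginx host:

```bash
# Send a few requests through the load balancer ...
for i in 1 2 3; do
  curl -sk https://192.168.3.201:6443/version | grep gitVersion
done
# ... then see which upstream apiserver handled each one.
tail -n 3 /var/log/nginx/k8s-access.log
```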


  1. Point all Worker Nodes at the load balancer address 192.168.3.201 (a quick check of the result is sketched after this block).
  2. Run the following on every node:

```bash
sed -i 's#192.168.3.171:6443#192.168.3.201:6443#' /data/application/kubernetes/cfg/*
systemctl restart kubelet
systemctl restart kube-proxy
kubectl get node
```
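
To confirm the switch took effect on a node, a minimal sketch that checks the rewritten configs:

```bash
# All kubeconfig/config entries should now reference the load balancer ...
grep -rn '192.168.3.201:6443' /data/application/kubernetes/cfg/
# ... and none should still point straight at Master1.
grep -rn '192.168.3.171:6443' /data/application/kubernetes/cfg/ || echo "no direct Master1 references left"
```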

