
Container Automation (9): Getting Started with the K8S Container Cloud Platform (Part 2)

Cloud Computing Series: Container Automation

-- Private courseware; not public, not to be published, redistribution prohibited

To do operations work well, you must first learn to be diligent;
stay alert in easy times, take notes to make up for your weak points, and you will keep improving;
however handy someone else's material feels, it is still someone else's;
what you summarize yourself is an expression of your own thinking and ideas;
the essence is not gained by reading, it is gained by writing things down;
please build good study habits as you work through this course:
practice diligently, put the handouts aside, get hands-on, and organize your own documentation.

2. Production-Grade Highly Available Kubernetes Cluster Deployment

Role IP Components Recommended spec
master01 192.168.200.207 kube-apiserver/kube-controller-manager/kube-scheduler/etcd CPU: 2C+, RAM: 4G+
master02 192.168.200.208 kube-apiserver/kube-controller-manager/kube-scheduler/etcd CPU: 2C+, RAM: 4G+
node01 192.168.200.209 kubelet/kube-proxy/docker/flannel/etcd CPU: 2C+, RAM: 4G+
node02 192.168.200.210 kubelet/kube-proxy/docker/flannel CPU: 2C+, RAM: 4G+
Load_Balancer_Master 192.168.200.205, VIP: 192.168.200.100 Nginx L4 CPU: 2C+, RAM: 4G+
Load_Balancer_Backup 192.168.200.206 Nginx L4 CPU: 2C+, RAM: 4G+
Registry_Harbor 192.168.200.211 Harbor CPU: 2C+, RAM: 4G+

2.7 Single-Master Cluster: Deploying Components on the Master Node

Basic workflow:

configuration file --> manage the component with systemd --> start the service

Before deploying K8S, make absolutely sure that etcd, flannel, and docker are all working properly; if not, fix those problems before continuing.
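
A quick sanity check like the following can save time later (a minimal sketch; it assumes the /opt/etcd layout and the flanneld/dockerd unit names used earlier in this course, so adjust paths and endpoints to your own environment):

  1. #Check the health of the etcd cluster (run from any machine that has the etcd client and certificates)
  2. /opt/etcd/bin/etcdctl \
  3. --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
  4. --endpoints="https://192.168.200.207:2379,https://192.168.200.208:2379,https://192.168.200.209:2379" \
  5. cluster-health
  6. #Check that flanneld and dockerd are active on each Node
  7. systemctl is-active flanneld dockerd
  8. #The flannel.1 interface should hold a subnet from the overlay network
  9. ip addr show flannel.1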

2.7.1 Self-Signing the SSL Certificates for the APIServer

  1. #On master01, review the prepared certificate script k8s-cert.sh
  2. [root@Master01 scripts]# pwd
  3. /server/scripts
  4. [root@Master01 scripts]# ls
  5. cfssl.sh etcd-cert etcd-cert.sh etcd.sh flannel.sh k8s-cert.sh
  6. [root@Master01 scripts]# cat k8s-cert.sh
  7. #!/bin/bash
  8. cat > ca-config.json <<FOF
  9. {
  10. "signing": {
  11. "default": {
  12. "expiry": "87600h"
  13. },
  14. "profiles": {
  15. "kubernetes": {
  16. "expiry": "87600h",
  17. "usages": [
  18. "signing",
  19. "key encipherment",
  20. "server auth",
  21. "client auth"
  22. ]
  23. }
  24. }
  25. }
  26. }
  27. FOF
  28. cat > ca-csr.json <<FOF
  29. {
  30. "CN": "kubernetes",
  31. "key": {
  32. "algo": "rsa",
  33. "size": 2048
  34. },
  35. "names": [
  36. {
  37. "C": "CN",
  38. "L": "Beijing",
  39. "ST": "Beijing",
  40. "O": "k8s",
  41. "OU": "System"
  42. }
  43. ]
  44. }
  45. FOF
  46. cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
  47. #-----------------------------
  48. cat > server-csr.json <<FOF
  49. {
  50. "CN": "kubernetes",
  51. "hosts": [
  52. "10.0.0.1",
  53. "127.0.0.1",
  54. "192.168.200.205", #LB-Master-IP,脚本里必须去掉#号后内容
  55. "192.168.200.206", #LB-Backup-IP,脚本里必须去掉#号后内容
  56. "192.168.200.207", #Master01-IP,脚本里必须去掉#号后内容
  57. "192.168.200.208", #Master02-IP,脚本里必须去掉#号后内容
  58. "192.168.200.100" #LB-VIP,脚本里必须去掉#号后内容
  59. ]
  60. "key": {
  61. "algo": "rsa",
  62. "size": 2048
  63. },
  64. "names": [
  65. {
  66. "C": "CN",
  67. "L": "BeiJing",
  68. "ST": "BeiJing",
  69. "O": "k8s",
  70. "OU": "System"
  71. }
  72. ]
  73. }
  74. FOF
  75. cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
  76. #--------------------------------------------
  77. cat > admin-csr.json <<FOF
  78. {
  79. "CN": "admin",
  80. "hosts": [],
  81. "key": {
  82. "algo": "rsa",
  83. "size": 2048
  84. },
  85. "names": [
  86. {
  87. "C": "CN",
  88. "L": "BeiJing",
  89. "ST": "BeiJing",
  90. "O": "system:masters",
  91. "OU": "System"
  92. }
  93. ]
  94. }
  95. FOF
  96. cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
  97. #------------------------------------------
  98. cat > kube-proxy-csr.json <<FOF
  99. {
  100. "CN": "system:kube-proxy",
  101. "hosts": [],
  102. "key": {
  103. "algo": "rsa",
  104. "size": 2048
  105. },
  106. "names": [
  107. {
  108. "C": "CN",
  109. "L": "BeiJing",
  110. "ST": "BeiJing",
  111. "O": "k8s",
  112. "OU": "System"
  113. }
  114. ]
  115. }
  116. FOF
  117. cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
  118. #Create a directory for the k8s certificate files
  119. [root@Master01 scripts]# mkdir k8s-cert
  120. [root@Master01 scripts]# ls
  121. cfssl.sh etcd-cert etcd-cert.sh etcd.sh flannel.sh k8s-cert k8s-cert.sh
  122. #Copy the k8s-cert.sh script into the k8s-cert directory and run it to generate the certificate files
  123. [root@Master01 scripts]# cp k8s-cert.sh k8s-cert/
  124. [root@Master01 scripts]# cd k8s-cert
  125. [root@Master01 k8s-cert]# ls
  126. k8s-cert.sh
  127. [root@Master01 k8s-cert]# ./k8s-cert.sh
  128. 2019/03/27 21:29:04 [INFO] generating a new CA key and certificate from CSR
  129. 2019/03/27 21:29:04 [INFO] generate received request
  130. 2019/03/27 21:29:04 [INFO] received CSR
  131. 2019/03/27 21:29:04 [INFO] generating key: rsa-2048
  132. 2019/03/27 21:29:04 [INFO] encoded CSR
  133. 2019/03/27 21:29:04 [INFO] signed certificate with serial number 698868405666489018019126285154260178827388854960
  134. 2019/03/27 21:29:04 [INFO] generate received request
  135. 2019/03/27 21:29:04 [INFO] received CSR
  136. 2019/03/27 21:29:04 [INFO] generating key: rsa-2048
  137. 2019/03/27 21:29:04 [INFO] encoded CSR
  138. 2019/03/27 21:29:04 [INFO] signed certificate with serial number 247050472817843981620570557481195852376982745393
  139. 2019/03/27 21:29:04 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
  140. websites. For more information see the Baseline Requirements for the Issuance and Management
  141. of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
  142. specifically, section 10.2.3 ("Information Requirements").
  143. 2019/03/27 21:29:04 [INFO] generate received request
  144. 2019/03/27 21:29:04 [INFO] received CSR
  145. 2019/03/27 21:29:04 [INFO] generating key: rsa-2048
  146. 2019/03/27 21:29:05 [INFO] encoded CSR
  147. 2019/03/27 21:29:05 [INFO] signed certificate with serial number 250281993730797310179397407942653719272199224012
  148. 2019/03/27 21:29:05 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
  149. websites. For more information see the Baseline Requirements for the Issuance and Management
  150. of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
  151. specifically, section 10.2.3 ("Information Requirements").
  152. 2019/03/27 21:29:05 [INFO] generate received request
  153. 2019/03/27 21:29:05 [INFO] received CSR
  154. 2019/03/27 21:29:05 [INFO] generating key: rsa-2048
  155. 2019/03/27 21:29:05 [INFO] encoded CSR
  156. 2019/03/27 21:29:05 [INFO] signed certificate with serial number 452414759269306400213655560995193188612670947501
  157. 2019/03/27 21:29:05 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
  158. websites. For more information see the Baseline Requirements for the Issuance and Management
  159. of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
  160. specifically, section 10.2.3 ("Information Requirements").
  161. [root@Master01 k8s-cert]# ls
  162. admin.csr admin.pem ca-csr.json k8s-cert.sh kube-proxy-key.pem server-csr.json
  163. admin-csr.json ca-config.json ca-key.pem kube-proxy.csr kube-proxy.pem server-key.pem
  164. admin-key.pem ca.csr ca.pem kube-proxy-csr.json server.csr server.pem
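
Before moving on, it is worth confirming that the apiserver certificate really carries every IP listed in server-csr.json (a small optional check; it assumes openssl is installed):

  1. #Print the Subject Alternative Names embedded in server.pem
  2. [root@Master01 k8s-cert]# openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"
  3. #The output should list 10.0.0.1, 127.0.0.1, 192.168.200.100 and 192.168.200.205-208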

2.7.2 Deploying the Master01 Components (apiserver, controller-manager, scheduler)

Configuration file --> manage the component with systemd --> start the service

Download the release binary packages from the official source and deploy each component by hand to build the Kubernetes cluster:
https://github.com/kubernetes/kubernetes/releases

(1) Deploying the kube-apiserver component

  1. #On master01, download the Kubernetes binary package, version v1.12.1
  2. [root@Master01 ~]# wget https://dl.k8s.io/v1.12.1/kubernetes-server-linux-amd64.tar.gz
  3. [root@Master01 ~]# ls kubernetes-server-linux-amd64.tar.gz
  4. kubernetes-server-linux-amd64.tar.gz
  5. #Create the kubernetes program directories
  6. [root@Master01 scripts]# mkdir -p /opt/kubernetes/{bin,cfg,ssl}
  7. #Copy the extracted Kubernetes binaries into /opt/kubernetes/bin
  8. [root@Master01 ~]# ls kubernetes-server-linux-amd64.tar.gz
  9. kubernetes-server-linux-amd64.tar.gz
  10. [root@Master01 ~]# tar xf kubernetes-server-linux-amd64.tar.gz
  11. [root@Master01 ~]# cd kubernetes
  12. [root@Master01 kubernetes]# ls
  13. addons kubernetes-src.tar.gz LICENSES server
  14. [root@Master01 kubernetes]# cd server/bin/
  15. [root@Master01 bin]# ls
  16. apiextensions-apiserver kube-apiserver.docker_tag kube-proxy
  17. cloud-controller-manager kube-apiserver.tar kube-proxy.docker_tag
  18. cloud-controller-manager.docker_tag kube-controller-manager kube-proxy.tar
  19. cloud-controller-manager.tar kube-controller-manager.docker_tag kube-scheduler
  20. hyperkube kube-controller-manager.tar kube-scheduler.docker_tag
  21. kubeadm kubectl kube-scheduler.tar
  22. kube-apiserver kubelet mounter
  23. [root@Master01 bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
  24. [root@Master01 bin]# ls /opt/kubernetes/bin/
  25. kube-apiserver kube-controller-manager kubectl kube-scheduler
  26. #First make sure the required certificate files exist under /opt/etcd/ssl
  27. [root@Master01 scripts]# ls /opt/etcd/ssl/
  28. ca-key.pem ca.pem server-key.pem server.pem
  29. #Copy the K8S certificates into the target directory
  30. [root@Master01 scripts]# cd ~
  31. [root@Master01 ~]# cd /server/scripts/k8s-cert
  32. [root@Master01 k8s-cert]# ll ca.pem ca-key.pem server*.pem
  33. -rw-r--r-- 1 root root 1359 3 27 21:29 ca.pem
  34. -rw-r--r-- 1 root root 1359 3 27 21:29 ca-key.pem
  35. -rw------- 1 root root 1675 3 27 21:29 server-key.pem
  36. -rw-r--r-- 1 root root 1643 3 27 21:29 server.pem
  37. [root@Master01 k8s-cert]# cp ca.pem ca-key.pem server*.pem /opt/kubernetes/ssl/
  38. [root@Master01 k8s-cert]# ls /opt/kubernetes/ssl/
  39. ca.pem ca-key.pem server-key.pem server.pem #these four certificates must be copied into this directory
  40. #Review the prepared apiserver.sh script
  41. [root@Master01 ~]# cd /server/scripts/
  42. [root@Master01 scripts]# cat apiserver.sh
  43. #!/bin/bash
  44. MASTER_ADDRESS=$1
  45. ETCD_SERVERS=$2
  46. cat <<FOF >/opt/kubernetes/cfg/kube-apiserver
  47. KUBE_APISERVER_OPTS="--logtostderr=true \\
  48. --v=4 \\
  49. --etcd-servers=${ETCD_SERVERS} \\
  50. --bind-address=${MASTER_ADDRESS} \\
  51. --secure-port=6443 \\
  52. --advertise-address=${MASTER_ADDRESS} \\
  53. --allow-privileged=true \\
  54. --service-cluster-ip-range=10.0.0.0/24 \\
  55. --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
  56. --authorization-mode=RBAC,Node \\
  57. --kubelet-https=true \\
  58. --enable-bootstrap-token-auth \\
  59. --token-auth-file=/opt/kubernetes/cfg/token.csv \\
  60. --service-node-port-range=30000-50000 \\
  61. --tls-cert-file=/opt/kubernetes/ssl/server.pem \\
  62. --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
  63. --client-ca-file=/opt/kubernetes/ssl/ca.pem \\
  64. --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  65. --etcd-cafile=/opt/etcd/ssl/ca.pem \\
  66. --etcd-certfile=/opt/etcd/ssl/server.pem \\
  67. --etcd-keyfile=/opt/etcd/ssl/server-key.pem"
  68. FOF
  69. cat <<FOF >/usr/lib/systemd/system/kube-apiserver.service
  70. [Unit]
  71. Description=Kubernetes API Server
  72. Documentation=https://github.com/kubernetes/kubernetes
  73. [Service]
  74. EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
  75. ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
  76. Restart=on-failure
  77. [Install]
  78. WantedBy=multi-user.target
  79. FOF
  80. systemctl daemon-reload
  81. systemctl enable kube-apiserver
  82. systemctl restart kube-apiserver
  83. #Generate the apiserver configuration file and the systemd unit via the script
  84. [root@Master01 scripts]# ./apiserver.sh 192.168.200.207 https://192.168.200.207:2379,https://192.168.200.208:2379,https://192.168.200.209:2379
  85. Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
  86. #Inspect the kube-apiserver configuration file
  87. [root@Master01 ~]# cat /opt/kubernetes/cfg/kube-apiserver
  88. KUBE_APISERVER_OPTS="--logtostderr=true \
  89. --v=4 \
  90. --etcd-servers=https://192.168.200.207:2379,https://192.168.200.208:2379,https://192.168.200.209:2379 \
  91. --bind-address=192.168.200.207 \
  92. --secure-port=6443 \
  93. --advertise-address=192.168.200.207 \
  94. --allow-privileged=true \
  95. --service-cluster-ip-range=10.0.0.0/24 \
  96. --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
  97. --authorization-mode=RBAC,Node \
  98. --kubelet-https=true \
  99. --enable-bootstrap-token-auth \
  100. --token-auth-file=/opt/kubernetes/cfg/token.csv \
  101. --service-node-port-range=30000-50000 \
  102. --tls-cert-file=/opt/kubernetes/ssl/server.pem \
  103. --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
  104. --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  105. --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  106. --etcd-cafile=/opt/etcd/ssl/ca.pem \
  107. --etcd-certfile=/opt/etcd/ssl/server.pem \
  108. --etcd-keyfile=/opt/etcd/ssl/server-key.pem"
  109. Special notes:
  110. --logtostderr: enable logging to stderr
  111. --v: log verbosity level
  112. --etcd-servers: etcd cluster endpoints
  113. --bind-address: listen address
  114. --secure-port: https secure port
  115. --advertise-address: address advertised to the rest of the cluster
  116. --allow-privileged: allow privileged containers
  117. --service-cluster-ip-range: virtual IP range for Services
  118. --enable-admission-plugins: admission control plugins
  119. --authorization-mode: authorization modes; enables RBAC authorization and Node self-management
  120. --enable-bootstrap-token-auth: enable the TLS bootstrap feature (covered later)
  121. --token-auth-file: token file
  122. --service-node-port-range: default port range allocated to NodePort-type Services
  1. #We cannot start the apiserver yet, because the token.csv authentication file referenced in the config has not been generated
  2. #Generate a random 16-byte (32 hex character) string to use as the token
  3. [root@Master01 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
  4. 87ba208e8fe7dfd393b93bf0f7749898
  5. [root@Master01 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' ' > /opt/kubernetes/cfg/token.csv
  6. [root@Master01 ~]# vim /opt/kubernetes/cfg/token.csv
  7. [root@Master01 ~]# cat /opt/kubernetes/cfg/token.csv #the token authentication file is now complete (fill in any missing fields by hand)
  8. df3334281501df44c2bea4db952c1ee8,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Special note: each line of token.csv has the format token,user,uid,"group", for example:

df3334281501df44c2bea4db952c1ee8,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
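
If you prefer not to edit the file by hand, the token and the full token.csv line can also be produced in one step (a small sketch using the same file layout as above):

  1. #Generate a random token and write the complete token.csv in one go
  2. BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
  3. cat > /opt/kubernetes/cfg/token.csv <<EOF
  4. ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
  5. EOF
  6. cat /opt/kubernetes/cfg/token.csv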

  1. #Start kube-apiserver
  2. [root@Master01 ~]# systemctl daemon-reload
  3. [root@Master01 ~]# systemctl start kube-apiserver
  4. [root@Master01 ~]# ps -ef | grep kube-apiserver
  5. root 28492 1 23 22:39 ? 00:00:00 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.200.207:2379,https://192.168.200.208:2379,https://192.168.200.209:2379 --bind-address=192.168.200.207 --secure-port=6443 --advertise-address=192.168.200.207 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
  6. root 28501 27502 0 22:39 pts/0 00:00:00 grep --color=auto kube-apiserver
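
Besides the process list, the apiserver can be probed directly. In this version the local insecure port 8080 is still enabled by default, so a simple health check looks roughly like this (a sketch, not part of the original steps):

  1. #The insecure port listens on 127.0.0.1 only, so run this on Master01 itself
  2. [root@Master01 ~]# curl http://127.0.0.1:8080/healthz
  3. ok
  4. #The secure port 6443 should also be listening (netstat works the same way if net-tools is installed)
  5. [root@Master01 ~]# ss -lntp | grep kube-apiserver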

(2) Deploying the kube-controller-manager component

  1. #On Master01, review the prepared controller-manager.sh script
  2. [root@Master01 scripts]# ls controller-manager.sh
  3. controller-manager.sh
  4. [root@Master01 scripts]# cat controller-manager.sh
  5. #!/bin/bash
  6. MASTER_ADDRESS=$1
  7. cat <<FOF >/opt/kubernetes/cfg/kube-controller-manager
  8. KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
  9. --v=4 \\
  10. --master=${MASTER_ADDRESS}:8080 \\
  11. --leader-elect=true \\
  12. --address=127.0.0.1 \\
  13. --service-cluster-ip-range=10.10.10.0/24 \\
  14. --cluster-name=kubernetes \\
  15. --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
  16. --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  17. --root-ca-file=/opt/kubernetes/ssl/ca.pem \\
  18. --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  19. --experimental-cluster-signing-duration=87600h0m0s"
  20. FOF
  21. cat <<FOF >/usr/lib/systemd/system/kube-controller-manager.service
  22. [Unit]
  23. Description=Kubernetes Controller Manager
  24. Documentation=https://github.com/kubernetes/kubernetes
  25. [Service]
  26. EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
  27. ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
  28. Restart=on-failure
  29. [Install]
  30. WantedBy=multi-user.target
  31. FOF
  32. systemctl daemon-reload
  33. systemctl enable kube-controller-manager
  34. systemctl start kube-controller-manager
  35. #Run the script to generate the controller-manager configuration file and systemd unit
  36. [root@Master01 scripts]# ./controller-manager.sh 127.0.0.1
  37. #Check whether the service has started
  38. [root@Master01 scripts]# ps -ef | grep kube
  39. root 28944 1 3 23:32 ? 00:00:08 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.200.207:2379,https://192.168.200.208:2379,https://192.168.200.209:2379 --bind-address=192.168.200.207 --secure-port=6443 --advertise-address=192.168.200.207 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
  40. root 28963 1 1 23:34 ? 00:00:01 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.10.10.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --root-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --experimental-cluster-signing-duration=87600h0m0s
  41. root 28994 28226 0 23:37 pts/1 00:00:00 grep --color=auto kube

(3) Deploying the kube-scheduler component

  1. #On Master01, review the prepared scheduler.sh script
  2. [root@Master01 ~]# cd /server/scripts/
  3. [root@Master01 scripts]# cat scheduler.sh
  4. #!/bin/bash
  5. MASTER_ADDRESS=$1
  6. cat <<FOF >/opt/kubernetes/cfg/kube-scheduler
  7. KUBE_SCHEDULER_OPTS="--logtostderr=true \\
  8. --v=4 \\
  9. --master=${MASTER_ADDRESS}:8080 \\
  10. --leader-elect"
  11. FOF
  12. cat <<FOF >/usr/lib/systemd/system/kube-scheduler.service
  13. [Unit]
  14. Description=Kubernetes.Scheduler
  15. Documentation=https://github.com/kubernetes/kubernetes
  16. [Service]
  17. EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
  18. ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
  19. Restart=on-failure
  20. [Install]
  21. WantedBy=multi-user.target
  22. FOF
  23. systemctl daemon-reload
  24. systemctl enable kube-scheduler
  25. systemctl restart kube-scheduler
  26. #Make the script executable (chmod +x), then run it
  27. [root@Master01 scripts]# ./scheduler.sh 127.0.0.1
  28. #Check whether the kube-scheduler process is running
  29. [root@Master01 scripts]# ps -ef | grep kube-scheduler | grep -v grep
  30. root 30020 1 0 17:52 ? 00:00:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
  31. #Take a look at the kube-scheduler configuration file:
  32. [root@Master01 ~]# cat /opt/kubernetes/cfg/kube-scheduler
  33. KUBE_SCHEDULER_OPTS="--logtostderr=true \
  34. --v=4 \
  35. --master=127.0.0.1:8080 \
  36. --leader-elect"
  37. Special notes:
  38. --master: connect to the local apiserver
  39. --leader-elect: when multiple instances of this component run, elect a leader automatically (HA)
  1. #Finally, check the health of the master cluster components
  2. #Create symlinks for the K8S commands
  3. [root@Master01 scripts]# cd ~
  4. [root@Master01 ~]# ls /opt/kubernetes/bin/
  5. kube-apiserver kube-controller-manager kubectl kube-scheduler
  6. [root@Master01 ~]# ln -s /opt/kubernetes/bin/* /usr/local/bin/
  7. #Run the cluster health check command
  8. [root@Master01 ~]# kubectl get cs
  9. NAME STATUS MESSAGE ERROR
  10. controller-manager Healthy ok
  11. scheduler Healthy ok
  12. etcd-2 Healthy {"health":"true"}
  13. etcd-0 Healthy {"health":"true"}
  14. etcd-1 Healthy {"health":"true"}

2.8 Single-Master Cluster: Deploying Components on the Nodes

Once TLS authentication is enabled on the Master apiserver, a Node's kubelet must present a valid certificate signed by the CA to talk to the apiserver. When there are many Nodes, signing certificates by hand becomes very tedious, which is what the TLS Bootstrapping mechanism is for: the kubelet automatically requests a certificate from the apiserver as a low-privileged user, and the kubelet's certificate is signed dynamically by the apiserver.

The rough authentication workflow is shown below:

[Figure: TLS Bootstrap certificate request and approval workflow]

2.8.1 On the Master Node, Bind the kubelet-bootstrap User to the System Cluster Role

  1. #On the master node, create the kubelet-bootstrap user binding (used to authenticate Node access to the Master apiserver) and bind it to the system cluster role
  2. [root@Master01 ~]# cat /opt/kubernetes/cfg/token.csv
  3. df3334281501df44c2bea4db952c1ee8,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
  4. [root@Master01 ~]# which kubectl
  5. /usr/local/bin/kubectl
  6. [root@Master01 ~]# kubectl create clusterrolebinding kubelet-bootstrap \
  7. > --clusterrole=system:node-bootstrapper \
  8. > --user=kubelet-bootstrap
  9. clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
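
A quick way to confirm the binding looks right (a sketch):

  1. #The binding should reference the ClusterRole system:node-bootstrapper and the user kubelet-bootstrap
  2. [root@Master01 ~]# kubectl describe clusterrolebinding kubelet-bootstrap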

2.8.2 Creating the kubeconfig Files on the Master Node

  1. #On master01, review the prepared kubeconfig.sh script
  2. #Note: BOOTSTRAP_TOKEN in the script must be set to your own token
  3. #Note: KUBE_APISERVER in the script must be set to your master node's IP
  4. [root@Master01 scripts]# cat kubeconfig.sh
  5. #!/bin/bash
  6. #Create the kubelet bootstrapping kubeconfig
  7. BOOTSTRAP_TOKEN=df3334281501df44c2bea4db952c1ee8 #Important: everyone must change this line to their own token
  8. KUBE_APISERVER="https://192.168.200.207:6443"
  9. #Set the cluster parameters
  10. kubectl config set-cluster kubernetes \
  11. --certificate-authority=./ca.pem \
  12. --embed-certs=true \
  13. --server=${KUBE_APISERVER} \
  14. --kubeconfig=bootstrap.kubeconfig
  15. #Set the client authentication parameters
  16. kubectl config set-credentials kubelet-bootstrap \
  17. --token=${BOOTSTRAP_TOKEN} \
  18. --kubeconfig=bootstrap.kubeconfig
  19. #Set the context parameters
  20. kubectl config set-context default \
  21. --cluster=kubernetes \
  22. --user=kubelet-bootstrap \
  23. --kubeconfig=bootstrap.kubeconfig
  24. #Set the default context
  25. kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
  26. #-----------------------------------------
  27. #Create the kube-proxy kubeconfig file
  28. kubectl config set-cluster kubernetes \
  29. --certificate-authority=./ca.pem \
  30. --embed-certs=true \
  31. --server=${KUBE_APISERVER} \
  32. --kubeconfig=kube-proxy.kubeconfig
  33. kubectl config set-credentials kube-proxy \
  34. --client-certificate=./kube-proxy.pem \
  35. --client-key=./kube-proxy-key.pem \
  36. --embed-certs=true \
  37. --kubeconfig=kube-proxy.kubeconfig
  38. kubectl config set-context default \
  39. --cluster=kubernetes \
  40. --user=kube-proxy \
  41. --kubeconfig=kube-proxy.kubeconfig
  42. kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  43. #Copy the script into the /server/scripts/k8s-cert/ directory
  44. #The script must be run with a relative path from the directory that holds the previously generated k8s cluster certificates
  45. [root@Master01 scripts]# chmod +x kubeconfig.sh
  46. [root@Master01 scripts]# ls kubeconfig.sh
  47. kubeconfig.sh
  48. [root@Master01 scripts]# pwd
  49. /server/scripts
  50. [root@Master01 scripts]# cp kubeconfig.sh k8s-cert/
  51. [root@Master01 k8s-cert]# ./kubeconfig.sh
  52. Cluster "kubernetes" set.
  53. User "kubelet-bootstrap" set.
  54. Context "default" created.
  55. Switched to context "default".
  56. Cluster "kubernetes" set.
  57. User "kube-proxy" set.
  58. Context "default" created.
  59. Switched to context "default".
  60. #Check the generated files
  61. [root@Master01 k8s-cert]# ls bootstrap.kubeconfig kube-proxy.kubeconfig
  62. bootstrap.kubeconfig kube-proxy.kubeconfig
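
You can peek inside the generated kubeconfig files with kubectl itself; because --embed-certs=true was used, the CA data is baked in and the files are self-contained (a sketch):

  1. #Show the cluster, user and context entries of the bootstrap kubeconfig (certificate data is redacted)
  2. [root@Master01 k8s-cert]# kubectl config view --kubeconfig=bootstrap.kubeconfig
  3. #The server field should point at https://192.168.200.207:6443 and the kubelet-bootstrap user should carry the token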

Copy the two generated files to the /opt/kubernetes/cfg directory on the Node:

  1. [root@Master01 k8s-cert]# which scp
  2. /usr/bin/scp
  3. [root@Master01 k8s-cert]# scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.200.209:/opt/kubernetes/cfg/
  4. root@192.168.200.209's password:
  5. bootstrap.kubeconfig 100% 2169 4.0MB/s 00:00
  6. kube-proxy.kubeconfig 100% 6271 10.0MB/s 00:00

On the Node, check that the files arrived:

  1. [root@node01 ~]# cd /opt/kubernetes/cfg/
  2. [root@node01 cfg]# ls
  3. bootstrap.kubeconfig flanneld kube-proxy.kubeconfig

2.8.3 Deploying the kubelet Component on the Node

First, copy two binaries from the Kubernetes package we extracted earlier to the /opt/kubernetes/bin directory on the Node.


  1. #Copy kubelet and kube-proxy from the K8S package to the Node
  2. [root@Master01 ~]# pwd
  3. /root
  4. [root@Master01 ~]# cd kubernetes
  5. [root@Master01 kubernetes]# ls
  6. addons kubernetes-src.tar.gz LICENSES server
  7. [root@Master01 kubernetes]# cd server/bin/
  8. [root@Master01 bin]# ls kubelet kube-proxy
  9. kubelet kube-proxy
  10. [root@Master01 bin]# scp kubelet kube-proxy 192.168.200.209:/opt/kubernetes/bin/
  11. root@192.168.200.209s password:
  12. kubelet 100% 169MB 99.0MB/s 00:01
  13. kube-proxy 100% 48MB 94.5MB/s 00:00
  14. #On the Node, check that the files arrived
  15. [root@node01 cfg]# cd /opt/kubernetes/bin/
  16. [root@node01 bin]# ls
  17. flanneld kubelet kube-proxy mk-docker-opts.sh

Next, copy the kubelet.sh script prepared earlier on the Master node to the Node:

  1. [root@Master01 scripts]# pwd
  2. /server/scripts
  3. [root@Master01 scripts]# ls kubelet.sh
  4. kubelet.sh
  5. [root@Master01 scripts]# chmod +x kubelet.sh
  6. [root@Master01 scripts]# scp kubelet.sh 192.168.200.209:~/
  7. root@192.168.200.209s password:
  8. kubelet.sh 100% 1218 1.8MB/s 00:00
  9. #On the Node, confirm the script arrived, then run it
  10. [root@node01 ~]# ls kubelet.sh
  11. kubelet.sh
  12. [root@Master01 scripts]# cat kubelet.sh
  13. #!/bin/bash
  14. NODE_ADDRESS=$1
  15. DNS_SERVER_IP=${2:-"10.0.0.2"}
  16. cat <<FOF >/opt/kubernetes/cfg/kubelet
  17. KUBELET_OPTS="--logtostderr=true \\
  18. --v=4 \\
  19. --hostname-override=${NODE_ADDRESS} \\
  20. --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
  21. --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
  22. --config=/opt/kubernetes/cfg/kubelet.config \\
  23. --cert-dir=/opt/kubernetes/ssl \\
  24. --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
  25. FOF
  26. cat <<FOF >/opt/kubernetes/cfg/kubelet.config
  27. kind: KubeletConfiguration
  28. apiVersion: kubelet.config.k8s.io/v1beta1
  29. address: ${NODE_ADDRESS}
  30. port: 10250
  31. readOnlyPort: 10255
  32. cgroupDriver: cgroupfs
  33. clusterDNS: ["${DNS_SERVER_IP}"]
  34. clusterDomain: cluster.local.
  35. failSwapOn: false
  36. authentication:
  37.   anonymous:
  38.     enabled: true
  39. FOF
  40. cat <<FOF >/usr/lib/systemd/system/kubelet.service
  41. [Unit]
  42. Description=Kubernetes Kubelet
  43. After=dockerd.service
  44. Requires=dockerd.service
  45. [Service]
  46. EnvironmentFile=/opt/kubernetes/cfg/kubelet
  47. ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
  48. Restart=on-failure
  49. KillMode=process
  50. [Install]
  51. WantedBy=multi-user.target
  52. FOF
  53. systemctl daemon-reload
  54. systemctl enable kubelet
  55. systemctl restart kubelet
  56. [root@node01 ~]# sh kubelet.sh 192.168.200.209
  57. #Check whether kubelet started
  58. [root@node01 ~]# ps -ef | grep kubelet
  59. root 61623 1 3 22:42 ? 00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.200.209 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
  60. root 61635 60348 0 22:42 pts/0 00:00:00 grep --color=auto kubelet

On the Node, the kubelet periodically uses the low-privileged bootstrap user to request a certificate from the apiserver on the master. On the master, the certificate must then be approved and issued to the node manually.

  1. #On the Master, approve the Node's request to join the cluster
  2. [root@Master01 scripts]# kubectl get csr #view the requests received on the master (Pending = awaiting a decision)
  3. NAME AGE REQUESTOR CONDITION
  4. node-csr-gHxjWJd9EfxOw3Q0_0HgiBUqJ7AI07FPbxLpKgaYrVg 7m34s kubelet-bootstrap Pending
  5. #Approve the request manually: kubectl certificate approve followed by the request name
  6. [root@Master01 scripts]# kubectl certificate approve node-csr-gHxjWJd9EfxOw3Q0_0HgiBUqJ7AI07FPbxLpKgaYrVg
  7. certificatesigningrequest.certificates.k8s.io/node-csr-gHxjWJd9EfxOw3Q0_0HgiBUqJ7AI07FPbxLpKgaYrVg approved
  8. #Check the request status again (Approved,Issued = approved and the certificate has been issued)
  9. [root@Master01 scripts]# kubectl get csr
  10. NAME AGE REQUESTOR CONDITION
  11. node-csr-gHxjWJd9EfxOw3Q0_0HgiBUqJ7AI07FPbxLpKgaYrVg 8m49s kubelet-bootstrap Approved,Issued
  12. #On the master, list the Node(s) that have been issued certificates
  13. [root@Master01 scripts]# kubectl get node
  14. NAME STATUS ROLES AGE VERSION
  15. 192.168.200.209 Ready <none> 32s v1.12.1

2.8.4 Deploying the kube-proxy Component on the Node

First, copy the kube-proxy.sh script prepared earlier on the Master node to the Node:

  1. [root@Node01 ~]# cat kube-proxy.sh
  2. #!/bin/bash
  3. NODE_ADDRESS=$1
  4. cat <<FOF >/opt/kubernetes/cfg/kube-proxy
  5. KUBE_PROXY_OPTS="--logtostderr=true \\
  6. --v=4 \\
  7. --hostname-override=${NODE_ADDRESS} \\
  8. --cluster-cidr=10.0.0.0/24 \\
  9. --proxy-mode=ipvs \\
  10. --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
  11. FOF
  12. cat <<FOF >/usr/lib/systemd/system/kube-proxy.service
  13. [Unit]
  14. Description=Kubernetes Proxy
  15. After=network.target
  16. [Service]
  17. EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
  18. ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
  19. Restart=on-failure
  20. [Install]
  21. WantedBy=multi-user.target
  22. FOF
  23. systemctl daemon-reload
  24. systemctl enable kube-proxy
  25. systemctl restart kube-proxy
  26. [root@Master01 scripts]# ls kube-proxy.sh
  27. kube-proxy.sh
  28. [root@Master01 scripts]# chmod +x kube-proxy.sh
  29. [root@Master01 scripts]# scp kube-proxy.sh 192.168.200.209:~/
  30. root@192.168.200.209s password:
  31. kube-proxy.sh
  32. #On the Node, confirm the script arrived, then run it
  33. [root@node01 ~]# ls kube-proxy.sh
  34. kube-proxy.sh
  35. [root@node01 ~]# sh kube-proxy.sh 192.168.200.209
  36. Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
  37. #Check whether it started successfully
  38. [root@node01 ~]# ps -ef | grep proxy
  39. root 62458 1 0 23:12 ? 00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.200.209 --cluster-cidr=192.168.200.0/24 --proxy-mode=ipvs --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
  40. root 62581 60348 0 23:12 pts/0 00:00:00 grep --color=auto proxy
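
Since kube-proxy was started in ipvs mode, the virtual-server table is the most direct way to see it working (a sketch; it assumes the ip_vs kernel modules are loadable and installs the ipvsadm tool if it is missing):

  1. #Confirm the ipvs kernel modules are loaded
  2. [root@node01 ~]# lsmod | grep ip_vs
  3. #Install the ipvsadm tool if it is not present, then list the virtual servers kube-proxy programs
  4. [root@node01 ~]# yum install -y ipvsadm
  5. [root@node01 ~]# ipvsadm -Ln
  6. #Each Service cluster IP will show up here once Services exist in the cluster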

2.8.5 Deploying a Second Node

  1. #The contents of /opt/kubernetes are the same on every Node
  2. #so simply copy the /opt/kubernetes directory from Node01 over
  3. [root@node01 ~]# scp -r /opt/kubernetes 192.168.200.210:/opt
  4. root@192.168.200.210's password:
  5. flanneld 100% 34MB 93.7MB/s 00:00
  6. mk-docker-opts.sh 100% 2139 2.1MB/s 00:00
  7. kubelet 100% 169MB 98.5MB/s 00:01
  8. kube-proxy 100% 48MB 103.6MB/s 00:00
  9. flanneld 100% 241 739.6KB/s 00:00
  10. bootstrap.kubeconfig 100% 2169 3.2MB/s 00:00
  11. kube-proxy.kubeconfig 100% 6271 11.6MB/s 00:00
  12. kubelet 100% 379 814.7KB/s 00:00
  13. kubelet.config 100% 274 682.5KB/s 00:00
  14. kubelet.kubeconfig 100% 2298 5.6MB/s 00:00
  15. kube-proxy 100% 196 568.7KB/s 00:00
  16. kubelet.crt 100% 2197 5.3MB/s 00:00
  17. kubelet.key 100% 1679 3.4MB/s 00:00
  18. kubelet-client-2019-04-08-22-43-32.pem 100% 1277 2.5MB/s 00:00
  19. kubelet-client-current.pem 100% 1277 2.9MB/s 00:00
  20. #Copy Node01's systemd unit files over to the same directory on Node02
  21. [root@node01 ~]# scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service 192.168.200.210:/usr/lib/systemd/system/
  22. root@192.168.200.210's password:
  23. kubelet.service 100% 266 542.4KB/s 00:00
  24. kube-proxy.service 100% 231 521.7KB/s 00:00
  25. #Delete all certificates under /opt/kubernetes/ssl/ on Node02 (they were issued to Node01)
  26. [root@node02 kubernetes]# pwd
  27. /opt/kubernetes
  28. [root@node02 kubernetes]# ls ssl/
  29. kubelet-client-2019-04-08-22-43-32.pem kubelet.crt
  30. kubelet-client-current.pem kubelet.key
  31. [root@node02 kubernetes]# rm -f ssl/*
  32. [root@node02 kubernetes]# ls ssl/

Next, the configuration files on the new Node must be changed to use its own IP address (a one-line sed alternative is sketched after the listing).

  1. #Go into /opt/kubernetes/cfg on the new Node
  2. #Edit the IP address in the three files kubelet, kubelet.config and kube-proxy (change it to the new Node's IP)
  3. [root@node02 ~]# cd /opt/kubernetes/cfg/
  4. [root@node02 cfg]# ls
  5. bootstrap.kubeconfig kubelet kubelet.kubeconfig kube-proxy.kubeconfig
  6. flanneld kubelet.config kube-proxy
  7. [root@node02 cfg]# grep "209" *
  8. flanneld:FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.200.207:2379,https://192.168.200.208:2379,https://192.168.200.209:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
  9. kubelet:--hostname-override=192.168.200.209 \ #this must be changed to the new Node's IP
  10. kubelet.config:address: 192.168.200.209 #this must be changed to the new Node's IP
  11. kube-proxy:--hostname-override=192.168.200.209 \ #this must be changed to the new Node's IP
  12. [root@node02 cfg]# vim kubelet
  13. [root@node02 cfg]# vim kubelet.config
  14. [root@node02 cfg]# vim kube-proxy
  15. [root@node02 cfg]# grep "210" *
  16. kubelet:--hostname-override=192.168.200.210 \
  17. kubelet.config:address: 192.168.200.210
  18. kube-proxy:--hostname-override=192.168.200.210 \
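
As mentioned above, the same change can be made in one pass with sed instead of editing each file in vim (a sketch; it blindly replaces every occurrence of the old IP in those three files, so back them up first):

  1. [root@node02 cfg]# cd /opt/kubernetes/cfg/
  2. #Swap the old Node01 IP for the new Node02 IP in kubelet, kubelet.config and kube-proxy
  3. [root@node02 cfg]# sed -i 's/192.168.200.209/192.168.200.210/g' kubelet kubelet.config kube-proxy
  4. [root@node02 cfg]# grep "210" kubelet kubelet.config kube-proxy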

Once that is done, we can start the kubelet and kube-proxy services:

  1. [root@node02 cfg]# systemctl restart kubelet
  2. [root@node02 cfg]# systemctl restart kube-proxy
  3. [root@node02 cfg]# ps -ef | grep kube
  4. root 51230 1 3 20:32 ? 00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.200.210 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
  5. root 51246 1 1 20:32 ? 00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.200.210 --cluster-cidr=192.168.200.0/24 --proxy-mode=ipvs --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
  6. root 51377 51072 0 20:32 pts/0 00:00:00 grep --color=auto kube

Then, on the Master node, check whether a new node is requesting to join the cluster:

  1. [root@Master01 ~]# kubectl get csr
  2. NAME AGE REQUESTOR CONDITION
  3. node-csr-fDKZPFApSaC7XGP7l1Ik3EYsAbosKXYg7mj9cFPp3Vg 117s kubelet-bootstrap Pending
  4. node-csr-gHxjWJd9EfxOw3Q0_0HgiBUqJ7AI07FPbxLpKgaYrVg 21h kubelet-bootstrap Approved,Issued
  5. #Manually approve the node's request to join the cluster
  6. [root@Master01 ~]# kubectl certificate approve node-csr-fDKZPFApSaC7XGP7l1Ik3EYsAbosKXYg7mj9cFPp3Vg
  7. certificatesigningrequest.certificates.k8s.io/node-csr-fDKZPFApSaC7XGP7l1Ik3EYsAbosKXYg7mj9cFPp3Vg approved
  8. [root@Master01 ~]# kubectl get node
  9. NAME STATUS ROLES AGE VERSION
  10. 192.168.200.209 Ready <none> 22h v1.12.1
  11. 192.168.200.210 Ready <none> 51s v1.12.1

2.9 Running a Test Example to Verify the Cluster Works

  1. #Create and run an nginx pod
  2. [root@Master01 ~]# kubectl run nginx --image=nginx
  3. kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
  4. deployment.apps/nginx created
  5. #Check the pod: READY 0/1 means it is not ready yet; STATUS shows the container is being created
  6. [root@Master01 ~]# kubectl get pod
  7. NAME READY STATUS RESTARTS AGE
  8. nginx-dbddb74b8-ggpxm 0/1 ContainerCreating 0 9s
  9. [root@Master01 ~]# kubectl get pod
  10. NAME READY STATUS RESTARTS AGE
  11. nginx-dbddb74b8-ggpxm 0/1 ContainerCreating 0 15s
  12. [root@Master01 ~]# kubectl get pod
  13. NAME READY STATUS RESTARTS AGE
  14. nginx-dbddb74b8-ggpxm 0/1 ContainerCreating 0 38s
  15. [root@Master01 ~]# kubectl get pod
  16. NAME READY STATUS RESTARTS AGE
  17. nginx-dbddb74b8-ggpxm 0/1 ContainerCreating 0 52s
  18. #After a while, the container is finally created and running
  19. [root@Master01 ~]# kubectl get pod
  20. NAME READY STATUS RESTARTS AGE
  21. nginx-dbddb74b8-ggpxm 1/1 Running 0 53s
  22. #Expose port 80 of the created Pod so it can be reached from outside
  23. [root@Master01 ~]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
  24. service/nginx exposed
  25. #Check the Services in the cluster
  26. [root@Master01 ~]# kubectl get svc
  27. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  28. kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 11d
  29. nginx NodePort 10.0.0.192 <none> 80:31442/TCP 60s

Special note: the NodePort 31442 assigned to the nginx Service falls inside the --service-node-port-range=30000-50000 configured on the apiserver; every Node now listens on that port and forwards traffic to the Pod.

2.9.1 Testing Access to the Pod from Inside the K8S Cluster (on Both Nodes)

  1. #Important: if you did not finish the lab in one day, the flannel service on your Node VMs may have died; in that case, restart flanneld and dockerd first
  2. [root@node01 ~]# systemctl daemon-reload
  3. [root@node01 ~]# systemctl restart flanneld
  4. [root@node01 ~]# systemctl restart dockerd
  5. [root@node01 ~]# ps -ef | grep flanneld
  6. root 76933 1 0 21:49 ? 00:00:00 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.200.207:2379,https://192.168.200.208:2379,https://192.168.200.209:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
  7. [root@node02 cfg]# systemctl daemon-reload
  8. [root@node02 cfg]# systemctl restart flanneld
  9. [root@node02 cfg]# systemctl restart dockerd
  10. [root@node02 cfg]# ps -ef | grep flanneld
  11. root 59410 1 0 21:49 ? 00:00:00 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.200.207:2379,https://192.168.200.208:2379,https://192.168.200.209:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
  12. #On Node01 and Node02, access 10.0.0.192:80
  13. [root@node01 ~]# curl 10.0.0.192:80
  14. <!DOCTYPE html>
  15. <html>
  16. <head>
  17. <title>Welcome to nginx!</title>
  18. <style>
  19. body {
  20. width: 35em;
  21. margin: 0 auto;
  22. font-family: Tahoma, Verdana, Arial, sans-serif;
  23. }
  24. </style>
  25. </head>
  26. <body>
  27. <h1>Welcome to nginx!</h1>
  28. <p>If you see this page, the nginx web server is successfully installed and
  29. working. Further configuration is required.</p>
  30. <p>For online documentation and support please refer to
  31. <a href="http://nginx.org/">nginx.org</a>.<br/>
  32. Commercial support is available at
  33. <a href="http://nginx.com/">nginx.com</a>.</p>
  34. <p><em>Thank you for using nginx.</em></p>
  35. </body>
  36. </html>
  37. [root@node02 ~]# curl 10.0.0.192:80
  38. <!DOCTYPE html>
  39. <html>
  40. <head>
  41. <title>Welcome to nginx!</title>
  42. <style>
  43. body {
  44. width: 35em;
  45. margin: 0 auto;
  46. font-family: Tahoma, Verdana, Arial, sans-serif;
  47. }
  48. </style>
  49. </head>
  50. <body>
  51. <h1>Welcome to nginx!</h1>
  52. <p>If you see this page, the nginx web server is successfully installed and
  53. working. Further configuration is required.</p>
  54. <p>For online documentation and support please refer to
  55. <a href="http://nginx.org/">nginx.org</a>.<br/>
  56. Commercial support is available at
  57. <a href="http://nginx.com/">nginx.com</a>.</p>
  58. <p><em>Thank you for using nginx.</em></p>
  59. </body>
  60. </html>

2.9.2 Testing Access to the Pod from Outside the K8S Cluster

Since we do not have the load balancer (LB) yet, we first need to find out which Node the pod was created on:

  1. [root@Master01 ~]# kubectl get pods -o wide
  2. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
  3. nginx-dbddb74b8-ggpxm 1/1 Running 2 19m 172.17.95.2 192.168.200.209 <none>
  4. [root@Master01 ~]# kubectl get svc
  5. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  6. kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 11d
  7. nginx NodePort 10.0.0.192 <none> 80:31442/TCP 22m

The query shows that the nginx pod was created on the Node with IP 192.168.200.209,
so on the host machine we open http://192.168.200.209:31442 in a browser.
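
The same check also works from the command line of any machine that can reach the Node, without a browser (a small sketch):

  1. #Hit the NodePort directly from the host (or any machine outside the cluster)
  2. curl -I http://192.168.200.209:31442
  3. #An "HTTP/1.1 200 OK" response with a "Server: nginx" header confirms the NodePort is reachable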

[Screenshot: the nginx welcome page served at http://192.168.200.209:31442]

2.9.3 Viewing the Pod's Access Logs (Binding Cluster Permissions for the Anonymous User)

  1. #On the master, view the pod's access logs
  2. [root@Master01 ~]# kubectl get pods
  3. NAME READY STATUS RESTARTS AGE
  4. nginx-dbddb74b8-ggpxm 1/1 Running 2 29m
  5. [root@Master01 ~]# kubectl logs nginx-dbddb74b8-ggpxm
  6. Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-ggpxm)

The kubectl logs request was rejected with a permission error. This is because
Kubernetes needs the system:anonymous user's cluster permissions raised to cluster-admin (the built-in administrator role).

  1. #Bind the anonymous user system:anonymous to the cluster-admin ClusterRole
  2. [root@Master01 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
  3. clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
  4. [root@Master01 ~]# kubectl logs nginx-dbddb74b8-ggpxm
  5. 172.17.95.1 - - [09/Apr/2019:13:51:56 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
  6. 172.17.16.0 - - [09/Apr/2019:13:52:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
  7. 172.17.95.1 - - [09/Apr/2019:14:00:31 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36" "-"
  8. 2019/04/09 14:00:31 [error] 6#6: *3 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 172.17.95.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "192.168.200.209:31442", referrer: "http://192.168.200.209:31442/"
  9. 172.17.95.1 - - [09/Apr/2019:14:00:31 +0000] "GET /favicon.ico HTTP/1.1" 404 556 "http://192.168.200.209:31442/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36" "-"
  10. 172.17.95.1 - - [09/Apr/2019:14:02:03 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36" "-"

2.10 Deploying the Web UI (Dashboard)

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard

The Kubernetes package we downloaded earlier already contains the dashboard add-on.

  1. #On Master01, do the following
  2. [root@Master01 ~]# ls
  3. anaconda-ks.cfg etcd-v3.3.12-linux-amd64 etcd-v3.3.12-linux-amd64.tar.gz kubernetes kubernetes-server-linux-amd64.tar.gz
  4. #Enter the extracted kubernetes package directory
  5. [root@Master01 ~]# cd kubernetes
  6. [root@Master01 kubernetes]# ls
  7. addons kubernetes-src.tar.gz LICENSES server
  8. #Unpack kubernetes-src.tar.gz
  9. [root@Master01 kubernetes]# tar xf kubernetes-src.tar.gz
  10. [root@Master01 kubernetes]# ls
  11. addons CHANGELOG.md docs LICENSES OWNERS_ALIASES server translations
  12. api cluster Godeps logo pkg staging vendor
  13. build cmd hack Makefile plugin SUPPORT.md WORKSPACE
  14. BUILD.bazel code-of-conduct.md kubernetes-src.tar.gz Makefile.generated_files README.md test
  15. CHANGELOG-1.12.md CONTRIBUTING.md LICENSE OWNERS SECURITY_CONTACTS third_party
  16. #Enter the dashboard add-on directory
  17. [root@Master01 kubernetes]# cd cluster/addons/dashboard/
  18. [root@Master01 dashboard]# pwd
  19. /root/kubernetes/cluster/addons/dashboard
  20. [root@Master01 dashboard]# ls
  21. dashboard-configmap.yaml dashboard-rbac.yaml dashboard-service.yaml OWNERS
  22. dashboard-controller.yaml dashboard-secret.yaml MAINTAINERS.md README.md
  23. #Create the individual dashboard components
  24. [root@Master01 dashboard]# kubectl create -f dashboard-configmap.yaml
  25. configmap/kubernetes-dashboard-settings created
  26. [root@Master01 dashboard]# kubectl create -f dashboard-rbac.yaml
  27. role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
  28. rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
  29. [root@Master01 dashboard]# kubectl create -f dashboard-secret.yaml
  30. secret/kubernetes-dashboard-certs created
  31. secret/kubernetes-dashboard-key-holder created
  32. #Change the image download address on line 34 of dashboard-controller.yaml, shown here:
  33. [root@Master01 dashboard]# sed -n '34p' dashboard-controller.yaml
  34. image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
  35. [root@Master01 dashboard]# vim dashboard-controller.yaml +34
  36. #Change it to the following
  37. [root@Master01 dashboard]# sed -n '34p' dashboard-controller.yaml
  38. image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
  39. #Create the component from this file
  40. [root@Master01 dashboard]# kubectl create -f dashboard-controller.yaml
  41. serviceaccount/kubernetes-dashboard created
  42. deployment.apps/kubernetes-dashboard created
  43. #Check whether the dashboard image is running
  44. #the pod started during the earlier test
  45. [root@Master01 dashboard]# kubectl get pods
  46. NAME READY STATUS RESTARTS AGE
  47. nginx-dbddb74b8-ggpxm 1/1 Running 2 11d
  48. #With the namespace specified, we can see the dashboard pod
  49. [root@Master01 dashboard]# kubectl get pods -n kube-system
  50. NAME READY STATUS RESTARTS AGE
  51. kubernetes-dashboard-6bff7dc67d-jl7pg 1/1 Running 0 51s
  52. #Check the dashboard pod's logs to see whether it started successfully
  53. [root@Master01 ~]# kubectl logs kubernetes-dashboard-6bff7dc67d-jl7pg -n kube-system
  54. 2019/04/21 03:55:55 Using in-cluster config to connect to apiserver
  55. 2019/04/21 03:55:55 Using service account token for csrf signing
  56. 2019/04/21 03:55:55 No request provided. Skipping authorization
  57. 2019/04/21 03:55:55 Starting overwatch
  58. 2019/04/21 03:55:55 Successful initial request to the apiserver, version: v1.12.1
  59. 2019/04/21 03:55:55 Generating JWE encryption key
  60. 2019/04/21 03:55:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
  61. 2019/04/21 03:55:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
  62. 2019/04/21 03:55:57 Initializing JWE encryption key from synchronized object
  63. 2019/04/21 03:55:57 Creating in-cluster Heapster client
  64. 2019/04/21 03:55:57 Auto-generating certificates
  65. 2019/04/21 03:55:57 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
  66. 2019/04/21 03:55:57 Successfully created certificates
  67. 2019/04/21 03:55:57 Serving securely on HTTPS port: 8443

Output like the above means the dashboard started successfully. The line
Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
can be ignored (heapster is simply not deployed).

  1. #Open the dashboard-service.yaml file and add one line
  2. [root@Master01 dashboard]# pwd
  3. /root/kubernetes/cluster/addons/dashboard
  4. [root@Master01 dashboard]# vim dashboard-service.yaml
  5. [root@Master01 dashboard]# cat dashboard-service.yaml
  6. apiVersion: v1
  7. kind: Service
  8. metadata:
  9.   name: kubernetes-dashboard
  10.   namespace: kube-system
  11.   labels:
  12.     k8s-app: kubernetes-dashboard
  13.     kubernetes.io/cluster-service: "true"
  14.     addonmanager.kubernetes.io/mode: Reconcile
  15. spec:
  16.   type: NodePort #this line was added
  17.   selector:
  18.     k8s-app: kubernetes-dashboard
  19.   ports:
  20.   - port: 443
  21.     targetPort: 8443
  22. #Create the component from this configuration file
  23. [root@Master01 dashboard]# kubectl create -f dashboard-service.yaml
  24. service/kubernetes-dashboard created
  25. #Check which Node the dashboard was scheduled onto
  26. [root@Master01 ~]# kubectl get pods -o wide -n kube-system
  27. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
  28. kubernetes-dashboard-6bff7dc67d-jl7pg 1/1 Running 1 8h 172.17.12.3 192.168.200.209 <none>
  29. #Check the dashboard's internal/external access ports
  30. [root@Master01 ~]# kubectl get svc -n kube-system
  31. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  32. kubernetes-dashboard NodePort 10.0.0.46 <none> 443:42460/TCP 5h37m

Special notes:
The dashboard was scheduled onto the Node 192.168.200.209.
The internal access port is 443 and the external (NodePort) access port is 42460.
Because this is an encrypted HTTPS endpoint using our own untrusted self-signed certificates, some browsers may refuse to open it.
If you cannot access it at this point, close any antivirus software, install the Firefox browser, and try again.
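
Before fighting with browser certificate warnings, you can at least confirm the NodePort answers over HTTPS from the command line (a sketch; -k skips verification of the self-signed certificate, and 42460 is whatever port kubectl get svc -n kube-system reported on your cluster):

  1. #The dashboard serves HTTPS only, so tell curl to ignore the self-signed certificate
  2. curl -k -I https://192.168.200.209:42460/
  3. #Any HTTP response here means the dashboard pod and NodePort are working; the login page itself needs a browser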

Trying to access it with Firefox:

[Screenshots: opening the Dashboard over HTTPS in Firefox]

To log in to the dashboard, we still need an identity token.

  1. #Use the prepared k8s-admin.yaml manifest to create a user and its identity token
  2. [root@Master01 ~]# vim k8s-admin.yaml
  3. [root@Master01 ~]# cat k8s-admin.yaml
  4. apiVersion: v1
  5. kind: ServiceAccount
  6. metadata:
  7.   name: dashboard-admin
  8.   namespace: kube-system
  9. ---
  10. kind: ClusterRoleBinding
  11. apiVersion: rbac.authorization.k8s.io/v1beta1
  12. metadata:
  13.   name: dashboard-admin
  14. subjects:
  15. - kind: ServiceAccount
  16.   name: dashboard-admin
  17.   namespace: kube-system
  18. roleRef:
  19.   kind: ClusterRole
  20.   name: cluster-admin
  21.   apiGroup: rbac.authorization.k8s.io
  22. [root@Master01 ~]# kubectl create -f k8s-admin.yaml
  23. serviceaccount/dashboard-admin created
  24. clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
  25. #Check the authentication token of the user we just created
  26. [root@Master01 ~]# kubectl get secret -n kube-system
  27. NAME TYPE DATA AGE
  28. dashboard-admin-token-ppknr kubernetes.io/service-account-token 3 3m30s
  29. default-token-jjt2j kubernetes.io/service-account-token 3 23d
  30. kubernetes-dashboard-certs Opaque 0 9h
  31. kubernetes-dashboard-key-holder Opaque 2 9h
  32. kubernetes-dashboard-token-mms5w kubernetes.io/service-account-token 3 9h
  33. Special note:
  34. dashboard-admin-token-ppknr is the secret belonging to the dashboard-admin user we just created
  1. #View the details of the user's authentication token
  2. [root@Master01 ~]# kubectl describe secret dashboard-admin-token-ppknr -n kube-system
  3. Name: dashboard-admin-token-ppknr
  4. Namespace: kube-system
  5. Labels: <none>
  6. Annotations: kubernetes.io/service-account.name: dashboard-admin
  7. kubernetes.io/service-account.uid: 9514e04e-641f-11e9-910b-000c29090fc9
  8. Type: kubernetes.io/service-account-token
  9. Data
  10. ====
  11. namespace: 11 bytes
  12. token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcHBrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOTUxNGUwNGUtNjQxZi0xMWU5LTkxMGItMDAwYzI5MDkwZmM5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.kxocPlazEo8yHPoFpHA7eHLP8qUhCwV21YHx-LL25yOP6_ZNN6HuTtw11AQH_oQ5R8fpet4vCbkABnaZtVOXBkQbO8oU2i5FgLW5o8-nH1Zn264X9fCmqZRcwBV6-q5dwDTwUfbn-3Yv5dibxq5bs_Uc5_fOL32zayiTHHZka85JBENz61R3tQrd3utQIez_yZQ78Uegx-Uk816oJ-zJcGQNuRKSpeJLP5p5AMgQ-TZ47gQUWaeEQsmRmxPw4pNQJ3aS7pM-VA74M3JbIGwgGZLMZ_sp6v0-JWBI3pH7zQkoCwQfnPVnk-oP_zcp4CSc3eKTLN2dqrIr2dImjqQ0QA
  13. ca.crt: 1359 bytes

Copy the token value from dashboard-admin's token details and paste it into the web login page to authenticate.
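
Instead of copying the token out of the describe output by hand, it can be pulled with a one-liner (a sketch; dashboard-admin-token-ppknr is whatever secret name kubectl get secret -n kube-system showed on your cluster):

  1. #Print only the token value of the dashboard-admin service account
  2. kubectl describe secret dashboard-admin-token-ppknr -n kube-system | awk '/^token:/ {print $2}'
  3. #Or decode it straight from the secret object
  4. kubectl get secret dashboard-admin-token-ppknr -n kube-system -o jsonpath='{.data.token}' | base64 -d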

[Screenshots: logging in to the Dashboard with the token and the Dashboard overview page]
