@zhangyy · 2020-04-26

Kubernetes Concepts and Installation/Configuration (Supplement)

Kubernetes series


  • Part 1: Introduction to Kubernetes
  • Part 2: Installing and configuring Kubernetes
  • Part 3: The Kubernetes web UI

Part 1: Introduction to Kubernetes

1.1 What is Kubernetes?

  1. Kubernetes (K8S for short) is a container cluster management system open-sourced by Google in 2014.
  2. K8S is used to deploy, scale, and manage containerized applications.
  3. K8S provides container orchestration, resource scheduling, elastic scaling, deployment management, service discovery, and a range of related features.
  4. The goal of Kubernetes is to make deploying containerized applications simple and efficient.
  5. Official website: http://www.kubernetes.io

1.2 Kubernetes features

  1. Self-healing: when a node fails, failed containers are restarted, replaced, and redeployed to keep the expected replica count; containers that fail health checks are killed and receive no client traffic until they are ready, so online services are not interrupted.
  2. Elastic scaling: scale application instances up or down quickly via commands, the UI, or automatically based on CPU usage, keeping the service available at peak load and reclaiming resources during lulls to run at minimal cost (see the kubectl example after this list).
  3. Automated rollouts and rollbacks: K8S updates applications with a rolling-update strategy, updating one Pod at a time rather than deleting all Pods at once; if a problem appears during the update, the change is rolled back so the upgrade does not disrupt the business.
  4. Service discovery and load balancing: K8S gives a set of containers a single access point (an internal IP address and a DNS name) and load-balances across all associated containers, so users never have to track container IPs.
  5. Secret and configuration management: manages secrets and application configuration without baking sensitive data into images, improving their security; common configuration can also be stored in K8S for applications to consume.
  6. Storage orchestration: mounts external storage systems — local storage, a public cloud (e.g. AWS), or network storage (e.g. NFS, GlusterFS, Ceph) — as part of the cluster's resources, greatly increasing storage flexibility.
  7. Batch processing: provides one-off and scheduled (cron) jobs for batch data processing and analysis scenarios.
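To make elastic scaling and rollbacks concrete, here is roughly what they look like from the CLI (the deployment name nginx is illustrative; these are standard kubectl commands, and autoscaling only takes effect if a metrics pipeline is installed):

# scale out manually
kubectl scale deployment nginx --replicas=5
# scale automatically on CPU usage (needs a metrics source such as heapster/metrics-server)
kubectl autoscale deployment nginx --min=2 --max=10 --cpu-percent=80
# rolling update, one Pod at a time
kubectl set image deployment/nginx nginx=nginx:1.16
# roll back if the update misbehaves
kubectl rollout undo deployment/nginx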

1.3 Kubernetes cluster architecture and components

(screenshots: Kubernetes cluster architecture diagrams)

  1. Master components
  2. kube-apiserver: the Kubernetes API. It is the cluster's single entry point and the coordinator between components, exposing its services as a RESTful API; every create/update/delete/query and watch on object resources goes through the APIServer before being persisted to etcd.
  3. kube-controller-manager: handles the cluster's routine background tasks. Each resource has its own controller, and the ControllerManager is responsible for managing all of these controllers.
  4. kube-scheduler: picks a Node for each newly created Pod according to its scheduling algorithms; Pods can be placed anywhere — on the same node or spread across different nodes.
  5. etcd: a distributed key-value store that holds cluster state, such as Pod and Service object data.
  6. Node components
  7. kubelet: the master's agent on each Node. It manages the lifecycle of containers running on its machine — creating containers, mounting Pod volumes, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a group of containers.
  8. kube-proxy: implements the Pod network proxy on each Node, maintaining network rules and performing layer-4 load balancing.
  9. docker or rocket (rkt): the container engine that actually runs the containers.


  1. Pod
     - the smallest deployment unit
     - a group of one or more containers
     - containers in a Pod share a network namespace
     - Pods are ephemeral
  2. Controllers — higher-level objects that deploy and manage Pods
     - ReplicaSet: maintains the expected number of Pod replicas
     - Deployment: stateless application deployment
     - StatefulSet: stateful application deployment
     - DaemonSet: runs a copy of a Pod on every Node
     - Job: one-off tasks
     - CronJob: scheduled tasks
  3. Service
     - keeps Pods reachable behind a stable endpoint
     - defines an access policy for a set of Pods
  4. Label: a tag attached to a resource, used to associate, query, and filter objects
  5. Namespaces: logically isolate groups of objects
  6. Annotations: non-identifying metadata (notes) attached to objects

A minimal manifest tying these objects together follows.
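A minimal sketch connecting Pod, Deployment (with its ReplicaSet), Service, and Label (the names, image, and replica count are illustrative):

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                 # the underlying ReplicaSet keeps 2 Pods running
  selector:
    matchLabels:
      app: web                # Label associating Pods with this controller
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                  # the Service finds its Pods by Label, not by IP
  ports:
  - port: 80
EOF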

Part 2: Deploying a highly available Kubernetes cluster environment

2.1 The three official deployment methods

  1. minikube: a tool that quickly spins up a single-node Kubernetes locally, intended for users trying out Kubernetes or doing day-to-day development. Guide: https://kubernetes.io/docs/setup/minikube/
  2. kubeadm: a tool that provides kubeadm init and kubeadm join for deploying a Kubernetes cluster quickly (a minimal sketch follows this list). Guide: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
  3. Binary packages (recommended, and the method used below): download the release binaries from the official site and deploy each component by hand to assemble the cluster. Download: https://github.com/kubernetes/kubernetes/releases
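For comparison with the binary method used in the rest of this guide, a kubeadm bootstrap looks roughly like this (the pod CIDR, token, and hash are placeholders):

# on the master
kubeadm init --pod-network-cidr=10.244.0.0/16
# on each node, run the join command that kubeadm init prints
kubeadm join 192.168.100.11:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>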

2.2 Deploying from binary packages

2.2.1 Software and versions

(screenshot: software and version table)

2.2.2 IP addresses and role plan

(screenshot: IP address and role plan)

2.2.3 Single-master topology

(screenshot: single-master architecture)

2.2.4 Multi-master topology

(screenshot: multi-master architecture)

2.3 Deploying the single-master cluster first

2.3.1 Self-signed SSL certificates


mkdir -p /k8s/{k8s-cert,etcd-cert}
cd /root/Deploy
cp -p etcd-cert.sh /k8s/etcd-cert
cd /k8s/etcd-cert
chmod +x etcd-cert.sh
----
Contents of etcd-cert.sh:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.100.11",
    "192.168.100.12",
    "192.168.100.13",
    "192.168.100.14",
    "192.168.100.15",
    "192.168.100.16",
    "192.168.100.17",
    "192.168.100.18",
    "192.168.100.60"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
----


Install the cfssl toolchain, then generate the certificates:
cd /root/Deploy
cp -p cfssl.sh /k8s/etcd-cert
cd /k8s/etcd-cert
chmod +x cfssl.sh
./cfssl.sh
./etcd-cert.sh
----
Contents of cfssl.sh:
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
----
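A quick sanity check on the toolchain and the freshly generated server certificate (run in /k8s/etcd-cert; the openssl grep simply confirms the SANs cover every etcd member IP listed above):

cfssl version
openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'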


2.3.2 Configuring the etcd service

Binary downloads: https://github.com/etcd-io/etcd/releases

cd /root/Soft
tar -zxvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
mkdir -p /opt/etcd/{ssl,bin,cfg}
mv etcd etcdctl /opt/etcd/bin/
cd /k8s/etcd-cert
cp -p *.pem /opt/etcd/ssl
cp -p *.csr /opt/etcd/ssl


cd /root/Deploy
cp -p etcd.sh /root
chmod +x etcd.sh
./etcd.sh etcd01 192.168.100.11 etcd02=https://192.168.100.13:2380,etcd03=https://192.168.100.14:2380
scp -r /opt/etcd 192.168.100.13:/opt/
scp -r /opt/etcd 192.168.100.14:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.100.13:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.100.14:/usr/lib/systemd/system/

After copying, edit /opt/etcd/cfg/etcd on each remote node so ETCD_NAME and the IPs match that node, as shown in the next step.
----
Contents of etcd.sh:

#!/bin/bash
# example: ./etcd.sh etcd01 192.168.100.11 etcd02=https://192.168.100.13:2380,etcd03=https://192.168.100.14:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
----


Log in to 192.168.100.11; its /opt/etcd/cfg/etcd should read:
vim /opt/etcd/cfg/etcd
---
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.11:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.11:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.11:2380,etcd02=https://192.168.100.13:2380,etcd03=https://192.168.100.14:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
---
Log in to 192.168.100.13; its /opt/etcd/cfg/etcd should read:
vim /opt/etcd/cfg/etcd
---
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.13:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.13:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.11:2380,etcd02=https://192.168.100.13:2380,etcd03=https://192.168.100.14:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
---
Log in to 192.168.100.14; its /opt/etcd/cfg/etcd should read:
vim /opt/etcd/cfg/etcd
---
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.14:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.14:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.14:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.14:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.11:2380,etcd02=https://192.168.100.13:2380,etcd03=https://192.168.100.14:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
---

Start the etcd service on each node:
service etcd start
chkconfig etcd on


Verify the etcd cluster:
cd /opt/etcd/ssl
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379" \
cluster-health
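The same check via the v3 API, which the etcdctl bundled with etcd 3.3 also supports (the flag names differ from the v2 ones above; same certificates and endpoints):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=ca.pem --cert=server.pem --key=server-key.pem \
  --endpoints="https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379" \
  endpoint health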


2.4 Installing Docker on the nodes


Install Docker on the node machines (192.168.100.13 and 192.168.100.14):
----
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# configure a Docker registry mirror (the abcd1234 token in the URL is account-specific)
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://abcd1234.m.daocloud.io
yum install docker-ce
systemctl start docker
systemctl enable docker
----


2.5 Deploying the flannel network

2.5.1 The Kubernetes network model (CNI)

Container Network Interface (CNI): the standard container network interface, driven mainly by Google and CoreOS.
The Kubernetes network model requires:
  1. One IP per Pod.
  2. Each Pod gets its own independent IP; all containers inside the Pod share that network (the same IP).
  3. Every container can communicate with every other container.
  4. Every node can communicate with every container.


2.5.2 The flannel network model

Overlay Network: a virtual network layered on top of the underlying (physical) network, in which hosts are connected by virtual links.
VXLAN: encapsulates the source packet in UDP, wraps it with the underlay's IP/MAC as the outer header, and transmits it over Ethernet; at the destination, the tunnel endpoint decapsulates it and delivers the data to the target address.
Flannel: one kind of overlay network. It likewise encapsulates source packets inside another network packet for routing, forwarding, and communication, and currently supports UDP, VXLAN, AWS VPC, GCE routing, and other data-forwarding backends.


2.5.3 Installing and configuring flannel

1. Write the allocated subnet range into etcd for flanneld to use:
cd /opt/etcd/ssl/
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
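To read the value back and confirm it was stored (same certificates and endpoints as above):

/opt/etcd/bin/etcdctl \
  --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379" \
  get /coreos.com/network/config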


2. Download flannel:
https://github.com/coreos/flannel/releases
tar -zxvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld /opt/kubernetes/bin/
mv mk-docker-opts.sh /opt/kubernetes/bin/


Deploy flannel on each node (a sketch of what flannel.sh generates follows):
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
cd /root/
./flannel.sh https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379
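flannel.sh itself is not reproduced in these notes; a sketch of what such a script conventionally generates — a flanneld config plus a systemd unit, following the file layout used above (the details are assumptions):

#!/bin/bash
# sketch of flannel.sh: write flanneld config and a systemd unit
ETCD_ENDPOINTS=${1:-"https://192.168.100.11:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
# write the subnet docker should use into /run/flannel/subnet.env;
# docker.service must also be edited to source that file and pass
# \$DOCKER_NETWORK_OPTIONS to dockerd
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld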


Restart flannel and docker:
service flanneld restart
service docker restart
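After the restart, docker should have picked up the flannel subnet. A quick check (interface names follow flannel's VXLAN defaults):

cat /run/flannel/subnet.env    # the subnet handed to docker via mk-docker-opts.sh
ip addr show flannel.1         # flannel's VXLAN interface
ip addr show docker0           # docker0 should now sit inside the flannel subnet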


Test the flannel network:
On 192.168.100.13 and 192.168.100.14, start a throwaway container on each node and check that the two can ping each other:
docker run -it busybox
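A concrete run of that test (container IPs will differ per node; the 172.17.x.x address below is a placeholder):

# on 192.168.100.13
docker run -it busybox sh
/ # ip addr show eth0          # note this container's IP, e.g. 172.17.63.2

# on 192.168.100.14
docker run -it busybox sh
/ # ping 172.17.63.2           # ping the container on the other node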



Inspect the subnets and routes recorded in etcd:
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379" ls /coreos.com/network/
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379" ls /coreos.com/network/subnets


2.6 Deploying the Kubernetes master components

2.6.1 Downloading Kubernetes

Download (this guide uses version 1.13.4):
https://dl.k8s.io/v1.13.4/kubernetes-server-linux-amd64.tar.gz

2.6.2 Deploying kube-apiserver

tar -zxvf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
cd /root/kubernetes/server/bin
cp -p kube-apiserver kube-controller-manager kube-scheduler /opt/kubernetes/bin/
cd /root/Deploy
chmod +x apiserver.sh
./apiserver.sh 192.168.100.11 https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379

apiserver.sh writes the apiserver's flags into /opt/kubernetes/cfg/kube-apiserver; a sketch of that file follows.
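The script itself is not reproduced in these notes; the config it conventionally generates looks roughly like this (the values follow this cluster's layout, but the exact flag list is an assumption — all flags shown are real kube-apiserver 1.13 options):

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.100.11:2379,https://192.168.100.13:2379,https://192.168.100.14:2379 \
--bind-address=192.168.100.11 \
--secure-port=6443 \
--advertise-address=192.168.100.11 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"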


Configure the Kubernetes certificates:
cd /root/Deploy
cp -p k8s-cert.sh /opt/kubernetes/ssl
cd /opt/kubernetes/ssl
chmod +x k8s-cert.sh
./k8s-cert.sh


Generate the token file:
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cat token.csv
mv token.csv /opt/kubernetes/cfg/


Restart kube-apiserver and confirm it is listening:
service kube-apiserver restart
netstat -nultp | grep 8080
netstat -nultp | grep 6443
If the apiserver fails to start, run it in the foreground to see the error:
/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS


2.6.3 Deploying controller-manager and scheduler

cd /root/Deploy
chmod +x controller-manager.sh
chmod +x scheduler.sh
./controller-manager.sh 127.0.0.1
./scheduler.sh 127.0.0.1


Check the cluster status:
cd /root/kubernetes/server/bin/
cp -p kubectl /usr/bin/
kubectl get cs


Grant the authorization the Node components need:


kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap


cd /root/Deploy
cp -p kubeconfig.sh /root
cd /root
chmod +x kubeconfig.sh
./kubeconfig.sh 192.168.100.11 /opt/kubernetes/ssl
This generates the bootstrap.kubeconfig and kube-proxy.kubeconfig files (a sketch of the script's core follows).
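kubeconfig.sh is not reproduced in these notes; the heart of such a script is plain `kubectl config` calls (a sketch for the bootstrap half, assuming BOOTSTRAP_TOKEN matches the token.csv created earlier; kube-proxy.kubeconfig is built the same way with the kube-proxy certificates):

APISERVER=$1
SSL_DIR=$2
KUBE_APISERVER="https://$APISERVER:6443"

# bootstrap.kubeconfig, used by the kubelet for TLS bootstrapping
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig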



Check certificate signing requests:
kubectl get csr


kubectl certificate approve node-csr-H38P-yvXaCa5GO7nXNg_2zegNT1BuSr-wCBzBXOPXBc
kubectl certificate approve node-csr-kAJTeC6Biz8ZtNsbFCSoL8AF-DhAFBlocn8xDxzTr1s
kubectl get csr
kubectl get node


2.7 Deploying the node components

Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files to the nodes:
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.100.13:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.100.14:/opt/kubernetes/cfg/
cp -p bootstrap.kubeconfig kube-proxy.kubeconfig /opt/kubernetes/cfg/
cd /root/kubernetes/server/bin
scp kubelet root@192.168.100.13:/opt/kubernetes/bin/
scp kubelet root@192.168.100.14:/opt/kubernetes/bin/


Run on each node, passing that node's own IP:
cd /root
chmod +x kubelet.sh
./kubelet.sh 192.168.100.13    # on 192.168.100.13
./kubelet.sh 192.168.100.14    # on 192.168.100.14


Deploy kube-proxy; copy the binary to the nodes:
scp kube-proxy 192.168.100.13:/opt/kubernetes/bin/
scp kube-proxy 192.168.100.14:/opt/kubernetes/bin/


Configure kube-proxy (log in to each node and pass its own IP):
chmod +x proxy.sh
./proxy.sh 192.168.100.13    # on 192.168.100.13
./proxy.sh 192.168.100.14    # on 192.168.100.14
ps -ef | grep proxy


Run an nginx instance as a test:
kubectl run nginx --image=nginx --replicas=3
kubectl get pod
kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
kubectl get svc nginx
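Once the service is up, it should answer on the NodePort that `kubectl get svc nginx` prints (the 30000-range port below is a placeholder — substitute the one shown in your output):

kubectl get svc nginx                # note the NodePort, e.g. 88:38696/TCP
curl http://192.168.100.13:38696     # placeholder port; use the NodePort from above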


The master cannot view Pod logs:
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log nginx-7cdbd8cdc9-z7jpk))
----
Fix: allow anonymous access to the kubelet (convenient in a lab, but far too permissive for production).
On each node, edit /opt/kubernetes/cfg/kubelet.config and append:
authentication:
  anonymous:
    enabled: true
----
service kubelet restart
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous



2.8 Deploying the Kubernetes dashboard UI

Download link:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard

Locate and unpack the Kubernetes source bundle:
cd /root/kubernetes/
tar -zxvf kubernetes-src.tar.gz
cd /root/kubernetes/cluster/addons/dashboard/


kubectl create -f dashboard-configmap.yaml
kubectl create -f dashboard-rbac.yaml
kubectl create -f dashboard-secret.yaml


Edit dashboard-controller.yaml and point the image at a reachable mirror:
vim dashboard-controller.yaml
image: registry.cn-hangzhou.aliyuncs.com/kuberneters/kubernetes-dashboard-amd64:v1.10.1


kubectl create -f dashboard-controller.yaml


kubectl get pods -n kube-system


kubectl get pods --all-namespaces


Modify dashboard-service.yaml:
vim dashboard-service.yaml
Add under spec (see the sketch below):
type: NodePort
kubectl create -f dashboard-service.yaml
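After the edit, the service ends up looking roughly like this (a sketch based on the upstream dashboard v1.10 manifest; only `type: NodePort` is the addition):

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort            # the added line: expose the dashboard on a node port
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443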


kubectl get svc -n kube-system


kubectl get pods -o wide --all-namespaces


Open a browser and visit (use Firefox, which lets you accept the dashboard's self-signed certificate):
https://192.168.100.13:34392


Log in with a k8s-admin token.
k8s-admin.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---

kubectl create -f k8s-admin.yaml


kubectl get secret -n kube-system
kubectl describe secret dashboard-admin-token-g64n4 -n kube-system
Copy the token at the bottom of the output:
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZzY0bjQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYTA4ZDA0OTQtZDQ2ZC0xMWU5LTkxMGYtMDAwYzI5ZjUyMjMxIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.tqFByxlIY3eLjHWzA7nY5Sm3-cHz_vbNSSTCnbe91XKmwJDYSmN-b3XtkR2bWk0PC4UUyPr3HVXqW_tblgbBAOgmm22DI4yXmf0Rn82QBAYEHu-brCxb1u-9NRle09gjlsZtCiTggS5D7Pa-QNXZGYxDEwSPSi19kmvaNJIYVfmJCmTiyW3ObiSKYOLj_f21XOucdfr4lrIt0EA-TksfM3B0DfiEsu_nIGOWCEivh15XLm2hE-en45Y0cNH8XCTlMaOT-WmGUi9E1hZ9da9pKc0wKAuIUgtI25SrzhILabVxw9u-iar2YqFxUrsGf4u55TlJ74x9YKeCYFnqCVhsTg
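To avoid hunting for the secret name by hand, the token can be pulled out in one line (the awk pattern assumes the dashboard-admin naming used above):

kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | awk '/dashboard-admin-token/{print $1}') | awk '/^token:/{print $2}'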

