
Kubernetes: Container Introduction and Installation (Part 1)

Kubernetes series


  • Part 1: Kubernetes introduction and features
  • Part 2: Kubernetes basic object concepts
  • Part 3: Kubernetes deployment environment preparation
  • Part 4: Kubernetes cluster deployment
  • Part 5: Running a test example
  • Part 6: Configuring the Kubernetes UI

Part 1: Kubernetes Introduction and Features

1.1 Kubernetes Overview

  1. Kubernetes is a container cluster management system that Google open-sourced in June 2014. It is written in Go and is commonly abbreviated as K8s.
  2. K8s grew out of Borg, Google's internal container cluster management system, which had already been running at large scale in Google's production environment for more than ten years.
  3. K8s is mainly used to automate the deployment, scaling and management of containerized applications, and provides a complete set of features including resource scheduling, deployment management, service discovery, scaling and monitoring.
  4. Kubernetes v1.0 was officially released in July 2015; as of January 27, 2018 the latest stable release was v1.9.2, the version used later in this series.
  5. The goal of Kubernetes is to make deploying containerized applications simple and efficient.
  6. Official website: www.kubernetes.io

1.2 Main Features of Kubernetes

  1. Volumes: containers within a Pod can share data through volumes.
  2. Application health checks: a service inside a container may hang and stop handling requests; health-check policies can be configured to keep the application robust (a probe sketch follows this list).
  3. Replication of application instances: a controller maintains the desired number of Pod replicas, ensuring that a Pod, or a group of Pods of the same kind, is always available.
  4. Autoscaling: the number of Pod replicas is scaled automatically according to configured metrics such as CPU utilization.
  5. Service discovery: programs running in containers can discover the access address of a service through environment variables or the DNS add-on.
  6. Load balancing: a group of Pod replicas is given a private cluster IP; requests to it are load-balanced across the backend containers, and other Pods in the cluster can reach the application through this ClusterIP.
  7. Rolling updates: a service is updated without interruption, one Pod at a time, instead of deleting the whole service at once.
  8. Service orchestration: deployments are described declaratively in files, which makes application deployment far more efficient.
  9. Resource monitoring: the cAdvisor collector built into the Node components gathers resource data, Heapster aggregates it for the whole cluster, InfluxDB stores the time series, and Grafana displays it.
  10. Authentication and authorization: role-based access control (RBAC) and other authentication/authorization policies are supported.
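To make the health-check and replica features concrete, here is a minimal, hedged sketch of a Deployment with three replicas and an HTTP liveness probe, usable once the cluster built later in this article is available; the name, image and probe path are illustrative assumptions, not part of the original steps:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web            # hypothetical name, for illustration only
spec:
  replicas: 3               # the controller keeps three Pods available
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.15   # assumed image
        ports:
        - containerPort: 80
        livenessProbe:      # restart the container if this probe keeps failing
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
EOF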

Part 2: Kubernetes Basic Object Concepts

2.1 Higher-Level Abstractions Built on the Basic Objects

  1. ReplicaSet: the next-generation Replication Controller. It ensures that the specified number of Pod replicas is running at any given time and supports declarative updates. The only difference between an RC and an RS is label-selector support: an RS supports the new set-based selectors, while an RC supports only equality-based selectors.
  2. Deployment: a higher-level API object that manages ReplicaSets and Pods and provides declarative updates. The official recommendation is to manage ReplicaSets through Deployments rather than using them directly, which means you may never need to manipulate ReplicaSet objects yourself.
  3. StatefulSet: suited to persistent (stateful) applications; it provides a stable, unique network identity, persistent storage, and ordered deployment, scaling, deletion and rolling updates.
  4. DaemonSet: ensures that all (or selected) nodes run a copy of the same Pod. When a node joins the Kubernetes cluster, the Pod is scheduled onto it; when the node is removed from the cluster, the Pod is removed as well. Deleting a DaemonSet cleans up all of the Pods it created.
  5. Job: a one-off task; once it finishes, the Pod is destroyed and no new container is started. Jobs can also be run on a schedule (a minimal Job sketch follows this list).
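As a small illustration of the Job object described above, the following hedged sketch runs a one-off computation and then lets the Pod terminate; the name and image are assumptions, not part of the original text:

cat <<EOF | kubectl create -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-demo             # hypothetical name
spec:
  backoffLimit: 3           # retry a failed Pod at most three times
  template:
    spec:
      restartPolicy: Never  # Job Pods must not be restarted in place
      containers:
      - name: pi
        image: perl         # assumed image with the bignum module available
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
EOF

For scheduled execution, the same Pod template can be wrapped in a CronJob (batch/v1beta1 in Kubernetes 1.9).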

2.2 System Architecture and Component Functions


  1. Master components:
  2. kube-apiserver: the Kubernetes API, the unified entry point of the cluster and the coordinator of all components. It exposes its interface over HTTP; every create, read, update, delete and watch operation on object resources is handled by the APIServer and then persisted in etcd.
  3. kube-controller-manager: handles the routine background tasks of the cluster. Each resource has a corresponding controller, and the ControllerManager is responsible for managing these controllers.
  4. kube-scheduler: selects a Node for newly created Pods according to the scheduling algorithm.
  5. Node components:
  6. kubelet: the agent the Master runs on every Node. It manages the lifecycle of the containers running on that machine, for example creating containers, mounting volumes for Pods, downloading secrets, and reporting container and node status. The kubelet turns each Pod into a set of containers.
  7. kube-proxy: implements the Pod network proxy on each Node, maintaining the network rules and layer-4 load balancing.
  8. docker or rocket/rkt: runs the containers.
  9. Third-party service:
  10. etcd: a distributed key-value store used to keep the cluster state, for example information about Pod and Service objects.

Part 3: Installing Kubernetes

3.1 Cluster Planning

  1. Operating system: CentOS 7.6 x64
  2. Host plan (basic OS preparation for all three machines is sketched after this list):
  3. master (192.168.100.11): kube-apiserver, kube-controller-manager, kube-scheduler, etcd
  4. slave1 (192.168.100.12): kubelet, kube-proxy, docker, flannel, etcd
  5. slave2 (192.168.100.13): kubelet, kube-proxy, docker, flannel, etcd
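The tutorial does not show operating-system preparation. A commonly used sketch for a lab environment follows; disabling the firewall, SELinux and swap outright is an assumption suitable only for test clusters:

# Run on all three machines.
cat >> /etc/hosts <<EOF
192.168.100.11 master
192.168.100.12 slave1
192.168.100.13 slave2
EOF
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
swapoff -a                               # kubelet refuses to start with swap enabled by default
sed -i '/ swap / s/^/#/' /etc/fstab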

3.2 Installing Kubernetes

3.2.1 Installing Docker on All Cluster Nodes

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
cat << EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["192.168.0.210:5000"]
}
EOF
systemctl start docker
systemctl enable docker
# Docker registry mirror (accelerator):
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
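A quick, hedged verification of the Docker installation and the mirror configuration:

docker version
docker info | grep -A 2 "Registry Mirrors"   # should list the mirror(s) from daemon.json
docker run --rm hello-world                  # optional: pulls and runs a tiny test image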


3.2.2 Cluster Deployment – Self-Signed TLS Certificates


Install the certificate generation tool cfssl:
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

Download the tools into a working directory:
mkdir /ssl
cd /ssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64


chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo


Prepare the certificate files:
mkdir /k8s/etcd-certs/        # upload etcd-cert.sh into this directory
Generate the etcd certificates (the script writes the CSR template files and calls cfssl):
chmod +x etcd-cert.sh
./etcd-cert.sh
# Keep only the generated .pem certificates and check them:
ls | grep -v pem | xargs rm -f
ls *.pem


The etcd-cert.sh file:
---
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.100.11",
    "192.168.100.12",
    "192.168.100.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
---
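Once the script has run, the generated certificates can be inspected with cfssl-certinfo or openssl; a small sanity-check sketch, not part of the original script:

ls *.pem                          # expect ca.pem, ca-key.pem, server.pem, server-key.pem
cfssl-certinfo -cert server.pem   # shows the CN, the SANs (the three node IPs) and the expiry date
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"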

3.2.3 Deploying the etcd Cluster

Binary packages can be downloaded from: https://github.com/coreos/etcd/releases
To check the cluster health later:
/opt/etcd/bin/etcdctl \
  --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" \
  cluster-health

mkdir -p /opt/etcd/{bin,cfg,ssl}
cd /soft/Soft
tar -zxvf etcd-v3.3.12-linux-amd64.tar.gz
cd etcd-v3.3.12-linux-amd64/
mv etcdctl /opt/etcd/bin/
mv etcd /opt/etcd/bin/


vim /opt/etcd/cfg/etcd
---
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.11:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.11:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.11:2380,etcd02=https://192.168.100.12:2380,etcd03=https://192.168.100.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
---
vim /usr/lib/systemd/system/etcd.service
---
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd --name=${ETCD_NAME} --data-dir=${ETCD_DATA_DIR} --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} --initial-cluster=${ETCD_INITIAL_CLUSTER} --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
---
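Before starting etcd it helps to reload systemd so the new unit file is picked up, and to enable the service at boot; a hedged addition, equivalent to the chkconfig commands used on the other nodes later:

systemctl daemon-reload        # pick up the newly created etcd.service
systemctl enable etcd          # same effect as "chkconfig etcd on"
# Note: on a brand-new cluster, "systemctl start etcd" may block until etcd has
# also been started on a quorum of the other members.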
Copy the certificates into place and start etcd:
cd /root/k8s/etcd-certs
cp -ap *pem /opt/etcd/ssl/
systemctl start etcd
ps -ef | grep etcd


scp -r /opt/etcd 192.168.100.12:/opt/
scp -r /opt/etcd 192.168.100.13:/opt/
scp /usr/lib/systemd/system/etcd.service 192.168.100.12:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service 192.168.100.13:/usr/lib/systemd/system/
On 192.168.100.12, edit the configuration file:
vim /opt/etcd/cfg/etcd
---
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.12:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.12:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.11:2380,etcd02=https://192.168.100.12:2380,etcd03=https://192.168.100.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
---
systemctl start etcd
chkconfig etcd on


On 192.168.100.13, edit the configuration file:
vim /opt/etcd/cfg/etcd
---
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.13:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.13:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.11:2380,etcd02=https://192.168.100.12:2380,etcd03=https://192.168.100.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
---
systemctl start etcd
chkconfig etcd on


Add etcd to the PATH:
vim /etc/profile
---
export ETCD_HOME=/opt/etcd
PATH=$PATH:$HOME/bin:$ETCD_HOME/bin
---
source /etc/profile
etcdctl --help
etcdctl --help | grep ca



Check the cluster health:
cd /opt/etcd/ssl/
/opt/etcd/bin/etcdctl \
  --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" \
  cluster-health
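With etcd 3.3 the same check can also be done through the v3 API; a hedged alternative, not used elsewhere in this tutorial:

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=ca.pem --cert=server.pem --key=server-key.pem \
  --endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" \
  endpoint health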



3.2.4 The Kubernetes Network Model (CNI)


Implementing the container network


3.2.5 Cluster Deployment – Deploying the Flannel Network

  1. Overlay network: a virtual network layered on top of the underlying physical network; the hosts in it are connected by virtual links.
  2. VXLAN: encapsulates the original packet inside a UDP datagram, using the IP/MAC addresses of the underlying network as the outer header, and transmits it over Ethernet; at the destination the tunnel endpoint decapsulates the packet and delivers the data to the target address.
  3. Flannel: one kind of overlay network. It also wraps the source packet inside another network packet for routing, forwarding and communication, and currently supports UDP, VXLAN, AWS VPC, GCE routes and other forwarding backends (a sketch for inspecting the VXLAN device follows this list).
  4. Other mainstream solutions for multi-host container networking: tunnel-based schemes (Weave, Open vSwitch) and route-based schemes (Calico).
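Once flanneld is running (later in this section), the VXLAN device it creates can be inspected directly on any node; a small hedged sketch:

ip -d link show flannel.1      # shows the vxlan details (VNI, UDP port) of the overlay device
ip route | grep flannel        # per-node subnet routes are installed via flannel.1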



1) Write the allocated subnet range into etcd for flanneld to use:
/opt/etcd/bin/etcdctl \
  --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" \
  set /coreos.com/network/config '{ "Network": "172.16.0.0/16", "Backend": {"Type": "vxlan"}}'
2) Download the binary package:
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
3) Configure Flannel.
4) Manage Flannel with systemd.
5) Configure Docker to start with the assigned subnet.
6) Start flanneld and restart Docker.

Deployment:
mkdir /root/flannel
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
tar -zxvf flannel-v0.10.0-linux-amd64.tar.gz
ls -ld *
cp -p flanneld mk-docker-opts.sh /opt/kubernetes/bin/
scp flanneld mk-docker-opts.sh 192.168.100.12:/opt/kubernetes/bin/
scp flanneld mk-docker-opts.sh 192.168.100.13:/opt/kubernetes/bin/


The flannel configuration script (it writes the options file and the systemd unit):
---
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/kubernetes/ssl/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/server.pem \
-etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
---

Configure the flanneld options file on node 192.168.100.12 (this assumes ca.pem, server.pem and server-key.pem have also been copied to /opt/kubernetes/ssl/ on each node):
vim /opt/kubernetes/cfg/flanneld
---
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379 \
-etcd-cafile=/opt/kubernetes/ssl/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/server.pem \
-etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
---
Configure the flanneld systemd unit:
vim /usr/lib/systemd/system/flanneld.service
---
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
---
  1. 192.168.100.11 执行
  2. 设置VXLAN 网络
  3. cd /opt/kubernetes/ssl/
  4. etcdctl \
  5. --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  6. --endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" \
  7. set /coreos.com/network/config '{ "Network": "172.16.0.0/16", "Backend": {"Type": "vxlan"}}'



  1. node 节点上面检查
  2. 192.168.100.12:
  3. cd /opt/kubernetes/ssl/
  4. etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" get /coreos.com/network/config


And on 192.168.100.13:
cd /opt/kubernetes/ssl/
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" get /coreos.com/network/config


After flanneld starts, ifconfig shows a flannel.1 interface with the subnet assigned to this node:
ifconfig
cat /run/flannel/subnet.env


Configure Docker to load the flanneld network options at startup (note that the dockerd line must not end with a backslash/line continuation):
vim /usr/lib/systemd/system/docker.service
---
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
---
Restart Docker:
systemctl daemon-reload
service docker restart
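A quick, hedged check that containers now receive addresses from the flannel subnet (the busybox image is an assumption):

docker run --rm busybox ip addr show eth0
# The container address should fall inside this node's FLANNEL_SUBNET; cross-node
# connectivity can then be verified by pinging the docker0 address of the other node.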



Deploy 192.168.100.13:
---
On 192.168.100.12, copy the files over to 192.168.100.13:
cd /usr/lib/systemd/system
scp /opt/kubernetes/cfg/flanneld 192.168.100.13:/opt/kubernetes/cfg/
scp flanneld.service 192.168.100.13:/usr/lib/systemd/system
scp docker.service 192.168.100.13:/usr/lib/systemd/system
Start flanneld and restart Docker:
service flanneld start
service docker restart
chkconfig flanneld on
ifconfig | more
Test: from 192.168.100.12, verify that the flanneld network is reachable by pinging the docker0/flannel address of the other node, for example:
ping 10.0.23.1


On the master node, inspect the flannel network:
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" ls /coreos.com/network/subnets
To see which node a flanneld subnet lease belongs to:
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" get /coreos.com/network/subnets


route -n


3.2.6 Cluster Deployment – Creating the Node kubeconfig Files

Upload kubeconfig.sh to the master node, 172.17.100.11 (the master and node hosts are addressed as 172.17.100.11-13 from here on).
1. Create the TLS Bootstrapping token:
---
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
---
Check the token file:
cat token.csv
---
6a694cc8d6e025e97ea74c1a14cff8bf,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
---


2. Create the bootstrap.kubeconfig file.
Set KUBE_APISERVER:
export KUBE_APISERVER="https://172.17.100.11:6443"
Upload the kubectl binary to /opt/kubernetes/bin:
cd /opt/kubernetes/bin
chmod +x kubectl
# Set the cluster parameters
cd /opt/kubernetes/ssl/
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
This generates the bootstrap.kubeconfig file.


Set the client credentials (the bootstrap token):
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig


# Set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig


# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig


cat bootstrap.kubeconfig


3. Create the kube-proxy kubeconfig file:
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
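Note that the kube-proxy step above assumes kube-proxy.pem and kube-proxy-key.pem were generated with the same CA (that cfssl step is not shown in this article). A hedged sanity check of the two files:

kubectl config view --kubeconfig=bootstrap.kubeconfig     # should embed the CA and point at ${KUBE_APISERVER}
kubectl config view --kubeconfig=kube-proxy.kubeconfig
ls -l bootstrap.kubeconfig kube-proxy.kubeconfig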


cat kube-proxy.kubeconfig


  1. bootstrap.kubeconfig kube-proxy.kubeconfig 同步到其它节点
  2. cp -p *kubeconfig /opt/kubernetes/cfg
  3. scp *kubeconfig 172.17.100.12:/opt/kubernetes/cfg/
  4. scp *kubeconfig 172.17.100.13:/opt/kubernetes/cfg/



Part 4: Kubernetes Cluster Deployment – Obtaining the K8s Binary Packages

Download the k8s packages:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md
This series uses the 1.9.2 server package:
kubernetes-server-linux-amd64.tar.gz

4.1 Deploying the Master Node

Upload master.zip to the /root/master directory on 192.168.100.11:
mkdir master
cd master
unzip master.zip
cp -p kube-controller-manager kube-apiserver kube-scheduler /opt/kubernetes/bin/
cd /opt/kubernetes/bin/
chmod +x *



cp -p /root/token.csv /opt/kubernetes/cfg/
cd /root/master/
./apiserver.sh 172.17.100.11 https://172.17.100.11:2379,https://172.17.100.12:2379,https://172.17.100.13:2379


cd /opt/kubernetes/cfg/
cat kube-apiserver
---
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://172.17.100.11:2379,https://172.17.100.12:2379,https://172.17.100.13:2379 \
--insecure-bind-address=127.0.0.1 \
--bind-address=172.17.100.11 \
--insecure-port=8080 \
--secure-port=6443 \
--advertise-address=172.17.100.11 \
--allow-privileged=true \
--service-cluster-ip-range=10.10.10.0/24 \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/server.pem \
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
---

cd /usr/lib/systemd/system/
cat kube-apiserver.service
---
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
---
Start the apiserver:
service kube-apiserver start
ps -ef | grep apiserver

Run the controller-manager script:
./controller-manager.sh 127.0.0.1
ps -ef | grep controller


Start the scheduler:
./scheduler.sh 127.0.0.1
ps -ef | grep scheduler


cd /opt/kubernetes/cfg/
cat kube-controller-manager
---
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.10.10.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
---
cat kube-scheduler
---
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
---
Check the component status:
kubectl get cs


4.2 Deploying the Worker Nodes

On 172.17.100.12 and 172.17.100.13:
mkdir /root/node
Upload node.zip into the /root/node directory:
cd /root/node
unzip node.zip


cp -p kube-proxy kubelet /opt/kubernetes/bin/
cd /opt/kubernetes/bin/
chmod +x *


On node 172.17.100.12:
cd /root/node
chmod +x *.sh
./kubelet.sh 172.17.100.12 10.10.10.2
----
Watch for the following error:
Jul 11 15:40:18 node-02 kubelet: error: failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
-----
If it appears, run the following command on the master (172.17.100.11):
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
Then restart the kubelet:
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet


Configure kube-proxy:
./proxy.sh 172.17.100.12
ps -ef | grep proxy


On the master, check the certificate signing requests:
kubectl get csr


kubectl certificate approve node-csr-3B70dKcCjJuitWcWTjqb2rjadH1ld4Tq0mU9QAd5j7I
kubectl get csr
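When several nodes register at the same time, all pending CSRs can be approved in one pass; a hedged convenience that is not part of the original steps:

kubectl get csr -o name | xargs kubectl certificate approve
kubectl get csr                            # every request should now show Approved,Issued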


The node has joined the cluster:
kubectl get node


The kubelet certificates now appear on 172.17.100.12:
cd /opt/kubernetes/ssl/
ls



On node 172.17.100.13:
cd /root/node
./kubelet.sh 172.17.100.13 10.10.10.2
./proxy.sh 172.17.100.13


On the master (172.17.100.11), run:
kubectl get csr
kubectl certificate approve node-csr-ubm9Uq4P7VhzB_zryLhH3WM5SbpaunS5sg9cYqG5wLA


kubectl get csr


The kubelet certificates are generated automatically on 172.17.100.13:
cd /opt/kubernetes/ssl
ls


On the master, run:
kubectl get node



kubectl get cs
At this point the Kubernetes deployment is complete.


Part 5: Running a Test Example

kubectl run nginx --image=nginx --replicas=3
kubectl get pod
kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
kubectl get svc nginx
kubectl get all
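Some optional, hedged follow-ups: scale the deployment and look at the endpoints behind the service:

kubectl scale deployment nginx --replicas=5
kubectl get pods -l run=nginx -o wide      # "kubectl run" labels the Pods with run=nginx
kubectl get endpoints nginx                # the Pod IPs the ClusterIP load-balances to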


kubectl get pod -o wide


kubectl get svc nginx


Access over the flanneld network, using the service's ClusterIP:
curl -i 10.10.10.235:88


Access from outside the cluster through the NodePort:
172.17.100.12:40463


Part 6: The Kubernetes UI (Dashboard)

mkdir -p /root/ui
Upload the files dashboard-deployment.yaml, dashboard-rbac.yaml and dashboard-service.yaml into /root/ui:
cd /root/ui
ls


Create the dashboard resources:
kubectl create -f dashboard-rbac.yaml
kubectl create -f dashboard-deployment.yaml
kubectl create -f dashboard-service.yaml
# Check the Pods and Services
kubectl get pods --all-namespaces
kubectl get svc --all-namespaces
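The NodePort assigned to the dashboard Service determines the port used in the browser URL below; it can be read back as follows (the Service name kubernetes-dashboard is what these manifests usually use, so treat it as an assumption):

kubectl get svc -n kube-system
kubectl get svc kubernetes-dashboard -n kube-system -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'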


Open the dashboard in a browser:
http://172.17.100.12:41389/ui
At this point the Kubernetes UI installation is complete.

