@zhangyy
2020-06-14T13:46:26.000000Z
Kubernetes series
- Part 1: Kubernetes introduction and features
- Part 2: Kubernetes basic object concepts
- Part 3: Kubernetes deployment environment preparation
- Part 4: Kubernetes cluster deployment
- Part 5: Running a test case on Kubernetes
- Part 6: Configuring the Kubernetes UI
Kubernetes is a container cluster management system that Google open-sourced in June 2014. It is written in Go and is also called K8S. K8S grew out of Borg, Google's internal container cluster management system, which had already been running at large scale in Google's production environment for about a decade. K8S is mainly used to automate the deployment, scaling, and management of containerized applications, and provides a complete set of features including resource scheduling, deployment management, service discovery, scaling, and monitoring. Kubernetes v1.0 was officially released in July 2015; as of January 27, 2018, the latest stable version was v1.9.2. The goal of Kubernetes is to make deploying containerized applications simple and efficient. Official website: www.kubernetes.io
Main features:
- Data volumes: containers in a Pod can share data through volumes.
- Application health checks: a process inside a container may hang and stop serving requests; health check policies can be configured to keep the application robust.
- Replication of application instances: a controller maintains the number of Pod replicas, ensuring that a Pod or a group of Pods of the same kind is always available.
- Elastic scaling: the number of Pod replicas is scaled automatically according to configured metrics (e.g. CPU utilization).
- Service discovery: programs in containers can discover the Pod entry address through environment variables or a DNS add-on.
- Load balancing: a group of Pod replicas is assigned a private cluster IP, and requests to it are load-balanced to the backend containers. Other Pods inside the cluster can reach the application through this ClusterIP.
- Rolling updates: services are updated without interruption, one Pod at a time, instead of taking the whole service down at once.
- Service orchestration: deployments are described in files, which makes application deployment more efficient.
- Resource monitoring: the Node components integrate the cAdvisor resource collector; Heapster can aggregate resource data from all cluster nodes, store it in the InfluxDB time-series database, and display it with Grafana.
- Authentication and authorization: supports role-based access control (RBAC) and other authentication/authorization policies.
- ReplicaSet: the next-generation Replication Controller. It ensures that the specified number of Pod replicas is running at any given time and provides declarative updates. The only difference between RC and RS is label selector support: RS supports the new set-based selectors, while RC only supports equality-based selectors.
- Deployment: a higher-level API object that manages ReplicaSets and Pods and provides declarative updates. The official recommendation is to manage ReplicaSets through Deployments rather than using ReplicaSets directly, which means you may never need to manipulate ReplicaSet objects yourself.
- StatefulSet: suited to stateful applications; provides stable network identities, persistent storage, and ordered deployment, scaling, deletion, and rolling updates.
- DaemonSet: ensures that all (or some) nodes run a copy of a Pod. When a node joins the Kubernetes cluster, the Pod is scheduled onto it; when a node is removed from the cluster, its DaemonSet Pod is deleted. Deleting a DaemonSet cleans up all the Pods it created.
- Job: a one-off task; when it finishes, the Pod is destroyed and no new container is started. Jobs can also be run on a schedule.
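As a concrete illustration of the objects and health checks described above, here is a minimal sketch of a Deployment that keeps three replicas and uses an HTTP liveness probe. It assumes a working cluster and kubectl on the PATH; the name nginx-demo and the probe settings are illustrative only.

# Minimal Deployment sketch (assumed names, not part of the original deployment steps)
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 3                      # the controller keeps 3 Pod replicas available
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        livenessProbe:             # health check: restart the container if it stops answering
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
EOF
kubectl get deployment,rs,pod -l app=nginx-demo   # the Deployment manages the ReplicaSet, which manages the Pods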

Master components:
- kube-apiserver: the Kubernetes API, the unified entry point of the cluster and the coordinator of all components. It exposes its interface as an HTTP API; all create/delete/update/query and watch operations on object resources go through the API Server and are then persisted to etcd.
- kube-controller-manager: handles the routine background tasks of the cluster. Each resource has a corresponding controller, and the controller-manager is responsible for managing these controllers.
- kube-scheduler: selects a Node for each newly created Pod according to the scheduling algorithm.
Node components:
- kubelet: the Master's agent on each Node. It manages the lifecycle of the containers running on the local machine, e.g. creating containers, mounting data volumes for Pods, downloading secrets, and reporting container and node status. The kubelet turns each Pod into a set of containers.
- kube-proxy: implements the Pod network proxy on each Node, maintaining network rules and doing layer-4 load balancing.
- docker or rocket/rkt: runs the containers.
Third-party service:
- etcd: a distributed key-value store used to keep the cluster state, e.g. Pod and Service object data.
System: CentOS 7.6 x64
Host plan:
- master: 192.168.100.11 — kube-apiserver, kube-controller-manager, kube-scheduler, etcd
- slave1: 192.168.100.12 — kubelet, kube-proxy, docker, flannel, etcd
- slave2: 192.168.100.13 — kubelet, kube-proxy, docker, flannel, etcd
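Environment preparation is covered in part 3 of this series and is not repeated here; as a reminder, a minimal sketch of the prerequisites usually applied on every node for a binary deployment is shown below. These steps are assumptions based on common practice, not commands from this article.

# Assumed prerequisites on every node (not shown in this article)
systemctl stop firewalld && systemctl disable firewalld    # or open the required ports instead
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
swapoff -a                                                 # kubelet refuses to start with swap enabled by default
cat >> /etc/hosts <<EOF
192.168.100.11 master
192.168.100.12 slave1
192.168.100.13 slave2
EOF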
Install Docker on the nodes:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
cat << EOF > /etc/docker/daemon.json
{
  "registry-mirrors": [ "https://registry.docker-cn.com" ],
  "insecure-registries": ["192.168.0.210:5000"]
}
EOF
systemctl start docker
systemctl enable docker

Docker registry mirror (accelerator):
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io


Install the certificate generation tool cfssl:
mkdir /ssl
cd /ssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

Prepare the certificate files:
mkdir -p /root/k8s/etcd-certs/
cd /root/k8s/etcd-certs/
## upload etcd-cert.sh, which generates the etcd certificates
chmod +x etcd-cert.sh
./etcd-cert.sh
## list the generated certificates
ls *.pem
## optionally clean up everything in this directory except the .pem files
ls | grep -v pem | xargs -I {} rm -rf {}

etcd-cert.sh contents:
---
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.100.11",
    "192.168.100.12",
    "192.168.100.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
---
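To confirm what was generated, cfssl-certinfo (installed above) can dump the server certificate; in particular, the SAN list should contain all three etcd node IPs. A quick check:

cd /root/k8s/etcd-certs
ls ca*.pem server*.pem            # expect ca.pem, ca-key.pem, server.pem, server-key.pem
cfssl-certinfo -cert server.pem   # the sans/hosts should list 192.168.100.11/12/13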
etcd binary package download: https://github.com/coreos/etcd/releases
Check cluster status (once the cluster is up):
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" \
cluster-health
mkdir -p /opt/etcd/{bin,cfg,ssl}
cd /soft/Soft
tar -zxvf etcd-v3.3.12-linux-amd64.tar.gz
cd etcd-v3.3.12-linux-amd64/
mv etcdctl /opt/etcd/bin/
mv etcd /opt/etcd/bin/


vim /opt/etcd/cfg/etcd
---
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.11:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.11:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.11:2380,etcd02=https://192.168.100.12:2380,etcd03=https://192.168.100.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
---
vim /usr/lib/systemd/system/etcd.service
---
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
---
cd /root/k8s/etcd-certs
cp -ap *pem /opt/etcd/ssl/
Start etcd:
systemctl start etcd
ps -ef | grep etcd
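Note: until the other two members join, the first etcd instance keeps retrying to reach its peers, so the start command may appear to hang for a while. Standard systemd tooling (not specific to this article) is enough to watch what is happening:

journalctl -u etcd -f    # follow the etcd logs; connection errors disappear once etcd02/etcd03 start
systemctl status etcd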

scp -r /opt/etcd 192.168.100.12:/opt/
scp -r /opt/etcd 192.168.100.13:/opt/
scp /usr/lib/systemd/system/etcd.service 192.168.100.12:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service 192.168.100.13:/usr/lib/systemd/system/
On 192.168.100.12, edit the configuration file:
vim /opt/etcd/cfg/etcd
---
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.12:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.12:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.11:2380,etcd02=https://192.168.100.12:2380,etcd03=https://192.168.100.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
---
systemctl start etcd
chkconfig etcd on

On 192.168.100.13, edit the configuration file:
vim /opt/etcd/cfg/etcd
---
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.13:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.13:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.11:2380,etcd02=https://192.168.100.12:2380,etcd03=https://192.168.100.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
---
systemctl start etcd
chkconfig etcd on

Add etcd to the PATH:
vim /etc/profile
---
export ETCD_HOME=/opt/etcd
PATH=$PATH:$HOME/bin:$ETCD_HOME/bin
---
source /etc/profile
etcdctl --help
etcdctl --help | grep ca


Check cluster status:
cd /opt/etcd/ssl/
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" \
cluster-health
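Besides cluster-health, the etcdctl v2 API can also list the members directly. A quick sanity check, assuming the same certificate flags as above:

/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" \
member list    # all three members should show up with their peer and client URLs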


Implementing the container network

- Overlay Network: a virtual network layered on top of the underlying physical network; hosts in this network are connected by virtual links.
- VXLAN: encapsulates the original packet inside UDP, using the IP/MAC of the underlying network as the outer header, and transmits it over Ethernet; at the destination, the tunnel endpoint decapsulates the packet and delivers the data to the target address.
- Flannel: one kind of overlay network; it also wraps the original packet in another network packet for routing, forwarding, and communication. It currently supports UDP, VXLAN, AWS VPC, GCE routes, and other forwarding backends.
Other mainstream solutions for multi-host container networking: tunnel-based (Weave, Open vSwitch) and route-based (Calico).
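Once Flannel is running (set up in the steps below), the VXLAN encapsulation can be observed directly on a node. These are standard iproute2 commands, shown only as a way to verify the overlay:

ip -d link show flannel.1       # "vxlan id 1 ... dstport 8472" confirms the VXLAN backend
ip route | grep flannel.1       # routes to the other nodes' Pod subnets go via flannel.1
bridge fdb show dev flannel.1   # forwarding entries point at the peers' physical IPs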


1) Write the assigned subnet configuration into etcd for flanneld to use:
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" \
set /coreos.com/network/config '{ "Network": "172.16.0.0/16", "Backend": {"Type": "vxlan"}}'
2) Download the binary package:
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
3) Configure Flannel
4) Manage Flannel with systemd
5) Configure Docker to start with the assigned subnet
Deployment:
mkdir /root/flannel
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
cd /root/flannel
tar -zxvf flannel-v0.10.0-linux-amd64.tar.gz
ls -ld *
cp -p flanneld mk-docker-opts.sh /opt/kubernetes/bin/
scp flanneld mk-docker-opts.sh 192.168.100.12:/opt/kubernetes/bin/
scp flanneld mk-docker-opts.sh 192.168.100.13:/opt/kubernetes/bin/

Flannel configuration script:
---
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/kubernetes/ssl/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/server.pem \
-etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
---
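A usage sketch for this script on a node. The script name flannel.sh and the certificate copy step are assumptions (the generated config expects the etcd certificates under /opt/kubernetes/ssl); the article then reviews and edits the generated files by hand in the next steps.

# certificates referenced by the generated flanneld config are expected in /opt/kubernetes/ssl
cp -p /opt/etcd/ssl/*.pem /opt/kubernetes/ssl/
chmod +x flannel.sh
./flannel.sh https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379
# this writes /opt/kubernetes/cfg/flanneld and /usr/lib/systemd/system/flanneld.service,
# which are checked/adjusted below before starting flanneld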
Configure the flanneld file on the node 192.168.100.12:
vim /opt/kubernetes/cfg/flanneld
---
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379 \
-etcd-cafile=/opt/kubernetes/ssl/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/server.pem \
-etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
---
Configure the flanneld startup unit:
vim /usr/lib/systemd/system/flanneld.service
---
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
---
On 192.168.100.11, write the VXLAN network configuration:
cd /opt/etcd/ssl/
etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" \
set /coreos.com/network/config '{ "Network": "172.16.0.0/16", "Backend": {"Type": "vxlan"}}'

Check on the node 192.168.100.12:
cd /opt/etcd/ssl/
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" get /coreos.com/network/config

On 192.168.100.13:
cd /opt/etcd/ssl/
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" get /coreos.com/network/config

After flanneld starts, ifconfig shows a flannel.1 interface with the assigned subnet:
cat /run/flannel/subnet.env


Configure Docker to start with the flanneld network:
vim /usr/lib/systemd/system/docker.service
---
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
---
Restart Docker:
systemctl daemon-reload
service docker restart
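After the restart, docker0 should have moved into the subnet that flanneld leased for this node. A quick check with plain iproute2 commands (assumed here, not part of the original steps):

cat /run/flannel/subnet.env          # FLANNEL_SUBNET=... is this node's lease
ip addr show flannel.1 | grep inet
ip addr show docker0  | grep inet    # docker0 should now sit inside the same FLANNEL_SUBNET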


Deploy 192.168.100.13 by syncing the files from 192.168.100.12:
---
cd /usr/lib/systemd/system
scp /opt/kubernetes/cfg/flanneld 192.168.100.13:/opt/kubernetes/cfg/
scp flanneld.service 192.168.100.13:/usr/lib/systemd/system
scp docker.service 192.168.100.13:/usr/lib/systemd/system
---
Start flanneld and restart Docker:
service flanneld start
service docker restart
chkconfig flanneld on
ifconfig | more
Test: from 192.168.100.12, check that the flannel network is reachable by pinging the docker0 address of the other node (its subnet can be read from /run/flannel/subnet.env on that node), e.g.:
ping 10.0.23.1






On the master node, list the flannel subnets:
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" ls /coreos.com/network/subnets
To see which node holds a given flannel subnet lease:
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379" get /coreos.com/network/subnets/<one-of-the-subnets-listed-above>

route -n

Cluster deployment – create the Node kubeconfig files
Upload kubeconfig.sh to the master node 172.17.100.11.
1. Create the TLS bootstrapping token:
---
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

cat > token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
---
Check the token file:
cat token.csv
---
6a694cc8d6e025e97ea74c1a14cff8bf,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
---

2. Generate the bootstrap.kubeconfig file
Set the apiserver address:
export KUBE_APISERVER="https://172.17.100.11:6443"
Upload the kubectl binary to /opt/kubernetes/bin:
cd /opt/kubernetes/bin
chmod +x kubectl
# Set cluster parameters
cd /opt/kubernetes/ssl/
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig





Set the credential (token) parameters:
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

cat bootstrap.kubeconfig
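The next step builds kube-proxy.kubeconfig and references kube-proxy.pem / kube-proxy-key.pem under /opt/kubernetes/ssl. This article does not show generating the Kubernetes certificates, so they are assumed to exist already. A minimal cfssl sketch in the same style as etcd-cert.sh, assuming a Kubernetes CA (ca.pem/ca-key.pem) and a ca-config.json profile named kubernetes in this directory:

# Assumed sketch: generate the kube-proxy client certificate with the Kubernetes CA
cd /opt/kubernetes/ssl/
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [ { "C": "CN", "L": "BeiJing", "ST": "BeiJing" } ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*.pem    # kube-proxy.pem and kube-proxy-key.pem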

3. Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

cat kube-proxy.kubeconfig


Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to the other nodes:
cp -p *kubeconfig /opt/kubernetes/cfg
scp *kubeconfig 172.17.100.12:/opt/kubernetes/cfg/
scp *kubeconfig 172.17.100.13:/opt/kubernetes/cfg/

Download the Kubernetes packages:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md
Download the v1.9.2 server package: kubernetes-server-linux-amd64.tar.gz
Upload master.zip to the /root/master directory on 192.168.100.11:
mkdir master
cd master
unzip master.zip
cp -p kube-controller-manager kube-apiserver kube-scheduler /opt/kubernetes/bin/
cd /opt/kubernetes/bin/
chmod +x *



cp -p /root/token.csv /opt/kubernetes/cfg/
cd /root/master/
./apiserver.sh 172.17.100.11 https://172.17.100.11:2379,https://172.17.100.12:2379,https://172.17.100.13:2379

cd /opt/kubernetes/cfg/
cat kube-apiserver
---
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://172.17.100.11:2379,https://172.17.100.12:2379,https://172.17.100.13:2379 \
--insecure-bind-address=127.0.0.1 \
--bind-address=172.17.100.11 \
--insecure-port=8080 \
--secure-port=6443 \
--advertise-address=172.17.100.11 \
--allow-privileged=true \
--service-cluster-ip-range=10.10.10.0/24 \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/server.pem \
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
---
cd /usr/lib/systemd/system/
cat kube-apiserver.service
---
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
---
Start the apiserver:
service kube-apiserver start
ps -ef | grep apiserver
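A quick sanity check against the insecure port configured above (127.0.0.1:8080); these are standard kube-apiserver endpoints:

curl http://127.0.0.1:8080/healthz    # expect "ok"
curl http://127.0.0.1:8080/version    # should report v1.9.2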

Run the controller-manager script:
./controller-manager.sh 127.0.0.1
ps -ef | grep controller


Start the scheduler:
./scheduler.sh 127.0.0.1
ps -ef | grep scheduler


cd /opt/kubernetes/cfg/
cat kube-controller-manager
---
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.10.10.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
---
cat kube-scheduler
---
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
---
Check the component status:
kubectl get cs

On 172.17.100.12 and 172.17.100.13:
mkdir /root/node
Upload node.zip to /root/node:
cd /root/node
unzip node.zip


cp -p kube-proxy kubelet /opt/kubernetes/bin/
cd /opt/kubernetes/bin/
chmod +x *


On node 172.17.100.12:
cd /root/node
chmod +x *.sh
./kubelet.sh 172.17.100.12 10.10.10.2
----
Note the possible error:
Jul 11 15:40:18 node-02 kubelet: error: failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
-----
If it occurs, run the following on the master (172.17.100.11):
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
Then restart the kubelet on the node:
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet



Configure kube-proxy:
./proxy.sh 172.17.100.12
ps -ef | grep proxy

On the master node, view the certificate signing requests:
kubectl get csr

kubectl certificate approve node-csr-3B70dKcCjJuitWcWTjqb2rjadH1ld4Tq0mU9QAd5j7I
kubectl get csr

The node has joined the cluster:
kubectl get node

On 172.17.100.12, the issued kubelet certificates now appear:
cd /opt/kubernetes/ssl/
ls

On node 172.17.100.13:
cd /root/node
./kubelet.sh 172.17.100.13 10.10.10.2
./proxy.sh 172.17.100.13

On the master (172.17.100.11):
kubectl get csr
kubectl certificate approve node-csr-ubm9Uq4P7VhzB_zryLhH3WM5SbpaunS5sg9cYqG5wLA

kubectl get csr

On 172.17.100.13, the kubelet certificates are generated automatically:
cd /opt/kubernetes/ssl
ls

On the master, run:
kubectl get node

kubectl get cs
At this point the Kubernetes deployment is complete.

# kubectl run nginx --image=nginx --replicas=3
# kubectl get pod
# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
# kubectl get svc nginx
# kubectl get all
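The expose command maps service port 88 to container port 80 and also allocates a NodePort from the 30000-50000 range configured on the apiserver. The allocated port can be read directly with jsonpath (a standard kubectl feature):

# kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'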


kubectl get pod -o wide

# kubectl get svc nginx

Access inside the cluster (ClusterIP) over the flannel network:
curl -i 10.10.10.235:88

External access:
http://172.17.100.12:40463

mkdir -p /root/ui
Upload dashboard-deployment.yaml, dashboard-rbac.yaml, and dashboard-service.yaml to /root/ui:
cd /root/ui
ls

Create the dashboard resources:
# kubectl create -f dashboard-rbac.yaml
# kubectl create -f dashboard-deployment.yaml
# kubectl create -f dashboard-service.yaml
### Check the containers
kubectl get pods --all-namespaces
kubectl get svc --all-namespaces
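The dashboard Service is normally created in the kube-system namespace with type NodePort; assuming the usual names from the standard dashboard manifests, the port used for the browser access below can be read with:

# service name is whatever dashboard-service.yaml defines (kubernetes-dashboard in the standard manifests)
kubectl get svc -n kube-system kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'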

Open in a browser: http://172.17.100.12:41389/ui
This completes the Kubernetes UI installation.
