@zhangyy
2020-12-21T06:36:16.000000Z
Kubernetes series
System: CentOS 7.8 x86_64

cat /etc/hosts
-----
192.168.100.11 node01.flyfish.cn
192.168.100.12 node02.flyfish.cn
192.168.100.13 node03.flyfish.cn
192.168.100.14 node04.flyfish.cn
192.168.100.15 node05.flyfish.cn
192.168.100.16 node06.flyfish.cn
192.168.100.17 node07.flyfish.cn
192.168.100.18 node08.flyfish.cn
-----
This installation deploys Kubernetes on the first three hosts.

Install basic tools:
yum install -y wget vim lsof net-tools

Disable the firewall (or, on Alibaba Cloud, open the required ports in the security group instead):
systemctl stop firewalld.service
systemctl status firewalld.service
systemctl disable firewalld.service
Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
cat /etc/selinux/config

Disable swap:
swapoff -a                              # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab     # permanent
free -l -h

Pass bridged IPv4 traffic to the iptables chains. If /etc/sysctl.conf does not exist yet, simply run:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
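To apply these settings without a reboot, a minimal sketch (it assumes the br_netfilter kernel module, which the bridge-nf-call-* keys require, is available):
modprobe br_netfilter        # provides the net.bridge.bridge-nf-call-* sysctl keys
sysctl -p /etc/sysctl.conf   # reload the settings appended above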

Docker download: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
The following is performed on all nodes. A binary install is used here; installing with yum works just as well. Install on node01.flyfish, node02.flyfish and node03.flyfish.
3.1 Unpack the binary package
tar zxvf docker-19.03.9.tgz
mv docker/* /usr/bin

3.2 Manage Docker with systemd
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF
(\$MAINPID is escaped so the shell does not expand it inside the unquoted heredoc.)

3.3 Create the configuration file
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
registry-mirrors points at the Alibaba Cloud image-pull accelerator.
3.4 Start Docker and enable it at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
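A quick sanity check that the daemon is running and the mirror was picked up (the exact output wording may vary slightly between Docker versions):
docker info | grep -A1 'Registry Mirrors'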

Install kubelet, kubeadm and kubectl (all nodes). Configure the Kubernetes yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubelet, kubeadm and kubectl:
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3

systemctl enable kubelet && systemctl start kubelet

Initialize all nodes. Image download script:
vim image.sh
----
#!/bin/bash
images=(
kube-apiserver:v1.17.3
kube-proxy:v1.17.3
kube-controller-manager:v1.17.3
kube-scheduler:v1.17.3
coredns:1.6.5
etcd:3.4.3-0
pause:3.1
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
----
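Make the script executable and run it on every node (assuming it was saved as image.sh in the current directory):
chmod +x image.sh && ./image.sh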

Initialize the master node. Note: this step is run on the master node only.
kubeadm init \
  --apiserver-advertise-address=192.168.100.11 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.3 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploy the network plugin (Calico):
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
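Before continuing, it is worth checking that the kube-system pods come up and the master turns Ready once Calico and CoreDNS are Running:
kubectl get pods -n kube-system
kubectl get nodes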



Join the other nodes:
kubeadm join 192.168.100.11:6443 --token y28jw9.gxstbcar3m4n5p1a \
    --discovery-token-ca-cert-hash sha256:769528577607a4024ead671ae01b694744dba16e0806e57ed1b099eb6c6c9350
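Then verify from the master that all three nodes have registered (newly joined nodes show NotReady until Calico starts on them):
kubectl get nodes -o wide
If the token has expired in the meantime, a fresh join command can be printed on the master with kubeadm token create --print-join-command.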



Install the NFS server and export the data directory:
yum install -y nfs-utils
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

mkdir -p /nfs/data
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
exportfs -r
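A quick check that the export is active (run on the NFS server; showmount can also be pointed at 192.168.100.11 from any host that has nfs-utils installed):
exportfs -v
showmount -e localhost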


Test a Pod mounting NFS directly (run on the master node). Create a file named nginx.yaml under the opt directory:
vim nginx.yaml
----
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs
  namespace: default
spec:
  volumes:
  - name: html
    nfs:
      path: /nfs/data         # 1000G
      server: 192.168.100.11  # your own NFS server address
  containers:
  - name: myapp
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
----
kubectl apply -f nginx.yaml
cd /nfs/data/
echo "11111" >> index.html
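An optional check that the Pod serves what was just written into the NFS export (the Pod IP placeholder below is whatever kubectl reports for vol-nfs):
kubectl get pod vol-nfs -o wide
curl <pod-ip>        # should return 11111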

Install the client tools (worker node operation), here on node02.flyfish.cn:
showmount -e 192.168.100.11

Create the sync directory:
mkdir /root/nfsmount
Keep the client's /root/nfsmount in sync with /nfs/data/ on the server (worker node operation):
mount -t nfs 192.168.100.11:/nfs/data/ /root/nfsmount
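A simple round-trip check (the file name is only illustrative):
df -h | grep nfsmount
echo "hello-from-node02" > /root/nfsmount/test.txt
cat /nfs/data/test.txt       # run on the NFS server (node01); should print hello-from-node02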




Deploy the NFS dynamic provisioner (nfs-client-provisioner):
vim nfs-rbac.yaml
----
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: storage.pri/nfs
            - name: NFS_SERVER
              value: 192.168.100.11
            - name: NFS_PATH
              value: /nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.100.11
            path: /nfs/data
----
kubectl apply -f nfs-rbac.yaml
kubectl get pod


Create the StorageClass:
vi storageclass-nfs.yaml
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-nfs
provisioner: storage.pri/nfs
reclaimPolicy: Delete
----
kubectl apply -f storageclass-nfs.yaml

# Extra note: there are three reclaim policies: Retain, Recycle and Delete.
Retain: protects the PV released by a PVC together with its data; the PV status changes to "Released" and it will not be bound by another PVC. A cluster administrator releases the storage manually: delete the PV (the backend volume, e.g. an AWS EBS, GCE PD, Azure Disk or Cinder volume, still exists), wipe the data on the backend volume, then delete the backend volume or reuse it by creating a new PV for it.
Delete: deletes the PV released by the PVC as well as its backend storage volume. Dynamically provisioned PVs inherit the reclaim policy from their StorageClass, which defaults to Delete; the cluster administrator should set the StorageClass's reclaim policy to the desired value, otherwise users have to edit the reclaim policy of each dynamic PV after it is created.
Recycle: keeps the PV but wipes its data; this policy is deprecated.
kubectl get storageclass

Change the default StorageClass (see https://kubernetes.io/zh/docs/tasks/administer-cluster/change-default-storage-class/#%e4%b8%ba%e4%bb%80%e4%b9%88%e8%a6%81%e6%94%b9%e5%8f%98%e9%bb%98%e8%ae%a4-storage-class):
kubectl patch storageclass storage-nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
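After patching, storage-nfs should be listed with "(default)" after its name:
kubectl get storageclass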
Verify NFS dynamic provisioning by creating a PVC:
vim pvc.yaml
-----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim-01
  #annotations:
  #  volume.beta.kubernetes.io/storage-class: "storage-nfs"
spec:
  storageClassName: storage-nfs   # must match the StorageClass name exactly
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
-----
kubectl apply -f pvc.yaml
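The claim should become Bound once the provisioner creates a matching PV:
kubectl get pvc pvc-claim-01
kubectl get pv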

Use the PVC:
vi testpod.yaml
----
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: pvc-claim-01
----
kubectl apply -f testpod.yaml
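Once test-pod completes, a SUCCESS file should appear in the subdirectory the provisioner created under /nfs/data on the NFS server (the directory name is generated, hence the wildcard):
kubectl get pod test-pod
ls /nfs/data/*/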

1. First install metrics-server (the YAML below already has the image and configuration adjusted, so it can be used as-is), which makes pod and node resource usage visible (by default only CPU and memory metrics are collected; we will integrate Prometheus later for more detailed monitoring).
vim 2222.yaml
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
----
kubectl apply -f 2222.yaml

kubectl top nodes


Install KubeSphere (reference: https://kubesphere.com.cn/docs/quick-start/minimal-kubesphere-on-k8s/):
wget https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml

vim cluster-configuration.yaml
----
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  persistence:
    storageClass: ""        # If there is not a default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the host cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the host cluster.
  etcd:
    monitoring: true        # Whether to enable etcd monitoring dashboard installation. You have to create a secret for etcd before you enable it.
    endpointIps: 192.168.100.11  # etcd cluster EndpointIps; it can be a bunch of IPs here.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    mysqlVolumeSize: 20Gi       # MySQL PVC size.
    minioVolumeSize: 20Gi       # Minio PVC size.
    etcdVolumeSize: 20Gi        # etcd PVC size.
    openldapVolumeSize: 2Gi     # openldap PVC size.
    redisVolumSize: 2Gi         # Redis PVC size.
    es:   # Storage backend for logging, events and auditing.
      # elasticsearchMasterReplicas: 1   # total number of master nodes; even numbers are not allowed.
      # elasticsearchDataReplicas: 1     # total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi   # Volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi    # Volume size of Elasticsearch data nodes.
      logMaxAge: 7          # Log retention time in built-in Elasticsearch; 7 days by default.
      elkPrefix: logstash   # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
  console:
    enableMultiLogin: true  # Enable/disable multiple sign-on; it allows an account to be used by different users at the same time.
    port: 30880
  alerting:       # (CPU: 0.3 Core, Memory: 300 MiB) Whether to install the KubeSphere alerting system. It enables users to customize alerting policies to send messages to receivers in time, with different time intervals and alerting levels to choose from.
    enabled: true
  auditing:       # Whether to install the KubeSphere audit log system. It provides a security-relevant chronological set of records documenting the sequence of activities on the platform, initiated by different tenants.
    enabled: true
  devops:         # (CPU: 0.47 Core, Memory: 8.6 G) Whether to install the KubeSphere DevOps system. It provides an out-of-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:         # Whether to install the KubeSphere events system. It provides a graphical web console for exporting, filtering and alerting on Kubernetes events in multi-tenant clusters.
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:        # (CPU: 57 m, Memory: 2.76 G) Whether to install the KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true
    logsidecarReplicas: 2
  metrics_server: # (CPU: 56 m, Memory: 44.35 MiB) Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler).
    enabled: false
  monitoring:
    # prometheusReplicas: 1          # Prometheus replicas are responsible for monitoring different segments of the data source and provide high availability.
    prometheusMemoryRequest: 400Mi   # Prometheus memory request.
    prometheusVolumeSize: 20Gi       # Prometheus PVC size.
    # alertmanagerReplicas: 1        # AlertManager replicas.
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the role of a host or member cluster.
  networkpolicy:  # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
    # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. A number of CNI network plugins support it, including Calico, Cilium, Kube-router, Romana and Weave Net.
    enabled: true
  notification:   # Email notification support for the legacy alerting system; should be enabled/disabled together with the alerting option above.
    enabled: true
  openpitrix:     # (2 Core, 3.6 G) Whether to install the KubeSphere Application Store. It provides an application store for Helm-based applications and offers application lifecycle management.
    enabled: true
  servicemesh:    # (0.3 Core, 300 MiB) Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offers visualization of the traffic topology.
    enabled: true
----
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

Check installation progress:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

kubectl get pod -A


kubesphere-monitoring-system   prometheus-k8s-0   0/3   ContainerCreating   0   7m20s
kubesphere-monitoring-system   prometheus-k8s-1   0/3   ContainerCreating   0   7m20s
prometheus-k8s-1 stays stuck in the ContainerCreating state.

kubectl describe pod prometheus-k8s-0 -n kubesphere-monitoring-system
The describe output shows that the kube-etcd-client-certs secret cannot be found:

Create the missing secret from the kubeadm etcd certificates:
kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs \
    --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt \
    --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt \
    --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
kubectl get secret -A | grep etcd

kubectl get pod -n kubesphere-monitoring-system
The prometheus-k8s-1 pod now changes to the Running state.

Next, open the KubeSphere web console at the address shown in the installer log:
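With the configuration above (console port 30880), the console is typically reachable at http://192.168.100.11:30880; KubeSphere 3.0 ships with the default account admin / P@88w0rd, which should be changed after the first login.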





