@Rays 2019-11-04

Configuring and Running Kubernetes Outside the Cloud (Part 6 of the Series): Master and Worker Nodes


Summary:

Author: Marcos Vallim

Body:

This article walks through the components of the Kubernetes master and worker nodes one by one, including the Controller Manager, the API Server, etcd, the Scheduler, and the Kubelet. Readers are strongly encouraged to follow the reference links to better understand how each component works and the role it plays in a Kubernetes cluster.

In the previous article, we introduced Kubernetes and gave an overview of its main components. These articles form a Kubernetes tutorial series; we hope readers are interested in digging deeper into installing and configuring Kubernetes outside the cloud.

Readers who want to skip ahead need not wait for the remaining articles; they can clone the project's GitHub repository directly. The documentation in the repository is fully usable and under continuous improvement. Repository: mvallim/kubernetes-under-the-hood

This tutorial is intended for readers who plan to install a Kubernetes cluster and want to understand how its components work.

Master

Masters are responsible for orchestrating all activities related to the containers that run on the worker nodes. Among many other tasks, the master schedules and deploys clustered applications and collects information about worker nodes and Pods.

Some approaches for configuring Master nodes

Stacked control plane with etcd nodes

In this approach, the services run as containers and are automatically set up by kubeadm.

A stacked HA cluster is a topology (see the image below) where the distributed data storage cluster provided by etcd is stacked on top of the cluster formed by the kubeadm-managed nodes that run the control plane components.

Each control plane node runs an instance of the api-server, scheduler, and controller-manager. The api-server is exposed to the worker nodes through a load balancer (here, HAProxy) and creates a local etcd member. That local etcd member communicates only with the api-server running on the same node, and the same applies to the local scheduler and controller-manager instances.

This topology couples the control planes and etcd members on the nodes where they run. It is simpler to set up than a cluster with external etcd nodes, and simpler to manage for replication.

However, a stacked cluster runs the risk of failed coupling. If one node goes down, both an etcd member and a control plane instance are lost, and redundancy is compromised. You can mitigate this risk by adding more control plane nodes.

You should therefore run a minimum of three stacked control plane nodes for an HA cluster.

kubeadm HA topology: stacked etcd

Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/
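The stacked approach above can be bootstrapped with a kubeadm configuration file. The sketch below is illustrative only; the load balancer address, Kubernetes version, and Pod CIDR are assumptions that must be adapted to your environment.

```yaml
# kubeadm-config.yaml -- minimal sketch for a stacked HA control plane
# (all addresses and versions below are placeholder assumptions)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.2
# Address of the HAProxy load balancer sitting in front of the api-servers
controlPlaneEndpoint: "192.168.1.100:6443"
networking:
  # Example Pod CIDR; it must match the CNI plugin you deploy
  podSubnet: "10.244.0.0/16"
```

The first control plane node would run `kubeadm init --config kubeadm-config.yaml --upload-certs`; the remaining control plane nodes then join with `kubeadm join ... --control-plane`, each creating its own local etcd member.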

Stacked control plane with external etcd nodes

In this approach, the services run as containers and are partially configured by kubeadm.

An HA cluster with external etcd nodes is a topology (see the image below) where the distributed data storage cluster provided by etcd is external to the cluster formed by the nodes that run the control plane components.

As in the stacked etcd topology, each control plane node in an external etcd topology runs an instance of the api-server, scheduler, and controller-manager, and the api-server is exposed to worker nodes through a load balancer. However, the etcd members run on separate hosts, and each etcd host communicates with the api-server of each control plane node.

This topology decouples the control plane and the etcd members. It therefore provides an HA setup where losing a control plane instance or an etcd member has less impact and does not affect cluster redundancy as much as the stacked HA topology does.

However, this topology requires twice the number of hosts of the stacked HA topology: a minimum of three hosts for control plane nodes and three hosts for etcd nodes.

kubeadm HA topology: external etcd

Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/
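With kubeadm, pointing the control plane at an external etcd cluster is done in the `etcd.external` section of the configuration file. The endpoints and certificate paths below are placeholder assumptions for illustration.

```yaml
# kubeadm-config.yaml -- sketch of a control plane using external etcd
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.2
controlPlaneEndpoint: "192.168.1.100:6443"   # load balancer in front of the api-servers
etcd:
  external:
    # One endpoint per etcd host; every api-server talks to all of them
    endpoints:
    - https://192.168.2.10:2379
    - https://192.168.2.11:2379
    - https://192.168.2.12:2379
    # Client certificates the api-server uses to authenticate to etcd
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```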

Control plane services with external etcd nodes

In this approach, the services run as standalone processes and must be configured manually, without kubeadm. It provides more flexibility but also demands more work from the person setting up the cluster.

An HA cluster control plane with external etcd nodes is a topology (see the image below) where the distributed data storage cluster provided by etcd is external to the cluster formed by the nodes that run the control plane components.

As in the stacked control plane with external etcd nodes topology, each control plane node runs an instance of the api-server, scheduler, and controller-manager, and the api-server is exposed to worker nodes through a load balancer. The etcd members run on separate hosts, and each etcd host communicates with the api-servers of all control plane nodes.

This topology runs the api-server, controller-manager, and scheduler as standalone services on the same node, while etcd runs on its own nodes. It therefore provides an HA setup where losing a control plane instance or an etcd member has less impact and does not affect cluster redundancy as much as the stacked HA topology does.

However, this topology requires twice the number of hosts of the stacked HA topology: a minimum of three hosts for control plane nodes and three hosts for etcd nodes. In addition, the services must be installed and configured one by one.

Kubernetes control plane with external etcd services

The approach used in this tutorial

We recommend the stacked control plane with etcd nodes topology, because it requires the least manual configuration and uses the fewest service instances.

Component overview

Pod creation flow (image from heptio.com)

References:
* https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
* https://kubernetes.io/docs/reference/glossary/?fundamental=true
* https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#introduction
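To make the Pod creation flow concrete: a client submits a manifest such as the hypothetical one below (for example with `kubectl apply -f nginx-pod.yaml`); the api-server validates it and persists it in etcd, the scheduler binds the Pod to a node, and the kubelet on that node starts the container through the container runtime.

```yaml
# nginx-pod.yaml -- hypothetical minimal Pod manifest
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
spec:
  containers:
  - name: nginx
    image: nginx:1.17   # example image tag
```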

Worker

Workers are the machines (nodes, which can be physical or virtual) where the containers managed by Kubernetes actually run. For worker nodes to be managed by Kubernetes, the Kubelet agent must be installed on them. All communication with the master happens through this agent, and as a consequence all cluster operations are performed through it.

Some approaches for configuring Worker nodes

Stacked worker nodes

In this approach, the services run as containers and are automatically set up by kubeadm.

A stacked worker is a topology (see the image above) where each node runs an instance of kubelet, kube-proxy, cni-plugins, and containerd.

Configuring a worker in this topology is simpler: only kubeadm, kubelet, and containerd need to be installed. The other components (kube-proxy and cni-plugins) are initialized when the worker joins the cluster, that is, when the kubeadm join command is executed.

In this approach, kube-proxy and cni-plugins run as containers.
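The join step can be driven either by the flags printed by `kubeadm init` or by a small configuration file. The sketch below uses placeholder values for the endpoint, token, and CA certificate hash.

```yaml
# join-config.yaml -- sketch of a worker join configuration (placeholder values)
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "192.168.1.100:6443"   # the load-balanced api-server address
    token: "abcdef.0123456789abcdef"
    caCertHashes:
    - "sha256:<hash-of-the-cluster-ca-certificate>"
```

The worker would then run `kubeadm join --config join-config.yaml`, after which kube-proxy and the CNI plugin are started as containers on the node.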

Worker node services

In this approach, the services run as standalone processes and must be configured manually, without kubeadm. It provides more flexibility but also demands more work from the person setting up the cluster.

A worker service is a topology (see the image above) where each node runs an instance of kubelet, kube-proxy, cni-plugins, and containerd. The services must be installed and configured one by one.

In this approach, kube-proxy and cni-plugins run as standalone services.
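As a rough illustration of the standalone approach, the kubelet could be managed by a systemd unit such as the sketch below; the binary path, flag selection, and file locations are assumptions, and kube-proxy would need a similar unit of its own.

```ini
# /etc/systemd/system/kubelet.service -- illustrative sketch only
[Unit]
Description=Kubernetes Kubelet
After=containerd.service

[Service]
# Paths below are placeholder assumptions
ExecStart=/usr/local/bin/kubelet \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --config=/var/lib/kubelet/config.yaml \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  --network-plugin=cni
Restart=on-failure

[Install]
WantedBy=multi-user.target
```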

The approach used in this tutorial

We use stacked worker nodes, because this approach requires less configuration work.

Components

The components kubelet, kube-proxy, cni-plugins, and containerd work the same way on master and worker nodes, as defined above.

Conclusion

I hope you enjoyed this article and the rest of the series. In the next article, we will dive into the technical details of etcd, covering its components and how they interact.

Stay tuned: the theory part of this series will wrap up within the next two or three articles, after which we will get hands-on with a Kubernetes cluster.

Feedback and comments below are welcome; they are important for improving this series.

Follow the author, Marcos Vallim, on Medium to see the whole series and be notified of new articles as soon as they are published.

Original article: Kubernetes Journey — Up and running out of the cloud — Master and Worker
