Finish translating Kubernetes on Fedora IoT with k3s_translated

This commit is contained in:
David Dai 2019-05-21 09:36:08 +08:00
parent a6b5bfd91c
commit 4929423dfc


[#]: via: (https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/)
[#]: author: (Lennart Jern https://fedoramagazine.org/author/lennartj/)
Kubernetes on Fedora IoT with k3s
使用 k3s 在 Fedora IoT 上运行 Kubernetes
======
![][1]
Fedora IoT is an upcoming Fedora edition targeted at the Internet of Things. It was introduced last year on Fedora Magazine in the article [How to turn on an LED with Fedora IoT][2]. Since then, it has continued to improve together with Fedora Silverblue to provide an immutable base operating system aimed at container-focused workflows.
Fedora IoT 是一个即将发布的、面向物联网的 Fedora 版本。去年 Fedora Magazine 中的《[如何使用 Fedora IoT 点亮 LED][2]》一文第一次介绍了它。从那以后,它与 Fedora Silverblue 一起不断改进,以提供针对面向容器的工作流程的不可变基础操作系统。
Kubernetes is an immensely popular container orchestration system. It is perhaps most commonly used on powerful hardware handling huge workloads. However, it can also be used on lightweight devices such as the Raspberry Pi 3. Read on to find out how.
Kubernetes 是一个颇受欢迎的容器编排系统。它可能最常用在那些能够处理巨大负载的强劲硬件上。不过,它也能在像树莓派 3 这样轻量级的设备上运行。请继续阅读,来了解如何运行它。
### Why Kubernetes?
### 为什么用 Kubernetes
While Kubernetes is all the rage in the cloud, it may not be immediately obvious to run it on a small single board computer. But there are certainly reasons for doing it. First of all, it is a great way to learn and get familiar with Kubernetes without the need for expensive hardware. Second, because of its popularity, there are [tons of applications][3] that come pre-packaged for running in Kubernetes clusters. Not to mention the large community to provide help if you ever get stuck.
虽然 Kubernetes 在云计算领域风靡一时,但让它在小型单板机上运行可能并不是显而易见的。不过,我们有非常明确的理由来做这件事。首先,这是一个不需要昂贵硬件就可以学习并熟悉 Kubernetes 的好方法;其次,由于它的流行性,市面上有[大量应用][3]进行了预先打包,以用于在 Kubernetes 集群中运行。更不用说,当你遇到问题时,会有大规模的社区用户为你提供帮助。
Last but not least, container orchestration may actually make things easier, even at the small scale in a home lab. This may not be apparent when tackling the learning curve, but these skills will help when dealing with any cluster in the future. It doesn't matter if it's a single node Raspberry Pi cluster or a large scale machine learning farm.
最后但同样重要的是,即使是在家庭实验室这样的小规模环境中,容器编排也确实能让事情变得更加简单。虽然在学习曲线方面,这一点并不明显,但这些技能在你将来与任何集群打交道的时候都会有帮助。不管你面对的是一个单节点树莓派集群,还是一个大规模的机器学习场,它们的操作方式都是类似的。
#### K3s - a lightweight Kubernetes
#### K3s - 轻量级的 Kubernetes
A “normal” installation of Kubernetes (if such a thing can be said to exist) is a bit on the heavy side for IoT. The recommendation is a minimum of 2 GB RAM per machine! However, there are plenty of alternatives, and one of the newcomers is [k3s][4], a lightweight Kubernetes distribution.
一个 Kubernetes 的“正常”安装(如果有这么一说的话)对于物联网来说有点沉重。K8s 的推荐内存配置,是每台机器 2GB!不过,我们也有一些替代品,其中一个新人是 [k3s][4],一个轻量级的 Kubernetes 发行版。
K3s is quite special in that it has replaced etcd with SQLite for its key-value storage needs. Another thing to note is that k3s ships as a single binary instead of one per component. This diminishes the memory footprint and simplifies the installation. Thanks to the above, k3s should be able to run with just 512 MB of RAM, perfect for a small single board computer!
K3s 非常特殊,因为它将 etcd 替换成了 SQLite 以满足键值存储需求。还有一点,在于整个 k3s 将使用一个二进制文件分发,而不是每个组件一个。这减少了内存占用并简化了安装过程。基于上述原因,我们只需要 512MB 内存即可运行 k3s简直适合小型单板电脑
### What you will need
### 你需要的东西
1. Fedora IoT in a virtual machine or on a physical device. See the excellent getting started guide [here][5]. One machine is enough but two will allow you to test adding more nodes to the cluster.
2. [Configure the firewall][6] to allow traffic on ports 6443 and 8472. Or simply disable it for this experiment by running “systemctl stop firewalld”.
1. 在虚拟机或实体设备中运行的 Fedora IoT。在[这里][5]可以看到优秀的入门指南。一台机器就足够了,不过两台可以用来测试向集群添加更多节点。
2. [配置防火墙][6],允许 6443 和 8472 端口的通信。或者你也可以简单地运行“systemctl stop firewalld”来为这次实验关闭防火墙。
### Install k3s
### 安装 k3s
Installing k3s is very easy. Simply run the installation script:
安装 k3s 非常简单。直接运行安装脚本:
```
curl -sfL https://get.k3s.io | sh -
```
This will download, install and start up k3s. After installation, get a list of nodes from the server by running the following command:
它会下载、安装并启动 k3s。安装完成后,运行以下命令来从服务器获取节点列表:
```
kubectl get nodes
```
Note that there are several options that can be passed to the installation script through environment variables. These can be found in the [documentation][7]. And of course, there is nothing stopping you from installing k3s manually by downloading the binary directly.
需要注意的是,有几个选项可以通过环境变量传递给安装脚本。这些选项可以在[文档][7]中找到。当然,你也完全可以直接下载二进制文件来手动安装 k3s。
While great for experimenting and learning, a single node cluster is not much of a cluster. Luckily, adding another node is no harder than setting up the first one. Just pass two environment variables to the installation script to make it find the first node and avoid running the server part of k3s:
对于实验和学习来说,这样已经很棒了,不过单节点的集群也不是一个集群。幸运的是,添加另一个节点并不比设置第一个节点要难。只需要向安装脚本传递两个环境变量,它就可以找到第一个节点,避免运行 k3s 的服务器部分。
```
curl -sfL https://get.k3s.io | K3S_URL=https://example-url:6443 \
K3S_TOKEN=XXX sh -
```
The example-url above should be replaced by the IP address or fully qualified domain name of the first node. On that node the token (represented by XXX) is found in the file /var/lib/rancher/k3s/server/node-token.
上面的 example-url 应被替换为第一个节点的 IP 地址或完全限定域名。在该节点上,令牌(用 XXX 表示)可以在 /var/lib/rancher/k3s/server/node-token 文件中找到。
### Deploy some containers
### 部署一些容器
Now that we have a Kubernetes cluster, what can we actually do with it? Let's start by deploying a simple web server.
现在我们有了一个 Kubernetes 集群,我们可以真正做些什么呢?让我们从部署一个简单的 Web 服务器开始吧。
```
kubectl create deployment my-server --image nginx
```
This will create a [Deployment][8] named “my-server” from the container image “nginx” (defaulting to docker hub as registry and the latest tag). You can see the Pod created by running the following command.
这会从名为“nginx”的容器镜像中创建出一个名叫“my-server”的 [Deployment][8](镜像名默认使用 docker hub 注册中心,以及 latest 标签)。你可以运行以下命令来查看创建出来的 Pod:
```
kubectl get pods
```
In order to access the nginx server running in the pod, first expose the Deployment through a [Service][9]. The following command will create a Service with the same name as the deployment.
为了接触到 pod 中运行的 nginx 服务器,首先将 Deployment 通过一个 [Service][9] 来进行暴露。以下命令将创建一个与 Deployment 同名的 Service。
```
kubectl expose deployment my-server --port 80
```
The Service works as a kind of load balancer and DNS record for the Pods. For instance, when running a second Pod, we will be able to _curl_ the nginx server just by specifying _my-server_ (the name of the Service). See the example below for how to do this.
Service 将作为一种负载均衡器和 Pod 的 DNS 记录来工作。比如,当运行第二个 Pod 时,我们只需指定 _my-server_(Service 名称)就可以通过 _curl_ 访问 nginx 服务器。有关如何操作,可以看下面的示例。
```
# Start a pod and run bash interactively in it
# 启动一个 pod在里面以交互方式运行 bash
kubectl run debug --generator=run-pod/v1 --image=fedora -it -- bash
# Wait for the bash prompt to appear
# 等待 bash 提示符出现
curl my-server
# You should get the "Welcome to nginx!" page as output
# 你可以看到“Welcome to nginx!”的输出页面
```
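For reference, the `kubectl expose` command above creates a Service object roughly equivalent to the following manifest. This is a sketch: the `app: my-server` selector is the label that `kubectl create deployment` applies by default.
作为参考,上面的 `kubectl expose` 命令创建的 Service 对象大致等价于下面的清单。这只是一个示意:`app: my-server` 选择器是 `kubectl create deployment` 默认添加的标签。

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-server
spec:
  selector:
    app: my-server   # matches the Pods of the my-server Deployment / 匹配 my-server Deployment 的 Pod
  ports:
  - port: 80         # port the Service listens on / Service 监听的端口
    targetPort: 80   # container port traffic is forwarded to / 流量转发到的容器端口
```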
### Ingress controller and external IP
### Ingress 控制器及外部 IP
By default, a Service only gets a ClusterIP (only accessible inside the cluster), but you can also request an external IP for the service by setting its type to [LoadBalancer][10]. However, not all applications require their own IP address. Instead, it is often possible to share one IP address among many services by routing requests based on the host header or path. You can accomplish this in Kubernetes with an [Ingress][11], and this is what we will do. Ingresses also provide additional features such as TLS encryption of the traffic without having to modify your application.
默认状态下,一个 Service 只能获得一个 ClusterIP(只能从集群内部访问),但你也可以通过把它的类型设置为 [LoadBalancer][10],为服务申请一个外部 IP。不过,并非所有应用都需要自己的 IP 地址。相反,通常可以通过基于 Host 请求头部或请求路径进行路由,从而使多个服务共享一个 IP 地址。你可以在 Kubernetes 中使用 [Ingress][11] 完成此操作,而这也是我们要做的。Ingress 也提供了额外的功能,比如无需修改应用,即可对流量进行 TLS 加密。
Kubernetes needs an ingress controller to make the Ingress resources work and k3s includes [Traefik][12] for this purpose. It also includes a simple service load balancer that makes it possible to get an external IP for a Service in the cluster. The [documentation][13] describes the service like this:
Kubernetes 需要入口控制器来使 Ingress 资源工作k3s 包含 [Traefik][12] 正是出于此目的。它还包含了一个简单的服务负载均衡器,可以为集群中的服务提供外部 IP。这篇[文档][13]描述了这种服务:
> k3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a free host in the cluster for port 80. If no port is available the load balancer will stay in Pending.
> k3s 包含一个使用可用主机端口的基础服务负载均衡器。比如,如果你尝试创建一个监听 80 端口的负载均衡器,它会尝试在集群中寻找一个 80 端口空闲的节点。如果没有可用端口,那么负载均衡器将保持在 Pending 状态。
>
> k3s README
The ingress controller is already exposed with this load balancer service. You can find the IP address that it is using with the following command.
入口控制器已经通过这个负载均衡器暴露在外。你可以使用以下命令找到它正在使用的 IP 地址。
```
$ kubectl get svc --all-namespaces
NAMESPACE     NAME      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kube-system   traefik   LoadBalancer   10.43.145.104   10.0.0.8      80:31596/TCP,443:31539/TCP   33d
```
Look for the Service named traefik. In the above example the IP we are interested in is 10.0.0.8.
找到名为 traefik 的 Service。在上面的例子中我们感兴趣的 IP 是 10.0.0.8。
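If you want that external IP in a script, you can cut it out of the kubectl output. The sketch below works on the sample line from above instead of a live cluster, so the only assumption is the column position (EXTERNAL-IP is the fifth field).
如果你想在脚本中取得这个外部 IP,可以从 kubectl 的输出中截取。下面的示意直接处理上文的示例输出行,而不需要真实集群,唯一的假设是列的位置(EXTERNAL-IP 是第五个字段)。

```shell
# Sample line from the `kubectl get svc --all-namespaces` output above
# 来自上文 `kubectl get svc --all-namespaces` 输出的示例行
sample="kube-system   traefik   LoadBalancer   10.43.145.104   10.0.0.8      80:31596/TCP,443:31539/TCP   33d"

# The EXTERNAL-IP column is the fifth whitespace-separated field
# EXTERNAL-IP 列是第五个以空白分隔的字段
external_ip=$(echo "$sample" | awk '{print $5}')
echo "$external_ip"
```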
### Route incoming requests
### 路由传入的请求
Let's create an Ingress that routes requests to our web server based on the host header. This example uses [xip.io][14] to avoid having to set up DNS records. It works by including the IP address as a subdomain: any subdomain of 10.0.0.8.xip.io can be used to reach the IP 10.0.0.8. In other words, my-server.10.0.0.8.xip.io is used to reach the ingress controller in the cluster. You can try this right now (with your own IP instead of 10.0.0.8). Without an ingress in place you should reach the “default backend” which is just a page showing “404 page not found”.
让我们创建一个 Ingress,使它通过基于 Host 头部的路由规则,将请求路由至我们的 Web 服务器。这个例子中我们使用 [xip.io][14] 来避免必要的 DNS 记录配置工作。它的工作原理是将 IP 地址包含在子域中:10.0.0.8.xip.io 的任何子域名都会解析到 IP 10.0.0.8。换句话说,my-server.10.0.0.8.xip.io 被用于访问集群中的入口控制器。你现在就可以尝试(使用你自己的 IP,而不是 10.0.0.8)。如果没有配置 Ingress,你应该会访问到“默认后端”,它只是一个写着“404 page not found”的页面。
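The naming scheme is mechanical, so it can be sketched in a couple of lines of shell (the values are the ones from this example):
这个命名规则是机械式的,可以用几行 shell 来演示(变量取自本文的例子):

```shell
# Any subdomain of <IP>.xip.io resolves back to <IP>
# <IP>.xip.io 的任何子域都会解析回 <IP> 这个地址
EXTERNAL_IP="10.0.0.8"
APP="my-server"
HOST="${APP}.${EXTERNAL_IP}.xip.io"
echo "$HOST"
```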
We can tell the ingress controller to route requests to our web server Service with the following Ingress.
我们可以使用以下 Ingress 让入口控制器将请求路由到我们的 Web 服务器 Service。
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-server
spec:
  rules:
  - host: my-server.10.0.0.8.xip.io
    http:
      paths:
      - backend:
          serviceName: my-server
          servicePort: 80
```
Save the above snippet in a file named _my-ingress.yaml_ and add it to the cluster by running this command:
将以上片段保存到 _my-ingress.yaml_ 文件中,然后运行以下命令将其加入集群:
```
kubectl apply -f my-ingress.yaml
```
You should now be able to reach the default nginx welcoming page on the fully qualified domain name you chose. In my example this would be my-server.10.0.0.8.xip.io. The ingress controller is routing the requests based on the information in the Ingress. A request to my-server.10.0.0.8.xip.io will be routed to the Service and port defined as backend in the Ingress (my-server and 80 in this case).
你现在应该能够在你选择的完全限定域名中访问到 nginx 的默认欢迎页面了。在我的例子中,它是 my-server.10.0.0.8.xip.io。入口控制器会通过 Ingress 中包含的信息来路由请求。对 my-server.10.0.0.8.xip.io 的请求将被路由到 Ingress 中定义为“后端”的 Service 和端口(在本例中为 my-server 和 80)。
### What about IoT then?
### 那么,物联网呢?
Imagine the following scenario. You have dozens of devices spread out around your home or farm. It is a heterogeneous collection of IoT devices with various hardware capabilities, sensors and actuators. Maybe some of them have cameras, weather or light sensors. Others may be hooked up to control the ventilation, lights, blinds or blink LEDs.
想象如下场景:你的家里或农场周围散布着数十台设备。这是一个具有各种硬件功能、传感器和执行器的异构物联网设备集合。也许某些设备拥有摄像头、天气或光线传感器,其它设备可能会被连接起来,用来控制通风、灯光、百叶窗或闪烁的 LED。
In this scenario, you want to gather data from all the sensors, maybe process and analyze it before you finally use it to make decisions and control the actuators. In addition to this, you may want to visualize what's going on by setting up a dashboard. So how can Kubernetes help us manage something like this? How can we make sure that Pods run on suitable devices?
这种情况下,你想从所有传感器中收集数据,在最终使用它来制定决策和控制执行器之前,也可能会对其进行处理和分析。除此之外,你可能还想配置一个仪表盘来可视化那些正在发生的事情。那么 Kubernetes 如何帮助我们来管理这样的事情呢?我们怎么保证 Pod 在合适的设备上运行?
The simple answer is labels. You can label the nodes according to capabilities, like this:
简单的答案就是“标签”。你可以根据功能来标记节点,如下所示:
```
kubectl label nodes <node-name> <label-key>=<label-value>
# Example
# 举例
kubectl label nodes node2 camera=available
```
Once they are labeled, it is easy to select suitable nodes for your workload with [nodeSelectors][15]. The final piece to the puzzle, if you want to run your Pods on _all_ suitable nodes is to use [DaemonSets][16] instead of Deployments. In other words, create one DaemonSet for each data collecting application that uses some unique sensor and use nodeSelectors to make sure they only run on nodes with the proper hardware.
一旦它们被打上标签,我们就可以轻松地使用 [nodeSelector][15] 为你的工作负载选择合适的节点。拼图的最后一块:如果你想在_所有_合适的节点上运行 Pod,那应该使用 [DaemonSet][16] 而不是 Deployment。换句话说,应为每个使用某种独特传感器的数据收集应用程序创建一个 DaemonSet,并使用 nodeSelector 确保它们仅在具有适当硬件的节点上运行。
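As a sketch of how these pieces fit together, a DaemonSet for a hypothetical camera-collector application could look like the manifest below. The image name is made up for illustration; the nodeSelector matches the label from the example above.
下面示意这些部分如何组合:一个假想的摄像头数据收集应用的 DaemonSet 可以写成下面的清单。镜像名是为演示而虚构的;nodeSelector 匹配上面例子中打的标签。

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: camera-collector
spec:
  selector:
    matchLabels:
      app: camera-collector
  template:
    metadata:
      labels:
        app: camera-collector
    spec:
      nodeSelector:
        camera: available   # only run on nodes labeled earlier / 只运行在先前打过标签的节点上
      containers:
      - name: collector
        image: example/camera-collector:latest   # hypothetical image / 假想的镜像
```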
The service discovery feature that allows Pods to find each other simply by Service name makes it quite easy to handle these kinds of distributed systems. You dont need to know or configure IP addresses or custom ports for the applications. Instead, they can easily find each other through named Services in the cluster.
服务发现功能允许 Pod 通过 Service 名称来寻找彼此,这项功能使得这类分布式系统的管理工作变得易如反掌。你不需要为应用配置 IP 地址或自定义端口,也不需要知道它们。相反,它们可以通过集群中的具名 Service 轻松找到彼此。
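Under the hood this works through cluster DNS: every Service gets a predictable in-cluster name. The short name works within the same namespace, and the fully qualified form follows a fixed scheme, sketched below with the default cluster domain.
其背后是集群 DNS 在起作用:每个 Service 都会获得一个可预测的集群内名称。短名称在同一命名空间内即可使用,完全限定形式则遵循固定的规则,下面以默认集群域名示意。

```shell
SERVICE="my-server"
NAMESPACE="default"
CLUSTER_DOMAIN="cluster.local"   # the default; configurable per cluster / 默认值,可按集群配置
FQDN="${SERVICE}.${NAMESPACE}.svc.${CLUSTER_DOMAIN}"
echo "$FQDN"
```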
#### Utilize spare resources
#### 充分利用空闲资源
With the cluster up and running, collecting data and controlling your lights and climate control, you may feel that you are finished. However, there are still plenty of compute resources in the cluster that could be used for other projects. This is where Kubernetes really shines.
随着集群的启动和运行,它开始收集数据、控制灯光和气候,你可能会觉得已经大功告成了。不过,集群中还有大量的计算资源可以用于其它项目。这才是 Kubernetes 真正出彩的地方。
You shouldn't have to worry about where exactly those resources are or calculate if there is enough memory to fit an extra application here or there. This is exactly what orchestration solves! You can easily deploy more applications in the cluster and let Kubernetes figure out where (or if) they will fit.
你不必担心这些资源的确切位置,或者去计算是否有足够的内存来容纳额外的应用程序。这正是编排系统所解决的问题!你可以轻松地在集群中部署更多的应用,让 Kubernetes 来找出适合运行它们的位置(或是否适合运行它们)。
Why not run your own [NextCloud][17] instance? Or maybe [gitea][18]? You could also set up a CI/CD pipeline for all those IoT containers. After all, why would you build and cross compile them on your main computer if you can do it natively in the cluster?
为什么不运行一个你自己的 [NextCloud][17] 实例呢?或者运行 [gitea][18]?你还可以为你所有的物联网容器设置一套 CI/CD 流水线。毕竟,如果你可以在集群中进行本地构建,为什么还要在主计算机上构建并交叉编译它们呢?
The point here is that Kubernetes makes it easier to make use of the “hidden” resources that you often end up with otherwise. Kubernetes handles scheduling of Pods in the cluster based on available resources and fault tolerance so that you don't have to. However, in order to help Kubernetes make reasonable decisions you should definitely add [resource requests][19] to your workloads.
这里的要点是Kubernetes 可以更容易地利用那些你可能浪费掉的“隐藏”资源。Kubernetes 根据可用资源和容错处理规则来调度 Pod因此你也无需手动完成这些工作。但是为了帮助 Kubernetes 做出合理的决定,你绝对应该为你的工作负载添加[资源请求][19]配置。
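Resource requests are set per container in the Pod template. A minimal sketch for the my-server Deployment from earlier is shown below; the numbers are illustrative, not recommendations.
资源请求是在 Pod 模板中按容器设置的。下面是针对前文 my-server Deployment 的最小示意;数字仅作说明,并非推荐值。

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-server
spec:
  selector:
    matchLabels:
      app: my-server
  template:
    metadata:
      labels:
        app: my-server
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            memory: "32Mi"   # illustrative values / 数值仅作说明
            cpu: "50m"
```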
### Summary
### 总结
While Kubernetes, or container orchestration in general, may not usually be associated with IoT, it certainly makes a lot of sense to have an orchestrator when you are dealing with distributed systems. Not only does it allow you to handle a diverse and heterogeneous fleet of devices in a unified way, but it also simplifies communication between them. In addition, Kubernetes makes it easier to utilize spare resources.
尽管 Kubernetes 或一般的容器编排平台通常不会与物联网相关联,但在管理分布式系统时,使用一个编排系统肯定是有意义的。你不仅可以使用统一的方式来处理多样化和异构的设备,还可以简化它们的通信方式。此外,Kubernetes 还可以更好地对闲置资源加以利用。
Container technology made it possible to build applications that could “run anywhere”. Now Kubernetes makes it easier to manage the “anywhere” part. And as an immutable base to build it all on, we have Fedora IoT.
容器技术使构建“随处运行”应用的想法成为可能。现在Kubernetes 可以更轻松地来负责“随处”的部分。作为构建一切的不可变基础,我们使用 Fedora IoT。
--------------------------------------------------------------------------------