Merge pull request #16453 from Morisun029/master

translated
This commit is contained in:
Xingyu.Wang 2019-11-27 22:19:10 +08:00 committed by GitHub
commit 25cfb1f23e
2 changed files with 231 additions and 236 deletions


@@ -1,236 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Demystifying Kubernetes)
[#]: via: (https://opensourceforu.com/2019/11/demystifying-kubernetes/)
[#]: author: (Abhinav Nath Gupta https://opensourceforu.com/author/abhinav-gupta/)
Demystifying Kubernetes
======
[![][1]][2]
_Kubernetes is a production grade open source system for automating deployment, scaling, and the management of containerised applications. This article is about managing containers with Kubernetes._
Containers have become one of the latest buzzwords. But what does the term imply? Often associated with Docker, a container is defined as a standardised unit of software: it encapsulates the software and the environment required to run that software into a single unit that is easily shippable.
A container is a standard unit of software that packages the code and all its dependencies so that the application runs quickly and reliably from one computing environment to another. The container does this by creating something called an image, which is akin to an ISO image. A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application — code, runtime, system tools, system libraries and settings.
Container images become containers at runtime and, in the case of Docker containers, images become containers when they run on a Docker engine. Containers isolate software from the environment and ensure that it works uniformly despite differences in instances across environments.
**What is container management?**
Container management is the process of organising, adding or replacing large numbers of software containers. It uses software to automate the process of creating, deploying and scaling containers. This gives rise to the need for container orchestration: a tool that automates the deployment, management, scaling, networking and availability of container-based applications.
**Kubernetes**
Kubernetes is a portable, extensible, open source platform for managing containerised workloads and services, and it facilitates both configuration and automation. It was originally developed by Google. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
Google open sourced the Kubernetes project in 2014. Kubernetes builds upon a decade and a half of experience that Google had with running production workloads at scale, combined with best-of-breed ideas and practices from the community, as well as the usage of declarative syntax.
Some of the common terminologies associated with the Kubernetes ecosystem are listed below.
_**Pods:**_ A pod is the basic execution unit of a Kubernetes application: the smallest and simplest unit in the Kubernetes object model that you create or deploy. A pod represents processes running on a Kubernetes cluster.
A pod encapsulates the running container, storage, network IP (unique) and commands that govern how the container should run. It represents the single unit of deployment within the Kubernetes ecosystem, a single instance of an application which might consist of one or many containers running with tight coupling and shared resources.
Pods in a Kubernetes cluster can be used in two main ways. The first is pods that run a single container. The one-container-per-pod model is the most common Kubernetes use case. The second method involves pods that run multiple containers that need to work together.
A pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources.
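To make the one-container-per-pod model above concrete, a minimal pod manifest might look like the sketch below (the names and image are hypothetical placeholders, not from the application built later in this article):

```
# A minimal single-container pod; all names here are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
  - name: hello            # the single container this pod runs
    image: nginx:latest    # any container image works here
    ports:
    - containerPort: 80    # port the container listens on
```

Applying it with `kubectl apply -f pod.yaml` creates exactly one pod; nothing restarts it if it dies, which is why higher-level objects such as ReplicaSets exist.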
_**ReplicaSet:**_ The purpose of a ReplicaSet is to maintain a stable set of replica pods running at any given time. A ReplicaSet contains information about how many copies of a particular pod should be running. To create multiple pods to match the ReplicaSet criteria, Kubernetes uses the pod template. The link a ReplicaSet has to its pods is via the latter's metadata.ownerReferences field, which specifies which resource owns the current object.
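A sketch of such a ReplicaSet manifest, with illustrative names, shows how the replica count, the selector and the pod template fit together; pods it creates carry a metadata.ownerReferences entry pointing back at it:

```
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hello-replicaset   # illustrative name
spec:
  replicas: 3              # keep three copies of the pod running
  selector:
    matchLabels:
      app: hello           # which pods this ReplicaSet owns
  template:                # the pod template Kubernetes stamps out
    metadata:
      labels:
        app: hello         # must match the selector above
    spec:
      containers:
      - name: hello
        image: nginx:latest
```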
_**Services:**_ Services are an abstraction to expose the functionality of a set of pods. With Kubernetes, you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives pods their own IP addresses and a single DNS name for a set of pods, and can load-balance across them.
One major problem that services solve is the integration of the front-end and back-end of a Web application. Since Kubernetes provides IP addresses behind the scenes to pods, when the latter are killed and resurrected, the IP addresses are changed. This creates a big problem on the front-end side to connect a given back-end IP address to the corresponding front-end IP address. Services solve this problem by providing an abstraction over the pods — something akin to a load balancer.
_**Volumes:**_ A Kubernetes volume has an explicit lifetime — the same as the pod that encloses it. Consequently, a volume outlives any container that runs within the pod and the data is preserved across container restarts. Of course, when a pod ceases to exist, the volume will cease to exist, too. Perhaps more important than this is that Kubernetes supports many types of volumes, and a pod can use any number of them simultaneously.
At its core, a volume is just a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory comes to be, the medium that backs it and its contents are determined by the particular volume type used.
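For instance, an emptyDir volume (one of the simplest volume types) gives the pod a scratch directory that survives container restarts, though not pod deletion. The names in this sketch are placeholders:

```
apiVersion: v1
kind: Pod
metadata:
  name: hello-with-volume  # illustrative name
spec:
  containers:
  - name: hello
    image: nginx:latest
    volumeMounts:
    - name: scratch
      mountPath: /data     # where the volume appears inside the container
  volumes:
  - name: scratch
    emptyDir: {}           # node-local storage, lives as long as the pod does
```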
**Why Kubernetes?**
Containers are a good way to bundle and run applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if one container goes down, another needs to start. Wouldn't it be nice if this could be automated by a system?
That's where Kubernetes comes to the rescue! It provides a framework to run distributed systems resiliently. It takes care of scaling requirements, failover, deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
Kubernetes provides users with:
1\. Service discovery and load balancing
2\. Storage orchestration
3\. Automated roll-outs and roll-backs
4\. Automatic bin packing
5\. Self-healing
6\. Secret and configuration management
**What can Kubernetes do?**
In this section we will look at some code examples of how to use Kubernetes when building a Web application from scratch. We will create a simple back-end server using Flask in Python.
There are a few prerequisites for those who want to build a Web app from scratch. These are:
1\. Basic understanding of Docker, Docker containers and Docker images. A quick refresher can be found at _<https://www.docker.com/sites/default/files/Docker\_CheatSheet\_08.09.2016\_0.pdf>_.
2\. Docker should be installed in the system.
3\. Kubernetes should be installed in the system. Instructions on how to do so on a local machine can be found at _<https://kubernetes.io/docs/setup/learning-environment/minikube/>_.
Now, create a simple directory, as shown in the code snippet below:
```
mkdir flask-kubernetes/app && cd flask-kubernetes/app
```
Next, inside the _flask-kubernetes/app_ directory, create a file called main.py, as shown in the code snippet below:
```
touch main.py
```
In the newly created _main.py_, paste the following code:
```
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Kubernetes!"

if __name__ == "__main__":
    app.run(host='0.0.0.0')
```
Install Flask locally using the command below:
```
pip install Flask==0.10.1
```
After installing Flask, run the following command:
```
python main.py
```
This should run the Flask server locally on port 5000, which is the default port for a Flask app, and you can see the output "Hello from Kubernetes!" at _<http://localhost:5000>_.
Once the server is running locally, we will create a Docker image to be used by Kubernetes.
Create a file with the name Dockerfile and paste the following code snippet in it:
```
FROM python:3.7
RUN mkdir /app
WORKDIR /app
ADD . /app/
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "/app/main.py"]
```
The instructions in _Dockerfile_ are explained below:
1\. Docker will fetch the Python 3.7 image from Docker Hub.
2\. It will create an /app directory in the image.
3\. It will set /app as the working directory.
4\. It will copy the contents of the app directory on the host into the image's /app directory.
5\. It will expose port 5000.
6\. Finally, it will run the command that starts the Flask server.
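Note that the Dockerfile installs dependencies from a requirements.txt file, which the steps so far have not created; the build will fail without it. One way to create it inside _flask-kubernetes/app_, pinning the same Flask version installed earlier, is:

```
# Create requirements.txt alongside main.py and the Dockerfile,
# pinning the Flask version used earlier in this article.
echo "Flask==0.10.1" > requirements.txt
```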
In the next step, we will create the Docker image, using the command given below:
```
docker build -f Dockerfile -t flask-kubernetes:latest .
```
After creating the Docker image, we can test it by running it locally using the following command:
```
docker run -p 5001:5000 flask-kubernetes
```
Once we are done testing it locally by running a container, we need to deploy this in Kubernetes.
We will first verify that Kubernetes is running using the _kubectl_ command. If there are no errors, then it is working. If there are errors, do refer to _<https://kubernetes.io/docs/setup/learning-environment/minikube/>_.
Next, let's create a deployment file. This is a YAML file containing instructions that tell Kubernetes, in a declarative fashion, how to create pods and services. Since we have a Flask Web application, we will create a _deployment.yaml_ file with both the pod and service declarations inside it.
Create a file named deployment.yaml and add the following contents to it, before saving it:
```
apiVersion: v1
kind: Service
metadata:
  name: flask-kubernetes-service
spec:
  selector:
    app: flask-kubernetes
  ports:
  - protocol: "TCP"
    port: 6000
    targetPort: 5000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-kubernetes
spec:
  replicas: 4
  selector:
    matchLabels:
      app: flask-kubernetes
  template:
    metadata:
      labels:
        app: flask-kubernetes
    spec:
      containers:
      - name: flask-kubernetes
        image: flask-kubernetes:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
```
Use _kubectl_ to send the _yaml_ file to Kubernetes by running the following command:
```
kubectl apply -f deployment.yaml
```
You can see the pods are running if you execute the following command:
```
kubectl get pods
```
Now navigate to the service on port 6000 (on Minikube, a LoadBalancer service does not get an external IP automatically, so run `minikube service` with the service's name to get a reachable URL), and you should see the "Hello from Kubernetes!" message.
That's it! The application is now running in Kubernetes!
**What Kubernetes cannot do**
Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, logging, and monitoring. Kubernetes provides the building blocks for developer platforms, but preserves user choice and flexibility where it is important.
* Kubernetes does not limit the types of applications supported. If an application can run in a container, it should run great on Kubernetes.
* It does not deploy and build source code.
* It does not dictate logging, monitoring, or alerting solutions.
* It does not provide or mandate a configuration language/system. It provides a declarative API for everyone's use.
* It does not provide or adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.
![Avatar][3]
[Abhinav Nath Gupta][4]
The author is a software development engineer at Cleo Software India Pvt Ltd, Bengaluru. He is interested in cryptography, data security, cryptocurrency and cloud computing. He can be reached at [abhi.aec89@gmail.com][5].
[![][6]][7]
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/demystifying-kubernetes/
作者:[Abhinav Nath Gupta][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/abhinav-gupta/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Gear-kubernetes.jpg?resize=696%2C457&ssl=1 (Gear kubernetes)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Gear-kubernetes.jpg?fit=800%2C525&ssl=1
[3]: https://secure.gravatar.com/avatar/f65917facf5f28936663731fedf545c4?s=100&r=g
[4]: https://opensourceforu.com/author/abhinav-gupta/
[5]: mailto:abhi.aec89@gmail.com
[6]: http://opensourceforu.com/wp-content/uploads/2013/10/assoc.png
[7]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US


@@ -0,0 +1,231 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Demystifying Kubernetes)
[#]: via: (https://opensourceforu.com/2019/11/demystifying-kubernetes/)
[#]: author: (Abhinav Nath Gupta https://opensourceforu.com/author/abhinav-gupta/)
揭开 Kubernetes 的神秘面纱
======
[![][1]][2]
_Kubernetes 是一款生产级的开源系统,用于容器化应用程序的自动部署,扩展和管理。本文关于使用 Kubernetes 来管理容器。_
“容器”已成为最新的流行语之一。但是,这个词到底意味着什么呢?说起“容器”,人们通常会把它和 Docker 联系起来。容器被定义为软件的标准化单元,它将软件和运行该软件所需的环境封装到一个易于交付的单元中。容器是一个软件的标准单元,用它来打包代码及其所有依赖项,这样应用程序就可以从一个计算环境快速可靠地迁移到另一个计算环境并运行。容器通过创建一种类似于 ISO 映像的东西(即镜像)来实现此目的。容器镜像是一个轻量级的、独立的、可执行的软件包,其中包含运行应用程序所需的一切:代码、运行时、系统工具、系统库和设置。
容器镜像在运行时变成容器对于Docker 容器,映像在 Docker 引擎上运行时变成容器。 容器将软件与环境隔离开来,确保不同环境下的实例,都可以正常运行。
**什么是容器管理?**
容器管理是组织、添加或替换大量软件容器的过程。容器管理使用软件来自动化创建、部署和扩展容器。这就产生了对容器编排的需求:容器编排是一种对基于容器的应用程序的部署、管理、扩展、联网和可用性进行自动化的工具。
**Kubernetes**
Kubernetes 是一个可移植的,可扩展的开源平台,用于管理容器化的工作负载和服务,它有助于配置和自动化。 它最初由 Google 开发, 拥有一个庞大且快速增长的生态系统。 Kubernetes 的服务,技术支持和工具得到广泛应用。
Google 在2014年将 Kubernetes 项目开源化。Kubernetes 建立在 Google 十五年大规模运行生产工作负载的经验基础上并结合了社区中最好的想法和实践以及声明式句法的使用。
下面列出了与Kubernetes生态系统相关的一些常用术语。
_**Pods:**_ pod 是 Kubernetes 应用程序的基本执行单元,是你创建或部署的 Kubernetes 对象模型中的最小和最简单的单元。pod 代表在 Kubernetes 集群上运行的进程。
Pod 将运行中的容器存储网络IP唯一和控制容器应如何运行的命令封装起来。它代表 Kubernetes 生态系统内的单个部署单元,代表一个应用程序的单个实例,该程序可能包含一个或多个紧密耦合并共享资源的容器。
Kubernetes 集群中的Pod有两种主要的使用方式。 第一种是运行单个容器。 即“一个容器一个pod”这种方式是最常见的。 第二种是运行多个需要一起工作的容器。
Pod 可能封装一个应用程序,该应用程序由紧密关联且需要共享资源的多个同位容器组成。
_**ReplicaSet:**_ ReplicaSet 的目的是维护在任何给定时间都在运行的一组稳定的 Pod 副本。ReplicaSet 包含有关某个特定 Pod 应该运行多少个副本的信息。为了创建多个 Pod 以匹配 ReplicaSet 的条件Kubernetes 会使用 Pod 模板。ReplicaSet 与其 Pod 的关联是通过后者的 metadata.ownerReferences 字段实现的,该字段指定哪个资源拥有当前对象。
_**Services:**_ 服务是公开一组 Pod 功能的抽象。 使用 Kubernetes你无需修改应用程序即可使用陌生的服务发现机制。 Kubernetes 为 Pod 提供了自己的IP地址和一组Pod 的单个DNS 名称,并且可以在它们之间负载平衡。
服务解决的一个主要问题是 Web 应用程序前端和后端的集成。由于 Kubernetes 是在幕后为 Pod 提供 IP 地址的,当 Pod 被杀死又重建时,其 IP 地址会改变。这就给前端连接到对应的后端 IP 地址带来了很大的问题。服务通过在 Pod 之上提供一层抽象(类似于负载均衡器)来解决此问题。
_**Volumes:**_ Kubernetes 的卷volume具有明确的生命周期与包含它的 Pod 相同。因此,卷比 Pod 中运行的任何容器的寿命都长,并且其中的数据在容器重启后得以保留。当然,当 Pod 不复存在时,卷也将不复存在。也许比这更重要的是Kubernetes 支持多种类型的卷,并且 Pod 可以同时使用任意数量的卷。
Volumes 的核心只是一个目录其中可能包含一些数据pod 中的容器可以访问该目录。 该目录是如何产生的, 它后端基于什么存储介质,其中的数据内容是什么,这些都由使用的特定 volumes 类型来决定的。
**为什么选择 Kubernetes?**
容器是捆绑和运行应用程序的好方法。 在生产环境中,你需要管理运行应用程序的容器,并确保没有停机时间。 例如,如果一个容器发生故障,则需要启动另一个容器。 如果由系统自动实现这一操作,岂不是更好? Kubernetes 就是来解决这个问题的! Kubernetes 提供了一个框架来弹性运行分布式系统。 该框架负责扩展需求,故障转移,部署模式等。 例如Kubernetes 可以轻松管理系统的 Canary 部署。
Kubernetes 为用户提供了:
1\. 服务发现和负载均衡
2\. 存储编排
3\. 自动上线和回滚
4\. 自动装箱
5\. 自我修复
6\. 密钥Secret和配置管理
**Kubernetes 可以做什么?**
在本文中,我们将会看到一些从头构建 Web 应用程序时如何使用 Kubernetes 的代码示例。我们将在 Python 中使用 Flask 创建一个简单的后端服务器。
对于那些想从头开始构建 Web 应用程序的人,有一些前提条件,即:
1\. 对 DockerDocker 容器和 Docker 映像的基本了解。可以访问该网站
_<https://www.docker.com/sites/default/files/Docker\_CheatSheet\_08.09.2016\_0.pdf>_快速了解。
2\. 系统中应该安装Docker。
3\. 系统中应该安装Kubernetes有关如何在本地计算机上安装的说明请访问网站 _<https://kubernetes.io/docs/setup/learning-environment/minikube/>_.
现在,创建一个目录,如下代码片段所示:
```
mkdir flask-kubernetes/app && cd flask-kubernetes/app
```
接下来,在 _flask-kubernetes/app_ 目录中,创建一个名为 main.py 的文件,如下面的代码片段所示:
```
touch main.py
```
在新创建的 _main.py_ 文件中,粘贴下面代码:
```
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Kubernetes!"

if __name__ == "__main__":
    app.run(host='0.0.0.0')
```
使用下面命令在本地安装 Flask:
```
pip install Flask==0.10.1
```
Flask 安装后,执行下面的命令:
```
python main.py
```
这将在本地运行 Flask 服务器Flask 应用程序的默认端口是 5000你可以在 _<http://localhost:5000>_ 上看到输出 “Hello from Kubernetes!”。一旦服务器在本地运行起来,我们就创建一个供 Kubernetes 使用的 Docker 镜像。创建一个名为 Dockerfile 的文件,并将以下代码片段粘贴到其中:
```
FROM python:3.7
RUN mkdir /app
WORKDIR /app
ADD . /app/
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "/app/main.py"]
```
_Dockerfile_文件的说明如下
1\. Docker 将从 Docker Hub 获取 Python 3.7 镜像。
2\. 它将在镜像中创建一个 /app 目录。
3\. 它将把 /app 设置为工作目录。
4\. 它会把主机上 app 目录中的内容复制到镜像的 /app 目录。
5\. 它会暴露端口 5000。
6\. 最后,它会运行启动 Flask 服务器的命令。
接下来,我们将使用以下命令创建 Docker 映像:
```
docker build -f Dockerfile -t flask-kubernetes:latest .
```
创建Docker映像后我们可以使用以下命令在本地运行该映像进行测试
```
docker run -p 5001:5000 flask-kubernetes
```
通过运行容器在本地完成测试之后,我们需要在 Kubernetes 中部署它。 我们将首先使用 kubectl 命令验证 Kubernetes 是否正在运行。 如果没有报错,则说明它正在工作。 如果有报错,请参考该网站信息: _<https://kubernetes.io/docs/setup/learning-environment/minikube/>_.
接下来,我们创建一个部署文件。这是一个 YAML 文件,其中包含以声明式的方式告诉 Kubernetes 如何创建 pod 和服务的说明。因为我们是一个 Flask Web 应用程序,所以我们将创建一个同时包含 pod 和服务声明的 deployment.yaml 文件。
创建一个名为 deployment.yaml 的文件并向其中添加以下内容,然后保存:
```
apiVersion: v1
kind: Service
metadata:
  name: flask-kubernetes-service
spec:
  selector:
    app: flask-kubernetes
  ports:
  - protocol: "TCP"
    port: 6000
    targetPort: 5000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-kubernetes
spec:
  replicas: 4
  selector:
    matchLabels:
      app: flask-kubernetes
  template:
    metadata:
      labels:
        app: flask-kubernetes
    spec:
      containers:
      - name: flask-kubernetes
        image: flask-kubernetes:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
```
使用 kubectl 运行以下命令,将 yaml 文件发送给 Kubernetes
```
kubectl apply -f deployment.yaml
```
如果执行以下命令,你会看到 pods 正在运行:
```
kubectl get pods
```
现在访问端口 6000 上的该服务(在 Minikube 上LoadBalancer 服务不会自动获得外部 IP可以用 `minikube service` 加服务名来获取可访问的 URL你应该会看到 “Hello from Kubernetes!” 消息。成功了!该应用程序现在正在 Kubernetes 中运行!
**Kubernetes 做不了什么**
Kubernetes 不是一个传统的,包罗万象的 PaaS平台即服务系统。 由于 Kubernetes 运行在容器级别而非硬件级别,因此它提供了 PaaS 产品共有的一些普遍适用功能,如部署,扩展,负载平衡,日志记录和监控。 Kubernetes 为开发人员平台提供了构建块,但在重要的地方保留了用户的选择和灵活性。
* Kubernetes 不限制所支持的应用程序的类型。如果一个应用程序可以在容器中运行,那么它就应该可以在 Kubernetes 上很好地运行。
* 它不部署和构建源代码。
* 它不决定日志记录,监视或警报解决方案。
* 它不提供或不要求配置语言/系统。 它提供了一个声明的API供所有人使用。
* 它不提供或不采用任何全面的机器配置,维护,管理或自我修复系统。
![Avatar][3]
[Abhinav Nath Gupta][4]
本文作者 Abhinav 是班加罗尔 Cleo 软件公司的一名软件开发工程师。他对密码学、数据安全、虚拟货币及云计算方面很感兴趣,可以通过 [abhi.aec89@gmail.com][5] 与他联系。
[![][6]][7]
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/demystifying-kubernetes/
作者:[Abhinav Nath Gupta][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/abhinav-gupta/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Gear-kubernetes.jpg?resize=696%2C457&ssl=1 (Gear kubernetes)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Gear-kubernetes.jpg?fit=800%2C525&ssl=1
[3]: https://secure.gravatar.com/avatar/f65917facf5f28936663731fedf545c4?s=100&r=g
[4]: https://opensourceforu.com/author/abhinav-gupta/
[5]: mailto:abhi.aec89@gmail.com
[6]: http://opensourceforu.com/wp-content/uploads/2013/10/assoc.png
[7]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US