Merge pull request #4581 from LinuxBars/master

Translation finished
VicYu 2016-10-24 13:25:37 +08:00 committed by GitHub
commit 86d6149bb9
2 changed files with 74 additions and 75 deletions


@@ -1,75 +0,0 @@
Translating by LinuxBars
8 best practices for building containerized applications
====
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/containers_2015-2-osdc-lead.png?itok=0yid3gFY)
Containers are a major trend in deploying applications in both public and private clouds. But what exactly are containers, why have they become a popular deployment mechanism, and how will you need to modify your application to optimize it for a containerized environment?
### What are containers?
The technology behind containers has a long history beginning with SELinux in 2000 and Solaris zones in 2005. Today, containers are a combination of several kernel features including SELinux, Linux namespaces, and control groups, providing isolation of end user processes, networking, and filesystem space.
### Why are they so popular?
The recent widespread adoption of containers is largely due to the development of standards aimed at making them easier to use, such as the Docker image format and distribution model. This standard calls for immutable images, which are the launching point for a container runtime. Immutable images guarantee that the same image the development team releases is what gets tested and deployed into the production environment.
The lightweight isolation that containers provide creates a better abstraction for an application component. Components running in containers won't interfere with each other the way they might when running directly on a virtual machine. They can be prevented from starving each other of system resources, and unless they are sharing a persistent volume, they won't block each other when attempting to write to the same files. Containers have helped to standardize practices like logging and metric collection, and they allow for increased multi-tenant density on physical and virtual machines, all of which leads to lower deployment costs.
### How do you build a container-ready application?
Changing your application to run inside of a container isn't necessarily a requirement. The major Linux distributions have base images that can run anything that runs on a virtual machine. But the general trend in containerized applications is following a few best practices:
- 1. Instances are disposable
Any given instance of your application shouldn't need to be carefully kept running. If one system running a bunch of containers goes down, you want to be able to spin up new containers spread out across other available systems.
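To make that concrete, here is a minimal Go sketch of a disposable process: it serves traffic until the orchestrator sends SIGTERM, then drains in-flight work and exits so a replacement container can take over. The port and timeout are illustrative assumptions, not values prescribed by any particular runtime.

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}

	// Serve in the background so the main goroutine can watch for signals.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	// Container runtimes typically send SIGTERM before killing the process;
	// drain in-flight requests and exit rather than trying to stay alive.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("shutdown: %v", err)
	}
}
```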
- 2. Retry instead of crashing
When one service in your application depends on another service, it should not crash when the other service is unreachable. For example, your API service is starting up and detects the database is unreachable. Instead of failing and refusing to start, you design it to retry the connection. While the database connection is down the API can respond with a 503 status code, telling the clients that the service is currently unavailable. This practice should already be followed by applications, but if you are working in a containerized environment where instances are disposable, then the need for it becomes more obvious.
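A sketch of what that retry loop might look like in Go, assuming a Postgres database reachable through a `DATABASE_URL` environment variable and the `github.com/lib/pq` driver; the handler answers 503 until the connection succeeds:

```go
package main

import (
	"database/sql"
	"log"
	"net/http"
	"os"
	"sync/atomic"
	"time"

	_ "github.com/lib/pq" // assumed Postgres driver; any driver works the same way
)

var dbReady atomic.Bool

// connectWithRetry keeps trying the database instead of crashing on startup.
func connectWithRetry(dsn string) *sql.DB {
	for {
		db, err := sql.Open("postgres", dsn)
		if err == nil {
			if err = db.Ping(); err == nil {
				return db
			}
			db.Close()
		}
		log.Printf("database unreachable, retrying in 5s: %v", err)
		time.Sleep(5 * time.Second)
	}
}

func main() {
	go func() {
		connectWithRetry(os.Getenv("DATABASE_URL"))
		dbReady.Store(true)
	}()

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if !dbReady.Load() {
			// Report "temporarily unavailable" instead of refusing to start.
			http.Error(w, "service unavailable", http.StatusServiceUnavailable)
			return
		}
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```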
- 3. Persistent data is special
Containers are launched based on shared images using a copy-on-write (COW) filesystem. If the processes the container is running choose to write out to files, then those writes will only exist as long as the container exists. When the container is deleted, that layer in the COW filesystem is deleted. Giving a container a mounted filesystem path that will persist beyond the life of the container requires extra configuration, and extra cost for the physical storage. Clearly defining the abstraction for what storage is persisted promotes the idea that instances are disposable. Having the abstraction layer also allows a container orchestration engine to handle the intricacies of mounting and unmounting persistent volumes to the containers that need them.
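One common way to express that abstraction, sketched below in Go, is to confine all durable writes to a single directory whose location is injected at startup (`DATA_DIR` here is a hypothetical name); the orchestrator then mounts a persistent volume at exactly that path, and everything else stays in the disposable COW layer.

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	// DATA_DIR is assumed to be the one path the orchestrator backs with a
	// persistent volume; writes anywhere else land in the container's
	// copy-on-write layer and disappear with the container.
	dataDir := os.Getenv("DATA_DIR")
	if dataDir == "" {
		log.Fatal("DATA_DIR not set; refusing to write durable data into the COW layer")
	}
	path := filepath.Join(dataDir, "state.db")
	if err := os.WriteFile(path, []byte("durable bytes"), 0o600); err != nil {
		log.Fatalf("write failed: %v", err)
	}
	log.Printf("state persisted to %s", path)
}
```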
- 4. Use stdout not log files
You may now be thinking, if persistent data is special, then what do I do with log files? The approach the container runtime and orchestration projects have taken is that processes should instead [write to stdout/stderr][1], and have infrastructure for archiving and maintaining [container logs][2].
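In practice this is as simple as never opening a log file. A minimal Go illustration, where the container runtime (for example via `docker logs`) is assumed to capture the streams:

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Write logs to stdout and let the container runtime collect them;
	// the process never rotates, archives, or even names a log file.
	logger := log.New(os.Stdout, "api ", log.LstdFlags)
	logger.Println("starting up")
	logger.Println("listening on :8080")
}
```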
- 5. Secrets (and other configurations) are special too
You should never hard-code secret data like passwords, keys, and certificates into your images. Secrets are typically not the same when your application is talking to a development service, a test service, or a production service. Most developers do not have access to production secrets, so if secrets are baked into the image then a new image layer will have to be created to override the development secrets. At this point, you are no longer using the same image that was created by your development team and tested by quality engineering (QE), and have lost the benefit of immutable images. Instead, these values should be abstracted away into environment variables or files that are injected at container startup.
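As a sketch of that injection pattern in Go: look for the secret in an environment variable first, then fall back to a file injected at startup, such as one from a mounted secrets volume (the variable name and the /run/secrets path below are illustrative):

```go
package main

import (
	"log"
	"os"
	"strings"
)

// loadSecret prefers an environment variable and falls back to a file
// injected at container startup, e.g. from a mounted secrets volume.
func loadSecret(envKey, filePath string) (string, error) {
	if v := os.Getenv(envKey); v != "" {
		return v, nil
	}
	b, err := os.ReadFile(filePath)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	dbPassword, err := loadSecret("DB_PASSWORD", "/run/secrets/db_password")
	if err != nil {
		log.Fatalf("no database password provided: %v", err)
	}
	_ = dbPassword // hand this to the database client; it is never baked into the image
}
```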
- 6. Don't assume co-location of services
In an orchestrated container environment you want to allow the orchestrator to send your containers to whatever node is currently the best fit. Best fit could mean a number of things: it could be based on whichever node has the most space right now, the quality of service the container is requesting, whether the container requires persistent volumes, etc. This could easily mean your frontend, API, and database containers all end up on different nodes. While it is possible to force an API container to each node (see [DaemonSets][3] in Kubernetes), this should be reserved for containers that perform tasks like monitoring the nodes themselves.
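The practical consequence is that a component should reach its peers through configured addresses, typically service DNS names, rather than localhost. A hedged Go sketch, where `API_HOST` and the port are invented for illustration:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Never assume the API shares a node with this container: take its
	// address from configuration, defaulting to a service name, not localhost.
	apiHost := os.Getenv("API_HOST")
	if apiHost == "" {
		apiHost = "api" // e.g. a Kubernetes Service DNS name
	}
	resp, err := http.Get(fmt.Sprintf("http://%s:8080/healthz", apiHost))
	if err != nil {
		log.Fatalf("API not reachable: %v", err)
	}
	defer resp.Body.Close()
	log.Printf("API responded: %s", resp.Status)
}
```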
- 7. Plan for redundancy / high availability
Even if you don't have enough load to require an HA setup, you shouldn't write your service in a way that prevents you from running multiple copies of it. This will allow you to use rolling deployments, which make it easy to move load off one node and onto another, or to upgrade from one version of a service to the next without taking any downtime.
- 8. Implement readiness and liveness checks
It is common for applications to have startup time before they are able to respond to requests, for example, an API server that needs to populate in-memory data caches. Container orchestration engines need a way to check that your container is ready to serve requests. Providing a readiness check for new containers allows a rolling deployment to keep an old container running until it is no longer needed, preventing downtime. Similarly, a liveness check is a way for the orchestration engine to continue to check that the container is in a healthy state. It is up to the application creator to decide what it means for their container to be healthy, or "live". A container that is no longer live will be killed, and a new container created in its place.
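A minimal Go sketch of the two checks, using the /healthz and /readyz paths common in Kubernetes setups (the cache-warming delay stands in for whatever startup work your service actually does):

```go
package main

import (
	"log"
	"net/http"
	"sync/atomic"
	"time"
)

var cacheWarmed atomic.Bool

func main() {
	// Liveness: the process is up and able to answer at all.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Readiness: only say "ready" once startup work has finished, so a
	// rolling deployment keeps the old container serving until then.
	http.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if !cacheWarmed.Load() {
			http.Error(w, "warming cache", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	go func() {
		time.Sleep(3 * time.Second) // stand-in for populating in-memory caches
		cacheWarmed.Store(true)
	}()

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```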
### Want to find out more?
I'll be at the Grace Hopper Celebration of Women in Computing in October, come check out my talk: [Containerization of Applications: What, Why, and How][4]. Not headed to GHC this year? Then read on about containers, orchestration, and applications on the [OpenShift][5] and [Kubernetes][6] project sites.
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/9/8-best-practices-building-containerized-applications
Author: [Jessica Forrester][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/jwforres
[1]: https://docs.docker.com/engine/reference/commandline/logs/
[2]: http://kubernetes.io/docs/getting-started-guides/logging/
[3]: http://kubernetes.io/docs/admin/daemons/
[4]: https://www.eiseverywhere.com/ehome/index.php?eventid=153076&tabid=351462&cid=1350690&sessionid=11443135&sessionchoice=1&
[5]: https://www.openshift.org/
[6]: http://kubernetes.io/


@@ -0,0 +1,74 @@
8 best practices for building containerized applications
====
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/containers_2015-2-osdc-lead.png?itok=0yid3gFY)
Containers are a major trend in deploying applications in both public and private clouds. But what exactly are containers, why have they become a popular deployment mechanism, and how will you need to modify your application to optimize it for a containerized environment?
### What are containers?
The technology behind containers has a long history beginning with SELinux in 2000 and Solaris zones in 2005. Today, containers are a combination of several kernel features including SELinux, Linux namespaces, and control groups, providing isolation of end user processes, networking, and filesystem space.
### Why are they so popular?
The recent widespread adoption of containers is largely due to the development of standards aimed at making them easier to use, such as the Docker image format and distribution model. This standard calls for immutable images, which are the launching point for a container runtime. Immutable images guarantee that the same image the development team releases is what gets tested and deployed into the production environment.
The lightweight isolation that containers provide creates a better abstraction for an application component. Components running in containers won't interfere with each other the way they might when running directly on a virtual machine. They can be prevented from starving each other of system resources, and unless they are sharing a persistent volume, they won't block each other when attempting to write to the same files. Containers have helped to standardize practices like logging and metric collection, and they allow for increased multi-tenant density on physical and virtual machines, all of which leads to lower deployment costs.
### How do you build a container-ready application?
Changing your application to run inside of a container isn't necessarily a requirement. The major Linux distributions have base images that can run anything that runs on a virtual machine. But the general trend in containerized applications is following a few best practices:
- 1. Instances are disposable
Any given instance of your application shouldn't need to be carefully kept running. If one system running a bunch of containers goes down, you want to be able to spin up new containers spread out across other available systems.
- 2. Retry instead of crashing
When one service in your application depends on another service, it should not crash when the other service is unreachable. For example, your API service is starting up and detects the database is unreachable. Instead of failing and refusing to start, you design it to retry the connection. While the database connection is down the API can respond with a 503 status code, telling the clients that the service is currently unavailable. This practice should already be followed by applications, but if you are working in a containerized environment where instances are disposable, then the need for it becomes more obvious.
- 3. Persistent data is special
Containers are launched based on shared images using a copy-on-write (COW) filesystem. If the processes the container is running choose to write out to files, then those writes will only exist as long as the container exists. When the container is deleted, that layer in the COW filesystem is deleted. Giving a container a mounted filesystem path that will persist beyond the life of the container requires extra configuration, and extra cost for the physical storage. Clearly defining the abstraction for what storage is persisted promotes the idea that instances are disposable. Having the abstraction layer also allows a container orchestration engine to handle the intricacies of mounting and unmounting persistent volumes to the containers that need them.
- 4. Use stdout not log files
You may now be thinking, if persistent data is special, then what do I do with log files? The approach the container runtime and orchestration projects have taken is that processes should instead [write to stdout/stderr][1], and have infrastructure for archiving and maintaining [container logs][2].
- 5. Secrets (and other configurations) are special too
You should never hard-code secret data like passwords, keys, and certificates into your images. Secrets are typically not the same when your application is talking to a development service, a test service, or a production service. Most developers do not have access to production secrets, so if secrets are baked into the image then a new image layer will have to be created to override the development secrets. At this point, you are no longer using the same image that was created by your development team and tested by quality engineering (QE), and have lost the benefit of immutable images. Instead, these values should be abstracted away into environment variables or files that are injected at container startup.
- 6. Don't assume co-location of services
In an orchestrated container environment you want to allow the orchestrator to send your containers to whatever node is currently the best fit. Best fit could mean a number of things: it could be based on whichever node has the most space right now, the quality of service the container is requesting, whether the container requires persistent volumes, etc. This could easily mean your frontend, API, and database containers all end up on different nodes. While it is possible to force an API container to each node (see [DaemonSets][3] in Kubernetes), this should be reserved for containers that perform tasks like monitoring the nodes themselves.
- 7. Plan for redundancy / high availability
Even if you don't have enough load to require an HA setup, you shouldn't write your service in a way that prevents you from running multiple copies of it. This will allow you to use rolling deployments, which make it easy to move load off one node and onto another, or to upgrade from one version of a service to the next without taking any downtime.
- 8. Implement readiness and liveness checks
It is common for applications to have startup time before they are able to respond to requests, for example, an API server that needs to populate in-memory data caches. Container orchestration engines need a way to check that your container is ready to serve requests. Providing a readiness check for new containers allows a rolling deployment to keep an old container running until it is no longer needed, preventing downtime. Similarly, a liveness check is a way for the orchestration engine to continue to check that the container is in a healthy state. It is up to the application creator to decide what it means for their container to be healthy, or "live". A container that is no longer live will be killed, and a new container created in its place.
### Want to find out more?
I'll be at the Grace Hopper Celebration of Women in Computing in October, come check out my talk: [Containerization of Applications: What, Why, and How][4]. Not headed to GHC this year? Then read on about containers, orchestration, and applications on the [OpenShift][5] and [Kubernetes][6] project sites.
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/9/8-best-practices-building-containerized-applications
Author: [Jessica Forrester][a]
Translator: [LinuxBars](https://github.com/LinuxBars)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/jwforres
[1]: https://docs.docker.com/engine/reference/commandline/logs/
[2]: http://kubernetes.io/docs/getting-started-guides/logging/
[3]: http://kubernetes.io/docs/admin/daemons/
[4]: https://www.eiseverywhere.com/ehome/index.php?eventid=153076&tabid=351462&cid=1350690&sessionid=11443135&sessionchoice=1&
[5]: https://www.openshift.org/
[6]: http://kubernetes.io/