Merge pull request #2147 from disylee/master

Translated by disylee <disylee@hotmail.com>
This commit is contained in:
Xingyu.Wang 2014-12-26 14:26:59 +08:00
commit 90e28a3b7c
2 changed files with 168 additions and 157 deletions


@@ -1,157 +0,0 @@
Docker: Present and Future
================================================================================
### Docker - the story so far ###
Docker is a toolset for Linux containers designed to build, ship and run distributed applications. It was first released as an open source project by DotCloud in March 2013. The project quickly became popular, leading to DotCloud rebranding itself as Docker Inc (and ultimately [selling off its original PaaS business][1]). [Docker 1.0][2] was released in June 2014, and the monthly release cadence that led up to the June release has been sustained since.
The 1.0 release marked the point where Docker Inc considered the platform sufficiently mature to be used in production (with the company and partners providing paid-for support options). The monthly release of point updates shows that the project is still evolving quickly, adding new features, and addressing issues as they are found. The project has however successfully decoupled "ship" from "run", so images sourced from any version of Docker can be used with any other version (with both forward and backward compatibility), something that provides a stable foundation for Docker use despite rapid change.
The growth of Docker into one of the most popular open source projects could be perceived as hype, but there is a great deal of substance. Docker has attracted support from many brand names across the industry, including Amazon, Canonical, CenturyLink, Google, IBM, Microsoft, New Relic, Pivotal, Red Hat and VMware. This is making it almost ubiquitously available wherever Linux can be found. In addition to the big names many startups are growing up around Docker, or changing direction to be better aligned with Docker. Those partnerships (large and small) are helping to drive rapid evolution of the core project and its surrounding ecosystem.
### A brief technical overview of Docker ###
Docker makes use of Linux kernel facilities such as [cGroups][3], namespaces and [SELinux][4] to provide isolation between containers. At first Docker was a front end for the [LXC][5] container management subsystem, but release 0.9 introduced [libcontainer][6], a native Go library that provides the interface between user space and the kernel.
Containers sit on top of a union file system, such as [AUFS][7], which allows components such as operating system images and installed libraries to be shared across multiple containers. The layering approach in the filesystem is also exploited by the [Dockerfile][8] DevOps tool, which is able to cache operations that have already completed successfully. This can greatly speed up test cycles by removing the wait usually incurred installing operating systems and application dependencies. Sharing libraries between containers can also reduce RAM footprint.
A container is started from an image, which may be locally created, cached locally, or downloaded from a registry. Docker Inc operates the [Docker Hub public registry][9], which hosts official repositories for a variety of operating systems, middleware and databases. Organisations and individuals can host public repositories for images at Docker Hub, and there are also subscription services for hosting private repositories. Since an uploaded image could contain almost anything, Docker Hub provides an automated build facility (previously called trusted build) where images are constructed from a Dockerfile that serves as a manifest for the contents of the image.
### Containers versus VMs ###
Containers are potentially much more efficient than VMs because they're able to share a single kernel and share application libraries. This can lead to substantially smaller RAM footprints even when compared to virtualisation systems that can make use of RAM overcommitment. Storage footprints can also be reduced where deployed containers share underlying image layers. IBM's Boden Russel has done [benchmarking][10] that illustrates these differences.
Containers also present a lower systems overhead than VMs, so the performance of an application inside a container will generally be the same or better versus the same application running within a VM. A team of IBM researchers have published a [performance comparison of virtual machines and Linux containers][11].
One area where containers are weaker than VMs is isolation. VMs can take advantage of ring -1 [hardware isolation][12] such as that provided by Intel's VT-d and VT-x technologies. Such isolation prevents VMs from breaking out and interfering with each other. Containers don't yet have any form of hardware isolation, which makes them susceptible to exploits. A proof of concept attack named [Shocker][13] showed that Docker versions prior to 1.0 were vulnerable. Although Docker 1.0 fixed the particular issue exploited by Shocker, Docker CTO Solomon Hykes [stated][14], “When we feel comfortable saying that Docker out-of-the-box can safely contain untrusted uid0 programs, we will say so clearly.” Hykes's statement acknowledges that other exploits and associated risks remain, and that more work will need to be done before containers can become trustworthy.
For many use cases the choice of containers or VMs is a false dichotomy. Docker works well within a VM, which allows it to be used on existing virtual infrastructure, private clouds and public clouds. It's also possible to run VMs inside containers, which is something that Google uses as part of its cloud platform. Given the widespread availability of infrastructure as a service (IaaS) that provides VMs on demand, it's reasonable to expect that containers and VMs will be used together for years to come. It's also possible that container management and virtualisation technologies might be brought together to provide a best-of-both-worlds approach: a hardware trust anchored micro virtualisation implementation behind libcontainer could integrate with the Docker tool chain and ecosystem at the front end, but use a different back end that provides better isolation. Micro virtualisation (such as Bromium's [vSentry][15] and VMware's [Project Fargo][16]) is already used in desktop environments to provide hardware based isolation between applications, so similar approaches could be used along with libcontainer as an alternative to the container mechanisms in the Linux kernel.
### Dockerizing applications ###
Pretty much any Linux application can run inside a Docker container. There are no limitations on choice of languages or frameworks. The only practical limitation is what a container is allowed to do from an operating system perspective. Even that bar can be lowered by running containers in privileged mode, which substantially reduces controls (and correspondingly increases risk of the containerised application being able to cause damage to the host operating system).
Containers are started from images, and images can be made from running containers. There are essentially two ways to get applications into containers - manually or with a Dockerfile.
#### Manual builds ####
A manual build starts by launching a container with a base operating system image. An interactive terminal can then be used to install applications and dependencies using the package manager offered by the chosen flavour of Linux. Zef Hemel provides a walk-through of the process in his article [Using Linux Containers to Support Portable Application Deployment][17]. Once the application is installed, the container can be pushed to a registry (such as Docker Hub) or exported into a tar file.
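The workflow above can be sketched with the Docker command line (it assumes a running Docker daemon, and the image and repository names are placeholders):

```shell
# Start an interactive container from a base operating system image
docker run -i -t ubuntu:14.04 /bin/bash

# ...inside the container, install the application with the package manager,
# e.g. `apt-get update && apt-get install -y nginx`, then `exit`...

# Turn the stopped container into an image (container ID from `docker ps -a`)
docker commit <container-id> myuser/myapp

# Either push the image to a registry such as Docker Hub...
docker push myuser/myapp

# ...or export the container's filesystem as a tar file
docker export <container-id> > myapp.tar
```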
#### Dockerfile ####
Dockerfile is a system for scripting the construction of Docker containers. Each Dockerfile specifies the base image to start from and then a series of commands that are run in the container and/or files that are added to the container. The Dockerfile can also specify ports to be exposed, the working directory when a container is started and the default command on startup. Containers built with Dockerfiles can be pushed or exported just like manual builds. Dockerfiles can also be used in Docker Hub's automated build system so that images are built from scratch in a system under the control of Docker Inc, with the source of that image visible to anybody that might use it.
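A minimal Dockerfile exercising the directives just described might look like this (the nginx package and config file are illustrative choices, not anything mandated by Docker):

```dockerfile
# Base image to start from
FROM ubuntu:14.04

# Commands run in the container; each successful step becomes a cached layer
RUN apt-get update && apt-get install -y nginx

# Files added to the container from the build context
ADD nginx.conf /etc/nginx/nginx.conf

# Port to expose, working directory, and default command on startup
EXPOSE 80
WORKDIR /etc/nginx
CMD ["nginx", "-g", "daemon off;"]
```

Running `docker build -t myuser/myapp .` in the directory containing the Dockerfile produces an image that can then be pushed or exported just like a manual build.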
#### One process? ####
Whether images are built manually or with a Dockerfile, a key consideration is that only a single process is invoked when the container is launched. For a container serving a single purpose, such as running an application server, running a single process isn't an issue (and some argue that containers should only have a single process). For situations where it's desirable to have multiple processes running inside a container, a [supervisor][18] process must be launched that can then spawn the other desired processes. There is no init system within containers, so anything that relies on systemd, upstart or similar won't work without modification.
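The supervisor pattern is typically implemented with supervisord, along the lines of the Docker documentation linked above; a sketch of the config (the programs chosen here are just examples) might be:

```ini
; /etc/supervisor/conf.d/supervisord.conf
[supervisord]
; Run in the foreground so supervisord itself is the container's one process
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
```

The container's Dockerfile then makes supervisord the single startup process, e.g. `CMD ["/usr/bin/supervisord"]`, and supervisord spawns and watches the others.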
### Containers and microservices ###
A full description of the philosophy and benefits of using a microservices architecture is beyond the scope of this article (and well covered in the [InfoQ eMag: Microservices][19]). Containers are however a convenient way to bundle and deploy instances of microservices.
Whilst most practical examples of large scale microservices deployments to date have been on top of (large numbers of) VMs, containers offer the opportunity to deploy at a smaller scale. The ability for containers to share the RAM and disk footprint of operating systems, libraries and common application code also means that deploying multiple versions of services side by side can be made very efficient.
### Connecting containers ###
Small applications will fit inside a single container, but in many cases an application will be spread across multiple containers. Docker's success has spawned a flurry of new application compositing tools, orchestration tools and platform as a service (PaaS) implementations. Behind most of these efforts is a desire to simplify the process of constructing an application from a set of interconnected containers. Many tools also help with scaling, fault tolerance, performance management and version control of deployed assets.
#### Connectivity ####
Docker's networking capabilities are fairly primitive. Services within containers can be made accessible to other containers on the same host, and Docker can also map ports onto the host operating system to make services available across a network. The officially sponsored approach to connectivity is [libchan][20], a library that provides Go-like [channels][21] over the network. Until libchan finds its way into applications there's room for third parties to provide complementary network services. For example, [Flocker][22] has taken a proxy-based approach to make services portable across hosts (along with their underlying storage).
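The host port mapping described above is driven from the command line with the `-p` flag (the image name here is hypothetical):

```shell
# Map TCP port 80 inside the container to port 8080 on the host,
# making the containerised service reachable across the network
docker run -d -p 8080:80 myuser/webapp

# Without -p the service is only reachable from other containers
# on the same host, via the container's private address
```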
#### Compositing ####
Docker has native mechanisms for linking containers together, where metadata about a dependency can be passed into the dependent container and consumed within it as environment variables and hosts entries. Application compositing tools like [Fig][23] and [geard][24] express the dependency graph inside a single file so that multiple containers can be brought together into a coherent system. CenturyLink's [Panamax][25] compositing tool takes a similar underlying approach to Fig and geard, but adds a web-based user interface, and integrates directly with GitHub so that applications can be shared.
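A Fig description of a two-container system, in the style of the examples in Fig's own documentation, is a single `fig.yml` (the service names and images here are illustrative):

```yaml
# fig.yml - a web app that depends on a database container
web:
  build: .
  links:
    - db
  ports:
    - "8000:8000"
db:
  image: postgres
```

`fig up` then starts both containers in dependency order; the link causes the database's connection details to appear inside the `web` container as environment variables and a hosts entry, as described above.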
#### Orchestration ####
Orchestration systems like [Decking][26], New Relic's [Centurion][27] and Google's [Kubernetes][28] all aim to help with the deployment and life cycle management of containers. There are also numerous examples (such as [Mesosphere][29]) of [Apache Mesos][30] (and particularly its [Marathon][31] framework for long running applications) being used along with Docker. By providing an abstraction between the application needs (e.g. expressed as a requirement for CPU cores and memory) and the underlying infrastructure, the orchestration tools provide a decoupling that's designed to simplify both application development and data centre operations. There is such a variety of orchestration systems because many have emerged from internal systems previously developed to manage large scale deployments of containers; for example Kubernetes is based on Google's [Omega][32] system that's used to manage containers across the Google estate.
Whilst there is some degree of functional overlap between the compositing tools and the orchestration tools, there are also ways that they can complement each other. For example, Fig might be used to describe how containers interact functionally whilst Kubernetes pods might be used to provide monitoring and scaling.
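The abstraction between application needs and infrastructure is visible in a Kubernetes pod manifest. Kubernetes' manifest schema was still changing rapidly at the time of writing; the sketch below uses the later `v1` form, and the names and resource figures are illustrative:

```yaml
# A pod declaring what it needs (CPU, memory), not where it should run;
# the orchestrator decides placement against the underlying infrastructure
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp
    image: myuser/webapp
    resources:
      requests:
        cpu: "500m"      # half a CPU core
        memory: "256Mi"
```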
#### Platforms (as a Service) ####
A number of Docker native PaaS implementations such as [Deis][33] and [Flynn][34] have emerged to take advantage of the fact that Linux containers provide a great degree of developer flexibility (rather than being opinionated about a given set of languages and frameworks). Other platforms such as CloudFoundry, OpenShift and Apcera Continuum have taken the route of integrating Docker based functionality into their existing systems, so that applications based on Docker images (or the Dockerfiles that make them) can be deployed and managed alongside apps using previously supported languages and frameworks.
### All the clouds ###
Since Docker can run in any Linux VM with a reasonably up-to-date kernel, it can run in pretty much every cloud offering IaaS. Many of the major cloud providers have announced additional support for Docker and its ecosystem.
Amazon have introduced Docker into their Elastic Beanstalk system (an orchestration service over underlying IaaS). Google have Docker-enabled 'managed VMs', which provide a halfway house between the PaaS of App Engine and the IaaS of Compute Engine. Microsoft and IBM have both announced services based on Kubernetes so that multi-container applications can be deployed and managed on their clouds.
To provide a consistent interface to the wide variety of back ends now available, the Docker team have introduced [libswarm][35], which will integrate with a multitude of clouds and resource management systems. One of the stated aims of libswarm is to avoid vendor lock-in by allowing any service to be swapped out for another. This is accomplished by presenting a consistent set of services (with associated APIs) that attach to implementation specific back ends. For example, the Docker server service presents the Docker remote API to a local Docker command line tool so that containers can be managed on an array of service providers.
New service types based on Docker are still in their infancy. London-based Orchard Labs offered a Docker hosting service, but Docker Inc said that the service wouldn't be a priority after acquiring Orchard. Docker Inc has also sold its previous DotCloud PaaS business to cloudControl. Services based on older container management systems such as [OpenVZ][36] are already commonplace, so to a certain extent Docker needs to prove its worth to hosting providers.
### Docker and the distros ###
Docker has already become a standard feature of major Linux distributions like Ubuntu, Red Hat Enterprise Linux (RHEL) and CentOS. Unfortunately the distributions move at a different pace to the Docker project, so the versions found in a distribution can be well behind the latest available. For example, Ubuntu 14.04 was released with Docker 0.9.1, and that didn't change on the point release upgrade to Ubuntu 14.04.1 (by which time Docker was at 1.1.2). There are also namespace issues in official repositories, since docker was already the name of a KDE system tray applet; so with Ubuntu 14.04 the package name and command line tool are both docker.io.
Things aren't much different in the Enterprise Linux world. CentOS 7 comes with Docker 0.11.1, a development release that precedes Docker Inc's announcement of production readiness with Docker 1.0. Linux distribution users who want the latest version, with its promised stability, performance and security, will be better off following the [installation instructions][37] and using repositories hosted by Docker Inc rather than taking the version included in their distribution.
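At the time of writing, the installation instructions linked above boiled down to fetching Docker Inc's convenience script rather than the distribution package (run as root, and assuming outbound network access):

```shell
# Install the latest Docker release from Docker Inc's own repository,
# bypassing the distribution's (possibly stale) docker.io package
curl -sSL https://get.docker.com/ | sh

# Confirm that client and daemon are running the expected version
docker version
```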
The arrival of Docker has spawned new Linux distributions such as [CoreOS][38] and Red Hat's [Project Atomic][39] that are designed to be a minimal environment for running containers. These distributions come with newer kernels and Docker versions than the traditional distributions. They also have lower memory and disk footprints. The new distributions also come with new tools for managing large scale deployments, such as [fleet][40], a distributed init system, and [etcd][41] for metadata management. There are also new mechanisms for updating the distribution itself so that the latest versions of the kernel and Docker can be used. This acknowledges that one of the effects of using Docker is that it pushes attention away from the distribution and its package management solution, making the Linux kernel (and the Docker subsystem using it) more important.
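A flavour of those two tools, on a CoreOS-style cluster (the keys, values and unit name are placeholders; this assumes etcd and fleet are already running):

```shell
# etcd: write and read a piece of shared cluster metadata from any node
etcdctl set /services/web/host1 '10.0.0.1:8080'
etcdctl get /services/web/host1

# fleet: schedule a systemd unit somewhere in the cluster, then inspect it
fleetctl start webapp.service
fleetctl list-units
```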
New distributions might be the best way of running Docker, but traditional distributions and their package managers remain very important within containers. Docker Hub hosts official images for Debian, Ubuntu, and CentOS. There's also a semi-official repository for Fedora images. RHEL images aren't available in Docker Hub, as they're distributed directly from Red Hat. This means that the automated build mechanism on Docker Hub is only available to those using pure open source distributions (and willing to trust the provenance of the base images curated by the Docker Inc team).
Whilst Docker Hub integrates with source control systems such as GitHub and Bitbucket for automated builds, the package managers used during the build process create a complex relationship between a build specification (in a Dockerfile) and the image resulting from a build. Non-deterministic results from the build process aren't specifically a Docker problem - they're a result of how package managers work. A build done one day will get a given version, and a build done another time may get a later version, which is why package managers have upgrade facilities. The container abstraction (caring less about the contents of a container) along with container proliferation (because of lightweight resource utilisation) is, however, likely to make this a pain point that gets associated with Docker.
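One common mitigation is to pin exact package versions in the Dockerfile, trading convenience for repeatability (the package and version number here are illustrative):

```dockerfile
FROM ubuntu:14.04
# Pinning an exact version makes rebuilds of this layer repeatable,
# at the cost of having to bump the pin manually to pick up fixes
RUN apt-get update && apt-get install -y nginx=1.4.6-1ubuntu3
```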
### The future of Docker ###
Docker Inc has set a clear path on the development of core capabilities (libcontainer), cross service management (libswarm) and messaging between containers (libchan). Meanwhile the company has already shown a willingness to consume its own ecosystem with the Orchard Labs acquisition. There is however more to Docker than Docker Inc, with contributions to the project coming from big names like Google, IBM and Red Hat. With a benevolent dictator in the shape of CTO Solomon Hykes at the helm, there is a clear nexus of technical leadership for both the company and the project. Over its first 18 months the project has shown an ability to move fast by using its own output, and there are no signs of that abating.
Many investors are looking at the features matrix for VMware's ESX/vSphere platform from a decade ago and figuring out where the gaps (and opportunities) lie between enterprise expectations driven by the popularity of VMs and the existing Docker ecosystem. Areas like networking, storage and fine-grained version management (for the contents of containers) are presently underserved by the existing Docker ecosystem, and provide opportunities for both startups and incumbents.
Over time it's likely that the distinction between VMs and containers (the "run" part of Docker) will become less important, which will push attention to the "build" and "ship" aspects. The changes here will make the question of "what happens to Docker?" much less important than "what happens to the IT industry as a result of Docker?".
--------------------------------------------------------------------------------
via: http://www.infoq.com/articles/docker-future
Author: [Chris Swan][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.infoq.com/author/Chris-Swan
[1]:http://blog.dotcloud.com/dotcloud-paas-joins-cloudcontrol
[2]:http://www.infoq.com/news/2014/06/docker_1.0
[3]:https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt
[4]:http://selinuxproject.org/page/Main_Page
[5]:https://linuxcontainers.org/
[6]:http://blog.docker.com/2014/03/docker-0-9-introducing-execution-drivers-and-libcontainer/
[7]:http://aufs.sourceforge.net/aufs.html
[8]:https://docs.docker.com/reference/builder/
[9]:https://registry.hub.docker.com/
[10]:http://bodenr.blogspot.co.uk/2014/05/kvm-and-docker-lxc-benchmarking-with.html?m=1
[11]:http://domino.research.ibm.com/library/cyberdig.nsf/papers/0929052195DD819C85257D2300681E7B/$File/rc25482.pdf
[12]:https://en.wikipedia.org/wiki/X86_virtualization#Hardware-assisted_virtualization
[13]:http://stealth.openwall.net/xSports/shocker.c
[14]:https://news.ycombinator.com/item?id=7910117
[15]:http://www.bromium.com/products/vsentry.html
[16]:http://cto.vmware.com/vmware-docker-better-together/
[17]:http://www.infoq.com/articles/docker-containers
[18]:http://docs.docker.com/articles/using_supervisord/
[19]:http://www.infoq.com/minibooks/emag-microservices
[20]:https://github.com/docker/libchan
[21]:https://gobyexample.com/channels
[22]:http://www.infoq.com/news/2014/08/clusterhq-launch-flocker
[23]:http://www.fig.sh/
[24]:http://openshift.github.io/geard/
[25]:http://panamax.io/
[26]:http://decking.io/
[27]:https://github.com/newrelic/centurion
[28]:https://github.com/GoogleCloudPlatform/kubernetes
[29]:https://mesosphere.io/2013/09/26/docker-on-mesos/
[30]:http://mesos.apache.org/
[31]:https://github.com/mesosphere/marathon
[32]:http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41684.pdf
[33]:http://deis.io/
[34]:https://flynn.io/
[35]:https://github.com/docker/libswarm
[36]:http://openvz.org/Main_Page
[37]:https://docs.docker.com/installation/#installation
[38]:https://coreos.com/
[39]:http://www.projectatomic.io/
[40]:https://github.com/coreos/fleet
[41]:https://github.com/coreos/etcd
