Merge pull request #6365 from geekpi/master

translated
This commit is contained in:
geekpi 2017-11-29 08:46:03 +08:00 committed by GitHub
commit e0615fa11c
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
2 changed files with 86 additions and 90 deletions


@ -1,90 +0,0 @@
translating---geekpi
Proxy Models in Container Environments
============================================================
### Most of us are familiar with how proxies work, but is it any different in a container-based environment? See what's changed.
Inline, side-arm, reverse, and forward. These used to be the terms we used to describe the architectural placement of proxies in the network.
Today, containers use some of the same terminology, but they are introducing new ones. That's an opportunity for me to extemporaneously expound* on my favorite of all topics: the proxy.
One of the primary drivers of cloud (once we all got past the pipe dream of cost containment) has been scalability. Scale has challenged agility (and sometimes won) in various surveys over the past five years as the number one benefit organizations seek from deploying apps in cloud computing environments.
That's in part because in a digital economy (in which we now operate), apps have become the digital equivalent of brick-and-mortar "open/closed" signs and the manifestation of digital customer assistance. Slow, unresponsive apps have the same effect as turning out the lights or understaffing the store.
Apps need to be available and responsive to meet demand. Scale is the technical response to achieving that business goal. Cloud not only provides the ability to scale, but offers the ability to scale _automatically_. Doing that requires a load balancer, because that's how we scale apps: with proxies that load balance traffic/requests.
Containers are no different with respect to expectations around scale. Containers must scale, and scale automatically, and that means the use of load balancers (proxies).
If you're using native capabilities, you're doing primitive load balancing based on TCP/UDP. Generally speaking, container-based proxy implementations aren't fluent in HTTP or other application layer protocols and don't offer capabilities beyond plain old load balancing ([POLB][1]). That's often good enough, as container scale operates on a cloned, horizontal premise: to scale an app, add another copy and distribute requests across it. Layer 7 (HTTP) routing capabilities are found at the ingress (in [ingress controllers][2] and API gateways) and are used as much (or more) for app routing as they are to scale applications.
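To make that distinction concrete, here is a minimal sketch (my own, not from any particular load balancer) of the two ideas: plain old load balancing simply rotates through the clones of a single app, while ingress-style Layer 7 routing first inspects the HTTP path to decide which app's pool should receive the request at all. The app names, addresses, and paths are hypothetical.

```go
package main

import "fmt"

// Hypothetical pools of cloned instances, one pool per app.
var pools = map[string][]string{
	"blue":  {"10.0.0.11:8080", "10.0.0.12:8080"},
	"green": {"10.0.1.21:8080", "10.0.1.22:8080"},
}

var counters = map[string]int{}

// polb is plain old load balancing: rotate through the clones of one app,
// with no awareness of what the request actually contains.
func polb(app string) string {
	pool := pools[app]
	addr := pool[counters[app]%len(pool)]
	counters[app]++
	return addr
}

// l7Route is ingress-style routing: inspect the HTTP path first to pick an
// app, then fall back to POLB within that app's pool.
func l7Route(path string) string {
	app := "green" // default app
	if path == "/blue" || path == "/blue/" {
		app = "blue"
	}
	return polb(app)
}

func main() {
	fmt.Println(polb("blue"))     // 10.0.0.11:8080
	fmt.Println(polb("blue"))     // 10.0.0.12:8080
	fmt.Println(l7Route("/blue")) // routed to the blue pool by path
}
```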
In some cases, however, this is not enough. If you want (or need) more application-centric scale or the ability to insert additional services, you'll graduate to more robust offerings that can provide programmability or application-centric scalability, or both.
Doing that means [plugging in proxies][3]. The container orchestration environment you're working in largely determines the deployment model of the proxy, in terms of whether it's a reverse proxy or a forward proxy. Just to keep things interesting, there's also a third model, the sidecar, that is the foundation of scalability supported by emerging service mesh implementations.
### Reverse Proxy
[![Image title](https://devcentral.f5.com/Portals/0/Users/038/38/38/unavailable_is_closed_thumb.png?ver=2017-09-12-082119-957 "Image title")][4]
A reverse proxy is closest to a traditional model in which a virtual server accepts all incoming requests and distributes them across a pool (farm, cluster) of resources.
There is one proxy per application. Any client that wants to connect to the application is instead connected to the proxy, which then chooses and forwards the request to an appropriate instance. If the green app wants to communicate with the blue app, it sends a request to the blue proxy, which determines which of the two instances of the blue app should respond to the request.
In this model, the proxy is only concerned with the app it is managing. The blue proxy doesn't care about the instances associated with the orange proxy, and vice-versa.
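As an illustration (a minimal sketch under assumed addresses, not a reference implementation), a per-app reverse proxy in Go might look like the following: it fronts only the blue app's two hypothetical instances and round-robins between them, knowing nothing about any other app.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"sync/atomic"
)

// The only thing this proxy knows about: the blue app's instances.
// Addresses are hypothetical; real ones would come from service discovery.
var blueInstances = []string{"10.0.0.11:8080", "10.0.0.12:8080"}

var next uint64

func main() {
	proxy := &httputil.ReverseProxy{
		// Director rewrites each incoming request to target one of the
		// blue instances, chosen round-robin.
		Director: func(req *http.Request) {
			i := atomic.AddUint64(&next, 1)
			req.URL.Scheme = "http"
			req.URL.Host = blueInstances[i%uint64(len(blueInstances))]
		},
	}
	// Clients (e.g. the green app) talk to this address, never to blue directly.
	log.Fatal(http.ListenAndServe(":9000", proxy))
}
```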
### Forward Proxy
[![Image title](https://devcentral.f5.com/Portals/0/Users/038/38/38/per-node_forward_proxy_thumb.jpg?ver=2017-09-14-072422-213)][5]
This mode more closely models that of a traditional outbound firewall.
In this model, each container **node** has an associated proxy. If a client wants to connect to a particular application or service, it is instead connected to the proxy local to the container node where the client is running. The proxy then chooses an appropriate instance of that application and forwards the client's request.
Both the orange and the blue app connect to the same proxy, the one associated with their node. The proxy then determines which instance of the requested app should respond.
In this model, every proxy must know about every application to ensure it can forward requests to the appropriate instance.
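A sketch of the per-node variant, under the same hypothetical addresses and service names, makes that last point visible in code: unlike the per-app reverse proxy above, the node-local proxy has to carry a routing table for every service, because any workload on the node may ask it for anything.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"sync/atomic"
)

// The node-local proxy must know every app's instances (hypothetical registry).
var registry = map[string][]string{
	"blue.svc":   {"10.0.0.11:8080", "10.0.0.12:8080"},
	"orange.svc": {"10.0.2.31:8080", "10.0.2.32:8080"},
}

var next uint64

func main() {
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// The service the client asked for arrives in the Host header.
			pool, ok := registry[req.Host]
			if !ok {
				return // unknown service; the upstream request will fail
			}
			i := atomic.AddUint64(&next, 1)
			req.URL.Scheme = "http"
			req.URL.Host = pool[i%uint64(len(pool))]
		},
	}
	// Every container on this node sends its outbound requests here.
	log.Fatal(http.ListenAndServe(":15001", proxy))
}
```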
### Sidecar Proxy
[![Image title](https://devcentral.f5.com/Portals/0/Users/038/38/38/per-pod_sidecar_proxy_thumb.jpg?ver=2017-09-14-072425-620)][6]
This mode is also referred to as a service mesh router. In this model, each **container** has its own proxy.
If a client wants to connect to an application, it instead connects to the sidecar proxy, which chooses an appropriate instance of that application and forwards the client's request. This behavior is the same as a _forward proxy_ model.
The difference between a sidecar and a forward proxy is that sidecar proxies do not need to modify the container orchestration environment. For example, in order to plug a forward proxy into k8s, you need both the proxy _and_ a replacement for kube-proxy. Sidecar proxies do not require this modification because it is the app that automatically connects to its "sidecar" proxy instead of being routed through the proxy.
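To show what "the app connects to its sidecar" means in practice, here is a minimal client-side sketch under assumed names and addresses (127.0.0.1:15001 for the sidecar, blue.svc as the service name): the app never resolves or dials the blue app itself; every request goes to the proxy sharing its container/pod, which picks the destination instance, and nothing like kube-proxy has to be replaced for that to work.

```go
package main

import (
	"context"
	"log"
	"net"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore whatever address the URL would resolve to and
			// always dial the sidecar proxy on localhost.
			DialContext: func(ctx context.Context, network, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, network, "127.0.0.1:15001")
			},
		},
	}

	// The host in the URL still names the service, so the sidecar knows
	// which app (and which instance of it) the request is for.
	resp, err := client.Get("http://blue.svc/orders")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("sidecar answered with", resp.Status)
}
```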
### Summary
Each model has its advantages and disadvantages. All three share a reliance on environmental data (telemetry and changes in configuration) as well as the need to integrate into the ecosystem. Some models are pre-determined by the environment you choose, so future needs (service insertion, security, networking complexity) should be carefully evaluated before settling on a model.
We're still in the early days with respect to containers and their growth in the enterprise. As they continue to stretch into production environments, it's important to understand the needs of the applications delivered by containerized environments and how these proxy models differ in implementation.
*It was extemporaneous when I wrote it down. Now, not so much.
--------------------------------------------------------------------------------
via: https://dzone.com/articles/proxy-models-in-container-environments
Author: [Lori MacVittie][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://dzone.com/users/307701/lmacvittie.html
[1]:https://f5.com/about-us/blog/articles/go-beyond-polb-plain-old-load-balancing
[2]:https://f5.com/about-us/blog/articles/ingress-controllers-new-name-familiar-function-27388
[3]:http://clouddocs.f5.com/products/asp/v1.0/
[4]:https://devcentral.f5.com/Portals/0/Users/038/38/38/unavailable_is_closed.png?ver=2017-09-12-082118-160
[5]:https://devcentral.f5.com/Portals/0/Users/038/38/38/per-node_forward_proxy.jpg?ver=2017-09-14-072419-667
[6]:https://devcentral.f5.com/Portals/0/Users/038/38/38/per-pod_sidecar_proxy.jpg?ver=2017-09-14-072424-073


@ -0,0 +1,86 @@
Proxy Models in Container Environments
============================================================
### Most of us are familiar with how proxies work, but is it any different in a container-based environment? See what's changed.
Inline, side-arm, reverse, and forward. These used to be the terms we used to describe the architectural placement of proxies in the network.
Today, containers use some of the same terminology, but they are introducing new ones. That's an opportunity for me to extemporaneously expound* on my favorite of all topics: the proxy.
One of the primary drivers of cloud (once we all got past the pipe dream of cost containment) has been scalability. Over the past five years, scale has challenged agility (and sometimes won) in various surveys as the number one benefit organizations seek from deploying apps in cloud computing environments.
That's in part because in the digital economy (in which we now operate), apps have become the digital equivalent of brick-and-mortar "open/closed" signs and the manifestation of digital customer assistance. Slow, unresponsive apps have the same effect as turning out the lights or understaffing the store.
Apps need to be available and responsive to meet demand. Scale is the technical response to achieving that business goal. Cloud not only provides the ability to scale, but offers the ability to scale _automatically_. Doing that requires a load balancer, because that's how we scale apps: with proxies that load balance traffic/requests.
Containers are no different with respect to expectations around scale. Containers must scale, and scale automatically, and that means the use of load balancers (proxies).
If you're using native capabilities, you're doing primitive load balancing based on TCP/UDP. Generally speaking, container-based proxy implementations aren't fluent in HTTP or other application layer protocols and don't offer capabilities beyond plain old load balancing ([POLB][1]). That's often good enough, as container scale operates on a cloned, horizontal premise: to scale an app, add another copy and distribute requests across it. Layer 7 (HTTP) routing capabilities are found at the ingress (in [ingress controllers][2] and API gateways) and are used as much (or more) for app routing as they are to scale applications.
In some cases, however, this is not enough. If you want (or need) more application-centric scale or the ability to insert additional services, you'll graduate to more robust offerings that can provide programmability or application-centric scalability, or both.
Doing that means [plugging in proxies][3]. The container orchestration environment you're working in largely determines the deployment model of the proxy, in terms of whether it's a reverse proxy or a forward proxy. Just to keep things interesting, there's also a third model, the sidecar, that is the foundation of scalability supported by emerging service mesh implementations.
### Reverse Proxy
[![Image title](https://devcentral.f5.com/Portals/0/Users/038/38/38/unavailable_is_closed_thumb.png?ver=2017-09-12-082119-957 "Image title")][4]
A reverse proxy is closest to a traditional model, in which a virtual server accepts all incoming requests and distributes them across a pool (farm, cluster) of resources.
There is one proxy per "application". Any client that wants to connect to the application connects instead to the proxy, which then chooses an appropriate instance and forwards the request to it. If the green app wants to communicate with the blue app, it sends a request to the blue proxy, which determines which of the two instances of the blue app should respond to the request.
In this model, the proxy is only concerned with the app it is managing. The blue proxy doesn't care about the instances associated with the orange proxy, and vice-versa.
### Forward Proxy
[![Image title](https://devcentral.f5.com/Portals/0/Users/038/38/38/per-node_forward_proxy_thumb.jpg?ver=2017-09-14-072422-213)][5]
This mode more closely models that of a traditional outbound firewall.
In this model, each container **node** has an associated proxy. If a client wants to connect to a particular application or service, it connects instead to the proxy local to the container node where the client is running. The proxy then chooses an appropriate instance of that application and forwards the client's request.
Both the orange and the blue app connect to the same proxy, the one associated with their node. The proxy then determines which instance of the requested app should respond.
In this model, every proxy must know about every application to ensure it can forward requests to the appropriate instance.
### Sidecar Proxy
[![Image title](https://devcentral.f5.com/Portals/0/Users/038/38/38/per-pod_sidecar_proxy_thumb.jpg?ver=2017-09-14-072425-620)][6]
This model is also referred to as a service mesh router. In this model, each **container** has its own proxy.
If a client wants to connect to an application, it connects instead to the sidecar proxy, which chooses an appropriate instance of that application and forwards the client's request. This behavior is the same as in the _forward proxy_ model.
The difference between a sidecar and a forward proxy is that sidecar proxies do not need to modify the container orchestration environment. For example, in order to plug a forward proxy into k8s, you need both the proxy _and_ a replacement for kube-proxy. Sidecar proxies do not require this modification because it is the app that automatically connects to its "sidecar" proxy instead of being routed through the proxy.
### Summary
Each model has its advantages and disadvantages. All three share a reliance on environmental data (telemetry and changes in configuration) as well as the need to integrate into the ecosystem. Some models are pre-determined by the environment you choose, so future needs (service insertion, security, networking complexity) should be carefully evaluated before settling on a model.
We're still in the early days with respect to containers and their growth in the enterprise. As they continue to stretch into production environments, it's important to understand the needs of the applications delivered by containerized environments and how these proxy models differ in implementation.
*It was extemporaneous when I wrote it down. Now, not so much.
--------------------------------------------------------------------------------
via: https://dzone.com/articles/proxy-models-in-container-environments
Author: [Lori MacVittie][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://dzone.com/users/307701/lmacvittie.html
[1]:https://f5.com/about-us/blog/articles/go-beyond-polb-plain-old-load-balancing
[2]:https://f5.com/about-us/blog/articles/ingress-controllers-new-name-familiar-function-27388
[3]:http://clouddocs.f5.com/products/asp/v1.0/
[4]:https://devcentral.f5.com/Portals/0/Users/038/38/38/unavailable_is_closed.png?ver=2017-09-12-082118-160
[5]:https://devcentral.f5.com/Portals/0/Users/038/38/38/per-node_forward_proxy.jpg?ver=2017-09-14-072419-667
[6]:https://devcentral.f5.com/Portals/0/Users/038/38/38/per-pod_sidecar_proxy.jpg?ver=2017-09-14-072424-073