Merge pull request #2961 from wyangsun/master

Completed the translation of tech/20150612 Inside NGINX--How We Designed for Performance and Scale.md
This commit is contained in:
ictlyh 2015-06-20 13:07:15 +08:00
commit e8896b4e9e
2 changed files with 174 additions and 175 deletions


@@ -1,175 +0,0 @@
Inside NGINX: How We Designed for Performance & Scale
================================================================================
NGINX leads the pack in web performance, and it's all due to the way the software is designed. Whereas many web servers and application servers use a simple threaded or process-based architecture, NGINX stands out with a sophisticated event-driven architecture that enables it to scale to hundreds of thousands of concurrent connections on modern hardware.
The [Inside NGINX][1] infographic drills down from the high-level process architecture to illustrate how NGINX handles multiple connections within a single process. This blog explains how it all works in further detail.
### Setting the Scene: the NGINX Process Model ###
![Master Process](http://cdn.nginx.com/wp-content/uploads/2015/06/Screen-Shot-2015-06-08-at-12.36.30-PM.png)
To better understand this design, you need to understand how NGINX runs. NGINX has a master process (which performs the privileged operations such as reading configuration and binding to ports) and a number of worker and helper processes.
# service nginx restart
* Restarting nginx
# ps -ef --forest | grep nginx
root 32475 1 0 13:36 ? 00:00:00 nginx: master process /usr/sbin/nginx \
-c /etc/nginx/nginx.conf
nginx 32476 32475 0 13:36 ? 00:00:00 \_ nginx: worker process
nginx 32477 32475 0 13:36 ? 00:00:00 \_ nginx: worker process
nginx 32479 32475 0 13:36 ? 00:00:00 \_ nginx: worker process
nginx 32480 32475 0 13:36 ? 00:00:00 \_ nginx: worker process
nginx 32481 32475 0 13:36 ? 00:00:00 \_ nginx: cache manager process
nginx 32482 32475 0 13:36 ? 00:00:00 \_ nginx: cache loader process
On this 4-core server, the NGINX master process creates 4 worker processes and a couple of cache helper processes which manage the on-disk content cache.
### Why Is Architecture Important? ###
The fundamental basis of any Unix application is the thread or process. (From the Linux OS perspective, threads and processes are mostly identical; the major difference is the degree to which they share memory.) A thread or process is a self-contained set of instructions that the operating system can schedule to run on a CPU core. Most complex applications run multiple threads or processes in parallel for two reasons:
- They can use more compute cores at the same time.
- Threads and processes make it very easy to do operations in parallel (for example, to handle multiple connections at the same time).
Processes and threads consume resources. They each use memory and other OS resources, and they need to be swapped on and off the cores (an operation called a context switch). Most modern servers can handle hundreds of small, active threads or processes simultaneously, but performance degrades seriously once memory is exhausted or when high I/O load causes a large volume of context switches.
The common way to design network applications is to assign a thread or process to each connection. This architecture is simple and easy to implement, but it does not scale when the application needs to handle thousands of simultaneous connections.
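To make the cost concrete, here is a minimal sketch (in Python, purely for illustration) of the thread-per-connection pattern just described; every client ties up a dedicated thread that spends most of its life blocked in `recv()`:

```python
import socket
import threading

def handle_connection(conn):
    """Serve one client; this thread blocks on every read until the client responds."""
    with conn:
        while True:
            data = conn.recv(1024)   # blocks: the thread can do nothing else meanwhile
            if not data:
                break                # client closed the connection
            conn.sendall(data)       # echo the data back

def serve(port):
    """Accept loop: one dedicated (heavyweight) thread per connection."""
    with socket.socket() as listener:
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("127.0.0.1", port))
        listener.listen()
        while True:
            conn, _ = listener.accept()
            threading.Thread(target=handle_connection, args=(conn,),
                             daemon=True).start()
```

With thousands of clients, this design needs thousands of threads, most of them idle but still consuming memory and scheduler time.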
### How Does NGINX Work? ###
NGINX uses a predictable process model that is tuned to the available hardware resources:
- The master process performs the privileged operations such as reading configuration and binding to ports, and then creates a small number of child processes (the next three types).
- The cache loader process runs at startup to load the disk-based cache into memory, and then exits. It is scheduled conservatively, so its resource demands are low.
- The cache manager process runs periodically and prunes entries from the disk caches to keep them within the configured sizes.
- The worker processes do all of the work! They handle network connections, read and write content to disk, and communicate with upstream servers.
The NGINX configuration recommended in most cases (running one worker process per CPU core) makes the most efficient use of hardware resources. You configure it by including the [worker_processes auto][2] directive in the configuration:
worker_processes auto;
When an NGINX server is active, only the worker processes are busy. Each worker process handles multiple connections in a non-blocking fashion, reducing the number of context switches.
Each worker process is single-threaded and runs independently, grabbing new connections and processing them. The processes can communicate using shared memory for shared cache data, session persistence data, and other shared resources.
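As an illustration of this pre-fork pattern (a toy Python sketch, not NGINX's C implementation), a parent process can bind a listen socket once and fork workers that all accept from it, just as the master hands listen sockets to its workers:

```python
import os
import socket

def worker_loop(listener):
    """Each worker accepts from the shared, inherited listen socket."""
    while True:
        conn, _ = listener.accept()  # the kernel balances accepts among workers
        with conn:
            conn.sendall(b"handled by worker %d\n" % os.getpid())

def start_workers(listener, n_workers):
    """Fork n_workers children; each inherits the already-bound listen socket."""
    pids = []
    for _ in range(n_workers):
        pid = os.fork()
        if pid == 0:                 # child: run the worker loop, never return
            worker_loop(listener)
            os._exit(0)
        pids.append(pid)             # parent: remember child PIDs
    return pids
```

Because the socket is bound before forking, all workers can accept connections without any hand-off from the parent.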
### Inside the NGINX Worker Process ###
![](http://cdn.nginx.com/wp-content/uploads/2015/06/Screen-Shot-2015-06-08-at-12.39.48-PM.png)
Each NGINX worker process is initialized with the NGINX configuration and is provided with a set of listen sockets by the master process.
The NGINX worker processes begin by waiting for events on the listen sockets ([accept_mutex][3] and [kernel socket sharding][4]). Events are initiated by new incoming connections. These connections are assigned to a state machine; the HTTP state machine is the most commonly used, but NGINX also implements state machines for stream (raw TCP) traffic and for a number of mail protocols (SMTP, IMAP, and POP3).
![Internet Requests](http://cdn.nginx.com/wp-content/uploads/2015/06/Screen-Shot-2015-06-08-at-12.40.32-PM.png)
The state machine is essentially the set of instructions that tell NGINX how to process a request. Most web servers that perform the same functions as NGINX use a similar state machine; the difference lies in the implementation.
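As a toy illustration of the idea (nothing like NGINX's real parser), here is a tiny state machine that walks an HTTP request line byte by byte, moving from state to state as it consumes delimiters:

```python
from enum import Enum, auto

class State(Enum):
    METHOD = auto()
    PATH = auto()
    VERSION = auto()
    DONE = auto()

def parse_request_line(data):
    """Walk a minimal state machine over a request line like
    b'GET /index.html HTTP/1.1\r\n', returning its parsed parts."""
    state, token, out = State.METHOD, b"", {}
    for byte in data:
        if byte in b" \r\n":                 # a delimiter ends the current token
            if state is State.METHOD:
                out["method"], state = token.decode(), State.PATH
            elif state is State.PATH:
                out["path"], state = token.decode(), State.VERSION
            elif state is State.VERSION and token:
                out["version"], state = token.decode(), State.DONE
            token = b""
        else:
            token += bytes([byte])           # accumulate the current token
    return out
```

A real HTTP state machine has many more states (headers, body, chunked encoding), but the principle is the same: the current state decides what each incoming byte means.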
### Scheduling the State Machine ###
Think of the state machine like the rules for chess. Each HTTP transaction is a chess game. On one side of the chessboard is the web server, a grandmaster who can make decisions very quickly. On the other side is the remote client, the web browser that is accessing the site or application over a relatively slow network.
However, the rules of the game can be very complicated. For example, the web server might need to communicate with other parties (proxying to an upstream application) or talk to an authentication server. Third-party modules in the web server can even extend the rules of the game.
#### A Blocking State Machine ####
Recall our description of a process or thread as a self-contained set of instructions that the operating system can schedule to run on a CPU core. Most web servers and web applications use a process-per-connection or thread-per-connection model to play the chess game. Each process or thread contains the instructions to play one game through to the end. During the time the process is run by the server, it spends most of its time blocked waiting for the client to complete its next move.
![Blocking I/O](http://cdn.nginx.com/wp-content/uploads/2015/06/Screen-Shot-2015-06-08-at-12.40.52-PM.png)
1. The web server process listens for new connections (new games initiated by clients) on the listen sockets.
1. When it gets a new game, it plays that game, blocking after each move to wait for the client's response.
1. Once the game completes, the web server process might wait to see if the client wants to start a new game (this corresponds to a keepalive connection). If the connection is closed (the client goes away or a timeout occurs), the web server process returns to listening for new games.
The important point to remember is that every active HTTP connection (every chess game) requires a dedicated process or thread (a grandmaster). This architecture is simple and easy to extend with third-party modules (new rules). However, there's a huge imbalance: the rather lightweight HTTP connection, represented by a file descriptor and a small amount of memory, maps to a separate thread or process, a very heavyweight operating system object. It's a programming convenience, but it's massively wasteful.
#### NGINX is a True Grandmaster ####
Perhaps you've heard of [simultaneous exhibition][5] games, where one chess grandmaster plays dozens of opponents at the same time?
![Kiril Georgiev](http://cdn.nginx.com/wp-content/uploads/2015/06/Kiril-Georgiev.gif)
[Kiril Georgiev played 360 people simultaneously in Sofia, Bulgaria][6]. His final score was 284 wins, 70 draws and 6 losses.
That's how an NGINX worker process plays "chess." Each worker (remember, there's usually one worker for each CPU core) is a grandmaster that can play hundreds (in fact, hundreds of thousands) of games simultaneously.
![Event-driven Architecture](http://cdn.nginx.com/wp-content/uploads/2015/06/Screen-Shot-2015-06-08-at-12.41.13-PM.png)
1. The worker waits for events on the listen and connection sockets.
1. Events occur on the sockets and the worker handles them:
- An event on the listen socket means that a client has started a new chess game. The worker creates a new connection socket.
- An event on a connection socket means that the client has made a new move. The worker responds promptly.
A worker never blocks on network traffic, waiting for its “opponent” (the client) to respond. When it has made its move, the worker immediately proceeds to other games where moves are waiting to be processed, or welcomes new players in the door.
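The event loop described above can be sketched in a few lines (a simplified Python illustration, not NGINX's actual event loop): one single-threaded "worker" multiplexes the listen socket and all connection sockets, never blocking on any single client:

```python
import selectors
import socket

def event_loop(port):
    """One single-threaded worker handling many connections at once."""
    sel = selectors.DefaultSelector()
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", port))
    listener.listen()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ, data=None)
    while True:
        for key, _ in sel.select():      # wait for events on any socket
            if key.data is None:         # event on the listen socket: a new "game"
                conn, _ = key.fileobj.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ, data="conn")
            else:                        # event on a connection socket: a new "move"
                conn = key.fileobj
                data = conn.recv(1024)
                if data:
                    conn.sendall(data)   # respond, then move on to other games
                else:
                    sel.unregister(conn) # client left: forget this game
                    conn.close()
```

Each connection costs only a file descriptor and a selector entry, which is why one such loop can juggle enormous numbers of clients.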
### Why Is This Faster than a Blocking, Multi-Process Architecture? ###
NGINX scales very well to support hundreds of thousands of connections per worker process. Each new connection creates another file descriptor and consumes a small amount of additional memory in the worker process. There is very little additional overhead per connection. NGINX processes can remain pinned to CPUs. Context switches are relatively infrequent and occur when there is no work to be done.
In the blocking, connection-per-process approach, each connection requires a large amount of additional resources and overhead, and context switches (swapping from one process to another) are very frequent.
For a more detailed explanation, check out this [article][7] about NGINX architecture, by Andrew Alexeev, VP of Corporate Development and Co-Founder at NGINX, Inc.
With appropriate [system tuning][8], NGINX can scale to handle hundreds of thousands of concurrent HTTP connections per worker process, and can absorb traffic spikes (an influx of new games) without missing a beat.
### Updating Configuration and Upgrading NGINX ###
NGINX's process architecture, with a small number of worker processes, makes for very efficient updating of the configuration and even the NGINX binary itself.
![Updating Configuration](http://cdn.nginx.com/wp-content/uploads/2015/06/Screen-Shot-2015-06-08-at-12.41.33-PM.png)
Updating NGINX configuration is a very simple, lightweight, and reliable operation. It typically just means running the `nginx -s reload` command, which checks the configuration on disk and sends the master process a SIGHUP signal.
When the master process receives a SIGHUP, it does two things:
- Reloads the configuration and forks a new set of worker processes. These new worker processes immediately begin accepting connections and processing traffic (using the new configuration settings).
- Signals the old worker processes to gracefully exit. The worker processes stop accepting new connections. As soon as each current HTTP request completes, the worker process cleanly shuts down the connection (that is, there are no lingering keepalives). Once all connections are closed, the worker processes exit.
This reload process can cause a small spike in CPU and memory usage, but it's generally imperceptible compared to the resource load from active connections. You can reload the configuration multiple times per second (and many NGINX users do exactly that). Very rarely, issues arise when there are many generations of NGINX worker processes waiting for connections to close, but even those are quickly resolved.
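The reload choreography can be sketched as follows (a hypothetical toy in Python; the real master process does far more, including re-reading the configuration): on SIGHUP, the master starts a new worker generation before asking the old one to exit gracefully:

```python
import os
import signal
import time

def spawn_worker():
    """Fork a toy worker that exits cleanly when told to (SIGTERM here,
    standing in for the graceful-exit signal a real master would send)."""
    pid = os.fork()
    if pid == 0:
        signal.signal(signal.SIGTERM, lambda s, f: os._exit(0))
        while True:
            time.sleep(0.1)          # stand-in for handling traffic
    return pid

def master():
    """Toy master loop: on SIGHUP, fork a fresh worker, then retire the old one."""
    worker = spawn_worker()

    def on_hup(signum, frame):
        nonlocal worker
        new = spawn_worker()                 # new generation starts serving first...
        os.kill(worker, signal.SIGTERM)      # ...then the old one is told to quit
        os.waitpid(worker, 0)
        worker = new

    signal.signal(signal.SIGHUP, on_hup)
    while True:
        signal.pause()                       # wait for the next signal
```

The key ordering, new workers first and old workers retired second, is what keeps connections from being dropped during a reload.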
NGINX's binary upgrade process achieves the holy grail of high availability: you can upgrade the software on the fly, without any dropped connections, downtime, or interruption in service.
![New Binary](http://cdn.nginx.com/wp-content/uploads/2015/06/Screen-Shot-2015-06-08-at-12.41.51-PM.png)
The binary upgrade process is similar in approach to the graceful reload of configuration. A new NGINX master process runs in parallel with the original master process, and they share the listening sockets. Both processes are active, and their respective worker processes handle traffic. You can then signal the old master and its workers to gracefully exit.
The entire process is described in more detail in [Controlling NGINX][9].
### Conclusion ###
The [Inside NGINX infographic][10] provides a high-level overview of how NGINX functions, but behind this simple explanation is over ten years of innovation and optimization that enable NGINX to deliver the best possible performance on a wide range of hardware while maintaining the security and reliability that modern web applications require.
If youd like to read more about the optimizations in NGINX, check out these great resources:
- [Installing and Tuning NGINX for Performance][11] (webinar; [slides][12] at Speaker Deck)
- [Tuning NGINX for Performance][13]
- [The Architecture of Open Source Applications: NGINX][14]
- [Socket Sharding in NGINX Release 1.9.1][15] (using the SO_REUSEPORT socket option)
--------------------------------------------------------------------------------
via: http://nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/
Author: [Owen Garrett][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:http://nginx.com/author/owen/
[1]:http://nginx.com/resources/library/infographic-inside-nginx/
[2]:http://nginx.org/en/docs/ngx_core_module.html#worker_processes
[3]:http://nginx.org/en/docs/ngx_core_module.html#accept_mutex
[4]:http://nginx.com/blog/socket-sharding-nginx-release-1-9-1/
[5]:http://en.wikipedia.org/wiki/Simultaneous_exhibition
[6]:http://gambit.blogs.nytimes.com/2009/03/03/in-chess-records-were-made-to-be-broken/
[7]:http://www.aosabook.org/en/nginx.html
[8]:http://nginx.com/blog/tuning-nginx/
[9]:http://nginx.org/en/docs/control.html
[10]:http://nginx.com/resources/library/infographic-inside-nginx/
[11]:http://nginx.com/resources/webinars/installing-tuning-nginx/
[12]:https://speakerdeck.com/nginx/nginx-installation-and-tuning
[13]:http://nginx.com/blog/tuning-nginx/
[14]:http://www.aosabook.org/en/nginx.html
[15]:http://nginx.com/blog/socket-sharding-nginx-release-1-9-1/
