From 3fadc7eaceee5d6dde121184b7b7af16f6e6ce62 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 22 Nov 2015 21:43:31 +0800 Subject: [PATCH 001/160] Update 20151028 10 Tips for 10x Application Performance.md --- ...10 Tips for 10x Application Performance.md | 28 +++++++++++-------- 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md index 99086f1163..9765949a2e 100644 --- a/sources/tech/20151028 10 Tips for 10x Application Performance.md +++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md @@ -1,16 +1,20 @@ -translating by ezio - 10 Tips for 10x Application Performance + +将程序性能提高十倍的10条建议 ================================================================================ Improving web application performance is more critical than ever. The share of economic activity that’s online is growing; more than 5% of the developed world’s economy is now on the Internet (see Resources below for statistics). And our always-on, hyper-connected modern world means that user expectations are higher than ever. If your site does not respond instantly, or if your app does not work without delay, users quickly move on to your competitors. +提高web 应用的性能从来没有像现在这样关键过。网络经济的比重一直在增长;发达国家超过5% 的经济价值已经是在因特网上产生的(数据参见下面的资料)。我们这个永远在线、高度互联的世界意味着用户的期望值也处于历史上的最高点。如果你的网站不能及时响应,或者你的app 不能无延时地工作,用户会很快投奔到你的竞争对手那里。 For example, a study done by Amazon almost 10 years ago proved that, even then, a 100-millisecond decrease in page-loading time translated to a 1% increase in its revenue. Another recent study highlighted the fact that that more than half of site owners surveyed said they lost revenue or customers due to poor application performance. +举一个例子,一份亚马逊十年前做过的研究可以证明,甚至在那个时候,网页加载时间每减少100毫秒,收入就会增加1%。另一个最近的研究特别强调一个事实,即在接受调查的网站拥有者中,超过一半的人说他们曾因为应用程序性能不佳而流失了收入或客户。 How fast does a website need to be? For each second a page takes to load, about 4% of users abandon it. 
Top e-commerce sites offer a time to first interaction ranging from one to three seconds, which offers the highest conversion rate. It’s clear that the stakes for web application performance are high and likely to grow. +网站到底需要多快呢?对于页面加载,每增加1秒钟就有4%的用户放弃使用。顶级的电子商务站点的页面在第一次交互时可以做到1秒到3秒的加载时间,这样的速度可以带来最高的转化率。很明显,web 应用性能的利害关系很大,而且还会不断增大。 Wanting to improve performance is easy, but actually seeing results is difficult. To help you on your journey, this blog post offers you ten tips to help you increase your website performance by as much as 10x. It’s the first in a series detailing how you can increase your application performance with the help of some well-tested optimization techniques, and with a little support from NGINX. This series also outlines potential improvements in security that you can gain along the way. +想要提高性能很简单,但是要实际见到成效却很难。为了在这个过程中帮助你,这篇blog 会给你提供10条建议,最多可以把你的网站性能提高10倍。这是系列文章的第一篇,该系列详细介绍如何借助一些经过充分测试的优化技术和一点NGINX 的帮助来提高应用程序的性能。这个系列还概述了在此过程中可以顺便获得的潜在的安全性改进。 -### Tip #1: Accelerate and Secure Applications with a Reverse Proxy Server ### +### Tip #1: 通过反向代理来提高性能和增加安全性 ### If your web application runs on a single machine, the solution to performance problems might seem obvious: just get a faster machine, with more processor, more RAM, a fast disk array, and so on. Then the new machine can run your WordPress server, Node.js application, Java application, etc., faster than before. (If your application accesses a database server, the solution might still seem simple: get two faster machines, and a faster connection between them.) @@ -32,7 +36,7 @@ NGINX software is specifically designed for use as a reverse proxy server, with ![NGINX Worker Process helps increase application performance](https://www.nginx.com/wp-content/uploads/2015/10/Graph-11.png) -### Tip #2: Add a Load Balancer ### +### Tip #2: 添加负载平衡 ### Adding a [load balancer][5] is a relatively easy change which can create a dramatic improvement in the performance and security of your site. 
Instead of making a core web server bigger and more powerful, you use a load balancer to distribute traffic across a number of servers. Even if an application is poorly written, or has problems with scaling, a load balancer can improve the user experience without any other changes. @@ -46,7 +50,7 @@ The same server or servers used for load balancing can also handle several other NGINX is often used for load balancing; to learn more, please see our [overview blog post][10], [configuration blog post][11], [ebook][12] and associated [webinar][13], and [documentation][14]. Our commercial version, [NGINX Plus][15], supports more specialized load balancing features such as load routing based on server response time and the ability to load balance on Microsoft’s NTLM protocol. -### Tip #3: Cache Static and Dynamic Content ### +### Tip #3: 缓存静态和动态的内容 ### Caching improves web application performance by delivering content to clients faster. Caching can involve several strategies: preprocessing content for fast delivery when needed, storing content on faster devices, storing content closer to the client, or a combination. @@ -75,7 +79,7 @@ For more information on caching with NGINX, see the [reference documentation][20 **Note**: Caching crosses organizational lines between people who develop applications, people who make capital investment decisions, and people who run networks in real time. Sophisticated caching strategies, like those alluded to here, are a good example of the value of a [DevOps perspective][22], in which application developer, architectural, and operations perspectives are merged to help meet goals for site functionality, response time, security, and business results, )such as completed transactions or sales. -### Tip #4: Compress Data ### +### Tip #4: 压缩数据 ### Compression is a huge potential performance accelerator. 
There are carefully engineered and highly effective compression standards for photos (JPEG and PNG), videos (MPEG-4), and music (MP3), among others. Each of these standards reduces file size by an order of magnitude or more. @@ -87,7 +91,7 @@ If you use SSL, compression reduces the amount of data that has to be SSL-encode Methods for compressing text data vary. For example, see the [section on HTTP/2][23] for a novel text compression scheme, adapted specifically for header data. As another example of text compression you can [turn on][24] GZIP compression in NGINX. After you [pre-compress text data][25] on your services, you can serve the compressed .gz version directly using the gzip_static directive. -### Tip #5: Optimize SSL/TLS ### +### Tip #5: 优化 SSL/TLS ### The Secure Sockets Layer ([SSL][26]) protocol and its successor, the Transport Layer Security (TLS) protocol, are being used on more and more websites. SSL/TLS encrypts the data transported from origin servers to users to help improve site security. Part of what may be influencing this trend is that Google now uses the presence of SSL/TLS as a positive influence on search engine rankings. @@ -108,7 +112,7 @@ In addition, see [this blog post][30] for details on ways to increase SSL/TLS pe NGINX and NGINX Plus can be used for SSL/TLS termination – handling encryption and decyption for client traffic, while communicating with other servers in clear text. Use [these steps][32] to set up NGINX or NGINX Plus to handle SSL/TLS termination. Also, here are [specific steps][33] for NGINX Plus when used with servers that accept TCP connections. -### Tip #6: Implement HTTP/2 or SPDY ### +### Tip #6: 使用 HTTP/2 或 SPDY ### For sites that already use SSL/TLS, HTTP/2 and SPDY are very likely to improve performance, because the single connection requires just one handshake. 
For sites that don’t yet use SSL/TLS, HTTP/2 and SPDY makes a move to SSL/TLS (which normally slows performance) a wash from a responsiveness point of view. @@ -128,7 +132,7 @@ As an example of support for these protocols, NGINX has supported SPDY from earl Over time, we at NGINX expect most sites to fully enable SSL and to move to HTTP/2. This will lead to increased security and, as new optimizations are found and implemented, simpler code that performs better. -### Tip #7: Update Software Versions ### +### Tip #7: 升级软件版本 ### One simple way to boost application performance is to select components for your software stack based on their reputation for stability and performance. In addition, because developers of high-quality components are likely to pursue performance enhancements and fix bugs over time, it pays to use the latest stable version of software. New releases receive more attention from developers and the user community. Newer builds also take advantage of new compiler optimizations, including tuning for new hardware. @@ -138,7 +142,7 @@ Staying with older software can also prevent you from taking advantage of new ca NGINX users can start by moving to the [[latest version of the NGINX open source software][38] or [NGINX Plus][39]; they include new capabilities such as socket sharding and thread pools (see below), and both are constantly being tuned for performance. Then look at the software deeper in your stack and move to the most recent version wherever you can. -### Tip #8: Tune Linux for Performance ### +### Tip #8: linux 系统性能调优 ### Linux is the underlying operating system for most web server implementations today, and as the foundation of your infrastructure, Linux represents a significant opportunity to improve performance. By default, many Linux systems are conservatively tuned to use few resources and to match a typical desktop workload. This means that web application use cases require at least some degree of tuning for maximum performance. 
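As a concrete illustration of the HTTP/2 discussion above — a sketch, not part of the original article — enabling HTTP/2 in NGINX (1.9.5 or later, built with the http_v2 module) mostly comes down to extending the listen directive of an existing TLS-enabled server block. The server name and certificate paths below are placeholders:

```nginx
server {
    # In practice browsers only negotiate HTTP/2 over TLS, so the site
    # must already be serving HTTPS (see Tip #5).
    listen 443 ssl http2;
    server_name example.com;                              # placeholder

    ssl_certificate     /etc/nginx/ssl/example.com.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    root /var/www/example.com;
}
```

Clients that do not speak HTTP/2 still work: NGINX falls back to HTTP/1.x on the same port.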
@@ -150,7 +154,7 @@ Linux optimizations are web server-specific. Using NGINX as an example, here are For NGINX, check out the [NGINX performance tuning guides][40] to learn how to optimize your Linux system so that it can cope with large volumes of network traffic without breaking a sweat! -### Tip #9: Tune Your Web Server for Performance ### +### Tip #9: web 服务器性能调优 ### Whatever web server you use, you need to tune it for web application performance. The following recommendations apply generally to any web server, but specific settings are given for NGINX. Key optimizations include: @@ -169,7 +173,7 @@ Whatever web server you use, you need to tune it for web application performance See this [blog post][45] for more details on tuning NGINX. -### Tip #10: Monitor Live Activity to Resolve Issues and Bottlenecks ### +### Tip #10: 监视系统活动来解决问题和瓶颈 ### The key to a high-performance approach to application development and delivery is watching your application’s real-world performance closely and in real time. You must be able to monitor activity within specific devices and across your web infrastructure. 
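The web server tuning points touched on in Tips #8 and #9 — buffered access logging, client and upstream keepalives, worker processes — can be sketched in NGINX configuration like this. The numbers and the backend address are illustrative starting points to measure against, not recommendations from the article itself:

```nginx
worker_processes auto;            # one worker per CPU core

events {
    worker_connections 1024;      # per-worker connection limit
}

http {
    # Buffer log writes instead of hitting the disk for every request.
    access_log /var/log/nginx/access.log combined buffer=64k flush=5s;

    # Reuse client connections for multiple requests.
    keepalive_timeout  65;
    keepalive_requests 100;

    upstream app_backend {
        server 10.0.0.10:8080;    # hypothetical application server
        keepalive 32;             # pool of idle upstream connections
    }
}
```

For the upstream keepalive pool to actually be used, the proxied location also needs `proxy_http_version 1.1;` and an empty `Connection` header.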
From d4764701e186dc6a846e0540928ff9a026bc42e4 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 22 Nov 2015 22:12:25 +0800 Subject: [PATCH 002/160] Update 20151028 10 Tips for 10x Application Performance.md --- ...10 Tips for 10x Application Performance.md | 19 ++++++++++++++++--- 1 file changed, 16 insertions(+), 3 deletions(-) diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md index 9765949a2e..84a046c682 100644 --- a/sources/tech/20151028 10 Tips for 10x Application Performance.md +++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md @@ -17,20 +17,33 @@ Wanting to improve performance is easy, but actually seeing results is difficult. ### Tip #1: 通过反向代理来提高性能和增加安全性 ### If your web application runs on a single machine, the solution to performance problems might seem obvious: just get a faster machine, with more processor, more RAM, a fast disk array, and so on. Then the new machine can run your WordPress server, Node.js application, Java application, etc., faster than before. (If your application accesses a database server, the solution might still seem simple: get two faster machines, and a faster connection between them.) +如果你的web 应用运行在单个机器上,那么解决性能问题的办法似乎显而易见:换一台更快的机器,配备更好的处理器、更多的内存、更快的磁盘阵列,等等。然后新机器就可以更快地运行你的WordPress 服务器、Node.js 程序、Java 程序,以及其它程序。(如果你的程序要访问数据库服务器,办法看起来依然很简单:换两台更快的机器,并在两台电脑之间使用一条更快的链路。) Trouble is, machine speed might not be the problem. Web applications often run slowly because the computer is switching among different kinds of tasks: interacting with users on thousands of connections, accessing files from disk, and running application code, among others. The application server may be thrashing – running out of memory, swapping chunks of memory out to disk, and making many requests wait on a single task such as disk I/O. 
+问题是,机器速度可能并不是问题所在。web 程序运行慢经常是因为计算机一直在不同的任务之间切换:通过成千上万的连接与用户交互,从磁盘访问文件,运行应用代码,等等。应用服务器可能会发生抖动——内存不足,把内存中的数据交换到磁盘上,以及让很多请求等待磁盘I/O 之类的单个任务。 Instead of upgrading your hardware, you can take an entirely different approach: adding a reverse proxy server to offload some of these tasks. A [reverse proxy server][1] sits in front of the machine running the application and handles Internet traffic. Only the reverse proxy server is connected directly to the Internet; communication with the application servers is over a fast internal network. +你可以采取一个完全不同的方案来替代升级硬件:添加一个反向代理服务器来分担部分任务。[反向代理服务器][1] 位于运行应用的机器的前端,用来处理来自互联网的流量。只有反向代理服务器是直接连接到互联网的;它和应用服务器的通讯都是通过一个快速的内部网络完成的。 Using a reverse proxy server frees the application server from having to wait for users to interact with the web app and lets it concentrate on building pages for the reverse proxy server to send across the Internet. The application server, which no longer has to wait for client responses, can run at speeds close to those achieved in optimized benchmarks. +使用反向代理服务器可以让应用服务器从等待用户与web 程序交互的工作中解放出来,专注于为反向代理服务器构建网页,再由后者将网页传输到互联网上。应用服务器不再需要等待客户端的响应,因此可以以接近优化基准测试的速度运行。 Adding a reverse proxy server also adds flexibility to your web server setup. For instance, if a server of a given type is overloaded, another server of the same type can easily be added; if a server is down, it can easily be replaced. +添加反向代理服务器还可以为你的web 服务器的部署带来灵活性。比如,如果某种类型的服务器已经超载了,那么就可以轻松地添加另一个同类型的服务器;如果某个机器宕机了,也可以很容易地替换它。 Because of the flexibility it provides, a reverse proxy server is also a prerequisite for many other performance-boosting capabilities, such as: +因为反向代理带来的灵活性,所以反向代理也是一些性能加速功能的必要前提,比如: -- **Load balancing** (see [Tip #2][2]) – A load balancer runs on a reverse proxy server to share traffic evenly across a number of application servers. With a load balancer in place, you can add application servers without changing your application at all. 
-- **Caching static files** (see [Tip #3][3]) – Files that are requested directly, such as image files or code files, can be stored on the reverse proxy server and sent directly to the client, which serves assets more quickly and offloads the application server, allowing the application to run faster. -- **Securing your site** – The reverse proxy server can be configured for high security and monitored for fast recognition and response to attacks, keeping the application servers protected. +- **Load balancing** (参见 [Tip #2][2]) – 负载均衡运行在方向代理服务器上,用来将流量均衡分配给一批应用。有了合适的负载均衡,你就可以在不改变程序的前提下添加应用服务器。 + +- A load balancer runs on a reverse proxy server to share traffic evenly across a number of application servers. With a load balancer in place, you can add application servers without changing your application at all. + +- **Caching static files** (参见 [Tip #3][3]) – 直接读取的文件,比如图像或者代码,可以保存在方向代理服务器,然后直接发给客户端,这样就可以提高速度、分担应用服务器的负载,可以让应用运行的更快 + +Files that are requested directly, such as image files or code files, can be stored on the reverse proxy server and sent directly to the client, which serves assets more quickly and offloads the application server, allowing the application to run faster. + +- **Securing your site** – 反响代理服务器可以被设置的提高安全性, +The reverse proxy server can be configured for high security and monitored for fast recognition and response to attacks, keeping the application servers protected. NGINX software is specifically designed for use as a reverse proxy server, with the additional capabilities described above. NGINX uses an event-driven processing approach which is more efficient than traditional servers. NGINX Plus adds more advanced reverse proxy features, such as application [health checks][4], specialized request routing, advanced caching, and support. 
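To make the reverse proxy idea concrete, here is a minimal NGINX sketch (the addresses and names are hypothetical, not from the article): only this server faces the Internet, and it relays requests to an application server on the internal network:

```nginx
server {
    listen 80;
    server_name example.com;                  # placeholder

    location / {
        proxy_pass http://192.168.1.10:8080;  # app server on the fast internal network
        # Preserve the original request details for the application.
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With this in place, the application server never talks to clients directly, which is what lets it run close to its benchmark speed.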
From 50438a3535f4260c330f74586fec2f22e01b12bf Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 22 Nov 2015 22:14:03 +0800 Subject: [PATCH 003/160] Update 20151028 10 Tips for 10x Application Performance.md --- .../tech/20151028 10 Tips for 10x Application Performance.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md index 84a046c682..8e696a12ae 100644 --- a/sources/tech/20151028 10 Tips for 10x Application Performance.md +++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md @@ -235,7 +235,7 @@ We hope you try out these techniques for yourself. We want to hear the kind of a via: https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io 作者:[Floyd Smith][a] -译者:[译者ID](https://github.com/译者ID) +译者:[Ezio](https://github.com/oska874) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 3d6c93c457e15e06b9f84951e2efc18d6e506b81 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 22 Nov 2015 23:45:35 +0800 Subject: [PATCH 004/160] tips 1 done --- .../20151028 10 Tips for 10x Application Performance.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md index 8e696a12ae..e967a94e02 100644 --- a/sources/tech/20151028 10 Tips for 10x Application Performance.md +++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md @@ -34,18 +34,19 @@ Adding a reverse proxy server also adds flexibility to your web server setup. 
Because of the flexibility it provides, a reverse proxy server is also a prerequisite for many other performance-boosting capabilities, such as: 因为反向代理带来的灵活性,所以反向代理也是一些性能加速功能的必要前提,比如: -- **Load balancing** (参见 [Tip #2][2]) – 负载均衡运行在方向代理服务器上,用来将流量均衡分配给一批应用。有了合适的负载均衡,你就可以在不改变程序的前提下添加应用服务器。 +- **负载均衡** (参见 [Tip #2][2]) – 负载均衡运行在反向代理服务器上,用来将流量均衡分配给一批应用服务器。有了合适的负载均衡,你就可以在完全不改变程序的前提下添加应用服务器。 - A load balancer runs on a reverse proxy server to share traffic evenly across a number of application servers. With a load balancer in place, you can add application servers without changing your application at all. -- **Caching static files** (参见 [Tip #3][3]) – 直接读取的文件,比如图像或者代码,可以保存在方向代理服务器,然后直接发给客户端,这样就可以提高速度、分担应用服务器的负载,可以让应用运行的更快 +- **缓存静态文件** (参见 [Tip #3][3]) – 直接请求的文件,比如图像或者代码,可以保存在反向代理服务器上,然后直接发给客户端,这样不仅可以更快地提供资源,还能分担应用服务器的负载,让应用运行得更快。 Files that are requested directly, such as image files or code files, can be stored on the reverse proxy server and sent directly to the client, which serves assets more quickly and offloads the application server, allowing the application to run faster. -- **Securing your site** – 反响代理服务器可以被设置的提高安全性, +- **网站安全** – 反向代理服务器可以提高网站安全性,并通过监控来快速发现和响应攻击,保证应用服务器处于被保护状态。 The reverse proxy server can be configured for high security and monitored for fast recognition and response to attacks, keeping the application servers protected. NGINX software is specifically designed for use as a reverse proxy server, with the additional capabilities described above. NGINX uses an event-driven processing approach which is more efficient than traditional servers. NGINX Plus adds more advanced reverse proxy features, such as application [health checks][4], specialized request routing, advanced caching, and support. 
+NGINX 软件是一个专门设计的反向代理服务器,也包含了上述的多种功能。NGINX 使用事件驱动的方式处理请求,这比传统的服务器更加有效率。NGINX Plus 添加了更多高级的反向代理特性,比如应用[健康度检查][4]、专门的请求路由、高级缓存和相关支持。 ![NGINX Worker Process helps increase application performance](https://www.nginx.com/wp-content/uploads/2015/10/Graph-11.png) From 7feb1a38e32e9b146ffc71aa6f149278b40ee84f Mon Sep 17 00:00:00 2001 From: ezio Date: Mon, 23 Nov 2015 00:09:51 +0800 Subject: [PATCH 005/160] tips 2 done --- .../20151028 10 Tips for 10x Application Performance.md | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md index e967a94e02..e40c6bdc5b 100644 --- a/sources/tech/20151028 10 Tips for 10x Application Performance.md +++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md @@ -53,16 +53,22 @@ NGINX 软件是一个专门设计的反响代理服务器,也包含了上述 ### Tip #2: 添加负载平衡 ### Adding a [load balancer][5] is a relatively easy change which can create a dramatic improvement in the performance and security of your site. Instead of making a core web server bigger and more powerful, you use a load balancer to distribute traffic across a number of servers. Even if an application is poorly written, or has problems with scaling, a load balancer can improve the user experience without any other changes. +添加一个[负载均衡服务器][5] 是一个相当简单、却能显著提高网站性能和安全性的方法。你可以使用负载均衡将流量分配到多个服务器,以此替代只使用一个更大、更强的web 核心服务器的方案。即使程序写得不好,或者在扩容方面有困难,仅使用负载均衡服务器就可以很好地改善用户体验。 A load balancer is, first, a reverse proxy server (see [Tip #1][6]) – it receives Internet traffic and forwards requests to another server. The trick is that the load balancer supports two or more application servers, using [a choice of algorithms][7] to split requests between servers. The simplest load balancing approach is round robin, with each new request sent to the next server on the list. Other methods include sending requests to the server with the fewest active connections. 
NGINX Plus has [capabilities][8] for continuing a given user session on the same server, which is called session persistence. +负载均衡服务器首先是一个反向代理服务器(参见[Tip #1][6])——它接收来自互联网的流量,然后把请求转发给另一个服务器。诀窍在于负载均衡服务器支持两个或多个应用服务器,并使用[某种分配算法][7]将请求拆分给不同的服务器。最简单的负载均衡方法是轮转法,只需要将新的请求发给列表里的下一个服务器。其它的方法包括将请求发给活动连接数最少的服务器。NGINX Plus 拥有将特定用户的会话持续分配给同一个服务器的[能力][8],这叫做会话保持。 Load balancers can lead to strong improvements in performance because they prevent one server from being overloaded while other servers wait for traffic. They also make it easy to expand your web server capacity, as you can add relatively low-cost servers and be sure they’ll be put to full use. +负载均衡可以很好地提高性能,是因为它可以避免某个服务器已经过载而另一些服务器却还在空等流量。它也让扩展服务器规模变得简单,因为你可以添加多个价格相对便宜的服务器,并且保证它们被充分利用。 Protocols that can be load balanced include HTTP, HTTPS, SPDY, HTTP/2, WebSocket, [FastCGI][9], SCGI, uwsgi, memcached, and several other application types, including TCP-based applications and other Layer 4 protocols. Analyze your web applications to determine which you use and where performance is lagging. +可以进行负载均衡的协议包括HTTP、HTTPS、SPDY、HTTP/2、WebSocket、[FastCGI][9]、SCGI、uwsgi、memcached,以及其它几种应用类型,包括基于TCP 的应用和其它第4层的协议。分析你的web 应用,确定你在使用哪些协议,以及哪些地方性能不足。 The same server or servers used for load balancing can also handle several other tasks, such as SSL termination, support for HTTP/1/x and HTTP/2 use by clients, and caching for static files. +相同的服务器或服务器群可以在进行负载均衡的同时,用来处理其它的任务,如SSL 终止,为客户端提供HTTP/1/x 和 HTTP/2 支持,以及缓存静态文件。 NGINX is often used for load balancing; to learn more, please see our [overview blog post][10], [configuration blog post][11], [ebook][12] and associated [webinar][13], and [documentation][14]. Our commercial version, [NGINX Plus][15], supports more specialized load balancing features such as load routing based on server response time and the ability to load balance on Microsoft’s NTLM protocol. 
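The load-balancing behavior described above — round robin by default, or fewest active connections — can be sketched in NGINX configuration like this (the backend addresses are hypothetical):

```nginx
upstream app_servers {
    least_conn;               # send each request to the server with the fewest
                              # active connections; omit for round robin (default)
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;
    # ip_hash;                # a simple session-persistence option in open source
                              # NGINX (cannot be combined with least_conn)
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```

Adding capacity is then just a matter of adding another `server` line, with no change to the application.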
+NGINX 经常被用来进行负载均衡;要想了解更多的情况,可以参阅我们的[概述博文][10]、[配置博文][11]、[电子书][12]以及相关的[网络研讨会][13]和[文档][14]。我们的商业版本 [NGINX Plus][15] 支持更多专门的负载均衡特性,如基于服务器响应时间的负载路由,以及在Microsoft 的NTLM 协议上进行负载均衡的能力。 ### Tip #3: 缓存静态和动态的内容 ### Caching improves web application performance by delivering content to clients faster. Caching can involve several strategies: preprocessing content for fast delivery when needed, storing content on faster devices, storing content closer to the client, or a combination. @@ -295,3 +301,4 @@ via: https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io [51]:http://blog.loadimpact.com/blog/how-bad-performance-impacts-ecommerce-sales-part-i/ [52]:https://blog.kissmetrics.com/loading-time/?wide=1 [53]:https://econsultancy.com/blog/10936-site-speed-case-studies-tips-and-tools-for-improving-your-conversion-rate/ + From 4b96693e9fe5991d1d36c564dfc288fbb145f4d0 Mon Sep 17 00:00:00 2001 From: ezio Date: Mon, 23 Nov 2015 00:35:01 +0800 Subject: [PATCH 006/160] update --- .../20151028 10 Tips for 10x Application Performance.md | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md index e40c6bdc5b..27881a2b5b 100644 --- a/sources/tech/20151028 10 Tips for 10x Application Performance.md +++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md @@ -73,19 +73,28 @@ ### Tip #3: 缓存静态和动态的内容 ### Caching improves web application performance by delivering content to clients faster. Caching can involve several strategies: preprocessing content for fast delivery when needed, storing content on faster devices, storing content closer to the client, or a combination. +缓存通过加快内容的传输速度来提高web 应用的性能。它可以采用以下几种策略:在需要的时候预处理要传输的内容,把数据保存到速度更快的设备上,把数据存储在距离客户端更近的位置,或者把这些方法结合起来使用。 There are two different types of caching to consider: +下面要考虑两种不同类型数据的缓存: - **Caching of static content**. Infrequently changing files, such as image files (JPEG, PNG) and code files (CSS, JavaScript), can be stored on an edge server for fast retrieval from memory or disk. 
+- **静态内容缓存**。不经常变化的文件,比如图像(JPEG、PNG)和代码(CSS、JavaScript),可以保存在边缘服务器上,以便快速地从内存或磁盘上提取。 - **Caching of dynamic content**. Many Web applications generate fresh HTML for each page request. By briefly caching one copy of the generated HTML for a brief period of time, you can dramatically reduce the total number of pages that have to be generated while still delivering content that’s fresh enough to meet your requirements. +- **动态内容缓存**。很多web 应用会针对每个网页请求生成全新的HTML 页面。将生成的HTML 短时间地缓存一份拷贝,就可以极大地减少需要生成的页面总数,同时传输的内容仍然足够新鲜,可以满足你的要求。 If a page gets ten views per second, for instance, and you cache it for one second, 90% of requests for the page will come from the cache. If you separately cache static content, even the freshly generated versions of the page might be made up largely of cached content. +举个例子,如果一个页面每秒会被浏览10次,你将它缓存1秒,那么90%的页面请求都会直接从缓存提取。如果你单独缓存静态内容,那么即使是新生成的页面,其大部分内容也可能来自缓存。 There are three main techniques for caching content generated by web applications: +下面是web 应用缓存所生成内容的三种主要技术: - **Moving content closer to users**. Keeping a copy of content closer to the user reduces its transmission time. +- **缩短数据与用户的距离**。把一份内容的拷贝放在离用户更近的地方,以便减少传输时间。 - **Moving content to faster machines**. Content can be kept on a faster machine for faster retrieval. +- **把内容存到更快的服务器上**。内容可以保存在一个更快的服务器上,以便减少提取内容的时间。 - **Moving content off of overused machines**. Machines sometimes operate much slower than their benchmark performance on a particular task because they are busy with other tasks. Caching on a different machine improves performance for the cached resources and also for non-cached resources, because the host machine is less overloaded. +- **从过载的服务器上移走数据**。机器经常因为忙于其它任务,使得某个任务的执行速度比基准测试结果要差。把数据缓存到其它机器上,可以同时提高缓存资源和非缓存资源的性能,因为主机不再过载了。 Caching for web applications can be implemented from the inside – the web application server – out. First, caching is used for dynamic content, to reduce the load on application servers. 
Then, caching is used for static content (including temporary copies of what would otherwise be dynamic content), further off-loading application servers. And caching is then moved off of application servers and onto machines that are faster and/or closer to the user, unburdening the application servers, and reducing retrieval and transmission times. From ff441bdb17853306475eae063526f4e49c07634a Mon Sep 17 00:00:00 2001 From: ezio Date: Mon, 23 Nov 2015 10:01:57 +0800 Subject: [PATCH 007/160] tip 3 done --- .../20151028 10 Tips for 10x Application Performance.md | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md index 27881a2b5b..143528a16f 100644 --- a/sources/tech/20151028 10 Tips for 10x Application Performance.md +++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md @@ -97,16 +97,22 @@ Caching for web applications can be implemented from the inside – the web application server – out. First, caching is used for dynamic content, to reduce the load on application servers. Then, caching is used for static content (including temporary copies of what would otherwise be dynamic content), further off-loading application servers. And caching is then moved off of application servers and onto machines that are faster and/or closer to the user, unburdening the application servers, and reducing retrieval and transmission times. +对web 应用的缓存机制可以从web 应用服务器内部开始,由内向外实现。首先,缓存动态内容,用来减少应用服务器的负载。然后,缓存静态内容(包括那些本来是动态内容的临时拷贝),进一步分担应用服务器的负载。之后,缓存还可以从应用服务器转移到对用户而言更快、更近的机器上,从而减轻应用服务器的压力,减少提取数据和传输数据的时间。 Improved caching can speed up applications tremendously. 
For many web pages, static data, such as large image files, makes up more than half the content. It might take several seconds to retrieve and transmit such data without caching, but only fractions of a second if the data is cached locally. +改进过的缓存方案可以极大地提高应用的速度。对于大多数网页来说,静态数据(比如大图像文件)构成了超过一半的内容。如果没有缓存,提取和传输这类数据可能会花费几秒的时间,但如果数据缓存在本地,不到1秒就可以完成。 As an example of how caching is used in practice, NGINX and NGINX Plus use two directives to [set up caching][16]: proxy_cache_path and proxy_cache. You specify the cache location and size, the maximum time files are kept in the cache, and other parameters. Using a third (and quite popular) directive, proxy_cache_use_stale, you can even direct the cache to supply stale content when the server that supplies fresh content is busy or down, giving the client something rather than nothing. From the user’s perspective, this may strongly improves your site or application’s uptime. +举一个实际使用缓存的例子,NGINX 和NGINX Plus 使用两条指令来[设置缓存机制][16]:proxy_cache_path 和 proxy_cache。你可以指定缓存的位置和大小、文件在缓存中保存的最长时间和其他一些参数。使用第三条(而且相当受欢迎的)指令 proxy_cache_use_stale,你甚至可以让缓存在提供新鲜内容的服务器忙碌或者宕机时,向客户端提供旧的内容,这样客户端就不会一无所得。从用户的角度来看,这可以很好地提高你的网站或者应用的可用时间。 NGINX Plus has [advanced caching features][17], including support for [cache purging][18] and visualization of cache status on a [dashboard][19] for live activity monitoring. +NGINX Plus 拥有[高级缓存特性][17],包括对[缓存清除][18]的支持,以及在[仪表盘][19]上可视化地显示缓存状态以便实时监控。 For more information on caching with NGINX, see the [reference documentation][20] and [NGINX Content Caching][21] in the NGINX Plus Admin Guide. +要想获得更多关于NGINX 缓存机制的信息,可以参阅NGINX Plus 管理员指南中的[参考文档][20]和 [NGINX Content Caching][21]。 -**Note**: Caching crosses organizational lines between people who develop applications, people who make capital investment decisions, and people who run networks in real time. 
Sophisticated caching strategies, like those alluded to here, are a good example of the value of a [DevOps perspective][22], in which application developer, architectural, and operations perspectives are merged to help meet goals for site functionality, response time, security, and business results, )such as completed transactions or sales. +**Note**: Caching crosses lines between people who develop applications, people who make capital investment decisions, and people who run networks in real time. Sophisticated caching strategies, like those alluded to here, are a good example of the value of a [DevOps perspective][22], in which application developer, architectural, and operations perspectives are merged to help meet goals for site functionality, response time, security, and business results, such as completed transactions or sales. +**注意**:缓存机制分布于应用开发者、投资决策者以及实际的系统运维人员之间。本文提到的一些复杂的缓存机制从[DevOps 的角度][22]来看很具有价值,即对集应用开发者、架构师以及运维操作人员的功能为一体的工程师来说可以满足他们对站点功能性、响应时间、安全性和商业结果,如完成的交易数。 ### Tip #4: 压缩数据 ### @@ -310,4 +316,3 @@ via: https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=to [51]:http://blog.loadimpact.com/blog/how-bad-performance-impacts-ecommerce-sales-part-i/ [52]:https://blog.kissmetrics.com/loading-time/?wide=1 [53]:https://econsultancy.com/blog/10936-site-speed-case-studies-tips-and-tools-for-improving-your-conversion-rate/ - From 96c9448df87ff6a989c6787855dd03c8f682f218 Mon Sep 17 00:00:00 2001 From: ezio Date: Mon, 23 Nov 2015 10:24:25 +0800 Subject: [PATCH 008/160] tip 4 done --- .../20151028 10 Tips for 10x Application Performance.md | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md index 143528a16f..195cb781c2 100644 --- a/sources/tech/20151028 10 Tips for 10x Application Performance.md +++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md @@ -112,19 +112,24 @@ 
For more information on caching with NGINX, see the [reference documentation][20] and [NGINX Content Caching][21] in the NGINX Plus Admin Guide.
 要想获得更多关于NGINX 的缓存机制的信息可以浏览NGINX Plus 管理员指南中的 [reference documentation][20] 和 [NGINX Content Caching][21] 。
**Note**: Caching crosses lines between people who develop applications, people who make capital investment decisions, and people who run networks in real time. Sophisticated caching strategies, like those alluded to here, are a good example of the value of a [DevOps perspective][22], in which application developer, architectural, and operations perspectives are merged to help meet goals for site functionality, response time, security, and business results, such as completed transactions or sales.
-**注意**:缓存机制分布于应用开发者、投资决策者以及实际的系统运维人员之间。本文提到的一些复杂的缓存机制从[DevOps 的角度][22]来看很具有价值,即对集应用开发者、架构师以及运维操作人员的功能为一体的工程师来说可以满足他们对站点功能性、响应时间、安全性和商业结果,如完成的交易数。
+**注意**:缓存机制横跨了开发应用的人员、进行投资决策的人员以及实际运维网络的人员之间的界线。本文提到的这类复杂的缓存策略很好的体现了[DevOps 角度][22]的价值,即把应用开发、架构设计和运维操作的视角融合在一起,以共同达成站点功能性、响应时间、安全性和商业结果(如完成的交易量)方面的目标。
### Tip #4: 压缩数据 ###
Compression is a huge potential performance accelerator. There are carefully engineered and highly effective compression standards for photos (JPEG and PNG), videos (MPEG-4), and music (MP3), among others. Each of these standards reduces file size by an order of magnitude or more.
+压缩是一个具有很大潜力的提高性能的加速方法。现在已经有一些针对照片(JPEG 和PNG)、视频(MPEG-4)和音乐(MP3)等各类文件精心设计的高效压缩标准。每一个标准都能把文件大小减少一个数量级甚至更多。
Text data – including HTML (which includes plain text and HTML tags), CSS, and code such as JavaScript – is often transmitted uncompressed. Compressing this data can have a disproportionate impact on perceived web application performance, especially for clients with slow or constrained mobile connections.
+文本数据 —— 包括HTML(包含了纯文本和HTML 标签)、CSS和代码,比如Javascript —— 经常是未经压缩就传输的。压缩这类数据对感知到的web 应用性能的影响会不成比例的大,特别是对处于慢速或受限的移动网络的客户端而言。
That’s because text data is often sufficient for a user to interact with a page, where multimedia data may be more supportive or decorative.
Smart content compression can reduce the bandwidth requirements of HTML, Javascript, CSS and other text-based content, typically by 30% or more, with a corresponding reduction in load time.
+这是因为文本数据经常是用户与网页交互时真正需要的数据,而多媒体数据可能更多的是起支持或者装饰的作用。聪明的内容压缩可以减少HTML、Javascript、CSS和其他文本内容对带宽的要求,通常可以减少30% 甚至更多的带宽占用,并相应的减少页面加载时间。
If you use SSL, compression reduces the amount of data that has to be SSL-encoded, which offsets some of the CPU time it takes to compress the data.
+如果你使用SSL,压缩可以减少需要进行SSL 加密的数据量,从而补偿了压缩数据所花费的部分CPU 时间。
Methods for compressing text data vary. For example, see the [section on HTTP/2][23] for a novel text compression scheme, adapted specifically for header data. As another example of text compression you can [turn on][24] GZIP compression in NGINX. After you [pre-compress text data][25] on your services, you can serve the compressed .gz version directly using the gzip_static directive.
+压缩文本数据的方法很多。举个例子,[HTTP/2 部分][23]就描述了一种新颖的文本压缩方案,专门针对头部数据做了调整。另一个例子是,你可以在NGINX 里[打开][24]GZIP 压缩。当你在你的服务里[预压缩文本数据][25]之后,你就可以直接使用gzip_static 指令来提供压缩过的.gz 版本。
### Tip #5: 优化 SSL/TLS ###
From 12593961ca71ca4db1ddc19f1bfc2a6a9316e91a Mon Sep 17 00:00:00 2001
From: ezio
Date: Mon, 23 Nov 2015 14:00:55 +0800
Subject: [PATCH 009/160] tip 5 done
---
 ...0151028 10 Tips for 10x Application Performance.md | 11 +++++++++++
 1 file changed, 11 insertions(+)
diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md
index 195cb781c2..5c5d232017 100644
--- a/sources/tech/20151028 10 Tips for 10x Application Performance.md
+++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md
@@ -134,23 +134,34 @@ Methods for compressing text data vary. For example, see the [section on HTTP/2]
### Tip #5: 优化 SSL/TLS ###
The Secure Sockets Layer ([SSL][26]) protocol and its successor, the Transport Layer Security (TLS) protocol, are being used on more and more websites.
SSL/TLS encrypts the data transported from origin servers to users to help improve site security. Part of what may be influencing this trend is that Google now uses the presence of SSL/TLS as a positive influence on search engine rankings.
+安全套接字层([SSL][26])协议和它的继承者——传输层安全(TLS)协议正在被越来越多的网站采用。SSL/TLS 对从原始服务器发往用户的数据进行加密,提高了网站的安全性。影响这个趋势的部分原因是,Google 现在把是否使用SSL/TLS 作为搜索引擎排名的一个正面影响因素。
Despite rising popularity, the performance hit involved in SSL/TLS is a sticking point for many sites. SSL/TLS slows website performance for two reasons:
+尽管SSL/TLS 越来越流行,但是使用加密对速度的影响也让很多网站望而却步。SSL/TLS 之所以让网站变的更慢,原因有二:
1. The initial handshake required to establish encryption keys whenever a new connection is opened. The way that browsers using HTTP/1.x establish multiple connections per server multiplies that hit.
+1. 任何一个新连接在建立时的初始握手都需要传递加密密钥。而采用HTTP/1.x 协议的浏览器会对每个服务器建立多个连接,从而成倍的放大了这个开销。
1. Ongoing overhead from encrypting data on the server and decrypting it on the client.
+2. 数据在传输过程中需要不断的在服务器加密、在客户端解密。
To encourage the use of SSL/TLS, the authors of HTTP/2 and SPDY (described in the [next section][27]) designed these protocols so that browsers need just one connection per browser session. This greatly reduces one of the two major sources of SSL overhead. However, even more can be done today to improve the performance of applications delivered over SSL/TLS.
+为了鼓励使用SSL/TLS,HTTP/2 和SPDY(在[下一节][27]会描述)的作者设计了新的协议,让浏览器对一个会话只需要一个连接。这大大减少了上述两个开销来源中的一个。然而现在可以用来提高应用程序使用SSL/TLS 传输数据的性能的方法不止这些。
The mechanism for optimizing SSL/TLS varies by web server. As an example, NGINX uses [OpenSSL][28], running on standard commodity hardware, to provide performance similar to dedicated hardware solutions. NGINX [SSL performance][29] is well-documented and minimizes the time and CPU penalty from performing SSL/TLS encryption and decryption.
+优化SSL/TLS 的机制因web 服务器而异。举个例子,NGINX 使用[OpenSSL][28],运行在普通的硬件上就能提供接近专用硬件方案的性能。NGINX 的[SSL 性能][29]有详细的文档,而且把进行SSL/TLS 加解密的时间和CPU 开销降低了很多。
In addition, see [this blog post][30] for details on ways to increase SSL/TLS performance. To summarize briefly, the techniques are:
+更进一步,这篇[博客][30]详细说明了提高SSL/TLS 性能的方法,可以总结为以下几点:
- **Session caching**. Uses the [ssl_session_cache][31] directive to cache the parameters used when securing each new connection with SSL/TLS.
- **会话缓存**。使用指令[ssl_session_cache][31]可以缓存每个新的SSL/TLS 连接使用的参数。
- **Session tickets or IDs**. These store information about specific SSL/TLS sessions in a ticket or ID so a connection can be reused smoothly, without new handshaking.
- **会话票据或者ID**。把特定SSL/TLS 会话的信息保存在一个票据或者ID 里,这样连接就可以被顺畅的复用,而不需要重新握手。
- **OCSP stapling**. Cuts handshaking time by caching SSL/TLS certificate information.
- **OCSP 装订(stapling)**。通过缓存SSL/TLS 证书信息来减少握手时间。
NGINX and NGINX Plus can be used for SSL/TLS termination – handling encryption and decryption for client traffic, while communicating with other servers in clear text. Use [these steps][32] to set up NGINX or NGINX Plus to handle SSL/TLS termination. Also, here are [specific steps][33] for NGINX Plus when used with servers that accept TCP connections.
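上面的几条优化手段,在NGINX 的配置里大致可以写成下面这个片段。这只是一个示意性的草案:其中的域名、证书路径和缓存大小都是假设值,实际取值请参考正文链接里的官方文档:

```nginx
server {
    listen 443 ssl;
    server_name example.com;                          # 假设的域名

    ssl_certificate     /etc/nginx/ssl/example.crt;   # 假设的证书路径
    ssl_certificate_key /etc/nginx/ssl/example.key;

    # 会话缓存:多个 worker 进程共享的 SSL/TLS 会话参数缓存
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    # 会话票据:客户端凭票据恢复会话,避免完整握手
    ssl_session_tickets on;

    # OCSP 装订:缓存证书状态信息,减少握手时间
    ssl_stapling        on;
    ssl_stapling_verify on;
    resolver            8.8.8.8;                      # OCSP 查询需要一个 DNS 解析器(示例值)
}
```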
+NGINX 和NGINX Plus 可以被用作SSL/TLS 终结——处理客户端流量的加密和解密,而同时和其他服务器进行明文通信。使用[这几步][32]来设置NGINX 或NGINX Plus 处理SSL/TLS 终结。同时,这里还有一些NGINX Plus 和接收TCP 连接的服务器一起使用时的[特有的步骤][33]。
### Tip #6: 使用 HTTP/2 或 SPDY ###
From c3edee6f88d293946b3c1631f53d38e50d7e4fe1 Mon Sep 17 00:00:00 2001
From: ezio
Date: Mon, 23 Nov 2015 14:05:08 +0800
Subject: [PATCH 010/160] clear
---
 ...10 Tips for 10x Application Performance.md | 56 +-------
 1 file changed, 1 insertion(+), 55 deletions(-)
diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md
index 5c5d232017..9fa6a98bc2 100644
--- a/sources/tech/20151028 10 Tips for 10x Application Performance.md
+++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md
@@ -2,165 +2,111 @@
 将程序性能提高十倍的10条建议
 ================================================================================
-Improving web application performance is more critical than ever. The share of economic activity that’s online is growing; more than 5% of the developed world’s economy is now on the Internet (see Resources below for statistics). And our always-on, hyper-connected modern world means that user expectations are higher than ever. If your site does not respond instantly, or if your app does not work without delay, users quickly move on to your competitors.
+
 提高web 应用的性能从来没有比现在更关键过。网络经济的比重一直在增长;发达国家的经济超过5% 的价值是在因特网上产生的(数据参见下面的资料)。我们的永远在线、超级连接的世界意味着用户的期望值也处于历史上的最高点。如果你的网站不能及时的响应,或者你的app 不能无延时的工作,用户会很快的投奔到你的竞争对手那里。
-For example, a study done by Amazon almost 10 years ago proved that, even then, a 100-millisecond decrease in page-loading time translated to a 1% increase in its revenue. Another recent study highlighted the fact that that more than half of site owners surveyed said they lost revenue or customers due to poor application performance.
举一个例子,一份亚马逊十年前做过的研究可以证明,甚至在那个时候,网页加载时间每减少100毫秒,收入就会增加1%。另一个最近的研究特别强调一个事实,即接受调查的网站拥有者中超过一半的人说他们会因为应用程序性能的问题流失收入或客户。
-How fast does a website need to be? For each second a page takes to load, about 4% of users abandon it. Top e-commerce sites offer a time to first interaction ranging from one to three seconds, which offers the highest conversion rate. It’s clear that the stakes for web application performance are high and likely to grow.
网站到底需要多快呢?对于页面加载,每增加1秒钟就有4%的用户放弃使用。顶级的电子商务站点可以做到1秒到3秒的首次交互时间,而这个区间的转化率最高。很明显web 应用性能的利害关系很大,而且很可能还会增长。
-Wanting to improve performance is easy, but actually seeing results is difficult. To help you on your journey, this blog post offers you ten tips to help you increase your website performance by as much as 10x. It’s the first in a series detailing how you can increase your application performance with the help of some well-tested optimization techniques, and with a little support from NGINX. This series also outlines potential improvements in security that you can gain along the way.
想要提高性能很简单,但是看到实际结果很难。为了帮助你,这篇blog 提供了10条最高可以把网站性能提升10倍的建议。这是系列文章的第一篇,详细介绍如何借助一些经过充分测试的优化技术和一点NGINX 的帮助来提高应用程序性能。这个系列也简述了在此过程中可以获得的潜在的安全性改进。
### Tip #1: 通过反向代理来提高性能和增加安全性 ###
-If your web application runs on a single machine, the solution to performance problems might seem obvious: just get a faster machine, with more processor, more RAM, a fast disk array, and so on. Then the new machine can run your WordPress server, Node.js application, Java application, etc., faster than before. (If your application accesses a database server, the solution might still seem simple: get two faster machines, and a faster connection between them.)
如果你的web 应用运行在单个机器上,那么解决性能问题的办法似乎显而易见:换一个更快的机器,有更好的处理器、更多的内存、更快的磁盘阵列,等等。然后新机器就可以更快的运行你的WordPress 服务器、Node.js 程序、Java 程序以及其它程序。(如果你的程序要访问数据库服务器,那么解决办法似乎还是很简单:添加两个更快的机器,以及在两台电脑之间使用一个更快的链路。)
-Trouble is, machine speed might not be the problem.
Web applications often run slowly because the computer is switching among different kinds of tasks: interacting with users on thousands of connections, accessing files from disk, and running application code, among others. The application server may be thrashing – running out of memory, swapping chunks of memory out to disk, and making many requests wait on a single task such as disk I/O.
问题是,机器速度可能并不是问题。web 程序运行慢经常是因为计算机一直在不同种类的任务之间切换:通过成千上万的连接和用户交互、从磁盘访问文件、运行应用代码,等等。应用服务器可能会抖动——内存耗尽、把内存数据交换到磁盘,以及让很多请求等待磁盘I/O 之类的单个任务。
-Instead of upgrading your hardware, you can take an entirely different approach: adding a reverse proxy server to offload some of these tasks. A [reverse proxy server][1] sits in front of the machine running the application and handles Internet traffic. Only the reverse proxy server is connected directly to the Internet; communication with the application servers is over a fast internal network.
你可以采取一个完全不同的方案来替代升级硬件:添加一个反向代理服务器来分担部分任务。[反向代理服务器][1] 位于运行应用的机器的前端,是用来处理网络流量的。只有反向代理服务器是直接连接到互联网的;和应用服务器的通讯都是通过一个快速的内部网络完成的。
-Using a reverse proxy server frees the application server from having to wait for users to interact with the web app and lets it concentrate on building pages for the reverse proxy server to send across the Internet. The application server, which no longer has to wait for client responses, can run at speeds close to those achieved in optimized benchmarks.
使用反向代理服务器可以将应用服务器从等待用户与web 程序交互中解放出来,这样应用服务器就可以专注于为反向代理服务器构建网页,让后者能够将其传输到互联网上。而应用服务器不再需要等待客户端的响应,可以以接近优化基准测试的速度运行。
-Adding a reverse proxy server also adds flexibility to your web server setup. For instance, if a server of a given type is overloaded, another server of the same type can easily be added; if a server is down, it can easily be replaced.
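作为一个直观的示意,下面是一个最简化的NGINX 反向代理配置草案。注意其中的域名、内网地址和端口都是假设值,并非文章给出的标准配置:

```nginx
# 反向代理:只有 NGINX 直接面向互联网,应用服务器通过内部网络访问
server {
    listen 80;
    server_name example.com;                  # 假设的域名

    location / {
        proxy_pass http://10.0.0.10:8080;     # 假设的内网应用服务器地址
        proxy_set_header Host $host;          # 把原始请求信息传给应用服务器
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```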
添加反向代理服务器还可以给你的web 服务器架构带来灵活性。比如,如果某一类型的服务器已经超载了,那么就可以轻松的添加另一个相同的服务器;如果某个机器宕机了,也可以很容易的被替代。
-Because of the flexibility it provides, a reverse proxy server is also a prerequisite for many other performance-boosting capabilities, such as:
因为反向代理带来的灵活性,所以反向代理也是很多其它性能加速功能的必要前提,比如:
- **负载均衡** (参见 [Tip #2][2]) – 负载均衡运行在反向代理服务器上,用来将流量均衡分配给一批应用服务器。有了合适的负载均衡,你就可以在完全不改变程序的前提下添加应用服务器。
-
-- A load balancer runs on a reverse proxy server to share traffic evenly across a number of application servers. With a load balancer in place, you can add application servers without changing your application at all.
-
- **缓存静态文件** (参见 [Tip #3][3]) – 直接读取的文件,比如图像或者代码,可以保存在反向代理服务器,然后直接发给客户端,这样就可以更快的提供资源、分担应用服务器的负载,让应用运行的更快。
-
-Files that are requested directly, such as image files or code files, can be stored on the reverse proxy server and sent directly to the client, which serves assets more quickly and offloads the application server, allowing the application to run faster.
-
- **网站安全** – 反向代理服务器可以配置成高安全性的,并通过监控来快速的发现和响应攻击,保证应用服务器处于被保护状态。
-The reverse proxy server can be configured for high security and monitored for fast recognition and response to attacks, keeping the application servers protected.
-NGINX software is specifically designed for use as a reverse proxy server, with the additional capabilities described above. NGINX uses an event-driven processing approach which is more efficient than traditional servers. NGINX Plus adds more advanced reverse proxy features, such as application [health checks][4], specialized request routing, advanced caching, and support.
NGINX 软件是一个专门设计的反向代理服务器,也包含了上述的多种功能。NGINX 使用事件驱动的方式处理请求,这会比传统的服务器更加有效率。NGINX Plus 添加了更多高级的反向代理特性,比如应用程序[健康检查][4]、专门的请求路由、高级缓存和相关支持。
![NGINX Worker Process helps increase application performance](https://www.nginx.com/wp-content/uploads/2015/10/Graph-11.png)
### Tip #2: 添加负载平衡 ###
Adding a [load balancer][5] is a relatively easy change which can create a dramatic improvement in the performance and security of your site.
Instead of making a core web server bigger and more powerful, you use a load balancer to distribute traffic across a number of servers. Even if an application is poorly written, or has problems with scaling, a load balancer can improve the user experience without any other changes.
添加一个[负载均衡服务器][5] 是一个相当简单的用来提高性能和网站安全性的方法。使用负载均衡将流量分配到多个服务器,以替代把核心web 服务器越做越大、越做越强的方案。即使程序写的不好,或者在扩容方面有困难,仅使用负载均衡服务器就可以很好的改善用户体验。
-A load balancer is, first, a reverse proxy server (see [Tip #1][6]) – it receives Internet traffic and forwards requests to another server. The trick is that the load balancer supports two or more application servers, using [a choice of algorithms][7] to split requests between servers. The simplest load balancing approach is round robin, with each new request sent to the next server on the list. Other methods include sending requests to the server with the fewest active connections. NGINX Plus has [capabilities][8] for continuing a given user session on the same server, which is called session persistence.
负载均衡服务器首先是一个反向代理服务器(参见[Tip #1][6])——它接收来自互联网的流量,然后转发请求给另一个服务器。诀窍是负载均衡服务器支持两个或多个应用服务器,使用[分配算法][7]将请求在服务器间分配。最简单的负载均衡方法是轮转法,只需要将新的请求发给列表里的下一个服务器。其它的方法包括将请求发给活动连接数最少的服务器。NGINX Plus 拥有将特定用户的会话保持在同一个服务器上的[能力][8],这个特性被称为会话保持。
-Load balancers can lead to strong improvements in performance because they prevent one server from being overloaded while other servers wait for traffic. They also make it easy to expand your web server capacity, as you can add relatively low-cost servers and be sure they’ll be put to full use.
负载均衡可以很好的提高性能,是因为它可以避免某个服务器已经过载而另一些服务器还在空等流量。它也可以让你简单的扩展web 服务器的容量,因为你可以添加多个价格相对便宜的服务器,并且保证它们被充分利用。
-Protocols that can be load balanced include HTTP, HTTPS, SPDY, HTTP/2, WebSocket, [FastCGI][9], SCGI, uwsgi, memcached, and several other application types, including TCP-based applications and other Layer 4 protocols. Analyze your web applications to determine which you use and where performance is lagging.
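正文提到的轮转法和最少连接法,在NGINX 里大致可以用下面的配置草案来表达(上游服务器地址为假设值):

```nginx
# upstream 定义一组应用服务器,负载均衡器把请求在它们之间分配
upstream app_servers {
    least_conn;                  # 发给活动连接数最少的服务器;删掉这行则使用默认的轮转法
    server 10.0.0.11:8080;       # 假设的应用服务器地址
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```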
可以进行负载均衡的协议包括HTTP、HTTPS、SPDY、HTTP/2、WebSocket、[FastCGI][9]、SCGI、uwsgi、memcached,以及几种其它的应用类型,包括基于TCP 的应用和其它第4层的协议。分析你的web 应用来决定你要使用哪些协议,以及哪些地方的性能不足。
The same server or servers used for load balancing can also handle several other tasks, such as SSL termination, support for HTTP/1/x and HTTP/2 use by clients, and caching for static files.
相同的服务器或服务器群可以被用来进行负载均衡,也可以用来处理其它的任务,如SSL 终结、支持客户端使用的HTTP/1.x 和 HTTP/2,以及缓存静态文件。
NGINX is often used for load balancing; to learn more, please see our [overview blog post][10], [configuration blog post][11], [ebook][12] and associated [webinar][13], and [documentation][14]. Our commercial version, [NGINX Plus][15], supports more specialized load balancing features such as load routing based on server response time and the ability to load balance on Microsoft’s NTLM protocol.
NGINX 经常被用来进行负载均衡;要想了解更多的情况,可以参阅我们的[overview blog post][10]、[configuration blog post][11]、[ebook][12] 以及相关的[webinar][13] 和 [documentation][14]。我们的商业版本 [NGINX Plus][15] 支持更多专门的负载均衡特性,如基于服务器响应时间的负载路由,以及在Microsoft NTLM 协议上进行负载均衡的能力。
### Tip #3: 缓存静态和动态的内容 ###
Caching improves web application performance by delivering content to clients faster. Caching can involve several strategies: preprocessing content for fast delivery when needed, storing content on faster devices, storing content closer to the client, or a combination.
缓存通过更快的向客户端传输内容来提高web 应用的性能。它可以采用以下几种策略:当需要的时候预处理要传输的内容、把数据保存到速度更快的设备、把数据存储在距离客户端更近的位置,或者把这几种方法结合起来使用。
There are two different types of caching to consider:
下面要考虑两种不同类型数据的缓存:
- **Caching of static content**. Infrequently changing files, such as image files (JPEG, PNG) and code files (CSS, JavaScript), can be stored on an edge server for fast retrieval from memory or disk.
- **静态内容缓存**。不经常变化的文件,比如图像(JPEG、PNG)和代码(CSS、JavaScript),可以保存在边缘服务器,这样就可以快速的从内存或磁盘上提取。
- **Caching of dynamic content**. Many Web applications generate fresh HTML for each page request.
By briefly caching one copy of the generated HTML for a brief period of time, you can dramatically reduce the total number of pages that have to be generated while still delivering content that’s fresh enough to meet your requirements.
- **动态内容缓存**。很多web 应用会针对每个网页请求生成全新的HTML 页面。把生成的HTML 在短时间内缓存一份拷贝,就可以极大的减少需要生成的页面总数,同时这些被缓存的内容还足够新鲜,可以满足你的要求。
-If a page gets ten views per second, for instance, and you cache it for one second, 90% of requests for the page will come from the cache. If you separately cache static content, even the freshly generated versions of the page might be made up largely of cached content.
举个例子,如果一个页面每秒会被浏览10次,你将它缓存1秒,那么90%对这个页面的请求都会直接从缓存提取。如果你单独缓存静态内容,那么即使是新生成的页面,其大部分内容也可能来自缓存。
-There are three main techniques for caching content generated by web applications:
下面是缓存web 应用生成的内容的三种主要技术:
- **Moving content closer to users**. Keeping a copy of content closer to the user reduces its transmission time.
- **缩短数据与用户的距离**。把一份内容的拷贝放的离用户更近点来减少传输时间。
- **Moving content to faster machines**. Content can be kept on a faster machine for faster retrieval.
- **提高内容服务器的速度**。内容可以保存在一个更快的服务器上来减少提取文件的时间。
- **Moving content off of overused machines**. Machines sometimes operate much slower than their benchmark performance on a particular task because they are busy with other tasks. Caching on a different machine improves performance for the cached resources and also for non-cached resources, because the host machine is less overloaded.
- **从过载服务器拿走数据**。机器经常会因为忙于完成某些其它的任务,而让某个任务的执行速度比基准测试结果要差。将数据缓存在另一个机器上可以同时提高缓存资源和非缓存资源的性能,而这只是因为主机不再被过度使用了。
-Caching for web applications can be implemented from the inside – the web application server – out. First, caching is used for dynamic content, to reduce the load on application servers. Then, caching is used for static content (including temporary copies of what would otherwise be dynamic content), further off-loading application servers.
And caching is then moved off of application servers and onto machines that are faster and/or closer to the user, unburdening the application servers, and reducing retrieval and transmission times.
对web 应用的缓存机制可以从web 应用服务器内部往外逐层实现。首先,缓存动态内容,用来减少应用服务器的负载。然后,缓存静态内容(包括本来是动态内容的临时拷贝),更进一步的分担应用服务器的负载。之后缓存还会从应用服务器转移到对用户而言更快、更近的机器上,从而减轻应用服务器的压力,减少提取数据和传输数据的时间。
-Improved caching can speed up applications tremendously. For many web pages, static data, such as large image files, makes up more than half the content. It might take several seconds to retrieve and transmit such data without caching, but only fractions of a second if the data is cached locally.
改进过的缓存方案可以极大的提高应用的速度。对于大多数网页来说,静态数据,比如大图像文件,构成了超过一半的内容。如果没有缓存,那么这可能会花费几秒的时间来提取和传输这类数据,但是采用了缓存之后不到1秒就可以完成。
-As an example of how caching is used in practice, NGINX and NGINX Plus use two directives to [set up caching][16]: proxy_cache_path and proxy_cache. You specify the cache location and size, the maximum time files are kept in the cache, and other parameters. Using a third (and quite popular) directive, proxy_cache_use_stale, you can even direct the cache to supply stale content when the server that supplies fresh content is busy or down, giving the client something rather than nothing. From the user’s perspective, this may strongly improves your site or application’s uptime.
举一个在实际中缓存是如何使用的例子, NGINX 和NGINX Plus使用了两条指令来[设置缓存机制][16]:proxy_cache_path 和 proxy_cache。你可以指定缓存的位置和大小,文件在缓存中保存的最长时间和其他一些参数。使用第三条(而且是相当受欢迎的一条)指令,proxy_cache_use_stale,当提供新鲜内容的服务器繁忙或者宕机时,你甚至可以让缓存提供旧的内容,这样客户端就不会一无所得。从用户的角度来看,这可以很好的提高你的网站或者应用的在线时间。
-NGINX Plus has [advanced caching features][17], including support for [cache purging][18] and visualization of cache status on a [dashboard][19] for live activity monitoring.
NGINX Plus 拥有[高级缓存特性][17],包括对[缓存清除][18]的支持和在[仪表盘][19]上显示缓存状态信息。
-For more information on caching with NGINX, see the [reference documentation][20] and [NGINX Content Caching][21] in the NGINX Plus Admin Guide.
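正文里的proxy_cache_path 和 proxy_cache 两条指令,大致用法如下面的草案所示。其中缓存路径、区域名和各个时间值都是示意性的假设值:

```nginx
# 定义缓存的存放位置、索引用的共享内存区和过期策略(均为示例值)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache app_cache;             # 启用上面定义的缓存区
        proxy_cache_valid 200 1s;          # 对 200 响应缓存 1 秒,对应正文的动态内容例子
        # 上游服务器出错、超时或不可用时,继续提供旧的缓存内容
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503;
        proxy_pass http://10.0.0.10:8080;  # 假设的应用服务器
    }
}
```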
要想获得更多关于NGINX 的缓存机制的信息可以浏览NGINX Plus 管理员指南中的 [reference documentation][20] 和 [NGINX Content Caching][21] 。
**注意**:缓存机制横跨了开发应用的人员、进行投资决策的人员以及实际运维网络的人员之间的界线。本文提到的这类复杂的缓存策略很好的体现了[DevOps 角度][22]的价值,即把应用开发、架构设计和运维操作的视角融合在一起,以共同达成站点功能性、响应时间、安全性和商业结果(如完成的交易量)方面的目标。
### Tip #4: 压缩数据 ###
压缩是一个具有很大潜力的提高性能的加速方法。现在已经有一些针对照片(JPEG 和PNG)、视频(MPEG-4)和音乐(MP3)等各类文件精心设计的高效压缩标准。每一个标准都能把文件大小减少一个数量级甚至更多。
文本数据 —— 包括HTML(包含了纯文本和HTML 标签)、CSS和代码,比如Javascript —— 经常是未经压缩就传输的。压缩这类数据对感知到的web 应用性能的影响会不成比例的大,特别是对处于慢速或受限的移动网络的客户端而言。
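正文所说的文本压缩,在NGINX 里大致可以这样启用。下面是一个示意性的片段,具体的类型列表和压缩级别需要按站点情况调整:

```nginx
# 对文本类响应启用 GZIP 压缩
gzip on;
gzip_comp_level 5;          # 压缩级别:在 CPU 开销和压缩率之间折中
gzip_min_length 1024;       # 太小的响应不值得压缩
gzip_types text/plain text/css application/javascript application/json;

# 如果已经预先生成了 .gz 文件,可以直接提供它们
gzip_static on;
```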
这是因为文本数据经常是用户与网页交互时真正需要的数据,而多媒体数据可能更多的是起支持或者装饰的作用。聪明的内容压缩可以减少HTML、Javascript、CSS和其他文本内容对带宽的要求,通常可以减少30% 甚至更多的带宽占用,并相应的减少页面加载时间。
如果你使用SSL,压缩可以减少需要进行SSL 加密的数据量,从而补偿了压缩数据所花费的部分CPU 时间。
压缩文本数据的方法很多。举个例子,[HTTP/2 部分][23]就描述了一种新颖的文本压缩方案,专门针对头部数据做了调整。另一个例子是,你可以在NGINX 里[打开][24]GZIP 压缩。当你在你的服务里[预压缩文本数据][25]之后,你就可以直接使用gzip_static 指令来提供压缩过的.gz 版本。
### Tip #5: 优化 SSL/TLS ###
安全套接字层([SSL][26])协议和它的继承者——传输层安全(TLS)协议正在被越来越多的网站采用。SSL/TLS 对从原始服务器发往用户的数据进行加密,提高了网站的安全性。影响这个趋势的部分原因是,Google 现在把是否使用SSL/TLS 作为搜索引擎排名的一个正面影响因素。
尽管SSL/TLS 越来越流行,但是使用加密对速度的影响也让很多网站望而却步。SSL/TLS 之所以让网站变的更慢,原因有二:
1. 任何一个新连接在建立时的初始握手都需要传递加密密钥。而采用HTTP/1.x 协议的浏览器会对每个服务器建立多个连接,从而成倍的放大了这个开销。
2. 数据在传输过程中需要不断的在服务器加密、在客户端解密。
为了鼓励使用SSL/TLS,HTTP/2 和SPDY(在[下一节][27]会描述)的作者设计了新的协议,让浏览器对一个会话只需要一个连接。这大大减少了上述两个开销来源中的一个。然而现在可以用来提高应用程序使用SSL/TLS 传输数据的性能的方法不止这些。
优化SSL/TLS 的机制因web 服务器而异。举个例子,NGINX 使用[OpenSSL][28],运行在普通的硬件上就能提供接近专用硬件方案的性能。NGINX 的[SSL 性能][29]有详细的文档,而且把进行SSL/TLS 加解密的时间和CPU 开销降低了很多。
更进一步,这篇[博客][30]详细说明了提高SSL/TLS 性能的方法,可以总结为以下几点:
- **会话缓存**。使用指令[ssl_session_cache][31]可以缓存每个新的SSL/TLS 连接使用的参数。
- **会话票据或者ID**。把特定SSL/TLS 会话的信息保存在一个票据或者ID 里,这样连接就可以被顺畅的复用,而不需要重新握手。
- **OCSP 装订(stapling)**。通过缓存SSL/TLS 证书信息来减少握手时间。
NGINX and NGINX Plus can be used for SSL/TLS termination – handling encryption and decryption for client traffic, while communicating with other servers in clear text.
Use [these steps][32] to set up NGINX or NGINX Plus to handle SSL/TLS termination. Also, here are [specific steps][33] for NGINX Plus when used with servers that accept TCP connections.
NGINX 和NGINX Plus 可以被用作SSL/TLS 终结——处理客户端流量的加密和解密,而同时和其他服务器进行明文通信。使用[这几步][32]来设置NGINX 或NGINX Plus 处理SSL/TLS 终结。同时,这里还有一些NGINX Plus 和接收TCP 连接的服务器一起使用时的[特有的步骤][33]。
### Tip #6: 使用 HTTP/2 或 SPDY ###
From 6d681a315bc337828050f382c2b9bca847127840 Mon Sep 17 00:00:00 2001
From: ezio
Date: Mon, 23 Nov 2015 14:19:08 +0800
Subject: [PATCH 011/160] tip 7 done(6 not done)
---
 .../tech/20151028 10 Tips for 10x Application Performance.md | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md
index 9fa6a98bc2..1bbc4870c8 100644
--- a/sources/tech/20151028 10 Tips for 10x Application Performance.md
+++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md
@@ -132,12 +132,16 @@
### Tip #7: 升级软件版本 ###
One simple way to boost application performance is to select components for your software stack based on their reputation for stability and performance. In addition, because developers of high-quality components are likely to pursue performance enhancements and fix bugs over time, it pays to use the latest stable version of software. New releases receive more attention from developers and the user community. Newer builds also take advantage of new compiler optimizations, including tuning for new hardware.
+一个提高应用性能的简单办法是根据软件的稳定性和性能的评价来选择你的软件栈。进一步说,因为高质量组件的开发者更愿意追求更高的性能和解决bug,所以值得使用最新的稳定版本的软件。新版本往往更受开发者和用户社区的关注。更新的版本也会利用到新的编译器优化,包括对新硬件的调优。
Stable new releases are typically more compatible and higher-performing than older releases. It’s also easier to keep on top of tuning optimizations, bug fixes, and security alerts when you stay on top of software updates.
+稳定的新版本通常比旧版本具有更好的兼容性和更高的性能。持续进行软件更新,可以让你非常简单的保持软件处于最佳的优化状态,并及时得到bug 修复和安全警报。
Staying with older software can also prevent you from taking advantage of new capabilities. For example, HTTP/2, described above, currently requires OpenSSL 1.0.1. Starting in mid-2016, HTTP/2 will require OpenSSL 1.0.2, which was released in January 2015.
+一直使用旧版软件也会阻止你利用新的特性。比如上面说到的HTTP/2,目前要求OpenSSL 1.0.1。从2016 年中期开始,HTTP/2 将会要求OpenSSL 1.0.2,而它是在2015年1月才发布的。
NGINX users can start by moving to the [[latest version of the NGINX open source software][38] or [NGINX Plus][39]; they include new capabilities such as socket sharding and thread pools (see below), and both are constantly being tuned for performance. Then look at the software deeper in your stack and move to the most recent version wherever you can.
+NGINX 用户可以开始迁移到[NGINX 最新的开源软件][38] 或者[NGINX Plus][39];它们都包含了最新的功能,如socket 分片和线程池(见下文),而且两者都在持续的为性能而调优。然后深入检查你的软件栈,把它们升级到你能升级到的最新版本吧。
### Tip #8: linux 系统性能调优 ###
From 165237a1651783e78b81bc5b533b352a3d5f3ce9 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Mon, 23 Nov 2015 22:21:09 +0800
Subject: [PATCH 012/160] tip 6 done
---
 .../20151028 10 Tips for 10x Application Performance.md | 8 ++++++++
 1 file changed, 8 insertions(+)
diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md
index 1bbc4870c8..c570706ef4 100644
--- a/sources/tech/20151028 10 Tips for 10x Application Performance.md
+++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md
@@ -112,22 +112,30 @@
### Tip #6: 使用 HTTP/2 或 SPDY ###
For sites that already use SSL/TLS, HTTP/2 and SPDY are very likely to improve performance, because the single connection requires just one handshake. For sites that don’t yet use SSL/TLS, HTTP/2 and SPDY makes a move to SSL/TLS (which normally slows performance) a wash from a responsiveness point of view.
+对于已经使用了SSL/TLS 的站点,HTTP/2 和SPDY 非常有可能提高性能,因为单个连接只需要一次握手。而对于还没有使用SSL/TLS 的站点来说,从响应速度的角度看,HTTP/2 和SPDY 让迁移到SSL/TLS(通常会降低性能)变得得失相抵。
Google introduced SPDY in 2012 as a way to achieve faster performance on top of HTTP/1.x. HTTP/2 is the recently approved IETF standard based on SPDY. SPDY is broadly supported, but is soon to be deprecated, replaced by HTTP/2.
+Google 在2012年推出了SPDY,作为在HTTP/1.x 之上实现更快性能的一种方法。HTTP/2 是最近才通过的、基于SPDY 的IETF 标准。SPDY 已经被广泛的支持了,但是很快就会被弃用,由HTTP/2 替代。
The key feature of SPDY and HTTP/2 is the use of a single connection rather than multiple connections. The single connection is multiplexed, so it can carry pieces of multiple requests and responses at the same time.
+SPDY 和HTTP/2 的关键特性是用单连接来替代多路连接。单个连接是被复用的,所以它可以同时携带多个请求和响应的分片。
By getting the most out of one connection, these protocols avoid the overhead of setting up and managing multiple connections, as required by the way browsers implement HTTP/1.x. The use of a single connection is especially helpful with SSL, because it minimizes the time-consuming handshaking that SSL/TLS needs to set up a secure connection.
+通过最大限度的利用一个连接,这些协议避免了像浏览器实现HTTP/1.x 时那样建立和管理多个连接的开销。单连接对SSL 特别有效,这是因为它可以最小化SSL/TLS 建立安全连接时耗时的握手过程。
The SPDY protocol required the use of SSL/TLS; HTTP/2 does not officially require it, but all browsers so far that support HTTP/2 use it only if SSL/TLS is enabled. That is, a browser that supports HTTP/2 uses it only if the website is using SSL and its server accepts HTTP/2 traffic. Otherwise, the browser communicates over HTTP/1.x.
+SPDY 协议需要使用SSL/TLS,而HTTP/2 官方并不要求,但是目前所有支持HTTP/2 的浏览器只有在启用了SSL/TLS 的情况下才会使用它。这就意味着支持HTTP/2 的浏览器只有在网站使用了SSL 并且服务器接收HTTP/2 流量的情况下才会启用HTTP/2。否则的话浏览器就会使用HTTP/1.x 协议。
When you implement SPDY or HTTP/2, you no longer need typical HTTP performance optimizations such as domain sharding, resource merging, and image spriting. These changes make your code and deployments simpler and easier to manage. To learn more about the changes that HTTP/2 is bringing about, read our [white paper][34].
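在支持HTTP/2 的NGINX 版本(1.9.5 及之后的开源版本)里,启用它大致只需要在监听指令上加一个参数。下面是一个示意片段,域名和证书路径为假设值:

```nginx
server {
    listen 443 ssl http2;                             # 在 SSL/TLS 之上启用 HTTP/2
    server_name example.com;                          # 假设的域名

    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
}
```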
+当你实现 SPDY 或者 HTTP/2 时,你不再需要那些常规的 HTTP 性能优化方案,比如域名分片(domain sharding)、资源合并,以及图像精灵(image spriting)。这些改变可以让你的代码和部署变得更简单、更易于管理。要了解 HTTP/2 带来的这些变化,可以浏览我们的[白皮书][34]。

![NGINX Supports SPDY and HTTP/2 for increased web application performance](https://www.nginx.com/wp-content/uploads/2015/10/http2-27.png)

As an example of support for these protocols, NGINX has supported SPDY from early on, and [most sites][35] that use SPDY today run on NGINX. NGINX is also [pioneering support][36] for HTTP/2, with [support][37] for HTTP/2 in NGINX open source and NGINX Plus as of September 2015.

+作为支持这些协议的一个例子,NGINX 从早期就支持了 SPDY,而且如今[大部分使用 SPDY 协议的网站][35]运行的都是 NGINX。NGINX 也[率先][36]支持了 HTTP/2:从 2015 年 9 月起,开源 NGINX 和 NGINX Plus 就[支持][37] HTTP/2 了。

Over time, we at NGINX expect most sites to fully enable SSL and to move to HTTP/2. This will lead to increased security and, as new optimizations are found and implemented, simpler code that performs better.

+随着时间推移,我们 NGINX 希望大多数站点都能完全启用 SSL 并迁移到 HTTP/2。这将会提高安全性;同时,随着新的优化手段被发现和实现,更简单的代码也会表现得更好。

### Tip #7: 升级软件版本 ###

From 3fd404a802f7983527f48fddfabbe8a07f003627 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Mon, 23 Nov 2015 22:39:09 +0800
Subject: [PATCH 013/160] tip 8 done

---
 .../20151028 10 Tips for 10x Application Performance.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md
index c570706ef4..7fc96c65ea 100644
--- a/sources/tech/20151028 10 Tips for 10x Application Performance.md
+++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md
@@ -154,14 +154,20 @@ NGINX 用户可以开始迁移到[NGINX 最新的开源软件][38] 或者[NGINX

 ### Tip #8: linux 系统性能调优 ###

 Linux is the underlying operating system for most web server implementations today, and as the foundation of your infrastructure, Linux represents a significant opportunity to improve performance. By default, many Linux systems are conservatively tuned to use few resources and to match a typical desktop workload.
This means that web application use cases require at least some degree of tuning for maximum performance.

+Linux 是如今大多数 web 服务器所使用的底层操作系统,而且作为你基础设施的基石,Linux 代表着一个提高性能的重要机会。默认情况下,很多 Linux 系统都被保守地设置为使用很少的资源,以匹配典型的桌面应用负载。这就意味着 web 应用场景至少需要一定程度的调优才能达到最佳性能。

Linux optimizations are web server-specific. Using NGINX as an example, here are a few highlights of changes you can consider to speed up Linux:

+Linux 的优化要针对具体的 web 服务器。以 NGINX 为例,这里列出几处你可以考虑用来加速 Linux 的重点改动:

- **Backlog queue**. If you have connections that appear to be stalling, consider increasing net.core.somaxconn, the maximum number of connections that can be queued awaiting attention from NGINX. You will see error messages if the existing connection limit is too small, and you can gradually increase this parameter until the error messages stop.

+- **缓冲队列**。如果你发现有连接被挂起,那么你应该考虑增大 net.core.somaxconn 的值,它代表了等待 NGINX 处理的连接队列的最大长度。如果现有的连接限制太小,你将会看到错误信息,这时可以逐渐增大这个参数,直到错误信息不再出现。

- **File descriptors**. NGINX uses up to two file descriptors for each connection. If your system is serving a lot of connections, you might need to increase sys.fs.file_max, the system-wide limit for file descriptors, and nofile, the user file descriptor limit, to support the increased load.

+- **文件描述符**。NGINX 对每个连接最多使用两个文件描述符。如果你的系统要服务很多连接,你可能就需要提高 sys.fs.file_max(系统级的文件描述符数量限制)和 nofile(用户级的文件描述符数量限制),这样才能支持增加的负载。

- **Ephemeral ports**. When used as a proxy, NGINX creates temporary (“ephemeral”) ports for each upstream server. You can increase the range of port values, set by net.ipv4.ip_local_port_range, to increase the number of ports available. You can also reduce the timeout before an inactive port gets reused with the net.ipv4.tcp_fin_timeout setting, allowing for faster turnover.
+- **短暂端口**。当被用作代理时,NGINX 会为每个上游服务器创建临时(“短暂”)端口。你可以增大由 net.ipv4.ip_local_port_range 设置的端口取值范围,来增加可用的端口数量。你也可以通过 net.ipv4.tcp_fin_timeout 设置,减少非活动端口被重新使用前的超时时间,从而实现更快的周转。

For NGINX, check out the [NGINX performance tuning guides][40] to learn how to optimize your Linux system so that it can cope with large volumes of network traffic without breaking a sweat!

+对于NGINX 来说,可以查阅[NGINX 性能调优指南][40]来学习如何优化你的 Linux 系统,让它能轻松应对大规模的网络流量。

### Tip #9: web 服务器性能调优 ###

From 3d653e9bef89192b05da09cc3ea711430974c7d4 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Mon, 23 Nov 2015 22:52:31 +0800
Subject: [PATCH 014/160] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E7=B4=AF=E4=BA=86?=
 =?UTF-8?q?=EF=BC=8C=E4=BC=91=E6=81=AF=E4=B8=80=E4=BC=9A?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...20151028 10 Tips for 10x Application Performance.md | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md
index 7fc96c65ea..62891f33d9 100644
--- a/sources/tech/20151028 10 Tips for 10x Application Performance.md
+++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md
@@ -172,19 +172,29 @@ For NGINX, check out the [NGINX performance tuning guides][40] to learn how to o

 ### Tip #9: web 服务器性能调优 ###

 Whatever web server you use, you need to tune it for web application performance. The following recommendations apply generally to any web server, but specific settings are given for NGINX. Key optimizations include:

+无论你使用哪种 web 服务器,你都需要对它进行调优来提高 web 应用的性能。下面的推荐手段普遍适用于任何 web 服务器,但也给出了一些针对 NGINX 的具体设置。关键的优化手段包括:

- **Access logging**. Instead of writing a log entry for every request to disk immediately, you can buffer entries in memory and write them to disk as a group. For NGINX, add the *buffer=size* parameter to the *access_log* directive to write log entries to disk when the memory buffer fills up.
If you add the **flush=time** parameter, the buffer contents are also be written to disk after the specified amount of time.

+- **访问日志**。不要把每个请求的日志都立即写到磁盘,你可以先在内存中缓存日志条目,然后成批写回磁盘。对于 NGINX 来说,给 *access_log* 指令添加参数 *buffer=size*,可以让系统在内存缓冲区满了之后才把日志条目写到磁盘。如果你添加了参数 **flush=time**,那么缓冲区的内容也会每隔指定的时间写回磁盘。

- **Buffering**. Buffering holds part of a response in memory until the buffer fills, which can make communications with the client more efficient. Responses that don’t fit in memory are written to disk, which can slow performance. When NGINX buffering is [on][42], you use the *proxy_buffer_size* and *proxy_buffers* directives to manage it.

+- **缓冲**。缓冲会把响应的一部分保存在内存中,直到缓冲区被填满,这可以让与客户端的通信更加高效。放不进内存的响应会被写到磁盘,而这会降低性能。当 NGINX [启用][42]了缓冲机制后,你可以使用指令 *proxy_buffer_size* 和 *proxy_buffers* 来管理它。

- **Client keepalives**. Keepalive connections reduce overhead, especially when SSL/TLS is in use. For NGINX, you can increase the maximum number of *keepalive_requests* a client can make over a given connection from the default of 100, and you can increase the *keepalive_timeout* to allow the keepalive connection to stay open longer, resulting in faster subsequent requests.

+- **客户端保活**。保活连接可以减少开销,尤其是在使用 SSL/TLS 时。对于 NGINX 来说,你可以增大客户端在一个给定连接上可以发起的 *keepalive_requests* 的最大次数(默认值为 100),也可以增大 *keepalive_timeout* 的值,让保活连接保持打开的时间更长,从而让后续请求更快。

- **Upstream keepalives**. Upstream connections – connections to application servers, database servers, and so on – benefit from keepalive connections as well. For upstream connections, you can increase *keepalive*, the number of idle keepalive connections that remain open for each worker process. This allows for increased connection reuse, cutting down on the need to open brand new connections. For more information about keepalives, refer to this [blog post][41].

+- **上游保活**。上游连接——即到应用服务器、数据库服务器等的连接——同样受益于保活连接。对于上游连接,你可以增大 *keepalive* 的值,即每个工作进程保持打开的空闲保活连接的数量。这可以提高连接的复用率,减少打开全新连接的需要。更多关于保活连接的信息可以参考这篇[博客][41]。

- **Limits**. Limiting the resources that clients use can improve performance and security. For NGINX,the *limit_conn* and *limit_conn_zone* directives restrict the number of connections from a given source, while *limit_rate* constrains bandwidth.
These settings can stop a legitimate user from “hogging” resources and also help prevent against attacks. The *limit_req* and *limit_req_zone* directives limit client requests. For connections to upstream servers, use the max_conns parameter to the server directive in an upstream configuration block. This limits connections to an upstream server, preventing overloading. The associated queue directive creates a queue that holds a specified number of requests for a specified length of time after the *max_conns* limit is reached.

+- **限制**。限制客户端使用的资源可以提高性能和安全性。对于 NGINX 来说,指令 *limit_conn* 和 *limit_conn_zone* 可以限制来自给定来源的连接数量,而 *limit_rate* 可以限制带宽。这些设置可以防止合法用户“独占”资源,也有助于防御攻击。指令 *limit_req* 和 *limit_req_zone* 可以限制客户端请求。对于到上游服务器的连接,可以在 upstream 配置块的 server 指令中使用 max_conns 参数,它可以限制到一台上游服务器的连接数,防止过载。相关的 queue 指令会创建一个队列,在达到 *max_conns* 限制之后,把指定数量的请求保留指定的时长。

- **Worker processes**. Worker processes are responsible for the processing of requests. NGINX employs an event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes. The recommendation is to set the value of *worker_processes* to one per CPU. The maximum number of worker_connections (512 by default) can safely be raised on most systems if needed; experiment to find the value that works best for your system.

+- **工作进程**。工作进程负责处理请求。NGINX 采用基于事件的模型和依赖于操作系统的机制,在工作进程之间高效地分发请求。建议把 *worker_processes* 的值设置为每个 CPU 一个。大多数系统在需要时都可以安全地提高 worker_connections 的最大数量(默认为 512);请多做实验,找到最适合你的系统的值。

- **Socket sharding**. Typically, a single socket listener distributes new connections to all worker processes. Socket sharding creates a socket listener for each worker process, with the kernel assigning connections to socket listeners as they become available. This can reduce lock contention and improve performance on multicore systems. To enable [socket sharding][43], include the reuseport parameter on the listen directive.

+- **套接字分片**。通常,一个套接字监听器会把新连接分发给所有工作进程。套接字分片则为每个工作进程创建一个套接字监听器,当套接字监听器可用时,由内核把连接分配给它们。这可以减少锁竞争,并提升多核系统上的性能。要启用[套接字分片][43],在 listen 指令中加上 reuseport 参数即可。

- **Thread pools**. Any computer process can be held up by a single, slow operation. For web server software, disk access can hold up many faster operations, such as calculating or copying information in memory. When a thread pool is used, the slow operation is assigned to a separate set of tasks, while the main processing loop keeps running faster operations. When the disk operation completes, the results go back into the main processing loop.
In NGINX, two operations – the read() system call and sendfile() – are offloaded to [thread pools][44].

+- **线程池**。任何计算机进程都可能被一个缓慢的操作拖住。对 web 服务器软件来说,磁盘访问会拖累许多更快的操作,比如在内存中计算或者复制信息。使用线程池时,缓慢的操作会被分配给一组单独的任务,而主处理循环继续运行较快的操作。磁盘操作完成后,结果再返回到主处理循环中。在 NGINX 中,read() 系统调用和 sendfile() 这两个操作就被转移到了[线程池][44]中。

![Thread pools help increase application performance by assigning a slow operation to a separate set of tasks](https://www.nginx.com/wp-content/uploads/2015/10/Graph-17.png)

**Tip**. When changing settings for any operating system or supporting service, change a single setting at a time, then test performance. If the change causes problems, or if it doesn’t make your site run faster, change it back.

+**技巧**。在更改任何操作系统或支撑服务的设置时,一次只改一项设置,然后测试性能。如果改动引起了问题,或者没有让你的网站变得更快,就把它改回去。

See this [blog post][45] for more details on tuning NGINX.

From 5d3504c14e8be6954950bd29595426989dcf32a4 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Mon, 23 Nov 2015 22:59:54 +0800
Subject: [PATCH 015/160] =?UTF-8?q?20151123=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

多一些硬文。

---
 ...123 Data Structures in the Linux Kernel.md | 201 ++++++++++++++++++
 1 file changed, 201 insertions(+)
 create mode 100644 sources/tech/20151123 Data Structures in the Linux Kernel.md

diff --git a/sources/tech/20151123 Data Structures in the Linux Kernel.md b/sources/tech/20151123 Data Structures in the Linux Kernel.md
new file mode 100644
index 0000000000..187b3ce9cd
--- /dev/null
+++ b/sources/tech/20151123 Data Structures in the Linux Kernel.md
@@ -0,0 +1,201 @@
+Data Structures in the Linux Kernel
+================================================================================
+
+Radix tree
+--------------------------------------------------------------------------------
+
+As you already know, the Linux kernel provides many different libraries and functions which implement different data structures and algorithms. In this part we will consider one of these data structures - [Radix tree](http://en.wikipedia.org/wiki/Radix_tree).
There are two files which are related to `radix tree` implementation and API in the linux kernel:

+* [include/linux/radix-tree.h](https://github.com/torvalds/linux/blob/master/include/linux/radix-tree.h)
+* [lib/radix-tree.c](https://github.com/torvalds/linux/blob/master/lib/radix-tree.c)
+
+Let's talk about what a `radix tree` is. A radix tree is a `compressed trie`, where a [trie](http://en.wikipedia.org/wiki/Trie) is a data structure which implements an interface of an associative array and allows storing values as `key-value` pairs. The keys are usually strings, but any data type can be used. A trie is different from an `n-tree` because of its nodes. Nodes of a trie do not store keys; instead, a node of a trie stores single character labels. The key which is related to a given node is derived by traversing from the root of the tree to this node. For example:
+
+
+```
+               +-----------+
+               |           |
+               |    " "    |
+               |           |
+        +------+-----------+------+
+        |                         |
+        |                         |
+   +----v------+            +-----v-----+
+   |           |            |           |
+   |    g      |            |     c     |
+   |           |            |           |
+   +-----------+            +-----------+
+        |                         |
+        |                         |
+   +----v------+            +-----v-----+
+   |           |            |           |
+   |    o      |            |     a     |
+   |           |            |           |
+   +-----------+            +-----------+
+                                  |
+                                  |
+                            +-----v-----+
+                            |           |
+                            |     t     |
+                            |           |
+                            +-----------+
+```
+
+So in this example, we can see the `trie` with the keys `go` and `cat`. The compressed trie or `radix tree` differs from a `trie` in that all intermediate nodes which have only one child are removed.
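As an aside, the compression step just described can be sketched in userspace. The following Python snippet is an illustration only, not the kernel implementation (which is C and lives in the files listed above): it builds a plain trie for the keys `go` and `cat`, then merges every single-child chain into one labeled edge, yielding the compressed (radix) form.

```python
# Illustrative userspace sketch: plain trie vs. compressed (radix) trie.
# Nodes are plain dicts; the "$" key marks end-of-key.

def build_trie(keys):
    root = {}
    for key in keys:
        node = root
        for ch in key:
            node = node.setdefault(ch, {})  # one single-character node per label
        node["$"] = True                    # end-of-key marker
    return root

def compress(node):
    # Merge every chain of single-child nodes into one multi-character edge.
    out = {}
    for label, child in node.items():
        if label == "$":
            out["$"] = True
            continue
        while len(child) == 1 and "$" not in child:
            ((nxt, child),) = child.items()
            label += nxt
        out[label] = compress(child)
    return out

trie = build_trie(["go", "cat"])
radix = compress(trie)
print(sorted(k for k in radix if k != "$"))   # ['cat', 'go']
```

After compression the root has the two edges `cat` and `go` instead of the per-character chains `c → a → t` and `g → o` shown in the diagram.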
+
+The radix tree in the Linux kernel is a data structure which maps values to integer keys. It is represented by the following structures from the file [include/linux/radix-tree.h](https://github.com/torvalds/linux/blob/master/include/linux/radix-tree.h):
+
+```C
+struct radix_tree_root {
+	unsigned int		height;
+	gfp_t			gfp_mask;
+	struct radix_tree_node	__rcu *rnode;
+};
+```
+
+This structure presents the root of a radix tree and contains three fields:
+
+* `height` - height of the tree;
+* `gfp_mask` - tells how memory allocations will be performed;
+* `rnode` - pointer to the child node.
+
+The first field we will discuss is `gfp_mask`:
+
+Low-level kernel memory allocation functions take a set of flags as `gfp_mask`, which describes how that allocation is to be performed. These `GFP_` flags which control the allocation process can have the following values:
+
+* `GFP_NOIO` - can sleep and wait for memory;
+* `__GFP_HIGHMEM` - high memory can be used;
+* `GFP_ATOMIC` - allocation process is high-priority and can't sleep;
+
+etc.
+
+The next field is `rnode`:
+
+```C
+struct radix_tree_node {
+	unsigned int	path;
+	unsigned int	count;
+	union {
+		struct {
+			struct radix_tree_node *parent;
+			void *private_data;
+		};
+		struct rcu_head	rcu_head;
+	};
+	/* For tree user */
+	struct list_head private_list;
+	void __rcu	*slots[RADIX_TREE_MAP_SIZE];
+	unsigned long	tags[RADIX_TREE_MAX_TAGS][RADIX_TREE_TAG_LONGS];
+};
+```
+
+This structure contains information about the offset in a parent and height from the bottom, count of the child nodes and fields for accessing and freeing a node.
These fields are described below:

+* `path` - offset in parent & height from the bottom;
+* `count` - count of the child nodes;
+* `parent` - pointer to the parent node;
+* `private_data` - used by the user of a tree;
+* `rcu_head` - used for freeing a node;
+* `private_list` - used by the user of a tree;
+
+The two last fields of the `radix_tree_node` - `tags` and `slots` - are important and interesting. Every node can contain a set of slots which store pointers to the data. Empty slots in the linux kernel radix tree implementation store `NULL`. Radix trees in the linux kernel also support tags, which are associated with the `tags` field in the `radix_tree_node` structure. Tags allow individual bits to be set on records which are stored in the radix tree.
+
+Now that we know about the radix tree structure, it is time to look at its API.
+
+Linux kernel radix tree API
+---------------------------------------------------------------------------------
+
+We start from the data structure initialization. There are two ways to initialize a new radix tree. The first is to use the `RADIX_TREE` macro:
+
+```C
+RADIX_TREE(name, gfp_mask);
+```
+
+As you can see we pass the `name` parameter, so with the `RADIX_TREE` macro we can define and initialize a radix tree with the given name. Implementation of the `RADIX_TREE` is easy:
+
+```C
+#define RADIX_TREE(name, mask) \
+	struct radix_tree_root name = RADIX_TREE_INIT(mask)
+
+#define RADIX_TREE_INIT(mask) { \
+	.height = 0, \
+	.gfp_mask = (mask), \
+	.rnode = NULL, \
+}
+```
+
+At the beginning of the `RADIX_TREE` macro we define an instance of the `radix_tree_root` structure with the given name and call the `RADIX_TREE_INIT` macro with the given mask. The `RADIX_TREE_INIT` macro just initializes the `radix_tree_root` structure with the default values and the given mask.
+
+The second way is to define the `radix_tree_root` structure by hand and pass it with a mask to the `INIT_RADIX_TREE` macro:
+
+```C
+struct radix_tree_root my_radix_tree;
+INIT_RADIX_TREE(my_radix_tree, gfp_mask_for_my_radix_tree);
+```
+
+where:
+
+```C
+#define INIT_RADIX_TREE(root, mask) \
+do { \
+	(root)->height = 0; \
+	(root)->gfp_mask = (mask); \
+	(root)->rnode = NULL; \
+} while (0)
+```
+
+makes the same initialization with default values as the `RADIX_TREE_INIT` macro does.
+
+The next are two functions for inserting and deleting records to/from a radix tree:
+
+* `radix_tree_insert`;
+* `radix_tree_delete`;
+
+The first `radix_tree_insert` function takes three parameters:
+
+* root of a radix tree;
+* index key;
+* data to insert;
+
+The `radix_tree_delete` function takes the same set of parameters as the `radix_tree_insert`, but without the data.
+
+Search in a radix tree is implemented in three ways:
+
+* `radix_tree_lookup`;
+* `radix_tree_gang_lookup`;
+* `radix_tree_lookup_slot`.
+
+The first `radix_tree_lookup` function takes two parameters:
+
+* root of a radix tree;
+* index key;
+
+This function tries to find the given key in the tree and return the record associated with this key. The second `radix_tree_gang_lookup` function has the following signature:
+
+```C
+unsigned int radix_tree_gang_lookup(struct radix_tree_root *root,
+                                    void **results,
+                                    unsigned long first_index,
+                                    unsigned int max_items);
+```
+
+and returns the number of records, sorted by the keys, starting from the first index. The number of the returned records will not be greater than the `max_items` value.
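The per-level slot walk that these lookup functions perform can also be sketched in userspace. The following Python model is an illustration only — the real code lives in [lib/radix-tree.c](https://github.com/torvalds/linux/blob/master/lib/radix-tree.c); the 6-bit/64-slot figures are an assumption corresponding to a common `RADIX_TREE_MAP_SHIFT` value, and the fixed height of 3 is likewise assumed for simplicity, neither being stated in this article.

```python
# Userspace Python sketch of how a radix tree maps an integer key to a
# value: each level consumes MAP_SHIFT bits of the key to choose a slot.

MAP_SHIFT = 6                       # assumed RADIX_TREE_MAP_SHIFT
MAP_SIZE = 1 << MAP_SHIFT           # 64 slots per node
MAP_MASK = MAP_SIZE - 1

def insert(root, key, value, height=3):
    node = root
    for level in reversed(range(height)):
        idx = (key >> (level * MAP_SHIFT)) & MAP_MASK
        if level == 0:
            node[idx] = value       # leaf slot holds the data pointer
        else:
            node = node.setdefault(idx, {})

def lookup(root, key, height=3):
    node = root
    for level in reversed(range(height)):
        idx = (key >> (level * MAP_SHIFT)) & MAP_MASK
        if idx not in node:
            return None             # empty slots store NULL in the kernel
        node = node[idx]
    return node

tree = {}
insert(tree, 0x12345, "page A")
print(lookup(tree, 0x12345))        # page A
print(lookup(tree, 0x54321))        # None
```

Each level picks one of the `MAP_SIZE` slots with a few bits of the key; missing entries play the role of the `NULL` slots mentioned earlier, which is why a lookup for an absent key returns nothing.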
+ +Links +--------------------------------------------------------------------------------- + +* [Radix tree](http://en.wikipedia.org/wiki/Radix_tree) +* [Trie](http://en.wikipedia.org/wiki/Trie) + +-------------------------------------------------------------------------------- + +via: https://github.com/0xAX/linux-insides/edit/master/DataStructures/radix-tree.md + +作者:[0xAX] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + From 1b51f163156a8d4c53555f9415ff161bcba7b06d Mon Sep 17 00:00:00 2001 From: ictlyh Date: Mon, 23 Nov 2015 23:30:24 +0800 Subject: [PATCH 016/160] Translated sources/tech/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md --- ...eSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md | 319 ------------------ ...eSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md | 317 +++++++++++++++++ 2 files changed, 317 insertions(+), 319 deletions(-) delete mode 100644 sources/tech/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md create mode 100644 translated/tech/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md diff --git a/sources/tech/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md b/sources/tech/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md deleted file mode 100644 index de05f067b5..0000000000 --- a/sources/tech/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md +++ /dev/null @@ -1,319 +0,0 @@ -ictlyh Translating -Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10 -================================================================================ -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2014/05/postgresql.png) - -### Introduction ### - -[PostgreSQL][1] is a powerful, open-source object-relational database system. 
It runs under all major operating systems, including Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, Mac OS, Solaris, Tru64), and Windows OS. - -Here is what **Mark Shuttleworth**, the founder of **Ubuntu**, says about PostgreSQL. - -> Postgres is a truly awesome database. When we started working on Launchpad I wasn’t sure if it would be up to the job. I was so wrong. It’s been robust, fast, and professional in every regard. -> -> — Mark Shuttleworth. - -In this handy tutorial, let us see how to install PostgreSQL 9.4 on Ubuntu 15.10 server. - -### Install PostgreSQL ### - -PostgreSQL is available in the default repositories. So enter the following command from the Terminal to install it. - - sudo apt-get install postgresql postgresql-contrib - -If you’re looking for other versions, add the PostgreSQL repository, and install it as shown below. - -The **PostgreSQL apt repository** supports LTS versions of Ubuntu (10.04, 12.04 and 14.04) on amd64 and i386 architectures as well as select non-LTS versions(14.10). While not fully supported, the packages often work on other non-LTS versions as well, by using the closest LTS version available. - -#### On Ubuntu 14.10 systems: #### - -Create the file **/etc/apt/sources.list.d/pgdg.list**; - - sudo vi /etc/apt/sources.list.d/pgdg.list - -Add a line for the repository: - - deb http://apt.postgresql.org/pub/repos/apt/ utopic-pgdg main - -**Note**: The above repository will only work on Ubuntu 14.10. It is not updated yet to Ubuntu 15.04 and 15.10. - -**On Ubuntu 14.04**, add the following line: - - deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main - -**On Ubuntu 12.04**, add the following line: - - deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main - -Import the repository signing key: - - wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc - ----------- - - sudo apt-key add - - -Update the package lists: - - sudo apt-get update - -Then install the required version. 
- - sudo apt-get install postgresql-9.4 - -### Accessing PostgreSQL command prompt ### - -The default database name and database user are “**postgres**”. Switch to postgres user to perform postgresql related operations: - - sudo -u postgres psql postgres - -#### Sample Output: #### - - psql (9.4.5) - Type "help" for help. - postgres=# - -To exit from posgresql prompt, type **\q** in the **psql** prompt return back to the Terminal. - -### Set “postgres” user password ### - -Login to postgresql prompt, - - sudo -u postgres psql postgres - -.. and set postgres password with following command: - - postgres=# \password postgres - Enter new password: - Enter it again: - postgres=# \q - -To install PostgreSQL Adminpack, enter the command in postgresql prompt: - - sudo -u postgres psql postgres - ----------- - - postgres=# CREATE EXTENSION adminpack; - CREATE EXTENSION - -Type **\q** in the **psql** prompt to exit from posgresql prompt, and return back to the Terminal. - -### Create New User and Database ### - -For example, let us create a new user called “**senthil**” with password “**ubuntu**”, and database called “**mydb**”. - - sudo -u postgres createuser -D -A -P senthil - ----------- - - sudo -u postgres createdb -O senthil mydb - -### Delete Users and Databases ### - -To delete the database, switch to postgres user: - - sudo -u postgres psql postgres - -Enter command: - - $ drop database - -To delete a user, enter the following command: - - $ drop user - -### Configure PostgreSQL-MD5 Authentication ### - -**MD5 authentication** requires the client to supply an MD5-encrypted password for authentication. To do that, edit **/etc/postgresql/9.4/main/pg_hba.conf** file: - - sudo vi /etc/postgresql/9.4/main/pg_hba.conf - -Add or Modify the lines as shown below - - [...] 
- # TYPE DATABASE USER ADDRESS METHOD - # "local" is for Unix domain socket connections only - local all all md5 - # IPv4 local connections: - host all all 127.0.0.1/32 md5 - host all all 192.168.1.0/24 md5 - # IPv6 local connections: - host all all ::1/128 md5 - [...] - -Here, 192.168.1.0/24 is my local network IP address. Replace this value with your own address. - -Restart postgresql service to apply the changes: - - sudo systemctl restart postgresql - -Or, - - sudo service postgresql restart - -### Configure PostgreSQL-Configure TCP/IP ### - -By default, TCP/IP connection is disabled, so that the users from another computers can’t access postgresql. To allow to connect users from another computers, Edit file **/etc/postgresql/9.4/main/postgresql.conf:** - - sudo vi /etc/postgresql/9.4/main/postgresql.conf - -Find the lines: - - [...] - #listen_addresses = 'localhost' - [...] - #port = 5432 - [...] - -Uncomment both lines, and set the IP address of your postgresql server or set ‘*’ to listen from all clients as shown below. You should be careful to make postgreSQL to be accessible from all remote clients. - - [...] - listen_addresses = '*' - [...] - port = 5432 - [...] - -Restart postgresql service to save changes: - - sudo systemctl restart postgresql - -Or, - - sudo service postgresql restart - -### Manage PostgreSQL with phpPgAdmin ### - -[**phpPgAdmin**][2] is a web-based administration utility written in PHP for managing PosgreSQL. - -phpPgAdmin is available in default repositories. So, Install phpPgAdmin using command: - - sudo apt-get install phppgadmin - -By default, you can access phppgadmin using **http://localhost/phppgadmin** from your local system’s web browser. - -To access remote systems, do the following. -On Ubuntu 15.10 systems: - -Edit file **/etc/apache2/conf-available/phppgadmin.conf**, - - sudo vi /etc/apache2/conf-available/phppgadmin.conf - -Find the line **Require local** and comment it by adding a **#** in front of the line. 
- - #Require local - -And add the following line: - - allow from all - -Save and exit the file. - -Then, restart apache service. - - sudo systemctl restart apache2 - -On Ubuntu 14.10 and previous versions: - -Edit file **/etc/apache2/conf.d/phppgadmin**: - - sudo nano /etc/apache2/conf.d/phppgadmin - -Comment the following line: - - [...] - #allow from 127.0.0.0/255.0.0.0 ::1/128 - -Uncomment the following line to make phppgadmin from all systems. - - allow from all - -Edit **/etc/apache2/apache2.conf**: - - sudo vi /etc/apache2/apache2.conf - -Add the following line: - - Include /etc/apache2/conf.d/phppgadmin - -Then, restart apache service. - - sudo service apache2 restart - -### Configure phpPgAdmin ### - -Edit file **/etc/phppgadmin/config.inc.php**, and do the following changes. Most of these options are self-explanatory. Read them carefully to know why do you change these values. - - sudo nano /etc/phppgadmin/config.inc.php - -Find the following line: - - $conf['servers'][0]['host'] = ''; - -Change it as shown below: - - $conf['servers'][0]['host'] = 'localhost'; - -And find the line: - - $conf['extra_login_security'] = true; - -Change the value to **false**. - - $conf['extra_login_security'] = false; - -Find the line: - - $conf['owned_only'] = false; - -Set the value as **true**. - - $conf['owned_only'] = true; - -Save and close the file. Restart postgresql service and Apache services. - - sudo systemctl restart postgresql - ----------- - - sudo systemctl restart apache2 - -Or, - - sudo service postgresql restart - - sudo service apache2 restart - -Now open your browser and navigate to **http://ip-address/phppgadmin**. You will see the following screen. - -![phpPgAdmin – Google Chrome_001](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_001.jpg) - -Login with users that you’ve created earlier. I already have created a user called “**senthil**” with password “**ubuntu**” before, so I log in with user “senthil”. 
- -![phpPgAdmin – Google Chrome_002](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_002.jpg) - -Now, you will be able to access the phppgadmin dashboard. - -![phpPgAdmin – Google Chrome_003](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_003.jpg) - -Log in with postgres user: - -![phpPgAdmin – Google Chrome_004](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_004.jpg) - -That’s it. Now you’ll able to create, delete and alter databases graphically using phppgadmin. - -Cheers! - --------------------------------------------------------------------------------- - -via: http://www.unixmen.com/install-postgresql-9-4-and-phppgadmin-on-ubuntu-15-10/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.twitter.com/ostechnix -[1]:http://www.postgresql.org/ -[2]:http://phppgadmin.sourceforge.net/doku.php \ No newline at end of file diff --git a/translated/tech/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md b/translated/tech/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md new file mode 100644 index 0000000000..7fd4414127 --- /dev/null +++ b/translated/tech/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md @@ -0,0 +1,317 @@ +在 Ubuntu 15.10 上安装 PostgreSQL 9.4 和 phpPgAdmin +================================================================================ +![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2014/05/postgresql.png) + +### 简介 ### + +[PostgreSQL][1] 是一款强大的,开源对象关系型数据库系统。它支持所有的主流操作系统,包括 Linux、Unix(AIX、BSD、HP-UX,SGI IRIX、Mac OS、Solaris、Tru64) 以及 Windows 操作系统。 + +下面是 **Ubuntu** 发起者 **Mark Shuttleworth** 对 PostgreSQL 的一段评价。 + +> PostgreSQL 真的是一款很好的数据库系统。刚开始我们使用它的时候,并不确定它能否胜任工作。但我错的太离谱了。它很强壮、快速,在各个方面都很专业。 +> +> — Mark Shuttleworth. 
+
+在这篇简短的指南中,让我们来看看如何在 Ubuntu 15.10 服务器中安装 PostgreSQL 9.4。
+
+### 安装 PostgreSQL ###
+
+默认仓库中就有可用的 PostgreSQL。在终端中输入下面的命令安装它。
+
+    sudo apt-get install postgresql postgresql-contrib
+
+如果你需要其它的版本,按照下面那样先添加 PostgreSQL 仓库然后再安装。
+
+**PostgreSQL apt 仓库** 支持 amd64 和 i386 架构的 Ubuntu 长期支持版(10.04、12.04 和 14.04),以及部分非长期支持版(14.10)。虽然没有完全支持,但借助最接近的 LTS 版本,这些软件包通常在其它非长期支持版上也能正常工作。
+
+#### Ubuntu 14.10 系统: ####
+
+新建文件**/etc/apt/sources.list.d/pgdg.list**;
+
+    sudo vi /etc/apt/sources.list.d/pgdg.list
+
+用下面一行添加仓库:
+
+    deb http://apt.postgresql.org/pub/repos/apt/ utopic-pgdg main
+
+**注意**: 上面的库只能用于 Ubuntu 14.10。还没有升级到 Ubuntu 15.04 和 15.10。
+
+**Ubuntu 14.04**,添加下面一行:
+
+    deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main
+
+**Ubuntu 12.04**,添加下面一行:
+
+    deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main
+
+导入库签名密钥:
+
+    wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc
+
+----------
+
+    sudo apt-key add -
+
+更新软件包列表:
+
+    sudo apt-get update
+
+然后安装需要的版本。
+
+    sudo apt-get install postgresql-9.4
+
+### 访问 PostgreSQL 命令窗口 ###
+
+默认的数据库名称和数据库用户名称都是 “**postgres**”。切换到 postgres 用户进行 postgresql 相关的操作:
+
+    sudo -u postgres psql postgres
+
+#### 示例输出: ####
+
+    psql (9.4.5)
+    Type "help" for help.
+    postgres=#
+
+要退出 postgresql 窗口,在 **psql** 窗口输入 **\q** 即可退出到终端。
+
+### 设置 “postgres” 用户密码 ###
+
+登录到 postgresql 窗口,
+
+    sudo -u postgres psql postgres
+
+用下面的命令为用户 postgres 设置密码:
+
+    postgres=# \password postgres
+    Enter new password:
+    Enter it again:
+    postgres=# \q
+
+要安装 PostgreSQL Adminpack,在 postgresql 窗口输入下面的命令:
+
+    sudo -u postgres psql postgres
+
+----------
+
+    postgres=# CREATE EXTENSION adminpack;
+    CREATE EXTENSION
+
+在 **psql** 窗口输入 **\q** 从 postgresql 窗口退回到终端。
+
+### 创建新用户和数据库 ###
+
+例如,让我们创建一个新的用户,名为 “**senthil**”,密码是 “**ubuntu**”,以及名为 “**mydb**” 的数据库。
+
+    sudo -u postgres createuser -D -A -P senthil
+
+----------
+
+    sudo -u postgres createdb -O senthil mydb
+
+### 删除用户和数据库 ###
+
+要删除数据库,首先切换到 postgres 用户:
+
+    sudo -u postgres psql postgres
+
+输入命令:
+
+    $ drop database
+
+要删除一个用户,输入下面的命令:
+
+    $ drop user
+
+### 配置 PostgreSQL-MD5 验证 ###
+
+**MD5 验证** 要求用户提供一个 MD5 加密的密码用于认证。首先编辑 **/etc/postgresql/9.4/main/pg_hba.conf** 文件:
+
+    sudo vi /etc/postgresql/9.4/main/pg_hba.conf
+
+按照下面所示添加或修改这些行:
+
+    [...]
+    # TYPE  DATABASE        USER            ADDRESS                 METHOD
+    # "local" is for Unix domain socket connections only
+    local   all             all                                     md5
+    # IPv4 local connections:
+    host    all             all             127.0.0.1/32            md5
+    host    all             all             192.168.1.0/24          md5
+    # IPv6 local connections:
+    host    all             all             ::1/128                 md5
+    [...]
+
+其中,192.168.1.0/24 是我的本地网络 IP 地址。请用你自己的地址替换。
+
+重启 postgresql 服务以使更改生效:
+
+    sudo systemctl restart postgresql
+
+或者,
+
+    sudo service postgresql restart
+
+### 配置 PostgreSQL 的 TCP/IP 连接 ###
+
+默认情况下,没有启用 TCP/IP 连接,因此其它计算机的用户不能访问 postgresql。为了允许其它计算机的用户访问,编辑文件 **/etc/postgresql/9.4/main/postgresql.conf:**
+
+    sudo vi /etc/postgresql/9.4/main/postgresql.conf
+
+找到下面的行:
+
+    [...]
+    #listen_addresses = 'localhost'
+    [...]
+    #port = 5432
+    [...]
+
+取消这两行的注释,然后设置你的 postgresql 服务器的 IP 地址,或者像下面这样设置为 ‘*’ 以监听所有客户端。让所有远程客户端都能访问 PostgreSQL 时应当谨慎。
+
+    [...]
+    listen_addresses = '*'
+    [...]
+    port = 5432
+    [...]
+
+重启 postgresql 服务以保存更改:
+
+    sudo systemctl restart postgresql
+
+或者,
+
+    sudo service postgresql restart
+
+### 用 phpPgAdmin 管理 PostgreSQL ###
+
+[**phpPgAdmin**][2] 是一个用 PHP 编写的、基于 web 的 PostgreSQL 管理工具。
+
+默认仓库中有可用的 phpPgAdmin。用下面的命令安装 phpPgAdmin:
+
+    sudo apt-get install phppgadmin
+
+默认情况下,你可以在本地系统的 web 浏览器用 **http://localhost/phppgadmin** 访问 phppgadmin。
+
+要访问远程系统,在 Ubuntu 15.10 上做如下操作:
+
+编辑文件 **/etc/apache2/conf-available/phppgadmin.conf**:
+
+    sudo vi /etc/apache2/conf-available/phppgadmin.conf
+
+找到 **Require local** 这一行,在行首添加 **#** 将其注释掉:
+
+    #Require local
+
+再添加下面的一行:
+
+    allow from all
+
+保存并退出文件。
+
+然后重启 apache 服务:
+
+    sudo systemctl restart apache2
+
+对于 Ubuntu 14.10 及之前版本:
+
+编辑 **/etc/apache2/conf.d/phppgadmin**:
+
+    sudo nano /etc/apache2/conf.d/phppgadmin
+
+注释掉下面一行:
+
+    [...]
+    #allow from 127.0.0.0/255.0.0.0 ::1/128
+
+取消下面一行的注释,使所有系统都可以访问 phppgadmin:
+
+    allow from all
+
+编辑 **/etc/apache2/apache2.conf**:
+
+    sudo vi /etc/apache2/apache2.conf
+
+添加下面一行:
+
+    Include /etc/apache2/conf.d/phppgadmin
+
+然后重启 apache 服务:
+
+    sudo service apache2 restart
+
+### 配置 phpPgAdmin ###
+
+编辑文件 **/etc/phppgadmin/config.inc.php**,做以下更改。下面大部分选项都带有注释说明,认真阅读以便了解为什么要更改这些值。
+
+    sudo nano /etc/phppgadmin/config.inc.php
+
+找到下面一行:
+
+    $conf['servers'][0]['host'] = '';
+
+按照下面这样更改:
+
+    $conf['servers'][0]['host'] = 'localhost';
+
+找到这一行:
+
+    $conf['extra_login_security'] = true;
+
+更改值为 **false**。
+
+    $conf['extra_login_security'] = false;
+
+找到这一行:
+
+    $conf['owned_only'] = false;
+
+更改值为 **true**。
+
+    $conf['owned_only'] = true;
+
+保存并关闭文件。重启 postgresql 服务和 Apache 服务。
+
+    sudo systemctl restart postgresql
+
+----------
+
+    sudo systemctl restart apache2
+
+或者,
+
+    sudo service postgresql restart
+
+    sudo service apache2 restart
+
+现在打开你的浏览器并导航到 **http://ip-address/phppgadmin**。你会看到以下截图。
+
+![phpPgAdmin – Google Chrome_001](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_001.jpg)
+
+用你之前创建的用户登录。我之前已经创建了一个名为 “**senthil**” 的用户,密码是 
“**ubuntu**”,因此我以 “senthil” 用户登录。 + +![phpPgAdmin – Google Chrome_002](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_002.jpg) + +然后你就可以访问 phppgadmin 面板了。 + +![phpPgAdmin – Google Chrome_003](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_003.jpg) + +用 postgres 用户登录: + +![phpPgAdmin – Google Chrome_004](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_004.jpg) + +就是这样。现在你可以用 phppgadmin 可视化创建、删除或者更改数据库了。 + +加油! + +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/install-postgresql-9-4-and-phppgadmin-on-ubuntu-15-10/ + +作者:[SK][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.twitter.com/ostechnix +[1]:http://www.postgresql.org/ +[2]:http://phppgadmin.sourceforge.net/doku.php \ No newline at end of file From b4c9b68050e335a1703dad32cb7f291418c7a374 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 24 Nov 2015 02:02:10 +0800 Subject: [PATCH 017/160] PUB:20150909 Superclass--15 of the world's best living programmers MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @martin2011qi 翻译这篇辛苦啦! --- ... of the world's best living programmers.md | 427 ++++++++++++++++++ ... 
of the world's best living programmers.md | 389 ----------------
 2 files changed, 427 insertions(+), 389 deletions(-)
 create mode 100644 published/20150909 Superclass--15 of the world's best living programmers.md
 delete mode 100644 translated/talk/20150909 Superclass--15 of the world's best living programmers.md

diff --git a/published/20150909 Superclass--15 of the world's best living programmers.md b/published/20150909 Superclass--15 of the world's best living programmers.md
new file mode 100644
index 0000000000..89a42d29d7
--- /dev/null
+++ b/published/20150909 Superclass--15 of the world's best living programmers.md
@@ -0,0 +1,427 @@
+超神们:15 位健在的世界级程序员!
+================================================================================
+
+当开发人员说起世界顶级程序员时,他们的名字往往会被提及。
+
+现在的程序员有很多,其中也不乏优秀的程序员。但是哪些程序员更好呢?
+
+虽然这很难客观评价,不过这个话题确实是开发者们津津乐道的。ITworld 深入程序员社区,避开四溅的争执口水,试图找出可能存在的所谓共识。事实证明,屈指可数的某些名字经常是讨论的焦点。
+
+![](http://images.techhive.com/images/article/2015/09/superman-620x465-100611650-orig.jpg)
+
+*图片来源: [tom_bullock CC BY 2.0][1]*
+
+下面就让我们来看看这些世界顶级的程序员吧!
+
+### 玛格丽特·汉密尔顿(Margaret Hamilton) ###
+
+![](http://images.techhive.com/images/article/2015/09/margaret_hamilton-620x465-100611764-orig.jpg)
+
+*图片来源: [NASA][2]*
+
+**成就: 阿波罗飞行控制软件背后的大脑**
+
+生平: 查尔斯·斯塔克·德雷珀实验室(Charles Stark Draper Laboratory)软件工程部的主任,以她为首的团队负责为 NASA 的阿波罗任务和空间实验室(Skylab)任务设计和打造机载飞行控制软件。基于阿波罗时期的这段工作经历,她又后续开发了[通用系统语言(Universal Systems Language)][5]和[开发先于事实( Development Before the Fact)][6]的范例。开创了[异步软件、优先调度和超可靠的软件设计][7]理念。被认为发明了“[软件工程( software engineering)][8]”一词。1986年获[奥古斯塔·埃达·洛夫莱斯奖(Augusta Ada Lovelace Award)][9],2003年获 [NASA 杰出太空行动奖(Exceptional Space Act Award)][10]。
+
+评论:
+
+> “汉密尔顿发明了测试,使美国计算机工程规范了很多” —— [ford_beeblebrox][11]
+
+> “我认为在她之前(不敬地说,包括高德纳(Knuth)在内的)计算机编程是(另一种形式上留存的)数学分支。然而这个宇宙飞船的飞行控制系统明确地将编程带入了一个崭新的领域。” —— [Dan Allen][12]
+
+> “... 
她引入了‘软件工程’这个术语 — 并作出了最好的示范。” —— [David Hamilton][13] + +> “真是个坏家伙” [Drukered][14] + + +### 唐纳德·克努斯(Donald Knuth),即 高德纳 ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_donald_knuth-620x465-100502872-orig.jpg) + +*图片来源: [vonguard CC BY-SA 2.0][15]* + +**成就: 《计算机程序设计艺术(The Art of Computer Programming,TAOCP)》 作者** + +生平: 撰写了[编程理论的权威书籍][16]。发明了数字排版系统 Tex。1971年,[ACM(美国计算机协会)葛丽丝·穆雷·霍普奖(Grace Murray Hopper Award)][17] 的首位获奖者。1974年获 ACM [图灵奖(A. M. Turing)][18],1979年获[美国国家科学奖章(National Medal of Science)][19],1995年获IEEE[约翰·冯·诺依曼奖章(John von Neumann Medal)][20]。1998年入选[计算机历史博物馆(Computer History Museum)名人录(Hall of Fellows)][21]。 + +评论: + +> “... 写的计算机编程艺术(The Art of Computer Programming,TAOCP)可能是有史以来计算机编程方面最大的贡献。”—— [佚名][22] + +> “唐·克努斯的 TeX 是我所用过的计算机程序中唯一一个几乎没有 bug 的。真是让人印象深刻!”—— [Jaap Weel][23] + +> “如果你要问我的话,我只能说太棒了!” —— [Mitch Rees-Jones][24] + +### 肯·汤普逊(Ken Thompson) ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_ken-thompson-620x465-100502874-orig.jpg) + +*图片来源: [Association for Computing Machinery][25]* + +**成就: Unix 之父** + +生平:与[丹尼斯·里奇(Dennis Ritchie)][26]共同创造了 Unix。创造了 [B 语言][27]、[UTF-8 字符编码方案][28]、[ed 文本编辑器][29],同时也是 Go 语言的共同开发者。(和里奇)共同获得1983年的[图灵奖(A.M. Turing Award )][30],1994年获 [IEEE 计算机先驱奖( IEEE Computer Pioneer Award)][31],1998年获颁[美国国家科技奖章( National Medal of Technology )][32]。在1997年入选[计算机历史博物馆(Computer History Museum)名人录(Hall of Fellows)][33]。 + +评论: + +> “... 
可能是有史以来最能成事的程序员了。Unix 内核,Unix 工具,国际象棋程序世界冠军 Belle,Plan 9,Go 语言。” —— [Pete Prokopowicz][34] + +> “肯所做出的贡献,据我所知无人能及,是如此的根本、实用、经得住时间的考验,时至今日仍在使用。” —— [Jan Jannink][35] + + +### 理查德·斯托曼(Richard Stallman) ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_richard_stallman-620x465-100502868-orig.jpg) + +*图片来源: [Jiel Beaumadier CC BY-SA 3.0][135]* + +**成就: Emacs 和 GCC 缔造者** + +生平: 成立了 [GNU 工程(GNU Project)] [36],并创造了它的许多核心工具,如 [Emacs、GCC、GDB][37] 和 [GNU Make][38]。还创办了[自由软件基金会(Free Software Foundation)] [39]。1990年荣获 ACM 的[葛丽丝·穆雷·霍普奖( Grace Murray Hopper Award)][40],1998年获 [EFF 先驱奖(Pioneer Award)][41]. + +评论: + +> “... 在 Symbolics 对阵 LMI 的战斗中,独自一人与一众 Lisp 黑客好手对码。” —— [Srinivasan Krishnan][42] + +> “通过他在编程上的精湛造诣与强大信念,开辟了一整套编程与计算机的亚文化。” —— [Dan Dunay][43] + +> “我可以不赞同这位伟人的很多方面,不必盖棺论定,他不可否认都已经是一位伟大的程序员了。” —— [Marko Poutiainen][44] + +> “试想 Linux 如果没有 GNU 工程的前期工作会怎么样。(多亏了)斯托曼的炸弹!” —— [John Burnette][45] + +### 安德斯·海尔斯伯格(Anders Hejlsberg) ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_anders_hejlsberg-620x465-100502873-orig.jpg) + +*图片来源: [D.Begley CC BY 2.0][46]* + +**成就: 创造了Turbo Pascal** + +生平: [Turbo Pascal 的原作者][47],是最流行的 Pascal 编译器和第一个集成开发环境。而后,[领导了 Turbo Pascal 的继任者 Delphi][48] 的构建。[C# 的主要设计师和架构师][49]。2001年荣获[ Dr. Dobb 的杰出编程奖(Dr. 
Dobb's Excellence in Programming Award )][50]。 + +评论: + +> “他用汇编语言为当时两个主流的 PC 操作系统(DOS 和 CPM)编写了 [Pascal] 编译器。用它来编译、链接并运行仅需几秒钟而不是几分钟。” —— [Steve Wood][51] + +> “我佩服他 - 他创造了我最喜欢的开发工具,陪伴着我度过了三个关键的时期直至我成为一位专业的软件工程师。” —— [Stefan Kiryazov][52] + +### Doug Cutting ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_doug_cutting-620x465-100502871-orig.jpg) + +图片来源: [vonguard CC BY-SA 2.0][53] + +**成就: 创造了 Lucene** + +生平: [开发了 Lucene 搜索引擎以及 Web 爬虫 Nutch][54] 和用于大型数据集的分布式处理套件 [Hadoop][55]。一位强有力的开源支持者(Lucene、Nutch 以及 Hadoop 都是开源的)。前 [Apache 软件基金(Apache Software Foundation)的理事][56]。 + +评论: + + +> “...他就是那个既写出了优秀搜索框架(lucene/solr),又为世界开启大数据之门(hadoop)的男人。” —— [Rajesh Rao][57] + +> “他在 Lucene 和 Hadoop(及其它工程)的创造/工作中为世界创造了巨大的财富和就业...” —— [Amit Nithianandan][58] + +### Sanjay Ghemawat ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_sanjay_ghemawat-620x465-100502876-orig.jpg) + +*图片来源: [Association for Computing Machinery][59]* + +**成就: 谷歌核心架构师** + +生平: [协助设计和实现了一些谷歌大型分布式系统的功能][60],包括 MapReduce、BigTable、Spanner 和谷歌文件系统(Google File System)。[创造了 Unix 的 ical ][61]日历系统。2009年入选[美国国家工程院(National Academy of Engineering)][62]。2012年荣获 [ACM-Infosys 基金计算机科学奖( ACM-Infosys Foundation Award in the Computing Sciences)][63]。 + +评论: + + +> “Jeff Dean的僚机。” —— [Ahmet Alp Balkan][64] + +### Jeff Dean ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_jeff_dean-620x465-100502866-orig.jpg) + +*图片来源: [Google][65]* + +**成就: 谷歌搜索索引背后的大脑** + +生平:协助设计和实现了[许多谷歌大型分布式系统的功能][66],包括网页爬虫,索引搜索,AdSense,MapReduce,BigTable 和 Spanner。2009年入选[美国国家工程院( National Academy of Engineering)][67]。2012年荣获ACM 的[SIGOPS 马克·维瑟奖( SIGOPS Mark Weiser Award)][68]及[ACM-Infosys基金计算机科学奖( ACM-Infosys Foundation Award in the Computing Sciences)][69]。 + +评论: + +> “... 带来了在数据挖掘(GFS、MapReduce、BigTable)上的突破。” —— [Natu Lauchande][70] + +> “... 
设计、构建并部署 MapReduce 和 BigTable,以及数不清的其它东西” —— [Erik Goldman][71]
+
+### 林纳斯·托瓦兹(Linus Torvalds) ###
+
+![](http://images.techhive.com/images/article/2015/09/linus_torvalds-620x465-100611765-orig.jpg)
+
+*图片来源: [Krd CC BY-SA 4.0][72]*
+
+**成就: Linux 缔造者**
+
+生平:创造了 [Linux 内核][73]与[开源的版本控制系统 Git][74]。收获了许多奖项和荣誉,包括1998年的 [EFF 先驱奖(EFF Pioneer Award)][75],2000年荣获[英国电脑学会(British Computer Society)授予的洛夫莱斯勋章(Lovelace Medal)][76],2012年荣获[千禧技术奖(Millenium Technology Prize)][77],还有2014年[IEEE计算机学会( IEEE Computer Society)授予的计算机先驱奖(Computer Pioneer Award)][78]。同样入选了2008年的[计算机历史博物馆( Computer History Museum)名人录(Hall of Fellows)][79]与2012年的[互联网名人堂(Internet Hall of Fame )][80]。
+
+评论:
+
+> “他只用了几年的时间就写出了 Linux 内核,而 GNU Hurd(GNU 开发的内核)历经25年的开发却丝毫没有准备发布的意思。他的成就就是带来了希望。” —— [Erich Ficker][81]
+
+> “托瓦兹可能是程序员的程序员。” —— [Dan Allen][82]
+
+> “他真的很棒。” —— [Alok Tripathy][83]
+
+### 约翰·卡马克(John Carmack) ###
+
+![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_john_carmack-620x465-100502867-orig.jpg)
+
+*图片来源: [QuakeCon CC BY 2.0][84]*
+
+**成就: 毁灭战士的缔造者**
+
+生平: ID 社联合创始人,打造了德军总部3D(Wolfenstein 3D)、毁灭战士(Doom)和雷神之锤(Quake)等开创性的 FPS 游戏。引领了[切片适配刷新(adaptive tile refresh)][86]、[二叉空间分割(binary space partitioning)][87]、表面缓存(surface caching)等开创性的计算机图像技术。2001年入选[互动艺术与科学学会名人堂(Academy of Interactive Arts and Sciences Hall of Fame)][88],2007年和2008年荣获工程技术类[艾美奖(Emmy awards)][89],并于2010年由[游戏开发者甄选奖( Game Developers Choice Awards)][90]授予终生成就奖。
+
+评论:
+
+> “他在写第一个渲染引擎的时候不到20岁。这家伙真是个天才。我若有他四分之一的天赋便心满意足了。” —— [Alex Dolinsky][91]
+
+> “... 德军总部3D(Wolfenstein 3D)、毁灭战士(Doom)还有雷神之锤(Quake)在那时都是革命性的,影响了一代游戏设计师。” —— [dniblock][92]
+
+> “一个周末他几乎可以写出任何东西....” —— [Greg Naughton][93]
+
+> “他是编程界的莫扎特... 
” —— [Chris Morris][94] + +### 法布里斯·贝拉(Fabrice Bellard) ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_fabrice_bellard-620x465-100502870-orig.jpg) + +*图片来源: [Duff][95]* + +**成就: 创造了 QEMU** + +生平: 创造了[一系列耳熟能详的开源软件][96],其中包括硬件模拟和虚拟化的平台 QEMU,用于处理多媒体数据的 FFmpeg,微型C编译器(Tiny C Compiler)和 一个可执行文件压缩软件 LZEXE。2000年和2001年[C语言混乱代码大赛(Obfuscated C Code Contest)的获胜者][97]并在2011年荣获[Google-O'Reilly 开源奖(Google-O'Reilly Open Source Award )][98]。[计算 Pi 最多位数][99]的前世界纪录保持着。 + +评论: + + +> “我觉得法布里斯·贝拉做的每一件事都是那么显著而又震撼。” —— [raphinou][100] + +> “法布里斯·贝拉是世界上最高产的程序员...” —— [Pavan Yara][101] + +> “他就像软件工程界的尼古拉·特斯拉(Nikola Tesla)。” —— [Michael Valladolid][102] + +> “自80年代以来,他一直高产出一系列的成功作品。” —— [Michael Biggins][103] + +### Jon Skeet ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_jon_skeet-620x465-100502863-orig.jpg) + +*图片来源: [Craig Murphy CC BY 2.0][104]* + +**成就: Stack Overflow 的传说级贡献者** + +生平: Google 工程师,[深入解析C#(C# in Depth)][105]的作者。保持着[有史以来在 Stack Overflow 上最高的声誉][106],平均每月解答390个问题。 + +评论: + + +> “他根本不需要调试器,只要他盯一下代码,错误之处自会原形毕露。” —— [Steven A. 
Lowe][107] + +> “如果他的代码没有通过编译,那编译器应该道歉。” —— [Dan Dyer][108] + +> “他根本不需要什么编程规范,他的代码就是编程规范。” —— [佚名][109] + +### 亚当·安捷罗(Adam D'Angelo) ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_image_adam_dangelo-620x465-100502875-orig.jpg) + +*图片来源: [Philip Neustrom CC BY 2.0][110]* + +**成就: Quora 的创办人之一** + +生平: 还是 Facebook 工程师时,[为其搭建了 news feed 功能的基础][111]。直至其离开并联合创始了 Quora,已经成为了 Facebook 的CTO和工程 VP。2001年以高中生的身份在[美国计算机奥林匹克(USA Computing Olympiad)上第八位完成比赛][112]。2004年ACM国际大学生编程大赛(International Collegiate Programming Contest)[获得银牌的团队 - 加利福尼亚技术研究所( California Institute of Technology)][113]的成员。2005年入围 Topcoder 大学生[算法编程挑战赛(Algorithm Coding Competition)][114]。 + +评论: + +> “一位程序设计全才。” —— [佚名][115] + +> "我做的每个好东西,他都已有了六个。" —— [马克.扎克伯格(Mark Zuckerberg)][116] + +### Petr Mitrechev ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_petr_mitrichev-620x465-100502869-orig.jpg) + +*图片来源: [Facebook][117]* + +**成就: 有史以来最具竞技能力的程序员之一** + +生平: 在国际信息学奥林匹克(International Olympiad in Informatics)中[两次获得金牌][118](2000,2002)。在2006,[赢得 Google Code Jam][119] 同时也是[TopCoder Open 算法大赛冠军][120]。也同样,两次赢得 Facebook黑客杯(Facebook Hacker Cup)([2011][121],[2013][122])。写这篇文章的时候,[TopCoder 榜中排第二][123] (即:Petr)、在 [Codeforces 榜同样排第二][124]。 + +评论: + +> “他是竞技程序员的偶像,即使在印度也是如此...” —— [Kavish Dwivedi][125] + +### Gennady Korotkevich ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_gennady_korot-620x465-100502864-orig.jpg) + +*图片来源: [Ishandutta2007 CC BY-SA 3.0][126]* + +**成就: 竞技编程小神童** + +生平: 国际信息学奥林匹克(International Olympiad in Informatics)中最小参赛者(11岁),[6次获得金牌][127] (2007-2012)。2013年 ACM 国际大学生编程大赛(International Collegiate Programming Contest)[获胜队伍][128]成员及[2014 Facebook 黑客杯(Facebook Hacker Cup)][129]获胜者。写这篇文章的时候,[Codeforces 榜排名第一][130] (即:Tourist)、[TopCoder榜第一][131]。 + +评论: + +> “一个编程神童!” —— [Prateek Joshi][132] + +> “Gennady 真是棒,也是为什么我在白俄罗斯拥有一个强大开发团队的例证。” —— [Chris Howard][133] + +> “Tourist 真是天才” —— [Nuka 
Shrinivas Rao][134] + +-------------------------------------------------------------------------------- + +via: http://www.itworld.com/article/2823547/enterprise-software/158256-superclass-14-of-the-world-s-best-living-programmers.html#slide1 + +作者:[Phil Johnson][a] +译者:[martin2011qi](https://github.com/martin2011qi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.itworld.com/author/Phil-Johnson/ +[1]:https://www.flickr.com/photos/tombullock/15713223772 +[2]:https://commons.wikimedia.org/wiki/File:Margaret_Hamilton_in_action.jpg +[3]:http://klabs.org/home_page/hamilton.htm +[4]:https://www.youtube.com/watch?v=DWcITjqZtpU&feature=youtu.be&t=3m12s +[5]:http://www.htius.com/Articles/r12ham.pdf +[6]:http://www.htius.com/Articles/Inside_DBTF.htm +[7]:http://www.nasa.gov/home/hqnews/2003/sep/HQ_03281_Hamilton_Honor.html +[8]:http://www.nasa.gov/50th/50th_magazine/scientists.html +[9]:https://books.google.com/books?id=JcmV0wfQEoYC&pg=PA321&lpg=PA321&dq=ada+lovelace+award+1986&source=bl&ots=qGdBKsUa3G&sig=bkTftPAhM1vZ_3VgPcv-38ggSNo&hl=en&sa=X&ved=0CDkQ6AEwBGoVChMI3paoxJHWxwIVA3I-Ch1whwPn#v=onepage&q=ada%20lovelace%20award%201986&f=false +[10]:http://history.nasa.gov/alsj/a11/a11Hamilton.html +[11]:https://www.reddit.com/r/pics/comments/2oyd1y/margaret_hamilton_with_her_code_lead_software/cmrswof +[12]:http://qr.ae/RFEZLk +[13]:http://qr.ae/RFEZUn +[14]:https://www.reddit.com/r/pics/comments/2oyd1y/margaret_hamilton_with_her_code_lead_software/cmrv9u9 +[15]:https://www.flickr.com/photos/44451574@N00/5347112697 +[16]:http://cs.stanford.edu/~uno/taocp.html +[17]:http://awards.acm.org/award_winners/knuth_1013846.cfm +[18]:http://amturing.acm.org/award_winners/knuth_1013846.cfm +[19]:http://www.nsf.gov/od/nms/recip_details.jsp?recip_id=198 +[20]:http://www.ieee.org/documents/von_neumann_rl.pdf +[21]:http://www.computerhistory.org/fellowawards/hall/bios/Donald,Knuth/ 
+[22]:http://www.quora.com/Who-are-the-best-programmers-in-Silicon-Valley-and-why/answers/3063 +[23]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Jaap-Weel +[24]:http://qr.ae/RFE94x +[25]:http://amturing.acm.org/photo/thompson_4588371.cfm +[26]:https://www.youtube.com/watch?v=JoVQTPbD6UY +[27]:https://www.bell-labs.com/usr/dmr/www/bintro.html +[28]:http://doc.cat-v.org/bell_labs/utf-8_history +[29]:http://c2.com/cgi/wiki?EdIsTheStandardTextEditor +[30]:http://amturing.acm.org/award_winners/thompson_4588371.cfm +[31]:http://www.computer.org/portal/web/awards/cp-thompson +[32]:http://www.uspto.gov/about/nmti/recipients/1998.jsp +[33]:http://www.computerhistory.org/fellowawards/hall/bios/Ken,Thompson/ +[34]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Pete-Prokopowicz-1 +[35]:http://qr.ae/RFEWBY +[36]:https://groups.google.com/forum/#!msg/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J +[37]:http://www.emacswiki.org/emacs/RichardStallman +[38]:https://www.gnu.org/gnu/thegnuproject.html +[39]:http://www.emacswiki.org/emacs/FreeSoftwareFoundation +[40]:http://awards.acm.org/award_winners/stallman_9380313.cfm +[41]:https://w2.eff.org/awards/pioneer/1998.php +[42]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Greg-Naughton/comment/4146397 +[43]:http://qr.ae/RFEaib +[44]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Marko-Poutiainen +[45]:http://qr.ae/RFEUqp +[46]:https://www.flickr.com/photos/begley/2979906130 +[47]:http://www.taoyue.com/tutorials/pascal/history.html +[48]:http://c2.com/cgi/wiki?AndersHejlsberg +[49]:http://www.microsoft.com/about/technicalrecognition/anders-hejlsberg.aspx +[50]:http://www.drdobbs.com/windows/dr-dobbs-excellence-in-programming-award/184404602 +[51]:http://qr.ae/RFEZrv 
+[52]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Stefan-Kiryazov +[53]:https://www.flickr.com/photos/vonguard/4076389963/ +[54]:http://www.wizards-of-os.org/archiv/sprecher/a_c/doug_cutting.html +[55]:http://hadoop.apache.org/ +[56]:https://www.linkedin.com/in/cutting +[57]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Shalin-Shekhar-Mangar/comment/2293071 +[58]:http://www.quora.com/Who-are-the-best-programmers-in-Silicon-Valley-and-why/answer/Amit-Nithianandan +[59]:http://awards.acm.org/award_winners/ghemawat_1482280.cfm +[60]:http://research.google.com/pubs/SanjayGhemawat.html +[61]:http://www.quora.com/Google/Who-is-Sanjay-Ghemawat +[62]:http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=02062009 +[63]:http://awards.acm.org/award_winners/ghemawat_1482280.cfm +[64]:http://www.quora.com/Google/Who-is-Sanjay-Ghemawat/answer/Ahmet-Alp-Balkan +[65]:http://research.google.com/people/jeff/index.html +[66]:http://research.google.com/people/jeff/index.html +[67]:http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=02062009 +[68]:http://news.cs.washington.edu/2012/10/10/uw-cse-ph-d-alum-jeff-dean-wins-2012-sigops-mark-weiser-award/ +[69]:http://awards.acm.org/award_winners/dean_2879385.cfm +[70]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Natu-Lauchande +[71]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Cosmin-Negruseri/comment/28399 +[72]:https://commons.wikimedia.org/wiki/File:LinuxCon_Europe_Linus_Torvalds_05.jpg +[73]:http://www.linuxfoundation.org/about/staff#torvalds +[74]:http://git-scm.com/book/en/Getting-Started-A-Short-History-of-Git +[75]:https://w2.eff.org/awards/pioneer/1998.php +[76]:http://www.bcs.org/content/ConWebDoc/14769 
+[77]:http://www.zdnet.com/blog/open-source/linus-torvalds-wins-the-tech-equivalent-of-a-nobel-prize-the-millennium-technology-prize/10789 +[78]:http://www.computer.org/portal/web/pressroom/Linus-Torvalds-Named-Recipient-of-the-2014-IEEE-Computer-Society-Computer-Pioneer-Award +[79]:http://www.computerhistory.org/fellowawards/hall/bios/Linus,Torvalds/ +[80]:http://www.internethalloffame.org/inductees/linus-torvalds +[81]:http://qr.ae/RFEeeo +[82]:http://qr.ae/RFEZLk +[83]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Alok-Tripathy-1 +[84]:https://www.flickr.com/photos/quakecon/9434713998 +[85]:http://doom.wikia.com/wiki/John_Carmack +[86]:http://thegamershub.net/2012/04/gaming-gods-john-carmack/ +[87]:http://www.shamusyoung.com/twentysidedtale/?p=4759 +[88]:http://www.interactive.org/special_awards/details.asp?idSpecialAwards=6 +[89]:http://www.itworld.com/article/2951105/it-management/a-fly-named-for-bill-gates-and-9-other-unusual-honors-for-tech-s-elite.html#slide8 +[90]:http://www.gamechoiceawards.com/archive/lifetime.html +[91]:http://qr.ae/RFEEgr +[92]:http://www.itworld.com/answers/topic/software/question/whos-best-living-programmer#comment-424562 +[93]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Greg-Naughton +[94]:http://money.cnn.com/2003/08/21/commentary/game_over/column_gaming/ +[95]:http://dufoli.wordpress.com/2007/06/23/ammmmaaaazing-night/ +[96]:http://bellard.org/ +[97]:http://www.ioccc.org/winners.html#B +[98]:http://www.oscon.com/oscon2011/public/schedule/detail/21161 +[99]:http://bellard.org/pi/pi2700e9/ +[100]:https://news.ycombinator.com/item?id=7850797 +[101]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Erik-Frey/comment/1718701 
+[102]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Erik-Frey/comment/2454450 +[103]:http://qr.ae/RFEjhZ +[104]:https://www.flickr.com/photos/craigmurphy/4325516497 +[105]:http://www.amazon.co.uk/gp/product/1935182471?ie=UTF8&tag=developetutor-21&linkCode=as2&camp=1634&creative=19450&creativeASIN=1935182471 +[106]:http://stackexchange.com/leagues/1/alltime/stackoverflow +[107]:http://meta.stackexchange.com/a/9156 +[108]:http://meta.stackexchange.com/a/9138 +[109]:http://meta.stackexchange.com/a/9182 +[110]:https://www.flickr.com/photos/philipn/5326344032 +[111]:http://www.crunchbase.com/person/adam-d-angelo +[112]:http://www.exeter.edu/documents/Exeter_Bulletin/fall_01/oncampus.html +[113]:http://icpc.baylor.edu/community/results-2004 +[114]:https://www.topcoder.com/tc?module=Static&d1=pressroom&d2=pr_022205 +[115]:http://qr.ae/RFfOfe +[116]:http://www.businessinsider.com/in-new-alleged-ims-mark-zuckerberg-talks-about-adam-dangelo-2012-9#ixzz369FcQoLB +[117]:https://www.facebook.com/hackercup/photos/a.329665040399024.91563.133954286636768/553381194694073/?type=1 +[118]:http://stats.ioinformatics.org/people/1849 +[119]:http://googlepress.blogspot.com/2006/10/google-announces-winner-of-global-code_27.html +[120]:http://community.topcoder.com/tc?module=SimpleStats&c=coder_achievements&d1=statistics&d2=coderAchievements&cr=10574855 +[121]:https://www.facebook.com/notes/facebook-hacker-cup/facebook-hacker-cup-finals/208549245827651 +[122]:https://www.facebook.com/hackercup/photos/a.329665040399024.91563.133954286636768/553381194694073/?type=1 +[123]:http://community.topcoder.com/tc?module=AlgoRank +[124]:http://codeforces.com/ratings +[125]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Venkateswaran-Vicky/comment/1960855 +[126]:http://commons.wikimedia.org/wiki/File:Gennady_Korot.jpg +[127]:http://stats.ioinformatics.org/people/804 
+[128]:http://icpc.baylor.edu/regionals/finder/world-finals-2013/standings +[129]:https://www.facebook.com/hackercup/posts/10152022955628845 +[130]:http://codeforces.com/ratings +[131]:http://community.topcoder.com/tc?module=AlgoRank +[132]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi +[133]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4720779 +[134]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4880549 +[135]:http://commons.wikimedia.org/wiki/File:Jielbeaumadier_richard_stallman_2010.jpg \ No newline at end of file diff --git a/translated/talk/20150909 Superclass--15 of the world's best living programmers.md b/translated/talk/20150909 Superclass--15 of the world's best living programmers.md deleted file mode 100644 index 6f59aa13d9..0000000000 --- a/translated/talk/20150909 Superclass--15 of the world's best living programmers.md +++ /dev/null @@ -1,389 +0,0 @@ -教父们: 15位举世瞩目的程序员 -================================================================================ -当开发人员讨论关于世界顶级程序员时,这些名字往往就会出现。 - -![](http://images.techhive.com/images/article/2015/09/superman-620x465-100611650-orig.jpg) - -图片来源: [tom_bullock CC BY 2.0][1] - -好像现在程序员有很多,其中不乏有许多优秀的程序员。但是期中哪些程序员更好呢? 
- -虽然这很难客观评价,不过在这个话题确实是开发者们乐于津道的。ITworld针对程序员社区的输入和刷新试图找出可能存在的所谓共识。事实证明,屈指可数的某些名字经常是讨论的焦点。 - -Use the arrows above to read about 15 people commonly cited as the world’s best living programmer.下面就让我们来看看这些世界顶级的程序员吧!(没有箭头呢:P) - -![](http://images.techhive.com/images/article/2015/09/margaret_hamilton-620x465-100611764-orig.jpg) - -图片来源: [NASA][2] - -### 玛格丽特·汉密尔顿 ### - -**成就: 阿波罗飞行控制软件背后的大脑** - -生平: 查尔斯·斯塔克·德雷珀实验室软件工程部的主任,她为首的团队负责设计和打造NASA阿波罗的板载飞行控制器软件和Skylab任务。基于阿波罗这段的工作经历,她又后续开发了[通用系统语言][5]和[开发先于事实][6]的范例。开创了[异步软件、优先调度和超可靠的软件设计][7]理念。被认为发明了“[软件工程][8]”一词。1986年获[奥古斯塔·埃达·洛夫莱斯][9]奖,[2003年获NASA杰出太空行动奖][10]。 - -评论: “汉密尔顿发明了测试,使美国计算机工程规范了很多” [ford_beeblebrox][11] - -“我认为在她之前(不敬地说,包括高德纳在内的)计算机编程是(另一种形式上留存的)数学分支。然而宇宙飞船的飞行控制系统明确地将编程带入了一个崭新的领域。” [Dan Allen][12] - -“... 她引入了‘计算机工程’这个术语 — 并作出了最好的示范。” [David Hamilton][13] - -“真是个坏家伙” [Drukered][14] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_donald_knuth-620x465-100502872-orig.jpg) - -图片来源: [vonguard CC BY-SA 2.0][15] - -### 唐纳德·尔文·克努斯 ### - -**成就: 《计算机程序设计艺术》 作者** - -生平: 撰写了[编程理论的权威书籍][16]。发明了数字排版系统Tex。1971年获得[首次ACM(美国计算机协会)葛丽丝·穆雷·霍普奖][17]。1974年获ACM[图灵奖][18]奖,1979年获[国家科学奖章][19],1995年获IEEE[约翰·冯·诺依曼奖章][20]。1998年入选[计算机历史博物馆名人录][21]。 - -评论: “... 写的计算器编程的艺术可能是有史以来计算机编程最大的贡献。”[佚名][22] - -“唐·克努斯的TeX是我所用过的计算机程序中唯一一个几乎没有bug的。真是让人印象深刻!” [Jaap Weel][23] - -“如果你要问我的话,我只能说太棒了!” [Mitch Rees-Jones][24] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_ken-thompson-620x465-100502874-orig.jpg) - -图片来源: [Association for Computing Machinery][25] - -### 肯尼斯·蓝·汤普逊 ### - -**成就: Unix之父** - -生平: 与[丹尼斯·里奇][26]共同创造了Unix。创造了[B语言][27]、[UTF-8字符编码方案][28]、[ed文本编辑器][29],同时也是Go语言的合作开发人。(同里奇)共同获得1983年的[图灵奖][30],1994年获[IEEE计算机先驱奖][31],1998年获颁[美国国家科技创新奖章][32]。在1997年入选[计算机历史博物馆名人录][33]。 - -评论: “... 
可能是有史以来最能成事的程序员了。Unix内核,Unix用具,国际象棋程序世界冠军Belle,Plan 9,Go语言。” [Pete Prokopowicz][34] - -“肯所做出的贡献,据我所知无人能及,是如此的根本、实用、经得住时间的考验,时至今日仍在使用。” [Jan Jannink][35] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_richard_stallman-620x465-100502868-orig.jpg) - -图片来源: Jiel Beaumadier CC BY-SA 3.0 - -### 理查德·斯托曼 ### - -**成就: Emacs和GCC缔造者** - -生平: 成立了[GNU工程] [36],并创造了许多的核心工具,如[Emacs, GCC, GDB][37]和[GNU Make][38]。还创办了[自由软件基金会] [39]。1990 荣获ACM[葛丽丝·穆雷·霍普奖][40],[1998获EFF先驱奖][41]. - -评论: “... 在Symbolics对阵LMI的战斗中,独自一人与一众Lisp黑客好手对码。” [Srinivasan Krishnan][42] - -“通过他在编程上的造诣与强大信念,开辟了一整套编程与计算机的亚文化。” [Dan Dunay][43] - -“我可以不赞同这位伟人的很多方面,但不可否认无论活着还是死去,他都已经是一位伟大的程序员了。” [Marko Poutiainen][44] - -“试想Linux如果没有GNU工程的前期工作。斯托曼就是这个炸弹包,哟。” [John Burnette][45] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_anders_hejlsberg-620x465-100502873-orig.jpg) - -图片来源: [D.Begley CC BY 2.0][46] - -### 安德斯·海尔斯伯格 ### - -**成就: 创造了Turbo Pascal** - -生平: [Turbo Pascal的原作者][47],是最流行的Pascal编译器和第一个集成开发环境。而后,[领导了Delphi][48]和下一代Turbo Pascal的构建。[C#的主要设计师和架构师][49]。2001年荣获[Dr. 
Dobb's杰出编程奖][50]。 - -评论: “他用汇编在主流PC操作系统day(DOS and CPM)上编写了[Pascal]的编译器。用它来编译、链接并运行仅需几秒钟而不是几分钟。” [Steve Wood][51] - -“我佩服他 - 他创造了我最喜欢的开发工具,陪伴着我度过了三个关键的时期直至我成为一位专业的软件工程师。” [Stefan Kiryazov][52] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_doug_cutting-620x465-100502871-orig.jpg) - -图片来源: [vonguard CC BY-SA 2.0][53] - -### Doug Cutting ### - -**成就: 创造了Lucene** - -生平: [开发了Lucene搜索引擎、Web爬虫Nutch][54]和[对于大型数据集的分布式处理套件Hadoop][55]。一位强有力的开源支持者(Lucene、Nutch以及Hadoop都是开源的)。前[Apache软件基金的理事][56]。 - -评论: “...他就是那个即写出了优秀搜索框架(lucene/solr),又为世界开启大数据之门(hadoop)的男人。” [Rajesh Rao][57] - -“他在Lucene和Hadoop(及其它工程)的创造/工作中为世界创造了巨大的财富和就业...” [Amit Nithianandan][58] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_sanjay_ghemawat-620x465-100502876-orig.jpg) - -图片来源: [Association for Computing Machinery][59] - -### Sanjay Ghemawat ### - -**成就: 谷歌核心架构师** - -生平: [协助设计和实现了一些谷歌大型分布式系统的功能][60],包括MapReduce、BigTable、Spanner和谷歌文件系统。[创造了Unix的 ical][61]日历系统。2009年入选[国家工程院][62]。2012年荣获[ACM-Infosys基金计算机科学奖][63]。 - -评论: “Jeff Dean的僚机。” [Ahmet Alp Balkan][64] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_jeff_dean-620x465-100502866-orig.jpg) - -图片来源: [Google][65] - -### Jeff Dean ### - -**成就: 谷歌索引搜索背后的大脑** - -生平: 协助设计和实现了[许多谷歌大型分布式系统的功能][66],包括网页爬虫,索引搜索,AdSense,MapReduce,BigTable和Spanner。2009年入选[国家工程院][67]。2012年荣获ACM [SIGOPS马克·维瑟奖][68]及[ACM-Infosys基金计算机科学奖][69]。 - -评论: “... 带来的在数据挖掘(GFS、MapReduce、BigTable)上的突破。” [Natu Lauchande][70] - -“... 
设计、构建并部署MapReduce和BigTable,和以及数不清的东西” [Erik Goldman][71] - -![](http://images.techhive.com/images/article/2015/09/linus_torvalds-620x465-100611765-orig.jpg) - -图片来源: [Krd CC BY-SA 4.0][72] - -### 林纳斯·托瓦兹 ### - -**成就: Linux缔造者** - -生平: 创造了[Linux内核][73]与[开源版本控制器Git][74]。收获了许多奖项和荣誉,包括有1998年的[EFF先驱奖][75],2000年荣获[英国电脑学会授予的洛夫莱斯勋章][76],2012年荣获[千禧技术奖][77]还有2014年[IEEE计算机学会授予的计算机先驱奖][78]。同样入选了2008年的[计算机历史博物馆名人录][79]与2012年的[网络名人堂][80]。 - -评论: “他只用了几年的时间就写出了Linux内核,而GNU Hurd(GNU开发的内核)历经25年的开发却丝毫没有准备发布的意思。他的成就就是带来了希望。” [Erich Ficker][81] - -“托沃兹可能是程序员的程序员。” [Dan Allen][82] - -“他真的很棒。” [Alok Tripathy][83] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_john_carmack-620x465-100502867-orig.jpg) - -图片来源: [QuakeCon CC BY 2.0][84] - -### 约翰·卡马克 ### - -**成就: 毁灭战士缔造者** - -生平: ID社联合创始人,打造了德军总部3D、毁灭战士和雷神之锤等所谓的即使FPS游戏。引领了[切片适配更新(adaptive tile refresh)][86], [二叉空间分割(binary space partitioning)][87],表面缓存(surface caching)等开创性的计算机图像技术。2001年入选[互动艺术与科学学会名人堂][88],2007年和2008年荣获工程技术类[艾美奖][89]并于2010年由[游戏开发者甄选奖][90]授予终生成就奖。 - -评论: “他在写第一个渲染引擎的时候不到20岁。这家伙这是个天才。我若有他四分之一的天赋便心满意足了。” [Alex Dolinsky][91] - -“... 德军总部3D,、毁灭战士还有雷神之锤在那时都是革命性的,影响了一代游戏设计师。” [dniblock][92] - -“一个周末他几乎可以写出任何东西....” [Greg Naughton][93] - -“他是编程界的莫扎特... 
[Chris Morris][94] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_fabrice_bellard-620x465-100502870-orig.jpg) - -图片来源: [Duff][95] - -### 法布里斯·贝拉 ### - -**成就: 创造了QEMU** - -生平: 创造了[一系列耳熟能详的开源软件][96],其中包括硬件模拟和虚拟化的平台QEMU,用于处理多媒体数据的FFmpeg,微型C编译器和 一个可执行文件压缩软件LZEXE。2000年和2001年[C语言混乱代码大赛的获胜者][97]并在2011年荣获[Google-O'Reilly开源奖][98]。[计算Pi最多位数][99]的前世界纪录保持着。 - -评论: “我觉得法布里斯·贝拉做的每一件事都是那么显著而又震撼。” [raphinou][100] - -“法布里斯·贝拉是世界上最高产的程序员...” [Pavan Yara][101] - -“他就像软件工程界的尼古拉·特斯拉。” [Michael Valladolid][102] - -“自80年代以来,他一直高产出一些列的成功作品。” [Michael Biggins][103] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_jon_skeet-620x465-100502863-orig.jpg) - -图片来源: [Craig Murphy CC BY 2.0][104] - -### Jon Skeet ### - -**成就: Stack Overflow传说级贡献者** - -生平: Google工程师[深入解析C#][105]的作者。保持着[有史以来在Stack Overflow上最高的声誉][106],平均每月解答390个问题。 - -评论: “他根本不需要调试器,只要他盯一下代码,错误之处自会原形毕露。” [Steven A. Lowe][107] - -“如果他的代码没有通过编译,那编译器应该道歉。” [Dan Dyer][108] - -“他根本不需要什么编程规范,他的代码就是编程规范。” [Anonymous][109] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_image_adam_dangelo-620x465-100502875-orig.jpg) - -图片来源: [Philip Neustrom CC BY 2.0][110] - -### 亚当·安捷罗 ### - -**成就: Quora的创办人之一** - -生平: 还是Facebook工程师时,[为其搭建了news feed功能的基础][111]。直至其离开并联合创始了Quora,已经成为了Facebook的CTO和工程VP。2001年以高中生的身份在[美国计算机奥林匹克上第八位完成比赛][112]。2004年ACM国际大学生编程大赛[获得银牌的团队 - 加利福尼亚技术研究所][113]的成员。2005年入围Topcoder大学生[算法编程挑战赛][114]。 - -评论: “一位程序设计全才。” [Anonymous][115] - -"我做的每个好东西,他都已有了六个。" [Mark Zuckerberg][116] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_petr_mitrichev-620x465-100502869-orig.jpg) - -图片来源: [Facebook][117] - -### Petr Mitrechev ### - -**成就: 有史以来最具竞技能力的程序员之一** - -生平: 在国际信息学奥林匹克中[两次获得金牌][118](2000,2002)。在2006,[赢得Google Code Jam][119]同时也是[TopCoder Open算法大赛冠军][120]。也同样,两次赢得Facebook黑客杯([2011][121],[2013][122])。写这篇文章的时候,[TopCoder榜中排第二][123] (即:Petr)、在[Codeforces榜同样排第二][124]。 - -评论: “他是竞技程序员的偶像,即使在印度也是如此...[Kavish 
Dwivedi][125] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_gennady_korot-620x465-100502864-orig.jpg) - -图片来源: [Ishandutta2007 CC BY-SA 3.0][126] - -### Gennady Korotkevich ### - -**成就: 竞技编程小神童** - -生平: 国际信息学奥林匹克中最年轻的参赛者(11岁),并[6次获得金牌][127] (2007-2012)。2013年ACM国际大学生编程大赛[获胜队伍][128]成员及[2014 Facebook黑客杯][129]获胜者。写这篇文章的时候,[Codeforces榜排名第一][130](ID:Tourist)、[TopCoder榜第一][131]。 - -评论: “一个编程神童!” [Prateek Joshi][132] - -“Gennady真是棒,也是我为什么在白俄罗斯拥有一支强大开发团队的例证。” [Chris Howard][133] - -“Tourist真是天才。” [Nuka Shrinivas Rao][134] - -------------------------------------------------------------------------------- - -via: http://www.itworld.com/article/2823547/enterprise-software/158256-superclass-14-of-the-world-s-best-living-programmers.html#slide1 - -作者:[Phil Johnson][a] -译者:[martin2011qi](https://github.com/martin2011qi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.itworld.com/author/Phil-Johnson/ -[1]:https://www.flickr.com/photos/tombullock/15713223772 -[2]:https://commons.wikimedia.org/wiki/File:Margaret_Hamilton_in_action.jpg -[3]:http://klabs.org/home_page/hamilton.htm -[4]:https://www.youtube.com/watch?v=DWcITjqZtpU&feature=youtu.be&t=3m12s -[5]:http://www.htius.com/Articles/r12ham.pdf -[6]:http://www.htius.com/Articles/Inside_DBTF.htm -[7]:http://www.nasa.gov/home/hqnews/2003/sep/HQ_03281_Hamilton_Honor.html -[8]:http://www.nasa.gov/50th/50th_magazine/scientists.html -[9]:https://books.google.com/books?id=JcmV0wfQEoYC&pg=PA321&lpg=PA321&dq=ada+lovelace+award+1986&source=bl&ots=qGdBKsUa3G&sig=bkTftPAhM1vZ_3VgPcv-38ggSNo&hl=en&sa=X&ved=0CDkQ6AEwBGoVChMI3paoxJHWxwIVA3I-Ch1whwPn#v=onepage&q=ada%20lovelace%20award%201986&f=false -[10]:http://history.nasa.gov/alsj/a11/a11Hamilton.html -[11]:https://www.reddit.com/r/pics/comments/2oyd1y/margaret_hamilton_with_her_code_lead_software/cmrswof -[12]:http://qr.ae/RFEZLk -[13]:http://qr.ae/RFEZUn 
-[14]:https://www.reddit.com/r/pics/comments/2oyd1y/margaret_hamilton_with_her_code_lead_software/cmrv9u9 -[15]:https://www.flickr.com/photos/44451574@N00/5347112697 -[16]:http://cs.stanford.edu/~uno/taocp.html -[17]:http://awards.acm.org/award_winners/knuth_1013846.cfm -[18]:http://amturing.acm.org/award_winners/knuth_1013846.cfm -[19]:http://www.nsf.gov/od/nms/recip_details.jsp?recip_id=198 -[20]:http://www.ieee.org/documents/von_neumann_rl.pdf -[21]:http://www.computerhistory.org/fellowawards/hall/bios/Donald,Knuth/ -[22]:http://www.quora.com/Who-are-the-best-programmers-in-Silicon-Valley-and-why/answers/3063 -[23]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Jaap-Weel -[24]:http://qr.ae/RFE94x -[25]:http://amturing.acm.org/photo/thompson_4588371.cfm -[26]:https://www.youtube.com/watch?v=JoVQTPbD6UY -[27]:https://www.bell-labs.com/usr/dmr/www/bintro.html -[28]:http://doc.cat-v.org/bell_labs/utf-8_history -[29]:http://c2.com/cgi/wiki?EdIsTheStandardTextEditor -[30]:http://amturing.acm.org/award_winners/thompson_4588371.cfm -[31]:http://www.computer.org/portal/web/awards/cp-thompson -[32]:http://www.uspto.gov/about/nmti/recipients/1998.jsp -[33]:http://www.computerhistory.org/fellowawards/hall/bios/Ken,Thompson/ -[34]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Pete-Prokopowicz-1 -[35]:http://qr.ae/RFEWBY -[36]:https://groups.google.com/forum/#!msg/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J -[37]:http://www.emacswiki.org/emacs/RichardStallman -[38]:https://www.gnu.org/gnu/thegnuproject.html -[39]:http://www.emacswiki.org/emacs/FreeSoftwareFoundation -[40]:http://awards.acm.org/award_winners/stallman_9380313.cfm -[41]:https://w2.eff.org/awards/pioneer/1998.php -[42]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Greg-Naughton/comment/4146397 -[43]:http://qr.ae/RFEaib 
-[44]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Marko-Poutiainen -[45]:http://qr.ae/RFEUqp -[46]:https://www.flickr.com/photos/begley/2979906130 -[47]:http://www.taoyue.com/tutorials/pascal/history.html -[48]:http://c2.com/cgi/wiki?AndersHejlsberg -[49]:http://www.microsoft.com/about/technicalrecognition/anders-hejlsberg.aspx -[50]:http://www.drdobbs.com/windows/dr-dobbs-excellence-in-programming-award/184404602 -[51]:http://qr.ae/RFEZrv -[52]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Stefan-Kiryazov -[53]:https://www.flickr.com/photos/vonguard/4076389963/ -[54]:http://www.wizards-of-os.org/archiv/sprecher/a_c/doug_cutting.html -[55]:http://hadoop.apache.org/ -[56]:https://www.linkedin.com/in/cutting -[57]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Shalin-Shekhar-Mangar/comment/2293071 -[58]:http://www.quora.com/Who-are-the-best-programmers-in-Silicon-Valley-and-why/answer/Amit-Nithianandan -[59]:http://awards.acm.org/award_winners/ghemawat_1482280.cfm -[60]:http://research.google.com/pubs/SanjayGhemawat.html -[61]:http://www.quora.com/Google/Who-is-Sanjay-Ghemawat -[62]:http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=02062009 -[63]:http://awards.acm.org/award_winners/ghemawat_1482280.cfm -[64]:http://www.quora.com/Google/Who-is-Sanjay-Ghemawat/answer/Ahmet-Alp-Balkan -[65]:http://research.google.com/people/jeff/index.html -[66]:http://research.google.com/people/jeff/index.html -[67]:http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=02062009 -[68]:http://news.cs.washington.edu/2012/10/10/uw-cse-ph-d-alum-jeff-dean-wins-2012-sigops-mark-weiser-award/ -[69]:http://awards.acm.org/award_winners/dean_2879385.cfm 
-[70]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Natu-Lauchande -[71]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Cosmin-Negruseri/comment/28399 -[72]:https://commons.wikimedia.org/wiki/File:LinuxCon_Europe_Linus_Torvalds_05.jpg -[73]:http://www.linuxfoundation.org/about/staff#torvalds -[74]:http://git-scm.com/book/en/Getting-Started-A-Short-History-of-Git -[75]:https://w2.eff.org/awards/pioneer/1998.php -[76]:http://www.bcs.org/content/ConWebDoc/14769 -[77]:http://www.zdnet.com/blog/open-source/linus-torvalds-wins-the-tech-equivalent-of-a-nobel-prize-the-millennium-technology-prize/10789 -[78]:http://www.computer.org/portal/web/pressroom/Linus-Torvalds-Named-Recipient-of-the-2014-IEEE-Computer-Society-Computer-Pioneer-Award -[79]:http://www.computerhistory.org/fellowawards/hall/bios/Linus,Torvalds/ -[80]:http://www.internethalloffame.org/inductees/linus-torvalds -[81]:http://qr.ae/RFEeeo -[82]:http://qr.ae/RFEZLk -[83]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Alok-Tripathy-1 -[84]:https://www.flickr.com/photos/quakecon/9434713998 -[85]:http://doom.wikia.com/wiki/John_Carmack -[86]:http://thegamershub.net/2012/04/gaming-gods-john-carmack/ -[87]:http://www.shamusyoung.com/twentysidedtale/?p=4759 -[88]:http://www.interactive.org/special_awards/details.asp?idSpecialAwards=6 -[89]:http://www.itworld.com/article/2951105/it-management/a-fly-named-for-bill-gates-and-9-other-unusual-honors-for-tech-s-elite.html#slide8 -[90]:http://www.gamechoiceawards.com/archive/lifetime.html -[91]:http://qr.ae/RFEEgr -[92]:http://www.itworld.com/answers/topic/software/question/whos-best-living-programmer#comment-424562 -[93]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Greg-Naughton 
-[94]:http://money.cnn.com/2003/08/21/commentary/game_over/column_gaming/ -[95]:http://dufoli.wordpress.com/2007/06/23/ammmmaaaazing-night/ -[96]:http://bellard.org/ -[97]:http://www.ioccc.org/winners.html#B -[98]:http://www.oscon.com/oscon2011/public/schedule/detail/21161 -[99]:http://bellard.org/pi/pi2700e9/ -[100]:https://news.ycombinator.com/item?id=7850797 -[101]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Erik-Frey/comment/1718701 -[102]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Erik-Frey/comment/2454450 -[103]:http://qr.ae/RFEjhZ -[104]:https://www.flickr.com/photos/craigmurphy/4325516497 -[105]:http://www.amazon.co.uk/gp/product/1935182471?ie=UTF8&tag=developetutor-21&linkCode=as2&camp=1634&creative=19450&creativeASIN=1935182471 -[106]:http://stackexchange.com/leagues/1/alltime/stackoverflow -[107]:http://meta.stackexchange.com/a/9156 -[108]:http://meta.stackexchange.com/a/9138 -[109]:http://meta.stackexchange.com/a/9182 -[110]:https://www.flickr.com/photos/philipn/5326344032 -[111]:http://www.crunchbase.com/person/adam-d-angelo -[112]:http://www.exeter.edu/documents/Exeter_Bulletin/fall_01/oncampus.html -[113]:http://icpc.baylor.edu/community/results-2004 -[114]:https://www.topcoder.com/tc?module=Static&d1=pressroom&d2=pr_022205 -[115]:http://qr.ae/RFfOfe -[116]:http://www.businessinsider.com/in-new-alleged-ims-mark-zuckerberg-talks-about-adam-dangelo-2012-9#ixzz369FcQoLB -[117]:https://www.facebook.com/hackercup/photos/a.329665040399024.91563.133954286636768/553381194694073/?type=1 -[118]:http://stats.ioinformatics.org/people/1849 -[119]:http://googlepress.blogspot.com/2006/10/google-announces-winner-of-global-code_27.html -[120]:http://community.topcoder.com/tc?module=SimpleStats&c=coder_achievements&d1=statistics&d2=coderAchievements&cr=10574855 
-[121]:https://www.facebook.com/notes/facebook-hacker-cup/facebook-hacker-cup-finals/208549245827651 -[122]:https://www.facebook.com/hackercup/photos/a.329665040399024.91563.133954286636768/553381194694073/?type=1 -[123]:http://community.topcoder.com/tc?module=AlgoRank -[124]:http://codeforces.com/ratings -[125]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Venkateswaran-Vicky/comment/1960855 -[126]:http://commons.wikimedia.org/wiki/File:Gennady_Korot.jpg -[127]:http://stats.ioinformatics.org/people/804 -[128]:http://icpc.baylor.edu/regionals/finder/world-finals-2013/standings -[129]:https://www.facebook.com/hackercup/posts/10152022955628845 -[130]:http://codeforces.com/ratings -[131]:http://community.topcoder.com/tc?module=AlgoRank -[132]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi -[133]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4720779 -[134]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4880549 From 66916e81bc287c57b520847a9873100e2037bffd Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 24 Nov 2015 11:35:31 +0800 Subject: [PATCH 018/160] =?UTF-8?q?20151124-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ew--5 memory debuggers for Linux coding.md | 284 ++++++++++++++++++ 1 file changed, 284 insertions(+) create mode 100644 sources/talk/20151124 Review--5 memory debuggers for Linux coding.md diff --git a/sources/talk/20151124 Review--5 memory debuggers for Linux coding.md b/sources/talk/20151124 Review--5 memory debuggers for Linux coding.md new file mode 100644 index 0000000000..db465e47cd --- /dev/null +++ b/sources/talk/20151124 Review--5 memory debuggers for Linux coding.md @@ -0,0 +1,284 @@ +Review: 5 
memory debuggers for Linux coding +================================================================================ +![](http://images.techhive.com/images/article/2015/11/penguinadmin-2400px-100627186-primary.idge.jpg) +Credit: [Moini][1] + +As a programmer, I'm aware that I tend to make mistakes -- and why not? Even programmers are human. Some errors are detected during code compilation, while others get caught during software testing. However, a category of error exists that usually does not get detected at either of these stages and that may cause the software to behave unexpectedly -- or worse, terminate prematurely. + +If you haven't already guessed it, I am talking about memory-related errors. These errors are not only difficult to find and correct -- debugging them manually is also time-consuming. Also, it's worth mentioning that these errors are surprisingly common, especially in software written in programming languages like C and C++, which were designed for use with [manual memory management][2]. + +Thankfully, several programming tools exist that can help you find memory errors in your software programs. In this roundup, I assess five popular, free and open-source memory debuggers that are available for Linux: Dmalloc, Electric Fence, Memcheck, Memwatch and Mtrace. I've used all five in my day-to-day programming, and so these reviews are based on practical experience. + +### [Dmalloc][3] ### + +**Developer**: Gray Watson +**Reviewed version**: 5.5.2 +**Linux support**: All flavors +**License**: Creative Commons Attribution-Share Alike 3.0 License + +Dmalloc is a memory-debugging tool developed by Gray Watson. It is implemented as a library that provides wrappers around standard memory management functions like **malloc(), calloc(), free()** and more, enabling programmers to detect problematic code.
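To make the wrapper idea concrete, here is a minimal sketch of how such a library can interpose on malloc() and free() to record the file and line of every call. This is illustrative only -- the debug_malloc()/debug_free() names and the allocation counter are invented for this example, not Dmalloc's actual implementation:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical wrapper sketch -- not Dmalloc's real code. */
static size_t live_allocations = 0;

void *debug_malloc(size_t size, const char *file, int line)
{
    void *p = malloc(size);
    if (p != NULL)
        live_allocations++;          /* one more block to account for */
    fprintf(stderr, "malloc(%zu) at %s:%d -> %p\n", size, file, line, p);
    return p;
}

void debug_free(void *p, const char *file, int line)
{
    if (p != NULL && live_allocations > 0)
        live_allocations--;          /* block returned */
    fprintf(stderr, "free(%p) at %s:%d\n", p, file, line);
    free(p);
}

/* Route every plain malloc()/free() in user code through the wrappers. */
#define malloc(s) debug_malloc((s), __FILE__, __LINE__)
#define free(p)   debug_free((p), __FILE__, __LINE__)

/* A nonzero value here at exit means something was never freed. */
size_t leak_count(void) { return live_allocations; }
```

A nonzero leak_count() at exit reveals leaked blocks, and the logged file/line pairs say where they were allocated -- essentially the service Dmalloc's wrappers provide, with far more sophistication.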
+ +![cw dmalloc output](http://images.techhive.com/images/article/2015/11/cw_dmalloc-output-100627040-large.idge.png) +Dmalloc + +As listed on the tool's Web page, the debugging features it provides include memory-leak tracking, [double free][4] error tracking and [fence-post write detection][5]. Other features include file/line number reporting and general logging of statistics. + +#### What's new #### + +Version 5.5.2 is primarily a [bug-fix release][6] containing corrections for a couple of build and install problems. + +#### What's good about it #### + +The best part about Dmalloc is that it's extremely configurable. For example, you can configure it to include support for C++ programs as well as threaded applications. A useful feature it provides is runtime configurability, which means that you can easily enable or disable the tool's features while your program is executing. + +You can also use Dmalloc with the [GNU Project Debugger (GDB)][7] -- just add the contents of the dmalloc.gdb file (located in the contrib subdirectory in Dmalloc's source package) to the .gdbinit file in your home directory. + +Another thing that I really like about Dmalloc is its extensive documentation. Just head to the [documentation section][8] on its official website, and you'll get everything from how to download, install, run and use the library to detailed descriptions of the features it provides and an explanation of the output file it produces. There's also a section containing solutions to some common problems. + +#### Other considerations #### + +Like Mtrace, Dmalloc requires programmers to make changes to their program's source code. In this case you may, at the very least, want to add the **dmalloc.h** header, because it allows the tool to report the file/line numbers of calls that generate problems, something that is very useful as it saves time while debugging.
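As a sketch of what that integration looks like -- the duplicate() helper and file name are made up for illustration, and the build line assumes the Dmalloc library is installed -- the header is included last and only when a DMALLOC macro is defined, so the same file also builds without the tool:

```c
/* leaky.c -- build normally: gcc -g leaky.c
 * build with Dmalloc:        gcc -g -DDMALLOC leaky.c -ldmalloc */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Per the Dmalloc docs, dmalloc.h goes after all other includes. */
#ifdef DMALLOC
#include "dmalloc.h"
#endif

char *duplicate(const char *s)
{
    char *copy = malloc(strlen(s) + 1);
    if (copy != NULL)
        strcpy(copy, s);
    return copy;                      /* caller below never frees it */
}

int run_demo(void)
{
    char *p = duplicate("hello");     /* Dmalloc would log this unfreed block */
    return p != NULL && strcmp(p, "hello") == 0;
}
```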
+ +In addition, the Dmalloc library, which is produced after the package is compiled, needs to be linked with your program while the program is being compiled. + +However, complicating things somewhat is the fact that you also need to set an environment variable, dubbed **DMALLOC_OPTIONS**, that the debugging tool uses to configure the memory debugging features -- as well as the location of the output file -- at runtime. While you can manually assign a value to the environment variable, beginners may find that process a bit tough, given that the Dmalloc features you want to enable are listed as part of that value, and are actually represented as a sum of their respective hexadecimal values -- you can read more about it [here][9]. + +An easier way to set the environment variable is to use the [Dmalloc Utility Program][10], which was designed for just that purpose. + +#### Bottom line #### + +Dmalloc's real strength lies in the configurability options it provides. It is also highly portable, having been successfully ported to many OSes, including AIX, BSD/OS, DG/UX, Free/Net/OpenBSD, GNU/Hurd, HPUX, Irix, Linux, MS-DOG, NeXT, OSF, SCO, Solaris, SunOS, Ultrix, Unixware and even Unicos (on a Cray T3E). Although the tool has a bit of a learning curve associated with it, the features it provides are worth it. + +### [Electric Fence][15] ### + +**Developer**: Bruce Perens +**Reviewed version**: 2.2.3 +**Linux support**: All flavors +**License**: GNU GPL (version 2) + +Electric Fence is a memory-debugging tool developed by Bruce Perens. It is implemented in the form of a library that your program needs to link to, and is capable of detecting overruns of memory allocated on a [heap][11] as well as accesses to memory that has already been released.
+ +![cw electric fence output](http://images.techhive.com/images/article/2015/11/cw_electric-fence-output-100627041-large.idge.png) +Electric Fence + +As the name suggests, Electric Fence creates a virtual fence around each allocated buffer in a way that any illegal memory access results in a [segmentation fault][12]. The tool supports both C and C++ programs. + +#### What's new #### + +Version 2.2.3 contains a fix for the tool's build system, allowing it to actually pass the -fno-builtin-malloc option to the [GNU Compiler Collection (GCC)][13]. + +#### What's good about it #### + +The first thing that I liked about Electric Fence is that -- unlike Memwatch, Dmalloc and Mtrace -- it doesn't require you to make any changes in the source code of your program. You just need to link your program with the tool's library during compilation. + +Secondly, the way the debugging tool is implemented makes sure that a segmentation fault is generated on the very first instruction that causes a bounds violation, which is always better than having the problem detected at a later stage. + +Electric Fence always produces a copyright message in output irrespective of whether an error was detected or not. This behavior is quite useful, as it also acts as a confirmation that you are actually running an Electric Fence-enabled version of your program. + +#### Other considerations #### + +On the other hand, what I really miss in Electric Fence is the ability to detect memory leaks, as it is one of the most common and potentially serious problems that software written in C/C++ has. In addition, the tool cannot detect overruns of memory allocated on the stack, and is not thread-safe. + +Given that the tool allocates an inaccessible virtual memory page both before and after a user-allocated memory buffer, it ends up consuming a lot of extra memory if your program makes too many dynamic memory allocations. 
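To see what the fence buys you, consider a sketch like the following (the write_bytes() function is invented for this example, and the buffer size is kept a multiple of the word size so the overrun is detectable). Built normally it runs quietly; linked with the library -- gcc -g overrun.c -lefence -- the out-of-bounds store lands on the guard page and faults on the spot:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Fills an n-byte heap buffer; if overrun is nonzero, also writes one
 * byte past the end. Under Electric Fence that extra store hits the
 * inaccessible guard page and triggers an immediate segmentation fault. */
int write_bytes(size_t n, int overrun)
{
    char *buf = malloc(n);
    if (buf == NULL)
        return -1;
    memset(buf, 'x', n);   /* in bounds: fine with or without the fence */
    if (overrun)
        buf[n] = 'x';      /* one past the end: the bug the fence catches */
    free(buf);
    return 0;
}
```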
+ +Another limitation of the tool is that it cannot explicitly tell you exactly where the problem lies in your program's code -- all it does is produce a segmentation fault whenever it detects a memory-related error. To find out the exact line number, you'll have to debug your Electric Fence-enabled program with a tool like [the GNU Project Debugger (GDB)][14], which in turn depends on the -g compiler option to produce line numbers in its output. + +Finally, although Electric Fence is capable of detecting most buffer overruns, an exception is the scenario where the allocated buffer size is not a multiple of the word size of the system -- in that case, an overrun (even if it's only a few bytes) won't be detected. + +#### Bottom line #### + +Despite all its limitations, where Electric Fence scores is its ease of use -- just link your program with the tool once, and it'll alert you every time it detects a memory issue it's capable of detecting. However, as already mentioned, the tool requires you to use a source-code debugger like GDB. + +### [Memcheck][16] ### + +**Developer**: [Valgrind Developers][17] +**Reviewed version**: 3.10.1 +**Linux support**: All flavors +**License**: GPL + +[Valgrind][18] is a suite that provides several tools for debugging and profiling Linux programs. Although it works with programs written in many different languages -- such as Java, Perl, Python, Assembly code, Fortran, Ada and more -- the tools it provides are largely aimed at programs written in C and C++. + +The most popular Valgrind tool is Memcheck, a memory-error detector that can detect issues such as memory leaks, invalid memory access, uses of undefined values and problems related to allocation and deallocation of heap memory. + +#### What's new #### + +This [release][19] of the suite (3.10.1) is a minor one that primarily contains fixes to bugs reported in version 3.10.0.
In addition, it also "backports fixes for all reported missing AArch64 ARMv8 instructions and syscalls from the trunk." + +#### What's good about it #### + +Memcheck, like all other Valgrind tools, is basically a command line utility. It's very easy to use: If you normally run your program on the command line in a form such as prog arg1 arg2, you just need to add a few values, like this: valgrind --leak-check=full prog arg1 arg2. + +![cw memcheck output](http://images.techhive.com/images/article/2015/11/cw_memcheck-output-100627037-large.idge.png) +Memcheck + +(Note: You don't need to mention Memcheck anywhere in the command line because it's the default Valgrind tool. However, you do need to initially compile your program with the -g option -- which adds debugging information -- so that Memcheck's error messages include exact line numbers.) + +What I really like about Memcheck is that it provides a lot of command line options (such as the --leak-check option mentioned above), allowing you to not only control how the tool works but also how it produces the output. + +For example, you can enable the --track-origins option to see information on the sources of uninitialized data in your program. Enabling the --show-mismatched-frees option will let Memcheck match the memory allocation and deallocation techniques. For code written in C language, Memcheck will make sure that only the free() function is used to deallocate memory allocated by malloc(), while for code written in C++, the tool will check whether or not the delete and delete[] operators are used to deallocate memory allocated by new and new[], respectively. If a mismatch is detected, an error is reported. + +But the best part, especially for beginners, is that the tool even produces suggestions about which command line option the user should use to make the output more meaningful. 
For example, if you do not use the basic --leak-check option, it will produce an output suggesting: "Rerun with --leak-check=full to see details of leaked memory." And if there are uninitialized variables in the program, the tool will generate a message that says, "Use --track-origins=yes to see where uninitialized values come from." + +Another useful feature of Memcheck is that it lets you [create suppression files][20], allowing you to suppress certain errors that you can't fix at the moment -- this way you won't be reminded of them every time the tool is run. It's worth mentioning that there already exists a default suppression file that Memcheck reads to suppress errors in the system libraries, such as the C library, that come pre-installed with your OS. You can either create a new suppression file for your use, or edit the existing one (usually /usr/lib/valgrind/default.supp). + +For those seeking advanced functionality, it's worth knowing that Memcheck can also [detect memory errors][21] in programs that use [custom memory allocators][22]. In addition, it also provides [monitor commands][23] that can be used while working with Valgrind's built-in gdbserver, as well as a [client request mechanism][24] that allows you not only to tell the tool facts about the behavior of your program, but make queries as well. + +#### Other considerations #### + +While there's no denying that Memcheck can save you a lot of debugging time and frustration, the tool uses a lot of memory, and so can make your program execution significantly slower (around 20 to 30 times, [according to the documentation][25]). + +Aside from this, there are some other limitations, too. According to some user comments, Memcheck apparently isn't [thread-safe][26]; it doesn't detect [static buffer overruns][27]. Also, there are some Linux programs, like [GNU Emacs][28], that currently do not work with Memcheck.
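As a concrete target for the --leak-check=full run described above, a leak as small as this sketch (the lose_block() name is made up for illustration) is enough for Memcheck to report a "definitely lost" block, complete with the allocating line number when the program was compiled with -g:

```c
#include <assert.h>
#include <stdlib.h>

/* Allocates a block and then drops the only pointer to it -- the classic
 * leak that Memcheck flags as "definitely lost". */
int lose_block(void)
{
    int *data = malloc(100 * sizeof *data);
    if (data == NULL)
        return -1;
    data[0] = 42;          /* touch the block so it isn't optimized away */
    data = NULL;           /* last reference gone: the block is leaked */
    return 0;
}
```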
+ +If you're interested in taking a look, an exhaustive list of Valgrind's limitations can be found [here][29]. + +#### Bottom line #### + +Memcheck is a handy memory-debugging tool for both beginners and those looking for advanced features. While it's very easy to use if all you need is basic debugging and error checking, there's a bit of a learning curve if you want to use features like suppression files or monitor commands. + +Although it has a long list of limitations, Valgrind (and hence Memcheck) claims on its site that it is used by [thousands of programmers][30] across the world -- the team behind the tool says it's received feedback from users in over 30 countries, with some of them working on projects with up to a whopping 25 million lines of code. + +### [Memwatch][31] ### + +**Developer**: Johan Lindh +**Reviewed version**: 2.71 +**Linux support**: All flavors +**License**: GNU GPL + +Memwatch is a memory-debugging tool developed by Johan Lindh. Although it's primarily a memory-leak detector, it is also capable (according to its Web page) of detecting other memory-related issues like [double-free error tracking and erroneous frees][32], buffer overflow and underflow, [wild pointer][33] writes, and more. + +The tool works with programs written in C. Although you can also use it with C++ programs, it's not recommended (according to the Q&A file that comes with the tool's source package). + +#### What's new #### + +This version adds ULONG_LONG_MAX to detect whether a program is 32-bit or 64-bit. + +#### What's good about it #### + +Like Dmalloc, Memwatch comes with good documentation. You can refer to the USING file if you want to learn things like how the tool works; how it performs initialization, cleanup and I/O operations; and more. Then there is a FAQ file that is aimed at helping users in case they face any common error while using Memwatch. Finally, there is a test.c file that contains a working example of the tool for your reference.
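The basic wiring follows the same guarded-include pattern as the other link-in tools; a hypothetical sketch (the forget_to_free() function is invented for this example, and the MEMWATCH/MW_STDIO defines and build line follow the Memwatch documentation):

```c
/* prog.c -- build with Memwatch:
 *   gcc -g -DMEMWATCH -DMW_STDIO prog.c memwatch.c
 * Without the two defines, it builds and runs as plain C. */
#include <assert.h>
#include <stdlib.h>

#ifdef MEMWATCH
#include "memwatch.h"   /* remaps malloc/free to Memwatch's tracked versions */
#endif

int forget_to_free(void)
{
    void *p = malloc(128);   /* with MEMWATCH defined, this unfreed block
                              * is reported in memwatch.log */
    return p != NULL;
}
```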
+ +![cw memwatch output](http://images.techhive.com/images/article/2015/11/cw_memwatch_output-100627038-large.idge.png) +Memwatch + +Unlike Mtrace, the log file to which Memwatch writes the output (usually memwatch.log) is in human-readable form. Also, instead of truncating, Memwatch appends the memory-debugging output to the file each time the tool is run, allowing you to easily refer to the previous outputs should the need arise. + +It's also worth mentioning that when you execute your program with Memwatch enabled, the tool produces a one-line output on [stdout][34] informing you that some errors were found -- you can then head to the log file for details. If no such error message is produced, you can rest assured that the log file won't contain any errors -- this actually saves time if you're running the tool several times. + +Another thing that I liked about Memwatch is that it also provides a way through which you can capture the tool's output from within the code, and handle it the way you like (refer to the mwSetOutFunc() function in the Memwatch source code for more on this). + +#### Other considerations #### + +Like Mtrace and Dmalloc, Memwatch requires you to add extra code to your source file -- you have to include the memwatch.h header file in your code. Also, while compiling your program, you need to either compile memwatch.c along with your program's source files or link in the object module produced by compiling that file, as well as define the MEMWATCH and MW_STDIO variables on the command line. Needless to say, the -g compiler option is also required for your program if you want exact line numbers in the output. + +There are some features that it lacks. For example, the tool cannot detect attempts to write to an address that has already been freed or read data from outside the allocated memory. Also, it's not thread-safe.
Finally, as I've already pointed out in the beginning, there is no guarantee on how the tool will behave if you use it with programs written in C++. + +#### Bottom line #### + +Memwatch can detect many memory-related problems, making it a handy debugging tool when dealing with projects written in C. Given that its source code is very small, you can learn how the tool works, debug it if the need arises, and even extend or update its functionality as per your requirements. + +### [Mtrace][35] ### + +**Developers**: Roland McGrath and Ulrich Drepper +**Reviewed version**: 2.21 +**Linux support**: All flavors +**License**: GNU LGPL + +Mtrace is a memory-debugging tool included in [the GNU C library][36]. It works with both C and C++ programs on Linux, and detects memory leaks caused by unbalanced calls to the malloc() and free() functions. + +![cw mtrace output](http://images.techhive.com/images/article/2015/11/cw_mtrace-output-100627039-large.idge.png) +Mtrace + +The tool is implemented in the form of a function called mtrace(), which traces all malloc/free calls made by a program and logs the information in a user-specified file. Because the file contains data in computer-readable format, a Perl script -- also named mtrace -- is used to convert and display it in human-readable form. + +#### What's new #### + +[The Mtrace source][37] and [the Perl file][38] that now come with the GNU C library (version 2.21) add nothing new to the tool aside from an update to the copyright dates. + +#### What's good about it #### + +The best part about Mtrace is that the learning curve for it isn't steep; all you need to understand is how and where to add the mtrace() -- and the corresponding muntrace() -- function in your code, and how to use the Mtrace Perl script. The latter is very straightforward -- all you have to do is run the mtrace command. (For an example, see the last command in the screenshot above.)
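In code, the hookup is just a pair of calls; a sketch (the trace_module() function is made up for this example, and the mcheck.h header is glibc-specific, hence the guard). Before running, point MALLOC_TRACE at a writable file, then decode the log with the mtrace script:

```c
/* export MALLOC_TRACE=/tmp/mtrace.log   # where the raw log goes
 * ./prog
 * mtrace ./prog /tmp/mtrace.log         # human-readable leak report */
#include <assert.h>
#include <stdlib.h>

#ifdef __GLIBC__
#include <mcheck.h>      /* declares mtrace() and muntrace() */
#endif

int trace_module(void)
{
#ifdef __GLIBC__
    mtrace();            /* start logging malloc/free to $MALLOC_TRACE */
#endif
    void *kept = malloc(32);   /* unbalanced: shows up as a leak in the log */
    void *freed = malloc(64);  /* balanced: not reported */
    free(freed);
#ifdef __GLIBC__
    muntrace();          /* stop tracing for this module */
#endif
    return kept != NULL;
}
```

If MALLOC_TRACE is unset, mtrace() simply does nothing, so the instrumented build is safe to run outside a debugging session.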
+ +Another thing that I like about Mtrace is that it's scalable -- which means that you can not only use it to debug a complete program, but can also use it to detect memory leaks in individual modules of the program. Just call the mtrace() and muntrace() functions within each module. + +Finally, since the tool is triggered when the mtrace() function -- which you add in your program's source code -- is executed, you have the flexibility to enable the tool dynamically (during program execution) [using signals][39]. + +#### Other considerations #### + +Because the calls to the mtrace() and muntrace() functions -- which are declared in the mcheck.h file that you need to include in your program's source -- are fundamental to Mtrace's operation (the muntrace() function is not [always required][40]), the tool requires programmers to make changes in their code at least once. + +Be aware that you need to compile your program with the -g option (provided by both the [GCC][41] and [G++][42] compilers), which enables the debugging tool to display exact line numbers in the output. In addition, some programs (depending on how big their source code is) can take a long time to compile. Finally, compiling with -g increases the size of the executable (because it produces extra information for debugging), so you have to remember that the program needs to be recompiled without -g after the testing has been completed. + +To use Mtrace, you need to have some basic knowledge of environment variables in Linux, given that the path to the user-specified file -- which the mtrace() function uses to log all the information -- has to be set as a value for the MALLOC_TRACE environment variable before the program is executed. + +Feature-wise, Mtrace is limited to detecting memory leaks and attempts to free up memory that was never allocated. It can't detect other memory-related issues such as illegal memory access or use of uninitialized memory. 
Also, [there have been complaints][43] that it's not [thread-safe][44]. + +### Conclusions ### + +Needless to say, each memory debugger that I've discussed here has its own qualities and limitations. So, which one is best suited for you mostly depends on what features you require, although ease of setup and use might also be a deciding factor in some cases. + +Mtrace is best suited for cases where you just want to catch memory leaks in your software program. It can save you some time, too, since the tool comes pre-installed on your Linux system, something which is also helpful in situations where the development machines aren't connected to the Internet or you aren't allowed to download a third party tool for any kind of debugging. + +Dmalloc, on the other hand, can not only detect more error types compared to Mtrace, but also provides more features, such as runtime configurability and GDB integration. Also, unlike any other tool discussed here, Dmalloc is thread-safe. Not to mention that it comes with detailed documentation, making it ideal for beginners. + +Although Memwatch comes with even more comprehensive documentation than Dmalloc, and can detect even more error types, you can only use it with software written in the C programming language. One of its features that stands out is that it lets you handle its output from within the code of your program, something that is helpful in case you want to customize the format of the output. + +If making changes to your program's source code is not what you want, you can use Electric Fence. However, keep in mind that it can only detect a couple of error types, and that doesn't include memory leaks. Plus, you also need to know GDB basics to make the most out of this memory-debugging tool. + +Memcheck is probably the most comprehensive of them all. 
It detects more error types and provides more features than any other tool discussed here -- and it doesn't require you to make any changes in your program's source code. But be aware that, while the learning curve is not very high for basic usage, if you want to use its advanced features, a level of expertise is definitely required.
+
+--------------------------------------------------------------------------------
+
+via: http://www.computerworld.com/article/3003957/linux/review-5-memory-debuggers-for-linux-coding.html
+
+作者:[Himanshu Arora][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.computerworld.com/author/Himanshu-Arora/
+[1]:https://openclipart.org/detail/132427/penguin-admin
+[2]:https://en.wikipedia.org/wiki/Manual_memory_management
+[3]:http://dmalloc.com/
+[4]:https://www.owasp.org/index.php/Double_Free
+[5]:https://stuff.mit.edu/afs/sipb/project/gnucash-test/src/dmalloc-4.8.2/dmalloc.html#Fence-Post%20Overruns
+[6]:http://dmalloc.com/releases/notes/dmalloc-5.5.2.html
+[7]:http://www.gnu.org/software/gdb/
+[8]:http://dmalloc.com/docs/
+[9]:http://dmalloc.com/docs/latest/online/dmalloc_26.html#SEC32
+[10]:http://dmalloc.com/docs/latest/online/dmalloc_23.html#SEC29
+[11]:https://en.wikipedia.org/wiki/Memory_management#Dynamic_memory_allocation
+[12]:https://en.wikipedia.org/wiki/Segmentation_fault
+[13]:https://en.wikipedia.org/wiki/GNU_Compiler_Collection
+[14]:http://www.gnu.org/software/gdb/
+[15]:https://launchpad.net/ubuntu/+source/electric-fence/2.2.3
+[16]:http://valgrind.org/docs/manual/mc-manual.html
+[17]:http://valgrind.org/info/developers.html
+[18]:http://valgrind.org/
+[19]:http://valgrind.org/docs/manual/dist.news.html
+[20]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.suppfiles
+[21]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.mempools
+[22]:http://stackoverflow.com/questions/4642671/c-memory-allocators +[23]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands +[24]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.clientreqs +[25]:http://valgrind.org/docs/manual/valgrind_manual.pdf +[26]:http://sourceforge.net/p/valgrind/mailman/message/30292453/ +[27]:https://msdn.microsoft.com/en-us/library/ee798431%28v=cs.20%29.aspx +[28]:http://www.computerworld.com/article/2484425/linux/5-free-linux-text-editors-for-programming-and-word-processing.html?nsdr=true&page=2 +[29]:http://valgrind.org/docs/manual/manual-core.html#manual-core.limits +[30]:http://valgrind.org/info/ +[31]:http://www.linkdata.se/sourcecode/memwatch/ +[32]:http://www.cecalc.ula.ve/documentacion/tutoriales/WorkshopDebugger/007-2579-007/sgi_html/ch09.html +[33]:http://c2.com/cgi/wiki?WildPointer +[34]:https://en.wikipedia.org/wiki/Standard_streams#Standard_output_.28stdout.29 +[35]:http://www.gnu.org/software/libc/manual/html_node/Tracing-malloc.html +[36]:https://www.gnu.org/software/libc/ +[37]:https://sourceware.org/git/?p=glibc.git;a=history;f=malloc/mtrace.c;h=df10128b872b4adc4086cf74e5d965c1c11d35d2;hb=HEAD +[38]:https://sourceware.org/git/?p=glibc.git;a=history;f=malloc/mtrace.pl;h=0737890510e9837f26ebee2ba36c9058affb0bf1;hb=HEAD +[39]:http://webcache.googleusercontent.com/search?q=cache:s6ywlLtkSqQJ:www.gnu.org/s/libc/manual/html_node/Tips-for-the-Memory-Debugger.html+&cd=1&hl=en&ct=clnk&gl=in&client=Ubuntu +[40]:http://www.gnu.org/software/libc/manual/html_node/Using-the-Memory-Debugger.html#Using-the-Memory-Debugger +[41]:http://linux.die.net/man/1/gcc +[42]:http://linux.die.net/man/1/g++ +[43]:https://sourceware.org/ml/libc-help/2014-05/msg00008.html +[44]:https://en.wikipedia.org/wiki/Thread_safety \ No newline at end of file From eefcb0e5e64ca2bcd4388324c4c8c9ef5ef255ab Mon Sep 17 00:00:00 2001 From: chenj zhang <1134386961@qq.com> Date: Tue, 24 Nov 2015 17:03:41 +0800 Subject: [PATCH 019/160] 
Update 20151123 LNAV--Ncurses based log file viewer.md --- sources/tech/20151123 LNAV--Ncurses based log file viewer.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/tech/20151123 LNAV--Ncurses based log file viewer.md b/sources/tech/20151123 LNAV--Ncurses based log file viewer.md index 0ceb06c252..08b7ee011a 100644 --- a/sources/tech/20151123 LNAV--Ncurses based log file viewer.md +++ b/sources/tech/20151123 LNAV--Ncurses based log file viewer.md @@ -75,9 +75,9 @@ If you want to view CUPS logs run the following command from your terminal via: http://www.ubuntugeek.com/lnav-ncurses-based-log-file-viewer.html 作者:[ruchi][a] -译者:[译者ID](https://github.com/译者ID) +译者:[zky001](https://github.com/zky001) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://www.ubuntugeek.com/author/ubuntufix \ No newline at end of file +[a]:http://www.ubuntugeek.com/author/ubuntufix From 80e6360096c5568ae19db1e00b4f78914bdafb82 Mon Sep 17 00:00:00 2001 From: ezio Date: Tue, 24 Nov 2015 22:40:47 +0800 Subject: [PATCH 020/160] tip 9 done --- ...51028 10 Tips for 10x Application Performance.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md index 62891f33d9..b27db20846 100644 --- a/sources/tech/20151028 10 Tips for 10x Application Performance.md +++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md @@ -181,22 +181,23 @@ Whatever web server you use, you need to tune it for web application performance - **Client keepalives**. Keepalive connections reduce overhead, especially when SSL/TLS is in use. 
For NGINX, you can increase the maximum number of *keepalive_requests* a client can make over a given connection from the default of 100, and you can increase the *keepalive_timeout* to allow the keepalive connection to stay open longer, resulting in faster subsequent requests.
- **客户端保活**。
- **Upstream keepalives**. Upstream connections – connections to application servers, database servers, and so on – benefit from keepalive connections as well. For upstream connections, you can increase *keepalive*, the number of idle keepalive connections that remain open for each worker process. This allows for increased connection reuse, cutting down on the need to open brand new connections. For more information about keepalives, refer to this [blog post][41].
-- **上游保活**。
+- **上游保活**。上游的连接——即连接到应用服务器、数据库服务器等机器的连接——同样也会受益于保活连接。对于上游连接来说,你可以增加 *keepalive* 的值,即每个工人进程保持打开的空闲保活连接的数量。这可以提高连接的复用率,减少重新打开全新连接的次数。更多关于保活连接的信息可以参见[这篇博客][41]。
- **Limits**. Limiting the resources that clients use can improve performance and security. For NGINX,the *limit_conn* and *limit_conn_zone* directives restrict the number of connections from a given source, while *limit_rate* constrains bandwidth. These settings can stop a legitimate user from “hogging” resources and also help prevent against attacks. The *limit_req* and *limit_req_zone* directives limit client requests. For connections to upstream servers, use the max_conns parameter to the server directive in an upstream configuration block. This limits connections to an upstream server, preventing overloading. The associated queue directive creates a queue that holds a specified number of requests for a specified length of time after the *max_conns* limit is reached.
-- **限制**。
+- **限制**。限制客户端使用的资源可以提高性能和安全性。对于NGINX 来说,指令*limit_conn* 和 *limit_conn_zone* 限制了每个源的连接数量,而*limit_rate* 限制了带宽。这些限制既可以阻止合法用户*攫取* 资源,也有助于防范攻击。指令*limit_req* 和 *limit_req_zone* 限制了客户端请求。对于连接到上游服务器的连接,可以在上游(upstream)配置块里给server 指令加上max_conns 参数,限制连接到上游服务器的连接数,避免服务器过载。相关的queue 指令会创建一个队列,在达到*max_conns* 限制之后,把指定数量的请求保存指定的时长。
- **Worker processes**. Worker processes are responsible for the processing of requests. NGINX employs an event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes. The recommendation is to set the value of *worker_processes* to one per CPU. The maximum number of worker_connections (512 by default) can safely be raised on most systems if needed; experiment to find the value that works best for your system.
-- **工人进程**。
+- **工人进程**。工人进程负责处理请求。NGINX 采用事件驱动模型和依赖操作系统的机制来有效地将请求分发给不同的工人进程。推荐的做法是把*worker_processes* 设置为每个CPU 一个。如果需要的话,大部分系统都可以安全地调高worker_connections 的最大值(默认512);你可以通过实验找到最适合你系统的值。
- **Socket sharding**. Typically, a single socket listener distributes new connections to all worker processes. Socket sharding creates a socket listener for each worker process, with the kernel assigning connections to socket listeners as they become available. This can reduce lock contention and improve performance on multicore systems. To enable [socket sharding][43], include the reuseport parameter on the listen directive.
-- **套接字分割**。
+- **套接字分割**。通常,一个套接字监听器会把新连接分配给所有工人进程。套接字分割则为每个工人进程创建一个套接字监听器,由内核在套接字监听器可用时把连接分配给它们。这可以减少锁竞争,并且提高多核系统上的性能。要启用[套接字分割][43],需要在listen 指令里加上reuseport 参数。
- **Thread pools**. Any computer process can be held up by a single, slow operation. For web server software, disk access can hold up many faster operations, such as calculating or copying information in memory. When a thread pool is used, the slow operation is assigned to a separate set of tasks, while the main processing loop keeps running faster operations. When the disk operation completes, the results go back into the main processing loop.
In NGINX, two operations – the read() system call and sendfile() – are offloaded to [thread pools][44].
-- **线程池**。
+- **线程池**。任何计算机进程都可能被一个单一的慢操作拖住。对于web 服务器软件来说,磁盘访问会拖累很多更快的操作,比如在内存中进行计算或复制。使用了线程池之后,慢操作会被分配给一个独立的任务集,而主处理循环继续运行更快的操作。当磁盘操作完成后,结果会返回给主处理循环。在NGINX 里有两个操作——read() 系统调用和sendfile()——被分配到了[线程池][44]。

![Thread pools help increase application performance by assigning a slow operation to a separate set of tasks](https://www.nginx.com/wp-content/uploads/2015/10/Graph-17.png)

**Tip**. When changing settings for any operating system or supporting service, change a single setting at a time, then test performance. If the change causes problems, or if it doesn’t make your site run faster, change it back.
-**技巧**。
+**技巧**。当改变任何操作系统或支持服务的设置时,一次只改变一个参数,然后测试性能。如果修改引起了问题,或者没有让你的网站变得更快,那就改回去。

See this [blog post][45] for more details on tuning NGINX.
+在[这篇博客][45]里可以看到更详细的NGINX 调优方法。

### Tip #10: 监视系统活动来解决问题和瓶颈 ###

From 6d222a5ade91ecfc291963e00e4f5c454c685f47 Mon Sep 17 00:00:00 2001
From: ezio
Date: Tue, 24 Nov 2015 23:03:09 +0800
Subject: [PATCH 021/160] tip 10 done

---
 .../20151028 10 Tips for 10x Application Performance.md | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md
index b27db20846..b8e88b3b5c 100644
--- a/sources/tech/20151028 10 Tips for 10x Application Performance.md
+++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md
@@ -202,19 +202,28 @@ See this [blog post][45] for more details on tuning NGINX.
### Tip #10: 监视系统活动来解决问题和瓶颈 ###

The key to a high-performance approach to application development and delivery is watching your application’s real-world performance closely and in real time. You must be able to monitor activity within specific devices and across your web infrastructure.
+高性能的应用开发与交付的关键,是实时且密切地监视你的应用在现实世界中的性能。你必须能够监控特定设备内以及整个web 基础设施上的活动。

Monitoring site activity is mostly passive – it tells you what’s going on, and leaves it to you to spot problems and fix them.
+监视网站活动大多是被动的——它只告诉你正在发生什么,而发现问题并解决它们是你自己的事。

Monitoring can catch several different kinds of issues. They include:
+监视可以发现几种不同的问题。它们包括:

- A server is down.
+- 服务器宕机。
- A server is limping, dropping connections.
+- 服务器勉强在运行,不断丢掉连接。
- A server is suffering from a high proportion of cache misses.
+- 服务器出现大量的缓存未命中。
- A server is not sending correct content.
+- 服务器没有发送正确的内容。

A global application performance monitoring tool like New Relic or Dynatrace helps you monitor page load time from remote locations, while NGINX helps you monitor the application delivery side. Application performance data tells you when your optimizations are making a real difference to your users, and when you need to consider adding capacity to your infrastructure to sustain the traffic.
+应用的总体性能监控工具,比如New Relic 和Dynatrace,可以帮助你监控从远程位置加载网页的时间,而NGINX 可以帮助你监控应用交付这一侧。应用性能数据可以告诉你优化措施什么时候真正改善了用户的体验,以及什么时候需要考虑为基础设施扩容以承载流量。

To help identify and resolve issues quickly, NGINX Plus adds [application-aware health checks][46] – synthetic transactions that are repeated regularly and are used to alert you to problems. NGINX Plus also has [session draining][47], which stops new connections while existing tasks complete, and a slow start capability, allowing a recovered server to come up to speed within a load-balanced group. When used effectively, health checks allow you to identify issues before they significantly impact the user experience, while session draining and slow start allow you to replace servers and ensure the process does not negatively affect perceived performance or uptime. The figure shows the built-in NGINX Plus [live activity monitoring][48] dashboard for a web infrastructure with servers, TCP connections, and caching.
+为了帮助开发者快速地发现、解决问题,NGINX Plus 增加了[应用感知健康度检查][46]——定期重复执行的合成事务,用来在问题出现时向你发出警告。NGINX Plus 还提供[会话排空][47]功能,即在现有任务完成之前停止接受新的连接;以及慢启动功能,允许一个从错误中恢复过来的服务器在负载均衡服务器群内逐渐提速。使用得当时,健康度检查可以让你在问题严重到影响用户体验之前就发现它,而会话排空和慢启动可以让你替换服务器,并且保证这个过程不会对性能和正常运行时间产生负面影响。下图展示了NGINX Plus 内建的[实时活动监控][48]仪表盘,它监控着一个包含服务器群、TCP 连接和缓存信息的web 基础设施。

![Use real-time application performance monitoring tools to identify and resolve issues quickly](https://www.nginx.com/wp-content/uploads/2015/10/Screen-Shot-2015-10-05-at-4.16.32-PM.png)

From e239ea75c1c33c85b2a29851547e8e9b2a8544fd Mon Sep 17 00:00:00 2001
From: ezio
Date: Tue, 24 Nov 2015 23:32:29 +0800
Subject: [PATCH 022/160] translation done

---
 ...1028 10 Tips for 10x Application Performance.md | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md
index b8e88b3b5c..cacfb2abc6 100644
--- a/sources/tech/20151028 10 Tips for 10x Application Performance.md
+++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md
@@ -227,22 +227,30 @@ To help identify and resolve issues quickly, NGINX Plus adds [application-aware

![Use real-time application performance monitoring tools to identify and resolve issues quickly](https://www.nginx.com/wp-content/uploads/2015/10/Screen-Shot-2015-10-05-at-4.16.32-PM.png)

-### Conclusion: Seeing 10x Performance Improvement ###
+### 总结: 看看10倍性能提升的效果 ###

The performance improvements that are available for any one web application vary tremendously, and actual gains depend on your budget, the time you can invest, and gaps in your existing implementation. So, how might you achieve 10x performance improvement for your own applications?
To help guide you on the potential impact of each optimization, here are pointers to the improvement that may be possible with each tip detailed above, though your mileage will almost certainly vary:
+为了指导你了解每种优化手段的潜在影响,这里给出上面详述的每个优化方法可能带来的提升,不过实际效果肯定因情况而异:

- **Reverse proxy server and load balancing**. No load balancing, or poor load balancing, can cause episodes of very poor performance. Adding a reverse proxy server, such as NGINX, can prevent web applications from thrashing between memory and disk. Load balancing can move processing from overburdened servers to available ones and make scaling easy. These changes can result in dramatic performance improvement, with a 10x improvement easily achieved compared to the worst moments for your current implementation, and lesser but substantial achievements available for overall performance.
+- **反向代理服务器和负载均衡**。没有负载均衡或者负载均衡很差都会造成间断的极低性能。增加一个反向代理,比如NGINX,可以避免web应用程序在内存和磁盘之间抖动。负载均衡可以将过载服务器的任务转移到空闲的服务器,还可以轻松地进行扩容。这些改变可以产生巨大的性能提升,与你现在实现方案的最差情况相比,很容易就能提高10倍;对总体性能来说提升幅度虽然小一些,但也是实质性的。
- **Caching dynamic and static content**. If you have an overburdened web server that’s doubling as your application server, 10x improvements in peak-time performance can be achieved by caching dynamic content alone. Caching for static files can improve performance by single-digit multiples as well.
+- **缓存动态和静态数据**。如果你的web 服务器同时兼作应用服务器并且负担过重,那么仅通过缓存动态内容就可以在峰值时段获得10倍的性能提升。缓存静态文件也可以带来数倍的性能提升。
- **Compressing data**. Using media file compression such as JPEG for photos, PNG for graphics, MPEG-4 for movies, and MP3 for music files can greatly improve performance. Once these are all in use, then compressing text data (code and HTML) can improve initial page load times by a factor of two.
+- **压缩数据**。使用媒体文件压缩格式,比如图像格式JPEG、图形格式PNG、视频格式MPEG-4、音乐文件格式MP3,可以极大地提高性能。一旦这些都用上了,再对文本数据(代码和HTML)进行压缩,可以把初始页面的加载速度提高一倍。
- **Optimizing SSL/TLS**.
Secure handshakes can have a big impact on performance, so optimizing them can lead to perhaps a 2x improvement in initial responsiveness, particularly for text-heavy sites. Optimizing media file transmission under SSL/TLS is likely to yield only small performance improvements.
+- **优化SSL/TLS**。安全握手会对性能产生很大的影响,因此对它们进行优化可以让初始响应速度提高大约2倍,对重文本站点尤其明显。优化SSL/TLS 下的媒体文件传输只会带来很小的性能提升。
- **Implementing HTTP/2 and SPDY**. When used with SSL/TLS, these protocols are likely to result in incremental improvements for overall site performance.
+- **使用HTTP/2 和SPDY**。在配合SSL/TLS 使用时,这些协议很可能会让整个站点的性能得到逐步的提升。
- **Tuning Linux and web server software (such as NGINX)**. Fixes such as optimizing buffering, using keepalive connections, and offloading time-intensive tasks to a separate thread pool can significantly boost performance; thread pools, for instance, can speed disk-intensive tasks by [nearly an order of magnitude][49].
+- **对Linux 和web 服务器软件进行调优**。比如优化缓冲机制、使用保活连接、把时间敏感型任务交给独立的线程池,都可以明显地提高性能;举个例子,线程池可以把磁盘密集型任务加速[近一个数量级][49]。

We hope you try out these techniques for yourself. We want to hear the kind of application performance improvements you’re able to achieve. Share your results in the comments below, or tweet your story with the hash tags #NGINX and #webperf!
-
-### Resources for Internet Statistics ###
我们希望你亲自尝试这些技术。我们想听听你所取得的各种应用性能提升。请在下面的评论栏分享你的结果,或者带上标签#NGINX 和#webperf 在tweet 上分享你的故事。
+### 网上资源 ###

[Statista.com – Share of the internet economy in the gross domestic product in G-20 countries in 2016][50]

From b0555ecbb4208c4b52cbdd168e82efcb5bae01f8 Mon Sep 17 00:00:00 2001
From: ezio
Date: Tue, 24 Nov 2015 23:45:49 +0800
Subject: [PATCH 023/160] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...10 Tips for 10x Application Performance.md | 51 +------------------
 1 file changed, 2 insertions(+), 49 deletions(-)

diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md
index cacfb2abc6..55cd24bd9a 100644
--- a/sources/tech/20151028 10 Tips for 10x Application Performance.md
+++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md
@@ -111,144 +111,97 @@ NGINX 和NGINX Plus 可以被用作SSL/TLS 终结——处理客户端流量的
### Tip #6: 使用 HTTP/2 或 SPDY ###

-For sites that already use SSL/TLS, HTTP/2 and SPDY are very likely to improve performance, because the single connection requires just one handshake. For sites that don’t yet use SSL/TLS, HTTP/2 and SPDY makes a move to SSL/TLS (which normally slows performance) a wash from a responsiveness point of view.
对于已经使用了SSL/TLS 的站点,HTTP/2 和SPDY 很可能可以提高性能,因为单个连接只需要一次握手。而对于尚未使用SSL/TLS 的站点来说,从响应速度的角度看,HTTP/2 和SPDY 可以抵消迁移到SSL/TLS(通常会降低性能)所带来的损失。

-Google introduced SPDY in 2012 as a way to achieve faster performance on top of HTTP/1.x. HTTP/2 is the recently approved IETF standard based on SPDY. SPDY is broadly supported, but is soon to be deprecated, replaced by HTTP/2.
Google 在2012年推出了SPDY,作为在HTTP/1.x 之上获得更快性能的一种方式。HTTP/2 是最近获批的IETF 标准,它基于SPDY。SPDY 得到了广泛的支持,但很快就会被HTTP/2 取代。

-The key feature of SPDY and HTTP/2 is the use of a single connection rather than multiple connections.
The single connection is multiplexed, so it can carry pieces of multiple requests and responses at the same time.
SPDY 和HTTP/2 的关键是用单个连接来替代多路连接。单个连接是被复用的,所以它可以同时携带多个请求和响应的分片。

-By getting the most out of one connection, these protocols avoid the overhead of setting up and managing multiple connections, as required by the way browsers implement HTTP/1.x. The use of a single connection is especially helpful with SSL, because it minimizes the time-consuming handshaking that SSL/TLS needs to set up a secure connection.
通过充分利用一个连接,这些协议可以避免像浏览器实现HTTP/1.x 时那样建立和管理多个连接的开销。单个连接对SSL 特别有效,因为它可以把SSL/TLS 建立安全连接所需的耗时握手减到最少。

-The SPDY protocol required the use of SSL/TLS; HTTP/2 does not officially require it, but all browsers so far that support HTTP/2 use it only if SSL/TLS is enabled. That is, a browser that supports HTTP/2 uses it only if the website is using SSL and its server accepts HTTP/2 traffic. Otherwise, the browser communicates over HTTP/1.x.
SPDY 协议需要使用SSL/TLS,而HTTP/2 官方并不要求,但是目前所有支持HTTP/2 的浏览器只有在启用了SSL/TLS 的情况下才会使用它。这就意味着支持HTTP/2 的浏览器只有在网站使用了SSL 并且服务器接受HTTP/2 流量的情况下才会启用HTTP/2。否则的话,浏览器就会使用HTTP/1.x 协议。

-When you implement SPDY or HTTP/2, you no longer need typical HTTP performance optimizations such as domain sharding, resource merging, and image spriting. These changes make your code and deployments simpler and easier to manage. To learn more about the changes that HTTP/2 is bringing about, read our [white paper][34].
当你实现了SPDY 或者HTTP/2 时,你就不再需要那些常规的HTTP 性能优化方案了,比如域名分片、资源合并以及图像精灵(image spriting)。这些改变可以让你的代码和部署变得更简单、更易于管理。要了解HTTP/2 带来的这些变化,可以浏览我们的[白皮书][34]。

![NGINX Supports SPDY and HTTP/2 for increased web application performance](https://www.nginx.com/wp-content/uploads/2015/10/http2-27.png)

-As an example of support for these protocols, NGINX has supported SPDY from early on, and [most sites][35] that use SPDY today run on NGINX. NGINX is also [pioneering support][36] for HTTP/2, with [support][37] for HTTP/2 in NGINX open source and NGINX Plus as of September 2015.
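The one-parameter change this tip boils down to can be sketched as a minimal NGINX server block. This is a hypothetical configuration fragment (it assumes NGINX 1.9.5 or later built with the HTTP/2 module; the server name and certificate paths are placeholders):

```nginx
# Hypothetical sketch: HTTP/2 is negotiated over TLS, so the parameter
# goes on the ssl listener. All names and paths below are placeholders.
server {
    listen 443 ssl http2;   # clients that can't speak HTTP/2 fall back to HTTP/1.x
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    root /var/www/html;
}
```

Because browsers negotiate HTTP/2 during the TLS handshake, a plain-text listener on port 80 would keep serving HTTP/1.x regardless of this setting.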
作为支持这些协议的一个样例,NGINX 从一开始就支持了SPDY,而且[大部分使用SPDY 协议的网站][35]运行的都是NGINX。NGINX 也[很早][36]就对HTTP/2 提供了支持,从2015 年9月开始,开源NGINX 和NGINX Plus 就[支持][37]它了。

-Over time, we at NGINX expect most sites to fully enable SSL and to move to HTTP/2. This will lead to increased security and, as new optimizations are found and implemented, simpler code that performs better.
随着时间的推移,我们NGINX 希望大多数站点都能完全启用SSL 并向HTTP/2 迁移。这将会提高安全性,同时,随着新的优化手段不断被发现和实现,更简单的代码也会表现得更加优异。

### Tip #7: 升级软件版本 ###

-One simple way to boost application performance is to select components for your software stack based on their reputation for stability and performance. In addition, because developers of high-quality components are likely to pursue performance enhancements and fix bugs over time, it pays to use the latest stable version of software. New releases receive more attention from developers and the user community. Newer builds also take advantage of new compiler optimizations, including tuning for new hardware.
一个提高应用性能的简单办法,是根据组件在稳定性和性能方面的口碑来为你的软件栈选择组件。此外,因为高质量组件的开发者会不断追求性能提升并修复bug,所以值得使用最新的稳定版本的软件。新版本会得到开发者和用户社区更多的关注,也会利用到新的编译器优化,包括针对新硬件的调优。

-Stable new releases are typically more compatible and higher-performing than older releases. It’s also easier to keep on top of tuning optimizations, bug fixes, and security alerts when you stay on top of software updates.
稳定的新版本通常比旧版本具有更好的兼容性和更高的性能。持续跟进软件更新,也能让你更轻松地获得调优优化、bug 修复和安全警报。

-Staying with older software can also prevent you from taking advantage of new capabilities. For example, HTTP/2, described above, currently requires OpenSSL 1.0.1. Starting in mid-2016, HTTP/2 will require OpenSSL 1.0.2, which was released in January 2015.
一直使用旧版软件还会阻止你利用新的特性。比如上面说到的HTTP/2,目前要求OpenSSL 1.0.1。从2016 年中期开始将会要求OpenSSL 1.0.2,而它是2015年1月才发布的。

-NGINX users can start by moving to the [[latest version of the NGINX open source software][38] or [NGINX Plus][39]; they include new capabilities such as socket sharding and thread pools (see below), and both are constantly being tuned for performance. Then look at the software deeper in your stack and move to the most recent version wherever you can.
NGINX 用户可以从迁移到[NGINX 最新的开源软件][38]或者[NGINX Plus][39]开始;它们都包含了最新的能力,如套接字分割和线程池(见下文),而且都在持续地进行性能调优。然后深入看看你软件栈中的其他软件,尽可能把它们升级到最新的版本吧。

### Tip #8: linux 系统性能调优 ###

-Linux is the underlying operating system for most web server implementations today, and as the foundation of your infrastructure, Linux represents a significant opportunity to improve performance. By default, many Linux systems are conservatively tuned to use few resources and to match a typical desktop workload. This means that web application use cases require at least some degree of tuning for maximum performance.
Linux 是如今大多数web 服务器所使用的底层操作系统,而且作为你基础设施的基石,Linux 是提高性能的一个重要机会。默认情况下,很多Linux 系统都被保守地设置为使用很少的资源,以匹配典型的桌面应用负载。这就意味着web 应用场景至少需要一定程度的调优才能达到最佳性能。

-Linux optimizations are web server-specific. Using NGINX as an example, here are a few highlights of changes you can consider to speed up Linux:
Linux 的优化要针对具体的web 服务器来做。以NGINX 为例,这里有一些在加速Linux 时值得重点考虑的改动:

-- **Backlog queue**. If you have connections that appear to be stalling, consider increasing net.core.somaxconn, the maximum number of connections that can be queued awaiting attention from NGINX. You will see error messages if the existing connection limit is too small, and you can gradually increase this parameter until the error messages stop.
- **缓冲队列**。如果你发现有连接像是被挂起了,那么你应该考虑增加net.core.somaxconn 的值,它代表了可以排队等待NGINX 处理的连接的最大数量。如果现有的连接限制太小,你将会看到错误信息,这时可以逐渐增加这个参数,直到错误信息不再出现。

-- **File descriptors**. NGINX uses up to two file descriptors for each connection.
If your system is serving a lot of connections, you might need to increase sys.fs.file_max, the system-wide limit for file descriptors, and nofile, the user file descriptor limit, to support the increased load.
- **文件描述符**。NGINX 对一个连接最多使用2个文件描述符。如果你的系统要服务很多连接,你可能就需要提高sys.fs.file_max(系统级的文件描述符数量限制),同时提高用户级的文件描述符限制nofile,这样才能支持不断增加的负载。

-- **短暂端口**。当使用代理时,NGINX 会为每个上游服务器创建临时端口。你可以设置net.ipv4.ip_local_port_range 来提高这些端口的范围,增加可用的端口。你也可以减少非活动的端口的超时判断来重复使用端口,这可以通过net.ipv4.tcp_fin_timeout 来设置,这可以快速的提高流量。
+- **临时端口**。当使用代理时,NGINX 会为每个上游服务器创建临时端口。你可以设置net.ipv4.ip_local_port_range 来扩大这些端口的范围,增加可用的端口。你也可以通过net.ipv4.tcp_fin_timeout 来缩短非活动端口重新投入使用前的超时时间,从而实现更快的周转。

-For NGINX, check out the [NGINX performance tuning guides][40] to learn how to optimize your Linux system so that it can cope with large volumes of network traffic without breaking a sweat!
对于NGINX 来说,可以查阅[NGINX 性能调优指南][40],学习如何优化你的Linux 系统,让它可以轻松应对大规模的网络流量。

### Tip #9: web 服务器性能调优 ###

-Whatever web server you use, you need to tune it for web application performance. The following recommendations apply generally to any web server, but specific settings are given for NGINX. Key optimizations include:
无论你用的是哪种web 服务器,都需要对它进行调优来提高web 应用的性能。下面的推荐手段普遍适用于任何web 服务器,但给出的具体设置是针对NGINX 的。关键的优化手段包括:

-- **Access logging**. Instead of writing a log entry for every request to disk immediately, you can buffer entries in memory and write them to disk as a group. For NGINX, add the *buffer=size* parameter to the *access_log* directive to write log entries to disk when the memory buffer fills up.
If you add the **flush=time** parameter, the buffer contents are also be written to disk after the specified amount of time.
- **访问日志**。不要把每个请求的日志都立即写到磁盘,你可以先在内存里把日志条目缓存起来,然后成批写回磁盘。对于NGINX 来说,给*access_log* 指令添加参数 *buffer=size*,可以让系统在内存缓冲区满了之后才把日志条目写到磁盘。如果你添加了参数**flush=time**,那么缓冲区里的内容也会每隔指定的时间写回磁盘。

-- **Buffering**. Buffering holds part of a response in memory until the buffer fills, which can make communications with the client more efficient. Responses that don’t fit in memory are written to disk, which can slow performance. When NGINX buffering is [on][42], you use the *proxy_buffer_size* and *proxy_buffers* directives to manage it.
- **缓冲**。缓冲会把响应的一部分保存在内存里,直到缓冲区被填满,这可以让与客户端的通信更加高效。内存里放不下的响应会被写到磁盘上,而这会降低性能。当NGINX [启用][42]了缓冲机制后,你可以使用指令*proxy_buffer_size* 和 *proxy_buffers* 来管理它。

-- **Client keepalives**. Keepalive connections reduce overhead, especially when SSL/TLS is in use. For NGINX, you can increase the maximum number of *keepalive_requests* a client can make over a given connection from the default of 100, and you can increase the *keepalive_timeout* to allow the keepalive connection to stay open longer, resulting in faster subsequent requests.
- **客户端保活**。保活连接可以减少开销,特别是在使用SSL/TLS 时。对于NGINX 来说,你可以把*keepalive_requests* 的值从默认的100 往上调,以增加一个客户端在单个连接上可以发出的最大请求数;你也可以通过增加*keepalive_timeout* 的值来让保活连接保持打开更长的时间,结果就是让后续的请求处理得更快。
- **上游保活**。上游的连接——即连接到应用服务器、数据库服务器等机器的连接——同样也会受益于保活连接。对于上游连接来说,你可以增加 *keepalive* 的值,即每个工人进程保持打开的空闲保活连接的数量。这可以提高连接的复用率,减少重新打开全新连接的次数。更多关于保活连接的信息可以参见[这篇博客][41]。

-- **Limits**.
Limiting the resources that clients use can improve performance and security. For NGINX,the *limit_conn* and *limit_conn_zone* directives restrict the number of connections from a given source, while *limit_rate* constrains bandwidth. These settings can stop a legitimate user from “hogging” resources and also help prevent against attacks. The *limit_req* and *limit_req_zone* directives limit client requests. For connections to upstream servers, use the max_conns parameter to the server directive in an upstream configuration block. This limits connections to an upstream server, preventing overloading. The associated queue directive creates a queue that holds a specified number of requests for a specified length of time after the *max_conns* limit is reached.
- **限制**。限制客户端使用的资源可以提高性能和安全性。对于NGINX 来说,指令*limit_conn* 和 *limit_conn_zone* 限制了每个源的连接数量,而*limit_rate* 限制了带宽。这些限制既可以阻止合法用户*攫取* 资源,也有助于防范攻击。指令*limit_req* 和 *limit_req_zone* 限制了客户端请求。对于连接到上游服务器的连接,可以在上游(upstream)配置块里给server 指令加上max_conns 参数,限制连接到上游服务器的连接数,避免服务器过载。相关的queue 指令会创建一个队列,在达到*max_conns* 限制之后,把指定数量的请求保存指定的时长。

-- **Worker processes**. Worker processes are responsible for the processing of requests. NGINX employs an event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes. The recommendation is to set the value of *worker_processes* to one per CPU. The maximum number of worker_connections (512 by default) can safely be raised on most systems if needed; experiment to find the value that works best for your system.
- **工人进程**。工人进程负责处理请求。NGINX 采用事件驱动模型和依赖操作系统的机制来有效地将请求分发给不同的工人进程。推荐的做法是把*worker_processes* 设置为每个CPU 一个。如果需要的话,大部分系统都可以安全地调高worker_connections 的最大值(默认512);你可以通过实验找到最适合你系统的值。

-- **Socket sharding**. Typically, a single socket listener distributes new connections to all worker processes. Socket sharding creates a socket listener for each worker process, with the kernel assigning connections to socket listeners as they become available.
This can reduce lock contention and improve performance on multicore systems. To enable [socket sharding][43], include the reuseport parameter on the listen directive. - **套接字分割**。通常一个套接字监听器会把新连接分配给所有工人进程。套接字分割会为每个工人进程创建一个套接字监听器,由内核在套接字监听器可用时把连接分配给它们。这可以减少锁竞争,并且提高多核系统的性能。要启用[套接字分割][43],需要在 listen 指令里面加上 reuseport 参数。 -- **Thread pools**. Any computer process can be held up by a single, slow operation. For web server software, disk access can hold up many faster operations, such as calculating or copying information in memory. When a thread pool is used, the slow operation is assigned to a separate set of tasks, while the main processing loop keeps running faster operations. When the disk operation completes, the results go back into the main processing loop. In NGINX, two operations – the read() system call and sendfile() – are offloaded to [thread pools][44]. - **线程池**。任何计算机进程都可能被一个单独的慢操作拖住。对于web 服务器软件来说,磁盘访问会拖住很多更快的操作,比如在内存中计算或者拷贝信息。使用了线程池之后,慢操作会被分配到一组单独的任务,而主处理循环可以继续运行快速操作。当磁盘操作完成后,结果会返回给主处理循环。在NGINX里有两个操作——read()系统调用和sendfile() ——被分配到了[线程池][44]。 ![Thread pools help increase application performance by assigning a slow operation to a separate set of tasks](https://www.nginx.com/wp-content/uploads/2015/10/Graph-17.png) -**Tip**. When changing settings for any operating system or supporting service, change a single setting at a time, then test performance. If the change causes problems, or if it doesn’t make your site run faster, change it back. **技巧**。当改变任何操作系统或支持服务的设置时,一次只改变一个参数,然后测试性能。如果修改引起问题了,或者不能让你的站点更快,那么就改回去。 -See this [blog post][45] for more details on tuning NGINX. 在这篇[blog][45]里可以看到更详细的NGINX 调优方法。 ### Tip #10: 监视系统活动来解决问题和瓶颈 ### -The key to a high-performance approach to application development and delivery is watching your application’s real-world performance closely and in real time. You must be able to monitor activity within specific devices and across your web infrastructure.
实现高性能的应用开发与交付的关键,是密切地、实时地监视你的应用在现实世界中的性能。你必须能够监控特定设备内以及整个web 基础设施上的程序活动。 -Monitoring site activity is mostly passive – it tells you what’s going on, and leaves it to you to spot problems and fix them. 监视站点活动大多是被动的——它会告诉你发生了什么,而把发现和解决问题留给你自己。 -Monitoring can catch several different kinds of issues. They include: 监视可以发现几种不同的问题。它们包括: -- A server is down. - 服务器宕机。 -- A server is limping, dropping connections. - 服务器勉强运转,不断丢失连接。 -- A server is suffering from a high proportion of cache misses. - 服务器出现大量的缓存未命中。 -- A server is not sending correct content. - 服务器没有发送正确的内容。 -A global application performance monitoring tool like New Relic or Dynatrace helps you monitor page load time from remote locations, while NGINX helps you monitor the application delivery side. Application performance data tells you when your optimizations are making a real difference to your users, and when you need to consider adding capacity to your infrastructure to sustain the traffic. 应用的总体性能监控工具,比如New Relic 和Dynatrace,可以帮助你从远程位置监控网页加载时间,而NGINX 可以帮助你监控应用交付这一侧。应用性能数据可以告诉你,你的优化措施何时真正为用户带来了改善,以及何时需要考虑为基础设施扩容以承载流量。 -To help identify and resolve issues quickly, NGINX Plus adds [application-aware health checks][46] – synthetic transactions that are repeated regularly and are used to alert you to problems. NGINX Plus also has [session draining][47], which stops new connections while existing tasks complete, and a slow start capability, allowing a recovered server to come up to speed within a load-balanced group. When used effectively, health checks allow you to identify issues before they significantly impact the user experience, while session draining and slow start allow you to replace servers and ensure the process does not negatively affect perceived performance or uptime. The figure shows the built-in NGINX Plus [live activity monitoring][48] dashboard for a web infrastructure with servers, TCP connections, and caching.
为了帮助开发者快速地发现、解决问题,NGINX Plus 增加了[应用感知健康度检查][46]——定期重复执行的合成事务,用来在问题出现时向你发出警告。NGINX Plus 同时提供[会话排空][47]功能,它会在现有任务完成期间停止接受新的连接;另一个功能是慢启动,允许一个从错误中恢复过来的服务器在负载均衡服务器群内逐渐恢复到正常速度。使用得当时,健康度检查可以让你在问题变得严重到影响用户体验前就发现它,而会话排空和慢启动可以让你替换服务器,并且这个过程不会对性能和正常运行时间产生负面影响。下图展示了NGINX Plus 内建的[实时活动监视][48]仪表盘,它监控着包括服务器群、TCP 连接和缓存在内的web 基础设施。 ![Use real-time application performance monitoring tools to identify and resolve issues quickly](https://www.nginx.com/wp-content/uploads/2015/10/Screen-Shot-2015-10-05-at-4.16.32-PM.png) ### 总结: 看看10倍性能提升的效果 ### -The performance improvements that are available for any one web application vary tremendously, and actual gains depend on your budget, the time you can invest, and gaps in your existing implementation. So, how might you achieve 10x performance improvement for your own applications? 不同的web 应用可用的性能提升方案差异巨大,实际的收益取决于你的预算、你能投入的时间,以及现有实现方案的缺陷。那么,你该如何对你自己的应用实现10倍性能提升? -To help guide you on the potential impact of each optimization, here are pointers to the improvement that may be possible with each tip detailed above, though your mileage will almost certainly vary: 为了帮助你了解每种优化手段的潜在影响,这里给出上面详述的每个优化方法可能带来的提升,虽然实际结果肯定因人而异: -- **Reverse proxy server and load balancing**. No load balancing, or poor load balancing, can cause episodes of very poor performance. Adding a reverse proxy server, such as NGINX, can prevent web applications from thrashing between memory and disk. Load balancing can move processing from overburdened servers to available ones and make scaling easy. These changes can result in dramatic performance improvement, with a 10x improvement easily achieved compared to the worst moments for your current implementation, and lesser but substantial achievements available for overall performance. - **反向代理服务器和负载均衡**。没有负载均衡或者负载均衡很差都会造成间断的极低性能。增加一个反向代理,比如NGINX,可以避免web应用程序在内存和磁盘之间抖动。负载均衡可以将过载服务器的任务转移到空闲的服务器,还可以轻松地进行扩容。这些改变都可以产生巨大的性能提升,与你目前实现方案的最差时刻相比,轻松就能达到10倍的提升;对于总体性能来说提升幅度小一些,但也是实质性的提升。 -- **Caching dynamic and static content**.
If you have an overburdened web server that’s doubling as your application server, 10x improvements in peak-time performance can be achieved by caching dynamic content alone. Caching for static files can improve performance by single-digit multiples as well. - **缓存动态和静态数据**。如果你有一台同时兼作应用服务器的、负担过重的web 服务器,只通过缓存动态数据就可以在峰值时段把性能提高10倍。缓存静态文件也可以带来几倍的性能提升。 -- **Compressing data**. Using media file compression such as JPEG for photos, PNG for graphics, MPEG-4 for movies, and MP3 for music files can greatly improve performance. Once these are all in use, then compressing text data (code and HTML) can improve initial page load times by a factor of two. - **压缩数据**。使用媒体文件压缩格式,比如图像格式JPEG,图形格式PNG,视频格式MPEG-4,音乐文件格式MP3,可以极大地提高性能。在这些都用上之后,再压缩文本数据(代码和HTML)可以把初始页面加载时间缩短一半。 -- **Optimizing SSL/TLS**. Secure handshakes can have a big impact on performance, so optimizing them can lead to perhaps a 2x improvement in initial responsiveness, particularly for text-heavy sites. Optimizing media file transmission under SSL/TLS is likely to yield only small performance improvements. - **优化SSL/TLS**。安全握手会对性能产生巨大的影响,对其优化可能会让初始响应速度提升2倍,对文本为主的站点尤其明显。优化SSL/TLS 下的媒体文件传输只会产生很小的性能提升。 -- **Implementing HTTP/2 and SPDY**. When used with SSL/TLS, these protocols are likely to result in incremental improvements for overall site performance. - **使用HTTP/2 和SPDY**。与SSL/TLS 配合使用时,这些协议很可能为站点的整体性能带来渐进的提升。 -- **Tuning Linux and web server software (such as NGINX)**. Fixes such as optimizing buffering, using keepalive connections, and offloading time-intensive tasks to a separate thread pool can significantly boost performance; thread pools, for instance, can speed disk-intensive tasks by [nearly an order of magnitude][49]. - **对Linux 和web 服务器软件进行调优**。比如优化缓冲机制,使用保活连接,把耗时的任务卸载到单独的线程池,都可以明显地提高性能;举个例子,线程池可以将磁盘密集型任务加速[近一个数量级][49]。 -We hope you try out these techniques for yourself. We want to hear the kind of application performance improvements you’re able to achieve.
Share your results in the comments below, or tweet your story with the hash tags #NGINX and #webperf! 我们希望你亲自尝试这些技术。我们想听听你所取得的各种应用性能提升成果。请在下面的评论栏分享你的结果,或者在标签#NGINX 和#webperf 下tweet 你的故事。 ### 网上资源 ### From ae1c5b3663902bb06d0402bbc698c38e81780ba5 Mon Sep 17 00:00:00 2001 From: ezio Date: Tue, 24 Nov 2015 23:48:08 +0800 Subject: [PATCH 024/160] =?UTF-8?q?=E7=A7=BB=E5=8A=A8=E6=96=87=E4=BB=B6?= =?UTF-8?q?=E5=88=B0translated?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20151028 10 Tips for 10x Application Performance.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20151028 10 Tips for 10x Application Performance.md (100%) diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/translated/tech/20151028 10 Tips for 10x Application Performance.md similarity index 100% rename from sources/tech/20151028 10 Tips for 10x Application Performance.md rename to translated/tech/20151028 10 Tips for 10x Application Performance.md From 84ae23b98281a6ec70c34b88699c6f931ff96be6 Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 25 Nov 2015 00:32:41 +0800 Subject: [PATCH 025/160] Create 20151125 Running a mainline kernel on a cellphone.md --- ...unning a mainline kernel on a cellphone.md | 29 +++++++++++++++++++ 1 file changed, 29 insertions(+) create mode 100644 sources/tech/20151125 Running a mainline kernel on a cellphone.md diff --git a/sources/tech/20151125 Running a mainline kernel on a cellphone.md b/sources/tech/20151125 Running a mainline kernel on a cellphone.md new file mode 100644 index 0000000000..8607db3bb6 --- /dev/null +++ b/sources/tech/20151125 Running a mainline kernel on a cellphone.md @@ -0,0 +1,29 @@ +Running a mainline kernel on a cellphone + +By Jonathan Corbet + +2015 Kernel Summit One of the biggest freedoms associated with free software is the ability to replace a program with an updated or modified version.
Even so, of the many millions of people using Linux-powered phones, few are able to run a mainline kernel on those phones, even if they have the technical skills to do the replacement. The sad fact is that no mainstream phone available runs mainline kernels. A session at the 2015 Kernel Summit, led by Rob Herring, explored this problem and what might be done to address it. + +When asked, most of the developers in the room indicated that they would prefer to be able to run mainline kernels on their phones — though a handful did say that they would rather not do so. Rob has been working on this problem for the last year and a half in support of Project Ara (mentioned in this article). But the news is not good. + +There is, he said, too much out-of-tree code running on a typical handset; mainline kernels simply lack the drivers needed to make that handset work. A typical phone is running 1-3 million lines of out-of-tree code. Almost all of those phones are stuck on the 3.10 kernel — or something even older. There are all kinds of reasons for this, but the simple fact is that things seem to move too quickly in the handset world for the kernel community to keep up. Is that, he asked, something that we care about? + +Tim Bird noted that the Nexus 1, one of the original Android phones, never ran a mainline kernel and never will. It broke the promise of open source, making it impossible for users to put a new kernel onto their devices. At this point, no phone supports that ability. Peter Zijlstra wondered about how much of that out-of-tree code was duplicated functionality from one handset to the next; Rob noted that he has run into three independently developed hotplug governors so far. + +Dirk Hohndel suggested that few people care. Of the billion phones out there, he said, approximately 27 of them have owners who care about running mainline kernels. The rest just want to get the phone to work. 
Perhaps developers who are concerned about running mainline kernels are trying to solve the wrong problem. + +Chris Mason said that handset vendors are currently facing the same sorts of problems that distributors dealt with many years ago. They are coping with a lot of inefficient, repeated, duplicated work. Once the distributors [Rob Herring] decided to put their work into the mainline instead of carrying it themselves, things got a lot better. The key is to help the phone manufacturers to realize that they can benefit in the same way; that, rather than pressure from users, is how the problem will be solved. + +Grant Likely raised concerns about security in a world where phones cannot be upgraded. What we need is a real distribution market for phones. But, as long as the vendors are in charge of the operating software, phones will not be upgradeable. We have a big security mess coming, he said. Peter added that, with Stagefright, that mess is already upon us. + +Ted Ts'o said that running mainline kernels is not his biggest concern. He would be happy if the phones on sale this holiday season would be running a 3.18 or 4.1 kernel, rather than being stuck on 3.10. That, he suggested, is a more solvable problem. Steve Rostedt said that would not solve the security problem, but Ted remarked that a newer kernel would at least make it easier to backport fixes. Grant replied that, one year from now, it would all just happen again; shipping newer kernels is just an incremental fix. Kees Cook added that there is not much to be gained from backporting fixes; the real problem is that there are no defenses from bugs (he would expand on this theme in a separate session later in the day). + +Rob said that any kind of solution would require getting the vendors on board. That, though, will likely run into trouble with the sort of lockdown that vendors like to apply to their devices. 
Paolo Bonzini asked whether it would be possible to sue vendors over unfixed security vulnerabilities, especially when the devices are still under warranty. Grant said that upgradeability had to become a market requirement or it simply wasn't going to happen. It might be a nasty security issue that causes this to happen, or carriers might start requiring it. Meanwhile, kernel developers need to keep pushing in that direction. Rob noted that, beyond the advantages noted thus far, the ability to run mainline kernels would help developers to test and validate new features on Android devices. + +Josh Triplett asked whether the community would be prepared to do what it would take if the industry were to come around to the idea of mainline kernel support. There would be lots of testing and validation of kernels on handsets required; Android Compatibility Test Suite failures would have to be treated as regressions. Rob suggested that this could be discussed next year, after the basic functionality is in place, but Josh insisted that, if the demand were to show up, we would have to be able to give a good answer. + +Tim said that there is currently a big disconnect with the vendor world; vendors are not reporting or contributing anything back to the community at all. They are completely disconnected, so there is no forward progress ever. Josh noted that when vendors do report bugs with the old kernels they are using, the reception tends to be less than friendly. Arnd Bergmann said that what was needed was to get one of the big silicon vendors to commit to the idea and get its hardware to a point where running mainline kernels was possible; that would put pressure on the others. But, he added, that would require the existence of one free GPU driver that got shipped with the hardware — something that does not exist currently. + +Rob put up a list of problem areas, but there was not much time for discussion of the particulars. 
WiFi drivers continue to be an issue, especially with the new features being added in the Android world. Johannes Berg agreed that the new features are an issue; the Android developers do not even talk about them until they ship with the hardware. Support for most of those features does eventually land in the mainline kernel, though. + +As things wound down, Ben Herrenschmidt reiterated that the key was to get vendors to realize that working with the mainline kernel is in their own best interest; it saves work in the long run. Mark Brown said that, in past years when the kernel version shipped with Android moved forward more reliably, the benefits of working upstream were more apparent to vendors. Now that things seem to be stuck on 3.10, that pressure is not there in the same way. The session ended with developers determined to improve the situation, but without any clear plan for getting there. From abe55d640a56ac813b961da499f4b4ea5c15f0ca Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 25 Nov 2015 09:35:09 +0800 Subject: [PATCH 026/160] Delete 20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md --- ... 
358.16 Driver in Ubuntu 15.10 or 14.04.md | 70 ------------------- 1 file changed, 70 deletions(-) delete mode 100644 sources/tech/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md diff --git a/sources/tech/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md b/sources/tech/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md deleted file mode 100644 index 93e3985d53..0000000000 --- a/sources/tech/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md +++ /dev/null @@ -1,70 +0,0 @@ -translation by strugglingyouth - -How to Install NVIDIA 358.16 Driver in Ubuntu 15.10, 14.04 -================================================================================ -![nvidia-logo-1](http://ubuntuhandbook.org/wp-content/uploads/2015/06/nvidia-logo-1.png) - -[NVIDIA 358.16][1], the first stable release in NVIDIA 358 series, has been announced with some fixes to 358.09 (Beta) and other small features. - -NVIDIA 358 added a new **nvidia-modeset.ko** kernel module that works in conjunction with the nvidia.ko kernel module to program the display engine of the GPU. In a later driver release, the **nvidia-modeset.ko** kernel driver will be used as a basis for the mode-setting interface provided by the kernel’s direct rendering manager (DRM). - -Thew new driver also has new GLX protocol extensions and a new system memory allocation mechanism for large allocations in the OpenGL driver. New GPUs **GeForce 805A** and **GeForce GTX 960A** are supported. NVIDIA 358.16 also supports X.Org Server 1.18 and OpenGL 4.3 - -### How to Install NVIDIA 358.16 in Ubuntu: ### - -> Please don’t do it on production machines unless you know what you’re doing and how to undo it. - -For the official binaries, please go to [nvidia.com/object/unix.html][1]. - -For those who prefer an Ubuntu PPA, I’d recommend the [Graphics Drivers PPA][2]. So far, Ubuntu 16.04, Ubuntu 15.10, Ubuntu 15.04, Ubuntu 14.04 are supported. - -**1. 
Add PPA.** - -Open terminal from Unity Dash, App Launcher, or via Ctrl+Alt+T shortcut key. When it opens, paste below command and hit enter: - - sudo add-apt-repository ppa:graphics-drivers/ppa - -![nvidia-ppa](http://ubuntuhandbook.org/wp-content/uploads/2015/08/nvidia-ppa.jpg) - -Type your password when it asks. No visual feedback, just type in mind and hit Enter to continue. - -**2. Refresh and install new driver.** - -After adding PPA, run below commands one by one to refresh repository cache and install new driver packages: - - sudo apt-get update - - sudo apt-get install nvidia-358 nvidia-settings - -### (Optional) Uninstall: ### - -Boot into the recovery mode from the grub menu, and drop into root console. Then run below commands one by one: - -Remount the file system as writable: - - mount -o remount,rw / - -Remove all nvidia packages: - - apt-get purge nvidia* - -Finally back to menu and reboot: - - reboot - -To disable/remove the graphics driver PPA, launch **Software & Updates** and navigate to **Other Software** tab. - --------------------------------------------------------------------------------- - -via: http://ubuntuhandbook.org/index.php/2015/11/install-nvidia-358-16-driver-ubuntu-15-10/ - -作者:[Ji m][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ubuntuhandbook.org/index.php/about/ -[1]:http://www.nvidia.com/Download/driverResults.aspx/95921/en-us -[2]:http://www.nvidia.com/object/unix.html -[3]:https://launchpad.net/~graphics-drivers/+archive/ubuntu/ppa From 4a0b4f4ab2545f26b0a7df7703d90f0ff4478cb1 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 25 Nov 2015 09:35:56 +0800 Subject: [PATCH 027/160] Create 20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md --- ... 
358.16 Driver in Ubuntu 15.10 or 14.04.md | 69 +++++++++++++++++++ 1 file changed, 69 insertions(+) create mode 100644 translated/tech/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md diff --git a/translated/tech/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md b/translated/tech/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md new file mode 100644 index 0000000000..18684f6eee --- /dev/null +++ b/translated/tech/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md @@ -0,0 +1,69 @@ + +如何在 Ubuntu 15.10,14.04 中安装 NVIDIA 358.16 驱动程序 +================================================================================ +![nvidia-logo-1](http://ubuntuhandbook.org/wp-content/uploads/2015/06/nvidia-logo-1.png) + +[NVIDIA 358.16][1],NVIDIA 358 系列的第一个稳定版本已经发布,它对 358.09(测试版)做了一些修正,还包含一些小的改进。 + +NVIDIA 358 增加了一个新的 **nvidia-modeset.ko** 内核模块,它配合 nvidia.ko 内核模块工作,来对 GPU 的显示引擎进行编程。在以后的发布版本中,**nvidia-modeset.ko** 内核驱动程序将被用作内核的直接渲染管理器(DRM)所提供的模式设置接口的基础。 + +新的驱动程序也带来了新的 GLX 协议扩展,以及 OpenGL 驱动中用于大块内存分配的一种新的系统内存分配机制。新的 GPU **GeForce 805A** 和 **GeForce GTX 960A** 也被支持了。NVIDIA 358.16 也支持 X.Org 1.18 服务器和 OpenGL 4.3。 + +### 如何在 Ubuntu 中安装 NVIDIA 358.16 : ### + +> 请不要在生产设备上安装,除非你知道自己在做什么以及如何才能恢复。 + +对于官方的二进制文件,请到 [nvidia.com/object/unix.html][1] 查看。 + +对于那些喜欢 Ubuntu PPA 的,我建议你使用 [显卡驱动 PPA][2]。到目前为止,支持 Ubuntu 16.04, Ubuntu 15.10, Ubuntu 15.04, Ubuntu 14.04。 + +**1. 添加 PPA.** + +通过按 Ctrl+Alt+T 快捷键来从 Unity 桌面打开终端。终端打开后,粘贴下面的命令并按回车键: + + sudo add-apt-repository ppa:graphics-drivers/ppa + +![nvidia-ppa](http://ubuntuhandbook.org/wp-content/uploads/2015/08/nvidia-ppa.jpg) + +它会要求你输入密码。输入密码时,密码不会显示在屏幕上,输入完按 Enter 继续。 + +**2.
刷新并安装新的驱动程序** + +添加 PPA 后,逐一运行下面的命令刷新软件库并安装新的驱动程序: + + sudo apt-get update + + sudo apt-get install nvidia-358 nvidia-settings + +### (可选) 卸载: ### + +开机从 GRUB 菜单进入恢复模式,进入根控制台。然后逐一运行下面的命令: + +重新挂载文件系统为可写: + + mount -o remount,rw / + +删除所有的 nvidia 包: + + apt-get purge nvidia* + +最后返回菜单并重新启动: + + reboot + +要禁用/删除显卡驱动 PPA,点击系统设置下的**软件和更新**,然后导航到**其他软件**标签。 + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/11/install-nvidia-358-16-driver-ubuntu-15-10/ + +作者:[Ji m][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:http://www.nvidia.com/Download/driverResults.aspx/95921/en-us +[2]:http://www.nvidia.com/object/unix.html +[3]:https://launchpad.net/~graphics-drivers/+archive/ubuntu/ppa From b060849a467ceb76908660c95fdba148270c0ddd Mon Sep 17 00:00:00 2001 From: XLCYun Date: Wed, 25 Nov 2015 11:47:34 +0800 Subject: [PATCH 028/160] Translating by XLCYun 20151123 Install Intel Gr... 
Translating by XLCYun --- ...0151123 Install Intel Graphics Installer in Ubuntu 15.10.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md b/sources/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md index d9b8554c4e..2d9183ebd5 100644 --- a/sources/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md +++ b/sources/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md @@ -1,3 +1,4 @@ +Translating by XLCYun Install Intel Graphics Installer in Ubuntu 15.10 ================================================================================ ![Intel graphics installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel_logo.jpg) @@ -43,4 +44,4 @@ via: http://ubuntuhandbook.org/index.php/2015/11/install-intel-graphics-installe 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://ubuntuhandbook.org/index.php/about/ -[1]:https://01.org/linuxgraphics/downloads \ No newline at end of file +[1]:https://01.org/linuxgraphics/downloads From c047411fb68a5f6856c84a49bc80905e5a0d43d0 Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 25 Nov 2015 13:56:35 +0800 Subject: [PATCH 029/160] =?UTF-8?q?20151125=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
Running a mainline kernel on a cellphone.md | 17 ++++++++++++++--- 1 file changed, 14 insertions(+), 3 deletions(-) diff --git a/sources/tech/20151125 Running a mainline kernel on a cellphone.md b/sources/tech/20151125 Running a mainline kernel on a cellphone.md index 8607db3bb6..c247051def 100644 --- a/sources/tech/20151125 Running a mainline kernel on a cellphone.md +++ b/sources/tech/20151125 Running a mainline kernel on a cellphone.md @@ -1,8 +1,7 @@ Running a mainline kernel on a cellphone +================================================================================ -By Jonathan Corbet - -2015 Kernel Summit One of the biggest freedoms associated with free software is the ability to replace a program with an updated or modified version. Even so, of the many millions of people using Linux-powered phones, few are able to run a mainline kernel on those phones, even if they have the technical skills to do the replacement. The sad fact is that no mainstream phone available runs mainline kernels. A session at the 2015 Kernel Summit, led by Rob Herring, explored this problem and what might be done to address it. +One of the biggest freedoms associated with free software is the ability to replace a program with an updated or modified version. Even so, of the many millions of people using Linux-powered phones, few are able to run a mainline kernel on those phones, even if they have the technical skills to do the replacement. The sad fact is that no mainstream phone available runs mainline kernels. A session at the 2015 Kernel Summit, led by Rob Herring, explored this problem and what might be done to address it. When asked, most of the developers in the room indicated that they would prefer to be able to run mainline kernels on their phones — though a handful did say that they would rather not do so. Rob has been working on this problem for the last year and a half in support of Project Ara (mentioned in this article). But the news is not good. 
@@ -27,3 +26,15 @@ Tim said that there is currently a big disconnect with the vendor world; vendors Rob put up a list of problem areas, but there was not much time for discussion of the particulars. WiFi drivers continue to be an issue, especially with the new features being added in the Android world. Johannes Berg agreed that the new features are an issue; the Android developers do not even talk about them until they ship with the hardware. Support for most of those features does eventually land in the mainline kernel, though. As things wound down, Ben Herrenschmidt reiterated that the key was to get vendors to realize that working with the mainline kernel is in their own best interest; it saves work in the long run. Mark Brown said that, in past years when the kernel version shipped with Android moved forward more reliably, the benefits of working upstream were more apparent to vendors. Now that things seem to be stuck on 3.10, that pressure is not there in the same way. The session ended with developers determined to improve the situation, but without any clear plan for getting there. 
+ +-------------------------------------------------------------------------------- + +via: https://lwn.net/Articles/662147/ + +作者:[Jonathan Corbet][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:https://lwn.net/Articles/KernelSummit2015/ From cf015998cf378efda7cd340ad471bc407eb0a327 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 25 Nov 2015 16:17:43 +0800 Subject: [PATCH 030/160] =?UTF-8?q?20151125-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ....8.16 in Ubuntu 16.04 or 15.10 or 14.04.md | 59 ++++++++ .../20151125 The tar command explained.md | 137 ++++++++++++++++++ 2 files changed, 196 insertions(+) create mode 100644 sources/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md create mode 100644 sources/tech/20151125 The tar command explained.md diff --git a/sources/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md b/sources/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md new file mode 100644 index 0000000000..b467e555af --- /dev/null +++ b/sources/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md @@ -0,0 +1,59 @@ +How to Install GIMP 2.8.16 in Ubuntu 16.04, 15.10, 14.04 +================================================================================ +![GIMP 2.8.16](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-icon.png) + +GIMP image editor 2.8.16 was released on its 20th birthday. Here’s how to install or upgrade in Ubuntu 16.04, Ubuntu 15.10, Ubuntu 14.04, Ubuntu 12.04 and their derivatives, e.g., Linux Mint 17.x/13, Elementary OS Freya. 
+ +GIMP 2.8.16 features support for layer groups in OpenRaster files, fixes for layer groups support in PSD, various user interface improvements, OSX build system fixes, translation updates, and more changes. Read the [official announcement][1]. + +![GIMP image editor 2.8.16](http://ubuntuhandbook.org/wp-content/uploads/2014/08/gimp-2-8-14.jpg) + +### How to Install or Upgrade: ### + +Thanks to Otto Meier, an [Ubuntu PPA][2] with the latest GIMP packages is available for all current Ubuntu releases and derivatives. + +**1. Add GIMP PPA** + +Open terminal from Unity Dash, App launcher, or via Ctrl+Alt+T shortcut key. When it opens, paste below command and hit Enter: + + sudo add-apt-repository ppa:otto-kesselgulasch/gimp + +![add GIMP PPA](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-ppa.jpg) + +Type in your password when it asks, no visual feedback so just type in mind, and hit enter to continue. + +**2. Install or Upgrade the editor.** + +After adding the PPA, launch **Software Updater** (or Software Manager in Mint). After checking for updates, you’ll see GIMP in the update list. Click “Install Now” to upgrade it. + +![upgrade-gimp2816](http://ubuntuhandbook.org/wp-content/uploads/2015/11/upgrade-gimp2816.jpg) + +For those who prefer Linux commands, run below commands one by one to refresh your repository caches and install GIMP: + + sudo apt-get update + + sudo apt-get install gimp + +**3. (Optional) Uninstall.** + +Just in case you want to uninstall or downgrade the GIMP image editor, use Software Center to remove it, or run below commands one by one to purge the PPA as well as downgrade the software: + + sudo apt-get install ppa-purge + + sudo ppa-purge ppa:otto-kesselgulasch/gimp + +That’s it. Enjoy!
+ +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/11/how-to-install-gimp-2-8-16-in-ubuntu-16-04-15-10-14-04/ + +作者:[Ji m][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:http://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/ +[2]:https://launchpad.net/~otto-kesselgulasch/+archive/ubuntu/gimp \ No newline at end of file diff --git a/sources/tech/20151125 The tar command explained.md b/sources/tech/20151125 The tar command explained.md new file mode 100644 index 0000000000..cc13a25dd2 --- /dev/null +++ b/sources/tech/20151125 The tar command explained.md @@ -0,0 +1,137 @@ +The tar command explained +================================================================================ +The Linux [tar][1] command is the swiss army knife of the Linux admin when it comes to archiving or distributing files. Gnu Tar archives can contain multiple files and directories, file permissions can be preserved and it supports multiple compression formats. The name tar stands for "**T**ape **Ar**chiver", the format is an official POSIX standard. + +### Tar file formats ### + +A short introduction to tar compression levels. + +- **No compression** Uncompressed files have the file ending .tar. +- **Gzip Compression** The Gzip format is the most widely used compression format for tar, it is fast for creating and extracting files. Files with gz compression normally have the file ending .tar.gz or .tgz. Here are some examples on how to [create][2] and [extract][3] a tar.gz file. +- **Bzip2 Compression** The Bzip2 format offers a better compression than the Gzip format. Creating files is slower, the file ending is usually .tar.bz2.
+- **Lzip (LZMA) Compression** The Lzip compression combines the speed of Gzip with a compression level that is similar to Bzip2 (or even better). Despite these good attributes, this format is not widely used.
+- **Lzop Compression** This compression option is probably the fastest compression format for tar; it has a compression level similar to gzip and is not widely used.
+
+The common formats are tar.gz and tar.bz2. If your goal is fast compression, then use gzip. When the archive file size is critical, then use tar.bz2.
+
+### What is the tar command used for? ###
+
+Here are a few common use cases of the tar command.
+
+- Backup of Servers and Desktops.
+- Document archiving.
+- Software Distribution.
+
+### Installing tar ###
+
+The command is installed on most Linux systems by default. Here are the instructions to install tar in case the command is missing.
+
+#### CentOS ####
+
+Execute the following command as the root user on the shell to install tar on CentOS.
+
+    yum install tar
+
+#### Ubuntu ####
+
+This command will install tar on Ubuntu. The "sudo" command ensures that the apt command is run with root privileges.
+
+    sudo apt-get install tar
+
+#### Debian ####
+
+The following apt command installs tar on Debian.
+
+    apt-get install tar
+
+#### Windows ####
+
+The tar command is available for Windows as well; you can download it from the GnuWin project. [http://gnuwin32.sourceforge.net/packages/gtar.htm][4]
+
+### Create tar.gz Files ###
+
+Here is the [tar command][5] that has to be run on the shell. I will explain the command line options below.
+
+    tar pczf myarchive.tar.gz /home/till/mydocuments
+
+This command creates the archive myarchive.tar.gz which contains the files and folders from the path /home/till/mydocuments. **The command line options explained**:
+
+- **[p]** This option stands for "preserve"; it instructs tar to store details on file owner and file permissions in the archive.
+- **[c]** Stands for create.
This option is mandatory when a file is created.
+- **[z]** The z option enables gzip compression.
+- **[f]** The file option tells tar to create an archive file. Tar will send the output to stdout if this option is omitted.
+
+#### Tar command examples ####
+
+**Example 1: Backup the /etc Directory** Create a backup of the /etc config directory. The backup is stored in the root folder.
+
+    tar pczvf /root/etc.tar.gz /etc
+
+![Backup the /etc directory with tar.](https://www.howtoforge.com/images/linux-tar-command/big/create-tar.png)
+
+The command should be run as root to ensure that all files in /etc are included in the backup. This time, I've added the [v] option in the command. This option stands for verbose; it tells tar to show all file names that are added to the archive.
+
+**Example 2: Backup your /home directory** Create a backup of your home directory. The backup will be stored in a directory /backup.
+
+    tar czf /backup/myuser.tar.gz /home/myuser
+
+Replace myuser with your username. In this command, I've omitted the [p] switch, so the permissions are not preserved.
+
+**Example 3: A file-based backup of MySQL databases** The MySQL databases are stored in /var/lib/mysql on most Linux distributions. You can check that with the command:
+
+    ls /var/lib/mysql
+
+![File based MySQL backup with tar.](https://www.howtoforge.com/images/linux-tar-command/big/tar_backup_mysql.png)
+
+Stop the database server to get a consistent MySQL file backup with tar. The backup will be written to the /backup folder.
+
+1) Create the backup folder
+
+    mkdir /backup
+    chmod 600 /backup
+
+2) Stop MySQL, run the backup with tar and start the database server again.
+
+    service mysql stop
+    tar pczf /backup/mysql.tar.gz /var/lib/mysql
+    service mysql start
+    ls -lah /backup
+
+![File based MySQL backup.](https://www.howtoforge.com/images/linux-tar-command/big/tar-backup-mysql2.png)
+
+### Extract tar.gz Files ###
+
+The command to extract tar.gz files is:
+
+    tar xzf myarchive.tar.gz
+
+#### The tar command options explained ####
+
+- **[x]** The x stands for extract; it is mandatory when a tar file is extracted.
+- **[z]** The z option tells tar that the archive to be unpacked is in gzip format.
+- **[f]** This option instructs tar to read the archive content from a file, in this case the file myarchive.tar.gz.
+
+The above tar command will silently extract the tar.gz file; it will show only error messages. If you would like to see which files get extracted, add the "v" option.
+
+    tar xzvf myarchive.tar.gz
+
+The **[v]** option stands for verbose; it will show the file names as they are unpacked.
+
+![Extract a tar.gz file.](https://www.howtoforge.com/images/linux-tar-command/big/tar-xfz.png)
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/linux-tar-command/
+
+作者:[howtoforge][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com/
+[1]:https://en.wikipedia.org/wiki/Tar_(computing)
+[2]:http://www.faqforge.com/linux/create-tar-gz/
+[3]:http://www.faqforge.com/linux/extract-tar-gz/
+[4]:http://gnuwin32.sourceforge.net/packages/gtar.htm
+[5]:http://www.faqforge.com/linux/tar-command/
\ No newline at end of file
From e458866204a02944d0c7f251e18d5cc230312dde Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Wed, 25 Nov 2015 17:20:08 +0800
Subject: [PATCH 031/160] =?UTF-8?q?20151125-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding:
8bit
---
 ...0 Years of GIMP Evolution--Step by Step.md | 171 ++++++++++++++++++
 1 file changed, 171 insertions(+)
 create mode 100644 sources/talk/20151125 20 Years of GIMP Evolution--Step by Step.md

diff --git a/sources/talk/20151125 20 Years of GIMP Evolution--Step by Step.md b/sources/talk/20151125 20 Years of GIMP Evolution--Step by Step.md
new file mode 100644
index 0000000000..edcef22d7f
--- /dev/null
+++ b/sources/talk/20151125 20 Years of GIMP Evolution--Step by Step.md
@@ -0,0 +1,171 @@
+20 Years of GIMP Evolution: Step by Step
+================================================================================
+Note: YouTube video
+
+
+[GIMP][1] (GNU Image Manipulation Program) – a superb free and open source graphics editor. Development began in 1995 as a student project at the University of California, Berkeley, by Peter Mattis and Spencer Kimball. In 1997 the project was renamed "GIMP" and became an official part of the [GNU Project][2]. Over the years GIMP has remained one of the best graphics editors, and the platinum holy war "GIMP vs Photoshop" one of the most popular.
+
+The first announcement, 21.11.1995:
+
+> From: Peter Mattis
+>
+> Subject: ANNOUNCE: The GIMP
+>
+> Date: 1995-11-21
+>
+> Message-ID: <48s543$r7b@agate.berkeley.edu>
+>
+> Newsgroups: comp.os.linux.development.apps,comp.os.linux.misc,comp.windows.x.apps
+>
+> The GIMP: the General Image Manipulation Program
+> ------------------------------------------------
+>
+> The GIMP is designed to provide an intuitive graphical interface to a
+> variety of image editing operations. Here is a list of the GIMP's
+> major features:
+>
+> Image viewing
+> -------------
+>
+> * Supports 8, 15, 16 and 24 bit color.
+> * Ordered and Floyd-Steinberg dithering for 8 bit displays.
+> * View images as rgb color, grayscale or indexed color.
+> * Simultaneously edit multiple images.
+> * Zoom and pan in real-time.
+> * GIF, JPEG, PNG, TIFF and XPM support.
+>
+> Image editing
+> -------------
+>
+> * Selection tools including rectangle, ellipse, free, fuzzy, bezier
+> and intelligent.
+> * Transformation tools including rotate, scale, shear and flip.
+> * Painting tools including bucket, brush, airbrush, clone, convolve,
+> blend and text.
+> * Effects filters (such as blur, edge detect).
+> * Channel & color operations (such as add, composite, decompose).
+> * Plug-ins which allow for the easy addition of new file formats and
+> new effect filters.
+> * Multiple undo/redo.

GIMP 0.54, 1996

+![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/054.png)
+
+GIMP 0.54 required an X11 display, an X server and the Motif 1.2 widgets, and it supported 8, 15, 16 & 24 bit color depths with RGB & grayscale colors. Supported image formats: GIF, JPEG, PNG, TIFF and XPM.
+
+Basic functionality: rectangle, ellipse, free, fuzzy, bezier and intelligent selection tools, plus rotate, scale, shear, clone, blend and flip operations on images.
+
+Extended tools: text operations, effects filters, tools for channel and color manipulation, undo and redo operations. Since the first version, GIMP has supported a plugin system.
+
+GIMP 0.54 could run on Linux, HP-UX, Solaris and SGI IRIX.
+
+### GIMP 0.60, 1997 ###
+
+![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/060.gif)
+
+This was a development release, not intended for all users. GIMP got new toolkits – GDK (GIMP Drawing Kit) and GTK (GIMP Toolkit) – and Motif support was deprecated. The GIMP Toolkit was also the beginning of the GTK+ cross-platform widget toolkit. New features:
+
+- basic layers
+- sub-pixel sampling
+- brush spacing
+- improved airbrush
+- paint modes
+
+### GIMP 0.99, 1997 ###
+
+![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/099.png)
+
+Since version 0.99, GIMP has supported scripts and macros (Script-Fu). GTK and GDK, with some improvements, got a new name – GTK+.
Other improvements:
+
+- support for big images (larger than 100 MB)
+- new native format – XCF
+- new API – writing plugins and extensions is easier
+
+### GIMP 1.0, 1998 ###
+
+![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/100.gif)
+
+GIMP and GTK+ were split into separate projects. The official GIMP website was
+reconstructed and contained new tutorials, plugins and documentation. New features:
+
+- tile-based memory management
+- massive changes in the plugin API
+- the XCF format now supports layers, guides and selections
+- web interface
+- online graphics generation
+
+### GIMP 1.2, 2000 ###
+
+New features:
+
+- translations into non-English languages
+- many bugs fixed in GTK+ and GIMP
+- many new plugins
+- image map
+- new toolbox: resize, measure, dodge, burn, smudge, sample colorize and curve bend
+- image pipes
+- image preview before saving
+- scaled brush preview
+- recursive selection by path
+- new navigation window
+- drag’n’drop
+- watermarks support
+
+### GIMP 2.0, 2004 ###
+
+![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/200.png)
+
+The biggest change – the new GTK+ 2.x toolkit.
+
+### GIMP 2.2, 2004 ###
+
+![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/220.png)
+
+Many bugfixes and drag’n’drop support.
+
+### GIMP 2.4, 2007 ###
+
+![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/240.png)
+
+New features:
+
+- better drag’n’drop support
+- Ti-Fu was replaced by Script-Fu – the new script interpreter
+- new plugins: photocopy, softglow, neon, cartoon, dog, glob and others
+
+### GIMP 2.6, 2008 ###
+
+New features:
+
+- renewed graphics interface
+- new selection tools
+- GEGL (GEneric Graphics Library) integration
+- "The Utility Window Hint" for MDI behavior
+
+### GIMP 2.8, 2012 ###
+
+![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/280.png)
+
+New features:
+
+- some visual changes in the GUI
+- new save and export menu
+- renewed text editor
+- layer groups support
+- JPEG2000 and export to PDF support
+- webpage screenshot tool
+
+--------------------------------------------------------------------------------
+
+via: https://tlhp.cf/20-years-of-gimp-evolution/
+
+作者:[Pavlo Rudyi][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://tlhp.cf/author/paul/
+[1]:https://gimp.org/
+[2]:http://www.gnu.org/
\ No newline at end of file
From d59d67258100f352c367c23c8dbc8f2ec61b0bf8 Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Wed, 25 Nov 2015 19:36:06 +0800
Subject: [PATCH 032/160] Update 20151125 How to Install GIMP 2.8.16 in Ubuntu
 16.04 or 15.10 or 14.04.md
---
 ...to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/sources/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md b/sources/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md
index b467e555af..8465520fc5 100644
--- a/sources/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md
+++ b/sources/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md
@@ -1,3 +1,4 @@ +translation by strugglingyouth How to Install GIMP 2.8.16 in Ubuntu 16.04, 15.10, 14.04 ================================================================================ ![GIMP 2.8.16](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-icon.png) @@ -56,4 +57,4 @@ via: http://ubuntuhandbook.org/index.php/2015/11/how-to-install-gimp-2-8-16-in-u [a]:http://ubuntuhandbook.org/index.php/about/ [1]:http://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/ -[2]:https://launchpad.net/~otto-kesselgulasch/+archive/ubuntu/gimp \ No newline at end of file +[2]:https://launchpad.net/~otto-kesselgulasch/+archive/ubuntu/gimp From aa6fdc6c9a1d4b1c5ac4e1a9c973e27be261dc50 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 26 Nov 2015 15:22:23 +0800 Subject: [PATCH 033/160] =?UTF-8?q?20151126-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...a 'World without Linux' and Open Source.md | 51 +++++++++++++++++++ 1 file changed, 51 insertions(+) create mode 100644 sources/talk/20151126 Linux Foundation Explains a 'World without Linux' and Open Source.md diff --git a/sources/talk/20151126 Linux Foundation Explains a 'World without Linux' and Open Source.md b/sources/talk/20151126 Linux Foundation Explains a 'World without Linux' and Open Source.md new file mode 100644 index 0000000000..90f8b22e32 --- /dev/null +++ b/sources/talk/20151126 Linux Foundation Explains a 'World without Linux' and Open Source.md @@ -0,0 +1,51 @@ +Linux Foundation Explains a "World without Linux" and Open Source +================================================================================ +> The Linux Foundation responds to questions about its "World without Linux" movies, including what the Internet would be like without Linux and other open source software. 
+ +![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/hey_22.png) + +Would the world really be tremendously different if Linux, the open source operating system kernel, did not exist? Would there be no Internet or movies? Those are the questions some viewers of the [Linux Foundation's][1] ongoing "[World without Linux][2]" video series are asking. Here are some answers. + +In case you've missed it, the "World without Linux" series is a collection of quirky short films that depict, well, a world without Linux (and open source software more generally). They have emphasized themes like [Linux's role in movie-making][3] and in [serving the Internet][4]. + +To offer perspective on the series's claims, direction and hidden symbols, Jennifer Cloer, vice president of communications at The Linux Foundation, recently sent The VAR Guy responses to some common queries about the movies. Below are the answers, in her own words. + +### The latest episode takes Sam and Annie to the movies. Would today's graphics really be that much different without Linux? ### + +In episode #4, we do a bit of a parody on "Avatar." Love it or hate it, the graphics in the real "Avatar" are pretty impressive. In a world without Linux, the graphics would be horrible but we wouldn't even know it because we wouldn't know any better. But in fact, "Avatar" was created using Linux. Weta Digital used one of the world's largest Linux clusters to render the film and do 3D modeling. It's also been reported that "Lord of the Rings," "Fantastic Four" and "King Kong," among others, have used Linux. We hope this episode can bring attention to that work, which hasn't been widely reported. + +### Some people criticized the original episode for concluding there would be no Internet without Linux. What's your reaction? ### + +We enjoyed the debate that resulted from the debut episode. 
With more than 100,000 views to date of that episode alone, it brought awareness to the role that Linux plays in society and to the worldwide community of contributors and supporters. Of course the Internet would exist without Linux but it wouldn't be the Internet we know today and it wouldn't have matured at the pace it has. Each episode makes a bold and fun statement about Linux's role in our every day lives. We hope this can help extend the story of Linux to more people around the world. + +### Why is Sam and Annie's cat named String? ### + +Nothing in the series is a coincidence. Look closely and you'll find all kinds of inside Linux and geek jokes. String is named after String theory and was named by our Linux.com Editor Libby Clark. In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. Kind of like Sam, Annie and String in a World Without Linux. + +### What can we expect from the next two episodes and, in particular, the finale? When will it air? ### + +In episode #5, we'll go to space and experience what a world without Linux would mean to exploration. It's a wild ride. In the finale, we finally get to see Linus in a world without Linux. There have been clues throughout the series as to what this finale will include but I can't give more than that away since there are ongoing contests to find the clues. And I can't give away the air date for the finale! You'll have to follow #WorldWithoutLinux to learn more. + +### Can you give us a hint on the clues in episode #4? ### + +There is another reference to the Free Burger Restaurant in this episode. Linux also actually does appear in this world without Linux but in a very covert way; you could say it's like reading Linux in another language. And, of course, just for fun, String makes another appearance. 
+ +### Is the series achieving what you hoped? ### + +Yes. We're really happy to see people share and engage with these stories. We hope that it's reaching people who might not otherwise know the story of Linux or understand its pervasiveness in the world today. It's really about surfacing this to a broader audience and giving thanks to the worldwide community of developers and companies that support Linux and all the things it makes possible. + +-------------------------------------------------------------------------------- + +via: http://thevarguy.com/open-source-application-software-companies/linux-foundation-explains-world-without-linux-and-open-so + +作者:[Christopher Tozzi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://thevarguy.com/author/christopher-tozzi +[1]:http://linuxfoundation.org/ +[2]:http://www.linuxfoundation.org/world-without-linux +[3]:http://thevarguy.com/open-source-application-software-companies/new-linux-foundation-video-highlights-role-open-source-3d +[4]:http://thevarguy.com/open-source-application-software-companies/100715/would-internet-exist-without-linux-yes-without-open-sourc \ No newline at end of file From 8bcd03273de45bbb08b6ab3fe3fb8f5125e8fbd5 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 26 Nov 2015 15:55:59 +0800 Subject: [PATCH 034/160] =?UTF-8?q?20151126-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...t and Linux--True Romance or Toxic Love.md | 77 +++++ ...everse Proxy for Apache on FreeBSD 10.2.md | 326 ++++++++++++++++++ ...trailing whitespaces in a file on Linux.md | 53 +++ 3 files changed, 456 insertions(+) create mode 100644 sources/talk/20151126 Microsoft and Linux--True Romance or Toxic Love.md create mode 100644 sources/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md 
create mode 100644 sources/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md diff --git a/sources/talk/20151126 Microsoft and Linux--True Romance or Toxic Love.md b/sources/talk/20151126 Microsoft and Linux--True Romance or Toxic Love.md new file mode 100644 index 0000000000..92705b4b5c --- /dev/null +++ b/sources/talk/20151126 Microsoft and Linux--True Romance or Toxic Love.md @@ -0,0 +1,77 @@ +Microsoft and Linux: True Romance or Toxic Love? +================================================================================ +Every now and then, you come across a news story that makes you choke on your coffee or splutter hot latte all over your monitor. Microsoft's recent proclamations of love for Linux is an outstanding example of such a story. + +Common sense says that Microsoft and the FOSS movement should be perpetual enemies. In the eyes of many, Microsoft embodies most of the greedy excesses that the Free Software movement rejects. In addition, Microsoft previously has labeled Linux as a cancer and the FOSS community as a "pack of thieves". + +We can understand why Microsoft has been afraid of a free operating system. When combined with open-source applications that challenge Microsoft's core line, it threatens Microsoft's grip on the desktop/laptop market. + +In spite of Microsoft's fears over its desktop dominance, the Web server marketplace is one arena where Linux has had the greatest impact. Today, the majority of Web servers are Linux boxes. This includes most of the world's busiest sites. The sight of so much unclaimed licensing revenue must be painful indeed for Microsoft. + +Handheld devices are another realm where Microsoft has lost ground to free software. At one point, its Windows CE and Pocket PC operating systems were at the forefront of mobile computing. Windows-powered PDA devices were the shiniest and flashiest gadgets around. But, that all ended when Apple released its iPhone. 
Since then, Android has stepped into the limelight, with Windows Mobile largely ignored and forgotten. The Android platform is built on free and open-source components. + +The rapid expansion in Android's market share is due to the open nature of the platform. Unlike with iOS, any phone manufacturer can release an Android handset. And, unlike with Windows Mobile, there are no licensing fees. This has been really good news for consumers. It has led to lots of powerful and cheap handsets appearing from manufacturers all over the world. It's a very definite vindication of the value of FOSS software. + +Losing the battle for the Web and mobile computing is a brutal loss for Microsoft. When you consider the size of those two markets combined, the desktop market seems like a stagnant backwater. Nobody likes to lose, especially when money is on the line. And, Microsoft does have a lot to lose. You would expect Microsoft to be bitter about it. And in the past, it has been. + +Microsoft has fought back against Linux and FOSS using every weapon at its disposal, from propaganda to patent threats, and although these attacks have slowed the adoption of Linux, they haven't stopped it. + +So, you can forgive us for being shocked when Microsoft starts handing out t-shirts and badges that say "Microsoft Loves Linux" at open-source conferences and events. Could it be true? Does Microsoft really love Linux? + +Of course, PR slogans and free t-shirts do not equal truth. Actions speak louder than words. And when you consider Microsoft's actions, Microsoft's stance becomes a little more ambiguous. + +On the one hand, Microsoft is recruiting hundreds of Linux developers and sysadmins. It's releasing its .NET Core framework as an open-source project with cross-platform support (so that .NET apps can run on OS X and Linux). And, it is partnering with Linux companies to bring popular distros to its Azure platform. 
In fact, Microsoft even has gone so far as to create its own Linux distro for its Azure data center.
+
+On the other hand, Microsoft continues to launch legal attacks on open-source projects directly and through puppet corporations. It's clear that Microsoft hasn't had some big moral change of heart over proprietary vs. free software, so why the public declarations of adoration?
+
+To state the obvious, Microsoft is a profit-making entity. It's an investment vehicle for its shareholders and a source of income for its employees. Everything it does has a single ultimate goal: revenue. Microsoft doesn't act out of love or even hate (although that's a common accusation).
+
+So the question shouldn't be "does Microsoft really love Linux?" Instead, we should ask how Microsoft is going to profit from all this.
+
+Let's take the open-source release of .NET Core. This move makes it easy to port the .NET runtime to any platform. That extends the reach of Microsoft's .NET framework far beyond the Windows platform.
+
+Opening .NET Core ultimately will make it possible for .NET developers to produce cross-platform apps for OS X, Linux, iOS and even Android--all from a single codebase.
+
+From a developer's perspective, this makes the .NET framework much more attractive than before. Being able to reach many platforms from a single codebase dramatically increases the potential target market for any app developed using the .NET framework.
+
+What's more, a strong Open Source community would provide developers with lots of code to reuse in their own projects. So, the availability of open-source projects would make the .NET framework even more attractive to developers.
+
+On the plus side, opening .NET Core reduces fragmentation across different platforms and means a wider choice of apps for consumers. That means more choice, both in terms of open-source software and proprietary apps.
+
+From Microsoft's point of view, it would gain a huge army of developers.
Microsoft profits by selling training, certification, technical support, development tools (including Visual Studio) and proprietary extensions. + +The question we should ask ourselves is does this benefit or hurt the Free Software community? + +Widespread adoption of the .NET framework could mean the eventual death of competing open-source projects, forcing us all to dance to Microsoft's tune. + +Moving beyond .NET, Microsoft is drawing a lot of attention to its Linux support on its Azure cloud computing platform. Remember, Azure originally was Windows Azure. That's because Windows Server was the only supported operating system. Today, Azure offers support for a number of Linux distros too. + +There's one reason for this: paying customers who need and want Linux services. If Microsoft didn't offer Linux virtual machines, those customers would do business with someone else. + +It looks like Microsoft is waking up to the fact that Linux is here to stay. Microsoft cannot feasibly wipe it out, so it has to embrace it. + +This brings us back to the question of why there is so much buzz about Microsoft and Linux. We're all talking about it, because Microsoft wants us to think about it. After all, all these stories trace back to Microsoft, whether it's through press releases, blog posts or public announcements at conferences. The company is working hard to draw attention to its Linux expertise. + +What other possible purpose could be behind Chief Architect Kamala Subramaniam's blog post announcing Azure Cloud Switch? ACS is a custom Linux distro that Microsoft uses to automate the configuration of its switch hardware in the Azure data centers. + +ACS is not publicly available. It's intended for internal use in the Azure data center, and it's unlikely that anyone else would be able to find a use for it. In fact, Subramaniam states the same thing herself in her post. 
+
+So, Microsoft won't be making any money from selling ACS, and it won't attract a user base by giving it away. Instead, Microsoft gets to draw attention to Linux and Azure, strengthening its position as a Linux cloud computing platform.
+
+Is Microsoft's new-found love for Linux good news for the community?
+
+We shouldn't be slow to forget Microsoft's mantra of Embrace, Extend and Exterminate. Right now, Microsoft is very much in the early stages of embracing Linux. Will Microsoft seek to splinter the community through custom extensions and proprietary "standards"?
+
+Let us know what you think in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxjournal.com/content/microsoft-and-linux-true-romance-or-toxic-love-0
+
+作者:[James Darvell][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxjournal.com/users/james-darvell
\ No newline at end of file
diff --git a/sources/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md b/sources/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md
new file mode 100644
index 0000000000..d9829e9daa
--- /dev/null
+++ b/sources/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md
@@ -0,0 +1,326 @@
+How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2
+================================================================================
+Nginx is a free and open source HTTP server and reverse proxy, as well as a mail proxy server for IMAP/POP3. Nginx is a high performance web server with rich features, simple configuration and low memory usage. Originally written by Igor Sysoev in 2002, it is now used by big technology companies including Netflix, Github, Cloudflare, WordPress.com and others.
+
+In this tutorial we will **install and configure nginx as a reverse proxy for apache on FreeBSD 10.2**. Apache will run with PHP on port 8080, and we will configure nginx to run on port 80 and receive requests from users/visitors. If a user requests a web page from the browser on port 80, nginx will pass the request to the apache web server and PHP running on port 8080.
+
+#### Prerequisite ####
+
+- FreeBSD 10.2.
+- Root privileges.
+
+### Step 1 - Update the System ###
+
+Log in to your FreeBSD server with your ssh credentials and update the system with the commands below :
+
+    freebsd-update fetch
+    freebsd-update install
+
+### Step 2 - Install Apache ###
+
+Apache is an open source HTTP server and the most widely used web server. It is not installed by default on FreeBSD, but we can install it from the ports tree at "/usr/ports/www/apache24" or install it from the FreeBSD repository with the pkg command. In this tutorial we will use the pkg command to install from the FreeBSD repository :
+
+    pkg install apache24
+
+### Step 3 - Install PHP ###
+
+Once apache is installed, we continue with installing PHP to handle users' PHP file requests. We will install PHP with the pkg command as below :
+
+    pkg install php56 mod_php56 php56-mysql php56-mysqli
+
+### Step 4 - Configure Apache and PHP ###
+
+Once everything is installed, we will configure apache to run on port 8080 and PHP to work with apache. To configure apache, we can edit the configuration file "httpd.conf"; for PHP we just need to copy the PHP configuration file php.ini into the "/usr/local/etc/" directory.
+
+Go to the "/usr/local/etc/" directory and copy the php.ini-production file to php.ini :
+
+    cd /usr/local/etc/
+    cp php.ini-production php.ini
+
+Next, configure apache by editing the file "httpd.conf" in the apache directory :
+
+    cd /usr/local/etc/apache24
+    nano -c httpd.conf
+
+Port configuration on line **52** :
+
+    Listen 8080
+
+ServerName configuration on line **219** :
+
+    ServerName 127.0.0.1:8080
+
+Add the DirectoryIndex files that apache will serve if a directory is requested, on line **277** :
+
+    DirectoryIndex index.php index.html
+
+Configure apache to work with php by adding the script below under line **287** :
+
+    <FilesMatch "\.php$">
+        SetHandler application/x-httpd-php
+    </FilesMatch>
+    <FilesMatch "\.phps$">
+        SetHandler application/x-httpd-php-source
+    </FilesMatch>
+
+Save and exit.
+
+Now add apache to start at boot time with the sysrc command :
+
+    sysrc apache24_enable=yes
+
+And test the apache configuration with the command below :
+
+    apachectl configtest
+
+If there is no error, start apache :
+
+    service apache24 start
+
+If all is done, verify that php is running well with apache by creating a phpinfo file in the "/usr/local/www/apache24/data" directory :
+
+    cd /usr/local/www/apache24/data
+    echo "<?php phpinfo(); ?>" > info.php
+
+Now visit the freebsd server IP : 192.168.1.123:8080/info.php.
+
+![Apache and PHP on Port 8080](http://blog.linoxide.com/wp-content/uploads/2015/11/Apache-and-PHP-on-Port-8080.png)
+
+Apache is working with php on port 8080.
+
+### Step 5 - Install Nginx ###
+
+Nginx is a high performance web server and reverse proxy with low memory consumption. In this step we will use nginx as a reverse proxy for apache, so let's install it with the pkg command :
+
+    pkg install nginx
+
+### Step 6 - Configure Nginx ###
+
+Once nginx is installed, we must configure it by replacing the nginx configuration file "**nginx.conf**" with the new configuration below.
Change the directory to "/usr/local/etc/nginx/" and back up the default nginx.conf :
+
+    cd /usr/local/etc/nginx/
+    mv nginx.conf nginx.conf.original
+
+Now create a new nginx configuration file :
+
+    nano -c nginx.conf
+
+and paste the configuration below :
+
+    user www;
+    worker_processes 1;
+    error_log /var/log/nginx/error.log;
+
+    events {
+        worker_connections 1024;
+    }
+
+    http {
+        include mime.types;
+        default_type application/octet-stream;
+
+        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+                        '$status $body_bytes_sent "$http_referer" '
+                        '"$http_user_agent" "$http_x_forwarded_for"';
+        access_log /var/log/nginx/access.log;
+
+        sendfile on;
+        keepalive_timeout 65;
+
+        # Nginx cache configuration
+        proxy_cache_path /var/nginx/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
+        proxy_temp_path /var/nginx/cache/tmp;
+        proxy_cache_key "$scheme$host$request_uri";
+
+        gzip on;
+
+        server {
+            #listen 80;
+            server_name _;
+
+            location /nginx_status {
+                stub_status on;
+                access_log off;
+            }
+
+            # redirect server error pages to the static page /50x.html
+            #
+            error_page 500 502 503 504 /50x.html;
+            location = /50x.html {
+                root /usr/local/www/nginx-dist;
+            }
+
+            # proxy the PHP scripts to Apache listening on 127.0.0.1:8080
+            #
+            location ~ \.php$ {
+                proxy_pass http://127.0.0.1:8080;
+                include /usr/local/etc/nginx/proxy.conf;
+            }
+        }
+
+        include /usr/local/etc/nginx/vhost/*;
+    }
+
+Save and exit. 
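A side note on the cache directives in nginx.conf: nginx names each cached file after the MD5 hash of `proxy_cache_key`, and `levels=1:2` shards those files into a one-character and a two-character directory level. The sketch below is illustration only; the key value is a hypothetical expansion of "$scheme$host$request_uri" for one request to our example domain:

```shell
# Illustration: derive the on-disk path of a cached object under /var/nginx/cache.
# nginx uses the MD5 of proxy_cache_key; with levels=1:2 the last hash character
# is the first directory level and the two characters before it form the second.
key="httpsaitama.me/info.php"                 # hypothetical "$scheme$host$request_uri"
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')
level1=$(printf '%s' "$hash" | cut -c32)      # last character of the 32-char hash
level2=$(printf '%s' "$hash" | cut -c30-31)   # the two characters before it
echo "/var/nginx/cache/$level1/$level2/$hash"
```

This is only a way to reason about what the cache directory will look like; nginx manages these paths itself.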
+
+Next, create a new file called **proxy.conf** for the reverse proxy configuration in the nginx directory :
+
+    cd /usr/local/etc/nginx/
+    nano -c proxy.conf
+
+Paste the configuration below :
+
+    proxy_buffering on;
+    proxy_redirect off;
+    proxy_set_header Host $host;
+    proxy_set_header X-Real-IP $remote_addr;
+    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+    client_max_body_size 10m;
+    client_body_buffer_size 128k;
+    proxy_connect_timeout 90;
+    proxy_send_timeout 90;
+    proxy_read_timeout 90;
+    proxy_buffers 100 8k;
+    add_header X-Cache $upstream_cache_status;
+
+Save and exit.
+
+And last, create a new directory for the nginx cache at "/var/nginx/cache" :
+
+    mkdir -p /var/nginx/cache
+
+### Step 7 - Configure Nginx VirtualHost ###
+
+In this step we will create a new virtualhost for the domain "saitama.me", with the document root at "/usr/local/www/saitama.me" and the log files in the "/var/log/nginx" directory.
+
+The first thing we must do is create a new directory to store the virtualhost files; here we use a new directory called "**vhost**". Let's create it :
+
+    cd /usr/local/etc/nginx/
+    mkdir vhost
+
+The vhost directory has been created, now go to the directory and create a new virtualhost file. 
Here I will create a new file "**saitama.conf**" :
+
+    cd vhost/
+    nano -c saitama.conf
+
+Paste the virtualhost configuration below :
+
+    server {
+        # Replace with your freebsd IP
+        listen 192.168.1.123:80;
+
+        # Document Root
+        root /usr/local/www/saitama.me;
+        index index.php index.html index.htm;
+
+        # Domain
+        server_name www.saitama.me saitama.me;
+
+        # Error and Access log file
+        error_log /var/log/nginx/saitama-error.log;
+        access_log /var/log/nginx/saitama-access.log main;
+
+        # Reverse Proxy Configuration
+        location ~ \.php$ {
+            proxy_pass http://127.0.0.1:8080;
+            include /usr/local/etc/nginx/proxy.conf;
+
+            # Cache configuration
+            proxy_cache my-cache;
+            proxy_cache_valid 10s;
+            proxy_no_cache $cookie_PHPSESSID;
+            proxy_cache_bypass $cookie_PHPSESSID;
+            proxy_cache_key "$scheme$host$request_uri";
+        }
+
+        # Disable Cache for the file types html, json
+        location ~* \.(?:manifest|appcache|html?|xml|json)$ {
+            expires -1;
+        }
+
+        # Cache these file types for 30 days
+        location ~* \.(jpg|png|gif|jpeg|css|mp3|wav|swf|mov|doc|pdf|xls|ppt|docx|pptx|xlsx)$ {
+            proxy_cache_valid 200 120m;
+            expires 30d;
+            proxy_cache my-cache;
+            access_log off;
+        }
+    }
+
+Save and exit.
+
+Next, create a new log directory for nginx and the virtualhost in "/var/log/" :
+
+    mkdir -p /var/log/nginx/
+
+If all is done, let's create the document root directory for saitama.me :
+
+    cd /usr/local/www/
+    mkdir saitama.me
+
+### Step 8 - Testing ###
+
+In this step we just test our nginx configuration and the nginx virtualhost.
+
+Test the nginx configuration with the command below :
+
+    nginx -t
+
+If there is no problem, add nginx to start at boot time with the sysrc command, then start it and restart apache :
+
+    sysrc nginx_enable=yes
+    service nginx start
+    service apache24 restart
+
+All is done, now verify that PHP is working by adding a new phpinfo file in the saitama.me directory :
+
+    cd /usr/local/www/saitama.me
+    echo "<?php phpinfo(); ?>" > info.php
+
+Visit the domain : **www.saitama.me/info.php**. 
+
+![Virtualhost Configured saitamame](http://blog.linoxide.com/wp-content/uploads/2015/11/Virtualhost-Configured-saitamame.png)
+
+Nginx as a reverse proxy for apache is working, and PHP is working too.
+
+And here are some more results :
+
+Test a .html file with no-cache.
+
+    curl -I www.saitama.me
+
+![html with no-cache](http://blog.linoxide.com/wp-content/uploads/2015/11/html-with-no-cache.png)
+
+Test a .css file with the 30-day cache.
+
+    curl -I www.saitama.me/test.css
+
+![css file 30day cache](http://blog.linoxide.com/wp-content/uploads/2015/11/css-file-30day-cache.png)
+
+Test a .php file with the cache :
+
+    curl -I www.saitama.me/info.php
+
+![PHP file cached](http://blog.linoxide.com/wp-content/uploads/2015/11/PHP-file-cached.png)
+
+All is done.
+
+### Conclusion ###
+
+Nginx is one of the most popular HTTP servers and reverse proxies. It has a rich set of features with high performance and low memory/RAM usage. Nginx can also be used for caching; we can cache static files to make pages load faster, and cache PHP responses when users request them. Nginx is easy to configure and use, either as an HTTP server or acting as a reverse proxy for apache. 
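As a small follow-up to the curl tests above: the `X-Cache` header added in proxy.conf tells you whether a response was served from the nginx cache. A sketch for pulling it out of a response; note that it parses a captured header block (as produced by `curl -I`) rather than querying the live server, so the sample values are hypothetical:

```shell
# Extract the X-Cache header (set via add_header in proxy.conf) from a saved
# curl -I response; HIT means the response came from the nginx cache.
headers='HTTP/1.1 200 OK
Server: nginx
X-Cache: HIT'
status=$(printf '%s\n' "$headers" | awk -F': *' 'tolower($1) == "x-cache" {print $2}')
echo "X-Cache status: ${status:-no X-Cache header}"
```

On a live setup you would pipe `curl -sI www.saitama.me/info.php` into the same awk filter; requesting the page twice within the 10s `proxy_cache_valid` window should flip the status from MISS to HIT.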
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/linux-how-to/install-nginx-reverse-proxy-apache-freebsd-10-2/
+
+作者:[Arul][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/arulm/
\ No newline at end of file
diff --git a/sources/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md b/sources/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md
new file mode 100644
index 0000000000..84c04e7436
--- /dev/null
+++ b/sources/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md
@@ -0,0 +1,53 @@
+Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux
+================================================================================
+> Question: I have a text file in which I need to remove all trailing whitespaces (e.g., spaces and tabs) in each line for formatting purposes. Is there a quick and easy Linux command line tool I can use for this?
+
+When you are writing code for your program, you must understand that there are standard coding styles to follow. For example, "trailing whitespaces" are typically considered evil because when they get into a code repository for revision control, they can cause a lot of problems and confusion (e.g., "false diffs"). Many IDEs and text editors are capable of highlighting and automatically trimming trailing whitespaces at the end of each line.
+
+Here are a few ways to **remove trailing whitespaces in a Linux command-line environment**.
+
+### Method One ###
+
+A simple command line approach to remove unwanted whitespaces is via sed.
+
+The following command deletes all spaces and tabs at the end of each line in input.java. 
+
+    $ sed -i 's/[[:space:]]*$//' input.java
+
+If there are multiple files that need trailing whitespaces removed, you can use a combination of find and sed. For example, the following command deletes trailing whitespaces in all *.java files recursively found in the current directory as well as all its sub-directories.
+
+    $ find . -name "*.java" -type f -print0 | xargs -0 sed -i 's/[[:space:]]*$//'
+
+### Method Two ###
+
+The Vim text editor is able to highlight and trim whitespaces in a file as well.
+
+To highlight all trailing whitespaces in a file, open the file with the Vim editor and enable text highlighting by typing the following in Vim command line mode.
+
+    :set hlsearch
+
+Then search for trailing whitespaces by typing:
+
+    /\s\+$
+
+This will show all trailing spaces and tabs found throughout the file.
+
+![](https://c1.staticflickr.com/1/757/23198657732_bc40e757b4_b.jpg)
+
+Then to clean up all trailing whitespaces in a file with Vim, type the following Vim command.
+
+    :%s/\s\+$//
+
+This command means substituting all whitespace characters found at the end of a line (\s\+$) with no character. 
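To double-check that a cleanup pass (for example, the sed command from Method One) really removed everything, you can count offending lines with grep. A small self-contained sketch; it creates a throwaway demo file under /tmp purely for illustration, and the `-i` flag assumes GNU sed as used above:

```shell
# Count lines with trailing blanks before and after the sed fix.
printf 'clean\ntrailing spaces   \ntrailing tab\t\n' > /tmp/ws_demo.txt
before=$(grep -c '[[:blank:]]$' /tmp/ws_demo.txt)
sed -i 's/[[:space:]]*$//' /tmp/ws_demo.txt
after=$(grep -c '[[:blank:]]$' /tmp/ws_demo.txt || true)   # grep -c exits 1 on zero matches
echo "before=$before after=$after"
```

The same `grep -c '[[:blank:]]$'` check works as a quick pre-commit guard in a repository.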
+ +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/remove-trailing-whitespaces-linux.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file From 58a6def9177e8da705a6cc90f52d841e2481417c Mon Sep 17 00:00:00 2001 From: XLCYun Date: Thu, 26 Nov 2015 22:27:05 +0800 Subject: [PATCH 035/160] =?UTF-8?q?XLCYun=20=E5=88=A0=E9=99=A4=E5=8E=9F?= =?UTF-8?q?=E6=96=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit XLCYun 删除原文 --- ...ntel Graphics Installer in Ubuntu 15.10.md | 47 ------------------- 1 file changed, 47 deletions(-) delete mode 100644 sources/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md diff --git a/sources/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md b/sources/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md deleted file mode 100644 index 2d9183ebd5..0000000000 --- a/sources/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md +++ /dev/null @@ -1,47 +0,0 @@ -Translating by XLCYun -Install Intel Graphics Installer in Ubuntu 15.10 -================================================================================ -![Intel graphics installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel_logo.jpg) - -Intel has announced a new release of its Linux graphics installer recently. Ubuntu 15.10 Wily is required and support for Ubuntu 15.04 is deprecated in the new release. - -> The Intel® Graphics Installer for Linux* allows you to easily install the latest graphics and video drivers for your Intel graphics hardware. 
This allows you to stay current with the latest enhancements, optimizations, and fixes to the Intel® Graphics Stack to ensure the best user experience with your Intel® graphics hardware. The Intel® Graphics Installer for Linux* is available for the latest version of Ubuntu*. - -![intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel-graphics-installer.jpg) - -### How to Install: ### - -**1.** Download the installer from [the link page][1]. The current is version 1.2.1 for Ubuntu 15.10. Check your OS type, 32-bit or 64-bit, via **System Settings -> Details**. - -![download-intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/download-intel-graphics-installer.jpg) - -**2.** Once the download process finished, go to your Download folder and click open the .deb package with Ubuntu Software Center and finally click the install button. - -![install-via-software-center](http://ubuntuhandbook.org/wp-content/uploads/2015/11/install-via-software-center.jpg) - -**3.** In order to trust the Intel Graphics Installer, you will need to add keys via below commands. - -Open terminal from Unity Dash, App Launcher, or via Ctrl+Alt+T shortcut key. When it opens, paste below commands and run one by one: - - wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg -O - | sudo apt-key add - - - wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg-2 -O - | sudo apt-key add - - -![trust-intel](http://ubuntuhandbook.org/wp-content/uploads/2015/11/trust-intel.jpg) - -NOTE: While running the first command, if the cursor is stuck and blinking after downloading the key, as above picture shows, type your password (no visual feedback) and hit enter to continue. - -Finally launch Intel Graphics Installer via Unity Dash or Application launcher. 
- --------------------------------------------------------------------------------- - -via: http://ubuntuhandbook.org/index.php/2015/11/install-intel-graphics-installer-in-ubuntu-15-10/ - -作者:[Ji m][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ubuntuhandbook.org/index.php/about/ -[1]:https://01.org/linuxgraphics/downloads From 60247119fa15341434da793aeb1212d4784db6aa Mon Sep 17 00:00:00 2001 From: XLCYun Date: Thu, 26 Nov 2015 22:28:12 +0800 Subject: [PATCH 036/160] Create 20151123 Install Intel Graphics Installer in Ubuntu 15.10.md --- ...ntel Graphics Installer in Ubuntu 15.10.md | 47 +++++++++++++++++++ 1 file changed, 47 insertions(+) create mode 100644 translated/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md diff --git a/translated/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md b/translated/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md new file mode 100644 index 0000000000..be91927f24 --- /dev/null +++ b/translated/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md @@ -0,0 +1,47 @@ +在Ubuntu 15.10上安装Intel图形安装器 +================================================================================ +![Intel graphics installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel_logo.jpg) + +Intel最近发布了一个新版本的Linux图型安装器。在新版本中,Ubuntu 15.04将不被支持而必须用Ubuntu 15.10 Wily。 + + +> Linux版Intel®图形安装器可以让你很容易的安装最新版的图形与视频驱动。它能保证你一直使用最新的增强与优化功能,并能够安装到Intel图形堆栈中,来保证你在你的Intel图形硬件下,享受到最佳的用户体验。*现在的Linux版的Intel®图形安装器支持最新版的Ubuntu。* + +![intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel-graphics-installer.jpg) + +### 安装 ### + +**1.** 从[链接页面][1]中下载安装器。当前支持Ubuntu 15.10的版本是1.2.1版。你可以在**系统设置 -> 详细信息**中检查你的操作系统(32位或64位)的类型。 + 
+![download-intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/download-intel-graphics-installer.jpg)
+
+**2.** 一旦下载完成,到下载目录中点击.deb安装包用Ubuntu软件中心打开它,最后点击“安装”按钮。
+
+![install-via-software-center](http://ubuntuhandbook.org/wp-content/uploads/2015/11/install-via-software-center.jpg)
+
+**3.** 为了让系统信任Intel图形安装器,你需要通过下面的命令来为它添加密钥。
+
+用快捷键Ctrl+Alt+T或者在Unity Dash中的“应用程序启动器”中打开终端。依次粘贴运行下面的命令。
+
+    wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg -O - | sudo apt-key add -
+
+    wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg-2 -O - | sudo apt-key add -
+
+![trust-intel](http://ubuntuhandbook.org/wp-content/uploads/2015/11/trust-intel.jpg)
+
+注意:在运行第一个命令的过程中,如果密钥下载完成后光标停住不动并且一直闪烁的话,就像上面图片显示的那样,输入你的密码(输入时不会看到什么变化)然后回车就行了。
+
+最后通过Unity Dash或应用程序启动器打开Intel图形安装器。
+
+--------------------------------------------------------------------------------
+
+via: http://ubuntuhandbook.org/index.php/2015/11/install-intel-graphics-installer-in-ubuntu-15-10/
+
+作者:[Ji m][a]
+译者:[XLCYun](https://github.com/XLCYun)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://ubuntuhandbook.org/index.php/about/
+[1]:https://01.org/linuxgraphics/downloads
From 92f5eb379241b9c273511177152ebc93bb7ae5a9 Mon Sep 17 00:00:00 2001
From: wxy
Date: Fri, 27 Nov 2015 06:41:08 +0800
Subject: [PATCH 037/160] PUB:20151012 Curious about Linux Try Linux Desktop on the Cloud @sevenot

---
 ...ut Linux Try Linux Desktop on the Cloud.md | 44 +++++++++++++++++++
 ...ut Linux Try Linux Desktop on the Cloud.md | 43 ------------------
 2 files changed, 44 insertions(+), 43 deletions(-)
 create mode 100644 published/20151012 Curious about Linux Try Linux Desktop on the Cloud.md
 delete mode 100644 translated/share/20151012 Curious about Linux Try Linux Desktop on the Cloud.md

diff --git a/published/20151012 Curious about Linux Try Linux Desktop on the Cloud.md 
b/published/20151012 Curious about Linux Try Linux Desktop on the Cloud.md new file mode 100644 index 0000000000..2d2985bc34 --- /dev/null +++ b/published/20151012 Curious about Linux Try Linux Desktop on the Cloud.md @@ -0,0 +1,44 @@ +好奇 Linux?试试云端的 Linux 桌面 +================================================================================ +Linux 在桌面操作系统市场上只占据了非常小的份额,从目前的调查结果来看,估计只有2%的市场份额;对比来看,丰富多变的 Windows 系统占据了接近90%的市场份额。对于 Linux 来说,要挑战 Windows 在桌面操作系统市场的垄断,需要有一个让用户学习不同的操作系统的简单方式。如果你相信传统的 Windows 用户会再买一台机器来使用 Linux,那你就太天真了。我们只能去试想用户重新分区,设置引导程序来使用双系统,或者跳过所有步骤回到一个最简单的方法。 + +![](http://www.linuxlinks.com/portal/content/reviews/Cloud/CloudComputing.png) + +我们实验过一系列让用户试操作 Linux 的无风险的使用方法,不涉及任何分区管理,包括 CD/DVD 光盘、USB 存储棒和桌面虚拟化软件等等。通过实验,我强烈推荐使用 VMware 的 VMware Player 或者 Oracle VirtualBox 虚拟机,对于桌面操作系统或者便携式电脑的用户,这是一种安装运行多操作系统的相对简单而且免费的的方法。每一台虚拟机和其他虚拟机相隔离,但是共享 CPU、内存、网络接口等等。虚拟机仍需要一定的资源来安装运行 Linux,也需要一台相当强劲的主机。但对于一个好奇心不大的人,这样做实在是太麻烦了。 + +要打破用户传统的使用观念是非常困难的。很多 Windows 用户可以尝试使用 Linux 提供的自由软件,但也有太多要学习的 Linux 系统知识。这会花掉他们相当一部分时间才能习惯 Linux 的工作方式。 + +当然了,对于一个第一次在 Linux 上操作的新手,有没有一个更高效的方法呢?答案是肯定的,接着往下看看云实验平台。 + +### LabxNow ### + +![LabxNow](http://www.linuxlinks.com/portal/content/reviews/Cloud/Screenshot-LabxNow.png) + +LabxNow 提供了一个免费服务,方便广大用户通过浏览器来访问远程 Linux 桌面。开发者将其加强为一个用户个人远程实验室(用户可以在系统里运行、开发任何程序),用户可以在任何地方通过互联网登入远程实验室。 + +这项服务现在可以为个人用户提供2核处理器,4GB RAM和10GB的固态硬盘,运行在128G RAM的4 AMD 6272处理器上。 + +#### 配置参数: #### + +- 系统镜像:基于 Ubuntu 14.04 的 Xface 4.10,RHEL 6.5,CentOS(Gnome桌面),Oracle +- 硬件: CPU - 1核或者2核;内存: 512MB, 1GB, 2GB or 4GB +- 超快的网络数据传输 +- 可以运行在所有流行的浏览器上 +- 可以安装任意程序,可以运行任何程序 – 这是一个非常棒的方法,可以随意做实验学习你想学的任何知识,没有 一点风险 +- 添加、删除、管理、制定虚拟机非常方便 +- 支持虚拟机共享,远程桌面 + +你所需要的只是一台有稳定网络的设备。不用担心虚拟专用系统(VPS)、域名、或者硬件带来的高费用。LabxNow提供了一个在 Ubuntu、RHEL 和 CentOS 上实验的非常好的方法。它给 Windows 用户提供一个极好的环境,让他们探索美妙的 Linux 世界。说得深入一点,它可以让用户随时随地在里面工作,而没有了要在每台设备上安装 Linux 的压力。点击下面这个链接进入 [www.labxnow.org/labxweb/][1]。 + +另外还有一些其它服务(大部分是收费服务)可以让用户使用 Linux,包括 Cloudsigma 环境的7天使用权和Icebergs.io (通过HTML5实现root权限)。但是现在,我推荐 
LabxNow。 + +-------------------------------------------------------------------------------- + +来自: http://www.linuxlinks.com/article/20151003095334682/LinuxCloud.html + +译者:[sevenot](https://github.com/sevenot) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://www.labxnow.org/labxweb/ diff --git a/translated/share/20151012 Curious about Linux Try Linux Desktop on the Cloud.md b/translated/share/20151012 Curious about Linux Try Linux Desktop on the Cloud.md deleted file mode 100644 index 016429d92d..0000000000 --- a/translated/share/20151012 Curious about Linux Try Linux Desktop on the Cloud.md +++ /dev/null @@ -1,43 +0,0 @@ -sevenot translated -好奇Linux?试试云端的Linux桌面 -================================================================================ -Linux在桌面操作系统市场上只占据了非常小的份额,目前调查来看,估计只有2%的市场份额;对比来看丰富多变的Windows系统占据了接近90%的市场份额。对于Linux来说要挑战Windows在桌面操作系统市场的垄断,需要一个简单的方式来让用户学习不同的操作系统。如果你相信传统的Windows用户再买一台机器来使用Linux,你就太天真了。我们只能去试想用户重新分盘,设置引导程序来使用双系统,或者跳过所有步骤回到一个最简单的方法。 -![](http://www.linuxlinks.com/portal/content/reviews/Cloud/CloudComputing.png) - -我们实验过一系列无风险的使用方法让用户试操作Linux,并且不涉及任何分区管理,包括CD/DVDs光盘、USB钥匙和桌面虚拟化软件。通过实验,我强烈推荐使用VMware的VMware Player或者Oracle VirtualBox虚拟机,对于桌面操作系统或者便携式电脑的用户,这是一种相对简单而且免费的的方法来安装运行多操作系统。每一台虚拟机和其他虚拟机相隔离,但是共享CPU,存贮,网络接口等等。但是虚拟机仍需要一定的资源来安装运行Linux,也需要一台相当强劲的主机。对于一个好奇心不大的人,这样做实在是太麻烦了。 - -要打破用户传统的使用观念市非常困难的。很多Windows用户可以尝试使用Linux提供的免费软件,但也有太多要学习的Linux系统知识。这会花掉相当一部分时间来习惯Linux的工作方式。 - -当然了,对于一个第一次在Linux上操作的新手,有没有一个更高效的方法呢?答案是肯定的,接着往下看看云实验平台。 - -### LabxNow ### - -![LabxNow](http://www.linuxlinks.com/portal/content/reviews/Cloud/Screenshot-LabxNow.png) - -LabxNow提供了一个免费服务,方便广大用户通过浏览器来访问远程Liunx桌面。开发者将其加强为一个用户个人远程实验室(用户可以在系统里运行、开发任何程序),用户可以在任何地方通过互联网登入远程实验室。 - -这项服务现在可以为个人用户提供2核处理器,4GB RAM和10GB的固态硬盘,运行在128G RAM的4 AMD 6272处理器上。 - -#### 配置参数: #### - -- 系统镜像:基于Ubuntu 14.04的Xface 4.10,RHEL 6.5,CentOS(Gnome桌面),Oracle -- 硬件: CPU - 1核或者2核; 内存: 512MB, 1GB, 2GB or 4GB -- 
超快的网络数据传输 -- 可以运行在所有流行的浏览器上 -- 可以安装任意程序,可以运行任何程序 – 这是一个非常棒的方法,可以随意做实验学你你想学的所有知识, 没有 一点风险 -- 添加、删除、管理、制定虚拟机非常方便 -- 支持虚拟机共享,远程桌面 - -你所需要的只是一台有稳定网络的设备。不用担心虚拟专用系统(VPS)、域名、或者硬件带来的高费用。LabxNow提供了一个非常好的方法在Ubuntu、RHEL和CentOS上实验。它给Windows用户一个极好的环境,让他们探索美妙的Linux世界。说得深一点,它可以让用户随时随地在里面工作,而没有了要在每台设备上安装Linux的压力。点击下面这个链接进入[www.labxnow.org/labxweb/][1]。 - -这里还有一些其它服务(大部分市收费服务)可以让用户在Linux使用。包括Cloudsigma环境的7天使用权和Icebergs.io(通过HTML5实现root权限)。但是现在,我推荐LabxNow。 --------------------------------------------------------------------------------- - -来自: http://www.linuxlinks.com/article/20151003095334682/LinuxCloud.html - -译者:[sevenot](https://github.com/sevenot) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.labxnow.org/labxweb/ From 8097a41813f672272eeb20ca24f9bd8c906b94d4 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 27 Nov 2015 06:53:35 +0800 Subject: [PATCH 038/160] PUB:20151105 Linux FAQs with Answers--How to find which shell I am using on Linux @strugglingyouth --- ...to find which shell I am using on Linux.md | 29 +++++-------------- 1 file changed, 7 insertions(+), 22 deletions(-) rename {translated/tech => published}/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md (66%) diff --git a/translated/tech/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md b/published/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md similarity index 66% rename from translated/tech/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md rename to published/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md index 675ef43d94..e9e3aeabcc 100644 --- a/translated/tech/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md +++ b/published/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md @@ -1,5 +1,4 @@ - 
-Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell +Linux 有问必答:如何知道当前正在使用的 shell 是哪个? ================================================================================ > **问题**: 我经常在命令行中切换 shell。是否有一个快速简便的方法来找出我当前正在使用的 shell 呢?此外,我怎么能找到当前 shell 的版本? @@ -7,36 +6,30 @@ Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell 有多种方式可以查看你目前在使用什么 shell,最简单的方法就是通过使用 shell 的特殊参数。 -其一,[一个名为 "$$" 的特殊参数][1] 表示当前你正在运行的 shell 的 PID。此参数是只读的,不能被修改。所以,下面的命令也将显示你正在运行的 shell 的名字: +其一,[一个名为 "$$" 的特殊参数][1] 表示当前你正在运行的 shell 实例的 PID。此参数是只读的,不能被修改。所以,下面的命令也将显示你正在运行的 shell 的名字: $ ps -p $$ ----------- - PID TTY TIME CMD 21666 pts/4 00:00:00 bash 上述命令可在所有可用的 shell 中工作。 -如果你不使用 csh,使用 shell 的特殊参数 “$$” 可以找出当前的 shell,这表示当前正在运行的 shell 或 shell 脚本的名称。这是 Bash 的一个特殊参数,但也可用在其他 shells 中,如 sh, zsh, tcsh or dash。使用 echo 命令也可以查看你目前正在使用的 shell 的名称。 +如果你不使用 csh,找到当前使用的 shell 的另外一个办法是使用特殊参数 “$0” ,它表示当前正在运行的 shell 或 shell 脚本的名称。这是 Bash 的一个特殊参数,但也可用在其他 shell 中,如 sh、zsh、tcsh 或 dash。使用 echo 命令可以查看你目前正在使用的 shell 的名称。 $ echo $0 ----------- - bash -不要将 $SHELL 看成是一个单独的环境变量,它被设置为整个路径下的默认 shell。因此,这个变量并不一定指向你当前使用的 shell。例如,即使你在终端中调用不同的 shell,$SHELL 也保持不变。 +不要被一个叫做 $SHELL 的单独的环境变量所迷惑,它被设置为你的默认 shell 的完整路径。因此,这个变量并不一定指向你当前使用的 shell。例如,即使你在终端中调用不同的 shell,$SHELL 也保持不变。 $ echo $SHELL ----------- - /bin/shell ![](https://c2.staticflickr.com/6/5688/22544087680_4a9c180485_c.jpg) -因此,找出当前的shell,你应该使用 $$ 或 $0,但不是 $ SHELL。 +因此,找出当前的shell,你应该使用 $$ 或 $0,但不是 $SHELL。 ### 找出当前 Shell 的版本 ### @@ -46,8 +39,6 @@ Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell $ bash --version ----------- - GNU bash, version 4.3.30(1)-release (x86_64-pc-linux-gnu) Copyright (C) 2013 Free Software Foundation, Inc. 
License GPLv3+: GNU GPL version 3 or later @@ -59,23 +50,17 @@ Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell $ zsh --version ----------- - zsh 5.0.7 (x86_64-pc-linux-gnu) **对于** tcsh **shell**: $ tcsh --version ----------- - tcsh 6.18.01 (Astron) 2012-02-14 (x86_64-unknown-linux) options wide,nls,dl,al,kan,rh,nd,color,filec -对于一些 shells,你还可以使用 shell 特定的变量(例如,$ BASH_VERSION 或 $ ZSH_VERSION)。 +对于某些 shell,你还可以使用 shell 特定的变量(例如,$BASH_VERSION 或 $ZSH_VERSION)。 $ echo $BASH_VERSION ----------- - 4.3.8(1)-release -------------------------------------------------------------------------------- @@ -84,7 +69,7 @@ via: http://ask.xmodulo.com/which-shell-am-i-using.html 作者:[Dan Nanni][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 57b040f06e3320fdf2fbaabf0ca03ee2b00717c4 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 27 Nov 2015 14:44:02 +0800 Subject: [PATCH 039/160] =?UTF-8?q?20151127-1=20=E9=80=89=E9=A2=98=20Linux?= =?UTF-8?q?=20or=20UNIX=20grep=20Command=20Tutorial=20series?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...grep Command In Linux or UNIX--Examples.md | 151 +++++++++ ...l series 2--Regular Expressions In grep.md | 289 ++++++++++++++++++ ...ds or String Pattern Using grep Command.md | 41 +++ ...Count Lines If a String or Word Matches.md | 33 ++ ...ep From Files and Display the File Name.md | 67 ++++ ...How To Find Files by Content Under UNIX.md | 66 ++++ ...ives Uncommented Lines of a Config File.md | 151 +++++++++ 7 files changed, 798 insertions(+) create mode 100644 sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md create mode 100644 sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep 
Command Tutorial series 2--Regular Expressions In grep.md
 create mode 100644 sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 3--Search Multiple Words or String Pattern Using grep Command.md
 create mode 100644 sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 4--Grep Count Lines If a String or Word Matches.md
 create mode 100644 sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 5--Grep From Files and Display the File Name.md
 create mode 100644 sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 6--How To Find Files by Content Under UNIX.md
 create mode 100644 sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 7--Linux or UNIX View Only Configuration File Directives Uncommented Lines of a Config File.md

diff --git a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md
new file mode 100644
index 0000000000..b18b40e04c
--- /dev/null
+++ b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md
@@ -0,0 +1,151 @@
+HowTo: Use grep Command In Linux / UNIX – Examples
+================================================================================
+How do I use the grep command on Linux, Apple OS X, and Unix-like operating systems? Can you give me some simple examples of the grep command?
+
+The grep command searches the given file for lines containing a match to the given strings or words. 
By default, grep displays the matching lines. Use grep to search for lines of text that match one or more regular expressions, and it outputs only the matching lines. grep is considered one of the most useful commands on Linux and Unix-like operating systems.
+
+### Did you know? ###
+
+The name, "grep", derives from the command used to perform a similar operation, using the Unix/Linux text editor ed:
+
+    g/re/p
+
+### The grep command syntax ###
+
+The syntax is as follows:
+
+    grep 'word' filename
+    grep 'word' file1 file2 file3
+    grep 'string1 string2' filename
+    cat otherfile | grep 'something'
+    command | grep 'something'
+    command option1 | grep 'data'
+    grep --color 'data' fileName
+
+### How do I use the grep command to search a file? ###
+
+Search the /etc/passwd file for the boo user, enter:
+
+    $ grep boo /etc/passwd
+
+Sample outputs:
+
+    boo:x:1000:1000:boo,,,:/home/boo:/bin/ksh
+
+You can force grep to ignore word case i.e. match boo, Boo, BOO and all other combinations with the -i option:
+
+    $ grep -i "boo" /etc/passwd
+
+### Use grep recursively ###
+
+You can search recursively i.e. read all files under each directory for the string "192.168.1.5":
+
+    $ grep -r "192.168.1.5" /etc/
+
+OR
+
+    $ grep -R "192.168.1.5" /etc/
+
+Sample outputs:
+
+    /etc/ppp/options:# ms-wins 192.168.1.50
+    /etc/ppp/options:# ms-wins 192.168.1.51
+    /etc/NetworkManager/system-connections/Wired connection 1:addresses1=192.168.1.5;24;192.168.1.2;
+
+You will see the results for 192.168.1.5 on separate lines, each preceded by the name of the file (such as /etc/ppp/options) in which it was found. The inclusion of the file names in the output data can be suppressed by using the -h option as follows:
+
+    $ grep -h -R "192.168.1.5" /etc/
+
+OR
+
+    $ grep -hR "192.168.1.5" /etc/
+
+Sample outputs:
+
+    # ms-wins 192.168.1.50
+    # ms-wins 192.168.1.51
+    addresses1=192.168.1.5;24;192.168.1.2;
+
+### Use grep to search words only ###
+
+When you search for boo, grep will match fooboo, boo123, barboo35 and more. 
You can force the grep command to select only those lines containing matches that form whole words i.e. match only the word boo:
+
+    $ grep -w "boo" file
+
+### Use grep to search 2 different words ###
+
+Use the egrep command as follows:
+
+    $ egrep -w 'word1|word2' /path/to/file
+
+### Count lines when words have been matched ###
+
+grep can report the number of times that the pattern has been matched for each file using the -c (count) option:
+
+    $ grep -c 'word' /path/to/file
+
+Pass the -n option to precede each line of output with the number of the line in the text file from which it was obtained:
+
+    $ grep -n 'root' /etc/passwd
+
+Sample outputs:
+
+    1:root:x:0:0:root:/root:/bin/bash
+    1042:rootdoor:x:0:0:rootdoor:/home/rootdoor:/bin/csh
+    3319:initrootapp:x:0:0:initrootapp:/home/initroot:/bin/ksh
+
+### Grep invert match ###
+
+You can use the -v option to invert the match; that is, it matches only those lines that do not contain the given word. For example, print all lines that do not contain the word bar:
+
+    $ grep -v bar /path/to/file
+
+### UNIX / Linux pipes and grep command ###
+
+The grep command is often used with [shell pipes][1]. In this example, show the names of the hard disk devices:
+
+    # dmesg | egrep '(s|h)d[a-z]'
+
+Display the CPU model name:
+
+    # cat /proc/cpuinfo | grep -i 'Model'
+
+However, the above command can also be used as follows without a shell pipe:
+
+    # grep -i 'Model' /proc/cpuinfo
+
+Sample outputs:
+
+    model : 30
+    model name : Intel(R) Core(TM) i7 CPU Q 820 @ 1.73GHz
+    model : 30
+    model name : Intel(R) Core(TM) i7 CPU Q 820 @ 1.73GHz
+
+### How do I list just the names of matching files? 
###
+
+Use the -l option to list the names of files whose contents mention main():
+
+    $ grep -l 'main' *.c
+
+Finally, you can force grep to display its output in colors, enter:
+
+    $ grep --color vivek /etc/passwd
+
+Sample outputs:
+
+![Grep command in action](http://files.cyberciti.biz/uploads/faq/2007/08/grep_command_examples.png)
+
+Grep command in action
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/
+
+作者:Vivek Gite
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+
+[1]:http://bash.cyberciti.biz/guide/Pipes
\ No newline at end of file
diff --git a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md
new file mode 100644
index 0000000000..506719d8aa
--- /dev/null
+++ b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md
@@ -0,0 +1,289 @@
+Regular Expressions In grep
+================================================================================
+How do I use the grep command with regular expressions on Linux and Unix-like operating systems?
+
+Linux comes with GNU grep, which supports extended regular expressions. GNU grep is the default on all Linux systems. The grep command is used to locate information stored anywhere on your server or workstation.
+
+### Regular Expressions ###
+
+A regular expression is nothing but a pattern to match against each input line. A pattern is a sequence of characters. 
The following are all examples of patterns:
+
+    ^w1
+    w1|w2
+    [^ ]
+
+#### grep Regular Expressions Examples ####
+
+Search for 'vivek' in /etc/passwd:
+
+    grep vivek /etc/passwd
+
+Sample outputs:
+
+    vivek:x:1000:1000:Vivek Gite,,,:/home/vivek:/bin/bash
+    vivekgite:x:1001:1001::/home/vivekgite:/bin/sh
+    gitevivek:x:1002:1002::/home/gitevivek:/bin/sh
+
+Search for vivek in any case (i.e. a case-insensitive search):
+
+    grep -i -w vivek /etc/passwd
+
+Search for vivek or raj in any case:
+
+    grep -E -i -w 'vivek|raj' /etc/passwd
+
+The PATTERN in the last example is used as an extended regular expression.
+
+### Anchors ###
+
+You can use ^ and $ to force a regex to match only at the start or end of a line, respectively. The following example displays only lines starting with vivek:
+
+    grep ^vivek /etc/passwd
+
+Sample outputs:
+
+    vivek:x:1000:1000:Vivek Gite,,,:/home/vivek:/bin/bash
+    vivekgite:x:1001:1001::/home/vivekgite:/bin/sh
+
+You can display only lines starting with the word vivek, i.e. do not display vivekgite, vivekg etc.:
+
+    grep -w ^vivek /etc/passwd
+
+Find lines ending with the word foo:
+
+    grep 'foo$' filename
+
+Match lines containing only foo:
+
+    grep '^foo$' filename
+
+You can search for blank lines with the following example:
+
+    grep '^$' filename
+
+### Character Class ###
+
+Match Vivek or vivek:
+
+    grep '[vV]ivek' filename
+
+OR
+
+    grep '[vV][iI][Vv][Ee][kK]' filename
+
+You can also match digits (i.e. match vivek1 or Vivek2 etc.):
+
+    grep -w '[vV]ivek[0-9]' filename
+
+You can match two numeric digits (i.e. match foo11, foo12 etc.):
+
+    grep 'foo[0-9][0-9]' filename
+
+You are not limited to digits; you can match at least one letter:
+
+    grep '[A-Za-z]' filename
+
+Display all the lines containing either a "w" or "n" character (the pattern is quoted so the shell does not treat the brackets as a glob):
+
+    grep '[wn]' filename
+
+Within a bracket expression, the name of a character class enclosed in "[:" and ":]" stands for the list of all characters belonging to that class.
Standard character class names are:
+
+- [:alnum:] - Alphanumeric characters.
+- [:alpha:] - Alphabetic characters.
+- [:blank:] - Blank characters: space and tab.
+- [:digit:] - Digits: '0 1 2 3 4 5 6 7 8 9'.
+- [:lower:] - Lower-case letters: 'a b c d e f g h i j k l m n o p q r s t u v w x y z'.
+- [:space:] - Space characters: tab, newline, vertical tab, form feed, carriage return, and space.
+- [:upper:] - Upper-case letters: 'A B C D E F G H I J K L M N O P Q R S T U V W X Y Z'.
+
+In this example, match all lines that contain an upper-case letter (note the doubled brackets: the class name is itself used inside a bracket expression):
+
+    grep '[[:upper:]]' filename
+
+### Wildcards ###
+
+You can use the "." for a single character match. In this example, match all 3-character words starting with "b" and ending in "t":
+
+    grep '\<b.t\>' filename
+
+Where,
+
+- \< Match the empty string at the beginning of a word.
+- \> Match the empty string at the end of a word.
+
+Print all lines with exactly two characters:
+
+    grep '^..$' filename
+
+Display any lines starting with a dot and a digit:
+
+    grep '^\.[0-9]' filename
+
+#### Escaping the dot ####
+
+The following regex to find the IP address 192.168.1.254 will not work, because an unescaped dot matches any character:
+
+    grep '192.168.1.254' /etc/hosts
+
+All three dots need to be escaped:
+
+    grep '192\.168\.1\.254' /etc/hosts
+
+The following example will match only an IP address:
+
+    egrep '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}' filename
+
+The following will match the word Linux or UNIX in any case:
+
+    egrep -i '^(linux|unix)' filename
+
+### How Do I Search a Pattern Which Has a Leading - Symbol? ###
+
+Search for all lines matching '--test--' using the -e option. Without -e, grep would attempt to parse '--test--' as a list of options:
+
+    grep -e '--test--' filename
+
+### How Do I do OR with grep? ###
+
+Use the following syntax (alternation with | is an extended regular expression feature, so it must either be backslashed or used with -E/egrep):
+
+    grep -E 'word1|word2' filename
+
+OR
+
+    grep 'word1\|word2' filename
+
+### How Do I do AND with grep?
###
+
+Use the following syntax to display all lines that contain both 'word1' and 'word2':
+
+    grep 'word1' filename | grep 'word2'
+
+### How Do I Test Sequences? ###
+
+You can test how often a character must be repeated in sequence using the following syntax:
+
+    {N}
+    {N,}
+    {min,max}
+
+Match the character "v" two times:
+
+    egrep "v{2}" filename
+
+The following will match both "col" and "cool":
+
+    egrep 'co{1,2}l' filename
+
+The following will match any run of at least three 'c' letters:
+
+    egrep 'c{3,}' filename
+
+The following example will match a mobile number in the format 91-1234567890 (i.e. twodigit-tendigit):
+
+    grep "[[:digit:]]\{2\}[ -]\?[[:digit:]]\{10\}" filename
+
+### How Do I Highlight with grep? ###
+
+Use the following syntax:
+
+    grep --color regex filename
+
+### How Do I Show Only The Matches, Not The Lines? ###
+
+Use the following syntax:
+
+    grep -o regex filename
+
+### Regular Expression Operator ###
+
+注:表格
+
| Regex operator | Meaning |
|----------------|---------|
| . | Matches any single character. |
| ? | The preceding item is optional and will be matched, at most, once. |
| * | The preceding item will be matched zero or more times. |
| + | The preceding item will be matched one or more times. |
| {N} | The preceding item is matched exactly N times. |
| {N,} | The preceding item is matched N or more times. |
| {N,M} | The preceding item is matched at least N times, but not more than M times. |
| - | Represents the range if it's not first or last in a list or the ending point of a range in a list. |
| ^ | Matches the empty string at the beginning of a line; also represents the characters not in the range of a list. |
| $ | Matches the empty string at the end of a line. |
| \b | Matches the empty string at the edge of a word. |
| \B | Matches the empty string provided it's not at the edge of a word. |
| \< | Matches the empty string at the beginning of a word. |
| \> | Matches the empty string at the end of a word. |
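To see a few of these operators working together, here is a minimal self-contained sketch; the sample data and the /tmp path are made up for illustration:

```shell
# Create a tiny sample file (hypothetical data, for illustration only).
printf 'col\ncool\ncoool\ncooool\nuncool\n' > /tmp/regex-operators-demo.txt

# ^ anchors the match at the start of the line, o{2,3} is interval
# repetition, and $ anchors the match at the end of the line.
grep -E '^co{2,3}l$' /tmp/regex-operators-demo.txt
```

Only cool and coool are printed: col has too few o's, cooool has too many, and uncool does not start with c.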
+ +#### grep vs egrep #### + +egrep is the same as **grep -E**. It interpret PATTERN as an extended regular expression. From the grep man page: + + In basic regular expressions the meta-characters ?, +, {, |, (, and ) lose their special meaning; instead use the backslashed versions \?, \+, \{, + \|, \(, and \). + Traditional egrep did not support the { meta-character, and some egrep implementations support \{ instead, so portable scripts should avoid { in + grep -E patterns and should use [{] to match a literal {. + GNU grep -E attempts to support traditional usage by assuming that { is not special if it would be the start of an invalid interval specification. + For example, the command grep -E '{1' searches for the two-character string {1 instead of reporting a syntax error in the regular expression. + POSIX.2 allows this behavior as an extension, but portable scripts should avoid it. + +References: + +- man page grep and regex(7) +- info page grep` + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/faq/grep-regular-expressions/ + +作者:Vivek Gite +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file diff --git a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 3--Search Multiple Words or String Pattern Using grep Command.md b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 3--Search Multiple Words or String Pattern Using grep Command.md new file mode 100644 index 0000000000..bb12d2e1b3 --- /dev/null +++ b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 3--Search Multiple Words or String Pattern Using grep Command.md @@ -0,0 +1,41 @@ +Search Multiple Words / String Pattern Using grep 
Command
+================================================================================
+How do I search multiple strings or words using the grep command? For example I'd like to search for word1, word2, word3 and so on within /path/to/file. How do I force grep to search multiple words?
+
+The [grep command supports regular expression][1] patterns. To search for multiple words, use the following syntax:
+
+    grep 'word1\|word2\|word3' /path/to/file
+
+In this example, to search for the words warning, error, and critical in a text log file called /var/log/messages, enter:
+
+    $ grep 'warning\|error\|critical' /var/log/messages
+
+To match only whole words, add the -w switch:
+
+    $ grep -w 'warning\|error\|critical' /var/log/messages
+
+The egrep command can skip the backslashes and use the following syntax:
+
+    $ egrep -w 'warning|error|critical' /var/log/messages
+
+I recommend that you pass the -i (ignore case) and --color options as follows:
+
+    $ egrep -wi --color 'warning|error|critical' /var/log/messages
+
+Sample outputs:
+
+![Fig.01: Linux / Unix egrep Command Search Multiple Words Demo Output](http://s0.cyberciti.org/uploads/faq/2008/04/egrep-words-output.png)
+
+Fig.01: Linux / Unix egrep Command Search Multiple Words Demo Output
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/searching-multiple-words-string-using-grep/
+
+作者:Vivek Gite
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:http://www.cyberciti.biz/faq/grep-regular-expressions/
\ No newline at end of file
diff --git a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 4--Grep Count Lines If a String or Word Matches.md b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 4--Grep Count Lines If a String or Word Matches.md
new
file mode 100644
index 0000000000..cc11cf85c2
--- /dev/null
+++ b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 4--Grep Count Lines If a String or Word Matches.md
@@ -0,0 +1,33 @@
+Grep Count Lines If a String / Word Matches
+================================================================================
+How do I count lines if a given word or string matches for each input file under Linux or UNIX operating systems?
+
+You need to pass the -c or --count option to suppress normal output. It will display a count of matching lines for each input file:
+
+    $ grep -c vivek /etc/passwd
+
+OR
+
+    $ grep -w -c vivek /etc/passwd
+
+Sample outputs:
+
+    1
+
+However, combined with the -v or --invert-match option it will count the non-matching lines, enter:
+
+    $ grep -c -v vivek /etc/passwd
+
+Sample outputs:
+
+    45
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/grep-count-lines-if-a-string-word-matches/
+
+作者:Vivek Gite
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
\ No newline at end of file
diff --git a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 5--Grep From Files and Display the File Name.md b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 5--Grep From Files and Display the File Name.md
new file mode 100644
index 0000000000..6fa9dc7a27
--- /dev/null
+++ b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 5--Grep From Files and Display the File Name.md
@@ -0,0 +1,67 @@
+Grep From Files and Display the File Name
+================================================================================
+How do I grep from a number of files and display the file name
only?
+
+When there is more than one file to search it will display the file name by default:
+
+    grep "word" filename
+    grep root /etc/*
+
+Sample outputs:
+
+    /etc/bash.bashrc: See "man sudo_root" for details.
+    /etc/crontab:17 * * * * root cd / && run-parts --report /etc/cron.hourly
+    /etc/crontab:25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
+    /etc/crontab:47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
+    /etc/crontab:52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
+    /etc/group:root:x:0:
+    grep: /etc/gshadow: Permission denied
+    /etc/logrotate.conf: create 0664 root utmp
+    /etc/logrotate.conf: create 0660 root utmp
+
+The first part of each output line is the file name (e.g., /etc/crontab, /etc/group). The -l option will print only the file name if the pattern matches:
+
+    grep -l "string" filename
+    grep -l root /etc/*
+
+Sample outputs:
+
+    /etc/aliases
+    /etc/arpwatch.conf
+    grep: /etc/at.deny: Permission denied
+    /etc/bash.bashrc
+    /etc/bash_completion
+    /etc/ca-certificates.conf
+    /etc/crontab
+    /etc/group
+
+You can suppress normal output; instead print the name of each input file from **which no output would normally have been** printed:
+
+    grep -L "word" filename
+    grep -L root /etc/*
+
+Sample outputs:
+
+    /etc/apm
+    /etc/apparmor
+    /etc/apparmor.d
+    /etc/apport
+    /etc/apt
+    /etc/avahi
+    /etc/bash_completion.d
+    /etc/bindresvport.blacklist
+    /etc/blkid.conf
+    /etc/bluetooth
+    /etc/bogofilter.cf
+    /etc/bonobo-activation
+    /etc/brlapi.key
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/grep-from-files-and-display-the-file-name/
+
+作者:Vivek Gite
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
\ No newline at end of file
diff --git a/sources/tech/Linux or UNIX grep Command
Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 6--How To Find Files by Content Under UNIX.md b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 6--How To Find Files by Content Under UNIX.md new file mode 100644 index 0000000000..3d5943fc07 --- /dev/null +++ b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 6--How To Find Files by Content Under UNIX.md @@ -0,0 +1,66 @@ +How To Find Files by Content Under UNIX +================================================================================ +I had written lots of code in C for my school work and saved it as source code under /home/user/c/*.c and *.h. How do I find files by content such as string or words (function name such as main() under UNIX shell prompt? + +You need to use the following tools: + +[a] **grep command** : print lines matching a pattern. + +[b] **find command**: search for files in a directory hierarchy. + +### [grep Command To Find Files By][1] Content ### + +Type the command as follows: + + grep 'string' *.txt + grep 'main(' *.c + grep '#include' *.c + grep 'getChar*' *.c + grep -i 'ultra' *.conf + grep -iR 'ultra' *.conf + +Where + +- **-i** : Ignore case distinctions in both the PATTERN (match valid, VALID, ValID string) and the input files (math file.c FILE.c FILE.C filename). +- **-R** : Read all files under each directory, recursively + +### Highlighting searched patterns ### + +You can highlight patterns easily while searching large number of files: + + $ grep --color=auto -iR 'getChar();' *.c + +### Displaying file names and line number for searched patterns ### + +You may also need to display filenames and numbers: + + $ grep --color=auto -iRnH 'getChar();' *.c + +Where, + +- **-n** : Prefix each line of output with the 1-based line number within its input file. +- **-H** Print the file name for each match. 
This is the default when there is more than one file to search. + + $grep --color=auto -nH 'DIR' * + +Sample output: + +![Fig.01: grep command displaying searched pattern](http://www.cyberciti.biz/faq/wp-content/uploads/2008/09/grep-command.png) + +Fig.01: grep command displaying searched pattern + +You can also use find command: + + $ find . -name "*.c" -print | xargs grep "main(" + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/faq/unix-linux-finding-files-by-content/ + +作者:Vivek Gite +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.cyberciti.biz/faq/howto-search-find-file-for-text-string/ \ No newline at end of file diff --git a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 7--Linux or UNIX View Only Configuration File Directives Uncommented Lines of a Config File.md b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 7--Linux or UNIX View Only Configuration File Directives Uncommented Lines of a Config File.md new file mode 100644 index 0000000000..d7d520326e --- /dev/null +++ b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 7--Linux or UNIX View Only Configuration File Directives Uncommented Lines of a Config File.md @@ -0,0 +1,151 @@ +Linux / UNIX View Only Configuration File Directives ( Uncommented Lines of a Config File ) +================================================================================ +Most Linux and UNIX-like system configuration files are documented using comments, but some time I just need to see line of configuration text in a config file. How can I view just the uncommented configuration file directives from squid.conf or httpd.conf file? 
How can I strip out comments and blank lines on a Linux or Unix-like systems? + +To view just the uncommented lines of text in a config file use the grep, sed, awk, perl or any other text processing utility provided by UNIX / BSD / OS X / Linux operating systems. + +### grep command example to strip out command ### + +You can use the gerp command as follows: + + $ grep -v "^#" /path/to/config/file + $ grep -v "^#" /etc/apache2/apache2.conf + +Sample outputs: + + ServerRoot "/etc/apache2" + + LockFile /var/lock/apache2/accept.lock + + PidFile ${APACHE_PID_FILE} + + Timeout 300 + + KeepAlive On + + MaxKeepAliveRequests 100 + + KeepAliveTimeout 15 + + + + StartServers 5 + MinSpareServers 5 + MaxSpareServers 10 + MaxClients 150 + MaxRequestsPerChild 0 + + + + StartServers 2 + MinSpareThreads 25 + MaxSpareThreads 75 + ThreadLimit 64 + ThreadsPerChild 25 + MaxClients 150 + MaxRequestsPerChild 0 + + + + StartServers 2 + MaxClients 150 + MinSpareThreads 25 + MaxSpareThreads 75 + ThreadLimit 64 + ThreadsPerChild 25 + MaxRequestsPerChild 0 + + + User ${APACHE_RUN_USER} + Group ${APACHE_RUN_GROUP} + + + AccessFileName .htaccess + + + Order allow,deny + Deny from all + Satisfy all + + + DefaultType text/plain + + + HostnameLookups Off + + ErrorLog /var/log/apache2/error.log + + LogLevel warn + + Include /etc/apache2/mods-enabled/*.load + Include /etc/apache2/mods-enabled/*.conf + + Include /etc/apache2/httpd.conf + + Include /etc/apache2/ports.conf + + LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined + LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined + LogFormat "%h %l %u %t \"%r\" %>s %O" common + LogFormat "%{Referer}i -> %U" referer + LogFormat "%{User-agent}i" agent + + CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined + + + + Include /etc/apache2/conf.d/ + + Include /etc/apache2/sites-enabled/ + +To suppress blank lines use [egrep command][1], run: + + egrep -v "^#|^$" 
/etc/apache2/apache2.conf + ## or pass it to the page such as more or less ## + egrep -v "^#|^$" /etc/apache2/apache2.conf | less + + ## Bash function ###################################### + ## or create function or alias and use it as follows ## + ## viewconfig /etc/squid/squid.conf ## + ####################################################### + viewconfig(){ + local f="$1" + [ -f "$1" ] && command egrep -v "^#|^$" "$f" || echo "Error $1 file not found." + } + +Sample output: + +![Fig.01: Unix/Linux Egrep Strip Out Comments Blank Lines](http://s0.cyberciti.org/uploads/faq/2008/05/grep-strip-out-comments-blank-lines.jpg) + +Fig.01: Unix/Linux Egrep Strip Out Comments Blank Lines + +### Understanding grep/egrep command line options ### + +The -v option invert the sense of matching, to select non-matching lines. This option should work under all posix based systems. The regex ^$ matches and removes all blank lines and ^# matches and removes all comments that starts with a "#". + +### sed Command example ### + +GNU / sed command can be used as follows: + + $ sed '/ *#/d; /^ *$/d' /path/to/file + $ sed '/ *#/d; /^ *$/d' /etc/apache2/apache2.conf + +GNU or BSD sed can update your config file too. 
The syntax is as follows to edit files in-place, saving backups with the specified extension such as .bak: + + sed -i'.bak.2015.12.27' '/ *#/d; /^ *$/d' /etc/apache2/apache2.conf + +For more info see man pages - [grep(1)][2], [sed(1)][3] + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/faq/shell-display-uncommented-lines-only/ + +作者:Vivek Gite +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.cyberciti.biz/faq/grep-regular-expressions/ +[2]:http://www.manpager.com/linux/man1/grep.1.html +[3]:http://www.manpager.com/linux/man1/sed.1.html \ No newline at end of file From 33ae7000e58b92099dd70ec929efa286e940bec1 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 28 Nov 2015 12:01:02 +0800 Subject: [PATCH 040/160] Translating sources/tech/20151123 LNAV--Ncurses based log file viewer.md sources/tech/20151125 The tar command explained.md --- sources/tech/20151123 LNAV--Ncurses based log file viewer.md | 1 + sources/tech/20151125 The tar command explained.md | 1 + 2 files changed, 2 insertions(+) diff --git a/sources/tech/20151123 LNAV--Ncurses based log file viewer.md b/sources/tech/20151123 LNAV--Ncurses based log file viewer.md index 08b7ee011a..b6b6ca3c6d 100644 --- a/sources/tech/20151123 LNAV--Ncurses based log file viewer.md +++ b/sources/tech/20151123 LNAV--Ncurses based log file viewer.md @@ -1,3 +1,4 @@ +ictlyh Translating LNAV – Ncurses based log file viewer ================================================================================ The Logfile Navigator, lnav for short, is a curses-based tool for viewing and analyzing log files. The value added by lnav over text viewers / editors is that it takes advantage of any semantic information that can be gleaned from the log file, such as timestamps and log levels. 
Using this extra semantic information, lnav can do things like: interleaving messages from different files; generate histograms of messages over time; and providing hotkeys for navigating through the file. It is hoped that these features will allow the user to quickly and efficiently zero-in on problems. diff --git a/sources/tech/20151125 The tar command explained.md b/sources/tech/20151125 The tar command explained.md index cc13a25dd2..7911dbd1f6 100644 --- a/sources/tech/20151125 The tar command explained.md +++ b/sources/tech/20151125 The tar command explained.md @@ -1,3 +1,4 @@ +ictlyh Translating The tar command explained ================================================================================ The Linux [tar][1] command is the swiss army of the Linux admin when it comes to archiving or distributing files. Gnu Tar archives can contain multiple files and directories, file permissions can be preserved and it supports multiple compression formats. The name tar stands for "**T**ape **Ar**chiver", the format is an official POSIX standard. 
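The tar workflow described above is easiest to see as a round trip — create, list, extract. A minimal sketch follows; the /tmp paths and file contents are made-up examples, not part of the original article:

```shell
# Create some hypothetical files to archive.
mkdir -p /tmp/tar-demo/docs /tmp/tar-demo/out
echo "hello tar" > /tmp/tar-demo/docs/note.txt

# [c]reate a g[z]ipped archive [f]ile; -C switches directory first so the
# archive stores the relative path docs/note.txt instead of the /tmp path.
tar czf /tmp/tar-demo/docs.tar.gz -C /tmp/tar-demo docs

# [t] lists the archive contents without extracting anything.
tar tzf /tmp/tar-demo/docs.tar.gz

# e[x]tract into a separate directory to verify the round trip.
tar xzf /tmp/tar-demo/docs.tar.gz -C /tmp/tar-demo/out
```

The listing shows docs/ and docs/note.txt, and the extracted copy ends up at /tmp/tar-demo/out/docs/note.txt with its content intact.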
From f0d475ebf6ebeb1240129eb8edea815f8c6d9a91 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 28 Nov 2015 13:36:04 +0800 Subject: [PATCH 041/160] Translated sources/tech/20151125 The tar command explained.md --- .../20151125 The tar command explained.md | 138 ------------------ .../20151125 The tar command explained.md | 137 +++++++++++++++++ 2 files changed, 137 insertions(+), 138 deletions(-) delete mode 100644 sources/tech/20151125 The tar command explained.md create mode 100644 translated/tech/20151125 The tar command explained.md diff --git a/sources/tech/20151125 The tar command explained.md b/sources/tech/20151125 The tar command explained.md deleted file mode 100644 index 7911dbd1f6..0000000000 --- a/sources/tech/20151125 The tar command explained.md +++ /dev/null @@ -1,138 +0,0 @@ -ictlyh Translating -The tar command explained -================================================================================ -The Linux [tar][1] command is the swiss army of the Linux admin when it comes to archiving or distributing files. Gnu Tar archives can contain multiple files and directories, file permissions can be preserved and it supports multiple compression formats. The name tar stands for "**T**ape **Ar**chiver", the format is an official POSIX standard. - -### Tar file formats ### - -A short introduction into tar compression levels. - -- **No compression** Uncompressed files have the file ending .tar. -- **Gzip Compression** The Gzip format is the most widely used compression format for tar, it is fast for creating and extracting files. Files with gz compression have normally the file ending .tar.gz or .tgz. Here some examples on how to [create][2] and [extract][3] a tar.gz file. -- **Bzip2 Compression** The Bzip2 format offers a better compression then the Gzip format. Creating files is slower, the file ending is usually .tar.bz2. 
-- **Lzip (LZMA) Compression** The Lzip compression combines the speed of Gzip with a compression level that is similar to Bzip2 (or even better). Independently from these good attributes, this format is not widely used. -- **Lzop Compression** This compress option is probably the fastest compression format for tar, it has a compression level similar to gzip and is not widely used. - -The common formats are tar.gz and tar.bz2. If you goal is fast compression, then use gzip. When the archive file size is critical, then use tar.bz2. - -### What is the tar command used for? ### - -Here a few common use cases of the tar command. - -- Backup of Servers and Desktops. -- Document archiving. -- Software Distribution. - -### Installing tar ### - -The command is installed on most Linux Systems by default. Here are the instructions to install tar in case that the command is missing. - -#### CentOS #### - -Execute the following command as root user on the shell to install tar on CentOS. - - yum install tar - -#### Ubuntu #### - -This command will install tar on Ubuntu. The "sudo" command ensures that the apt command is run with root privileges. - - sudo apt-get install tar - -#### Debian #### - -The following apt command installs tar on Debian. - - apt-get install tar - -#### Windows #### - -The tar command is available for Windows as well, you can download it from the Gunwin project. [http://gnuwin32.sourceforge.net/packages/gtar.htm][4] - -### Create tar.gz Files ### - -Here is the [tar command][5] that has to be run on the shell. I will explain the command line options below. - - tar pczf myarchive.tar.gz /home/till/mydocuments - -This command creates the archive myarchive.tar.gz which contains the files and folders from the path /home/till/mydocuments. **The command line options explained**: - -- **[p]** This option stand for "preserve", it instructs tar to store details on file owner and file permissions in the archive. -- **[c]** Stands for create. 
This option is mandatory when a file is created. -- **[z]** The z option enables gzip compression. -- **[f]** The file option tells tar to create an archive file. Tar will send the output to stdout if this option is omitted. - -#### Tar command examples #### - -**Example 1: Backup the /etc Directory** Create a backup of the /etc config directory. The backup is stored in the root folder. - - tar pczvf /root/etc.tar.gz /etc - -![Backup the /etc directory with tar.](https://www.howtoforge.com/images/linux-tar-command/big/create-tar.png) - -The command should be run as root to ensure that all files in /etc are included in the backup. This time, I've added the [v] option in the command. This option stands for verbose, it tells tar to show all file names that get added into the archive. - -**Example 2: Backup your /home directory** Create a backup of your home directory. The backup will be stored in a directory /backup. - - tar czf /backup/myuser.tar.gz /home/myuser - -Replace myuser with your username. In this command, I've omitted the [p] switch, so the permissions get not preserved. - -**Example 3: A file-based backup of MySQL databases** The MySQL databases are stored in /var/lib/mysql on most Linux distributions. You can check that with the command: - - ls /var/lib/mysql - -![File based MySQL backup with tar.](https://www.howtoforge.com/images/linux-tar-command/big/tar_backup_mysql.png) - -Stop the database server to get a consistent MySQL file backup with tar. The backup will be written to the /backup folder. - -1) Create the backup folder - - mkdir /backup - chmod 600 /backup - -2) Stop MySQL, run the backup with tar and start the database server again. 
- - service mysql stop - tar pczf /backup/mysql.tar.gz /var/lib/mysql - service mysql start - ls -lah /backup - -![File based MySQL backup.](https://www.howtoforge.com/images/linux-tar-command/big/tar-backup-mysql2.png) - -### Extract tar.gz Files ### - -The command to extract tar.gz files is: - - tar xzf myarchive.tar.gz - -#### The tar command options explained #### - -- **[x]** The x stand for extract, it is mandatory when a tar file shall be extracted. -- **[z]** The z option tells tar that the archive that shall be unpacked is in gzip format. -- **[f]** This option instructs tar to read the archive content from a file, in this case the file myarchive.tar.gz. - -The above tar command will silently extract that tar.gz file, it will show only error messages. If you like to see which files get extracted, then add the "v" option. - - tar xzvf myarchive.tar.gz - -The **[v]** option stands for verbose, it will show the file names while they get unpacked. - -![Extract a tar.gz file.](https://www.howtoforge.com/images/linux-tar-command/big/tar-xfz.png) - --------------------------------------------------------------------------------- - -via: https://www.howtoforge.com/tutorial/linux-tar-command/ - -作者:[howtoforge][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.howtoforge.com/ -[1]:https://en.wikipedia.org/wiki/Tar_(computing) -[2]:http://www.faqforge.com/linux/create-tar-gz/ -[3]:http://www.faqforge.com/linux/extract-tar-gz/ -[4]:http://gnuwin32.sourceforge.net/packages/gtar.htm -[5]:http://www.faqforge.com/linux/tar-command/ \ No newline at end of file diff --git a/translated/tech/20151125 The tar command explained.md b/translated/tech/20151125 The tar command explained.md new file mode 100644 index 0000000000..a3d19ac34f --- /dev/null +++ b/translated/tech/20151125 The tar command explained.md @@ -0,0 +1,137 @@ +tar 命令详解 
+================================================================================ +Linux [tar][1] 命令是归档或分发文件时的强大武器。GNU tar 归档包可以包含多个文件和目录,还能保留权限,它还支持多种压缩格式。Tar 表示 "**T**ape **Ar**chiver",这是一种 POSIX 标准。 + +### Tar 文件格式 ### + +tar 压缩等级简介。 + +- **无压缩** 没有压缩的文件用 .tar 结尾。 +- **Gzip 压缩** Gzip 格式是 tar 使用最广泛的压缩格式,它能快速压缩和提取文件。用 gzip 压缩的文件通常用 .tar.gz 或 .tgz 结尾。这里有一些如何[创建][2]和[解压][3] tar.gz 文件的例子。 +- **Bzip2 压缩** 和 Gzip格式相比 Bzip2 提供了更好的压缩比。创建压缩文件也比较慢,通常采用 .tar.bz2 结尾。 +- **Lzip(LAMA)压缩** Lizp 压缩结合了 Gzip 快速的优势,以及和 Bzip2 类似(甚至更好) 的压缩率。尽管有这些好处,这个格式并没有得到广泛使用。 +- **Lzop 压缩** 这个压缩选项也许是 tar 最快的压缩格式,它的压缩率和 gzip 类似,也没有广泛使用。 + +常见的格式是 tar.gz 和 tar.bz2。如果你想快速压缩,那么就是用 gzip。如果归档文件大小比较重要,就是用 tar.bz2。 + +### tar 命令用来干什么? ### + +下面是一些使用 tar 命令的常见情形。 + +- 备份服务器或桌面系统 +- 文档归档 +- 软件分发 + +### 安装 tar ### + +大部分 Linux 系统默认都安装了 tar。如果没有,这里有安装 tar 的命令。 + +#### CentOS #### + +在 CentOS 中,以 root 用户在 shell 中执行下面的命令安装 tar。 + + yum install tar + +#### Ubuntu #### + +下面的命令会在 Ubuntu 上安装 tar。“sudo” 命令确保 apt 命令是以 root 权限运行的。 + + sudo apt-get install tar + +#### Debian #### + +下面的 apt 命令在 Debian 上安装 tar。 + + apt-get install tar + +#### Windows #### + +tar 命令在 Windows 也可以使用,你可以从 Gunwin 项目[http://gnuwin32.sourceforge.net/packages/gtar.htm][4]中下载它。 + +### 创建 tar.gz 文件 ### + +下面是在 shell 中运行 [tar 命令][5] 的一些例子。下面我会解释这些命令行选项。 + + tar pczf myarchive.tar.gz /home/till/mydocuments + +这个命令会创建归档文件 myarchive.tar.gz,其中包括了路径 /home/till/mydocuments 中的文件和目录。**命令行选项解释**: + +- **[p]** 这个选项表示 “preserve”,它指示 tar 在归档文件中保留文件属主和权限信息。 +- **[c]** 表示创建。要创建文件时不能缺少这个选项。 +- **[z]** z 选项启用 gzip 压缩。 +- **[f]** file 选项告诉 tar 创建一个归档文件。如果没有这个选项 tar 会把输出发送到 stdout。 + +#### Tar 命令事例 #### + +**事例 1: 备份 /etc 目录** 创建 /etc 配置目录的一个备份。备份保存在 root 目录。 + + tar pczvf /root/etc.tar.gz /etc + +![用 tar 备份 /etc 目录](https://www.howtoforge.com/images/linux-tar-command/big/create-tar.png) + +要以 root 用户运行命令确保 /etc 中的所有文件都会被包含在备份中。这次,我在命令中添加了 [v] 选项。这个选项表示 verbose,它告诉 tar 显示所有被包含到归档文件中的文件名。 + +**事例 2: 备份你的 /home 目录** 创建你的 home 目录的备份。备份会被保存到 /backup 目录。 + + tar czf 
/backup/myuser.tar.gz /home/myuser
+
+用你的用户名替换 myuser。这个命令中,我省略了 [p] 选项,也就不会保留权限。
+
+**事例 3: 基于文件的 MySQL 数据库备份** 在大部分 Linux 发行版中,MySQL 数据库保存在 /var/lib/mysql。你可以使用下面的命令检查:
+
+    ls /var/lib/mysql
+
+![使用 tar 基于文件备份 MySQL](https://www.howtoforge.com/images/linux-tar-command/big/tar_backup_mysql.png)
+
+用 tar 备份 MySQL 文件时,为了保持数据一致性,要首先停止数据库服务器。备份会被写到 /backup 目录。
+
+1) 创建 backup 目录
+
+    mkdir /backup
+    chmod 600 /backup
+
+2) 停止 MySQL,用 tar 进行备份并重新启动数据库。
+
+    service mysql stop
+    tar pczf /backup/mysql.tar.gz /var/lib/mysql
+    service mysql start
+    ls -lah /backup
+
+![基于文件的 MySQL 备份](https://www.howtoforge.com/images/linux-tar-command/big/tar-backup-mysql2.png)
+
+### 提取 tar.gz 文件 ###
+
+提取 tar.gz 文件的命令是:
+
+    tar xzf myarchive.tar.gz
+
+#### tar 命令选项解释 ####
+
+- **[x]** x 表示提取,提取 tar 文件时这个选项不可缺少。
+- **[z]** z 选项告诉 tar 要解压的归档文件是 gzip 格式。
+- **[f]** 该选项告诉 tar 从一个文件中读取归档内容,本例中是 myarchive.tar.gz。
+
+上面的 tar 命令会安静地提取 tar.gz 文件,只显示错误信息。如果你想看到提取了哪些文件,那么添加 “v” 选项。
+
+    tar xzvf myarchive.tar.gz
+
+**[v]** 选项表示 verbose,它会向你显示解压的文件名。
+
+![提取 tar.gz 文件](https://www.howtoforge.com/images/linux-tar-command/big/tar-xfz.png)
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/linux-tar-command/
+
+作者:[howtoforge][a]
+译者:[ictlyh](http://mutouxiaogui.cn/blog/)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com/
+[1]:https://en.wikipedia.org/wiki/Tar_(computing)
+[2]:http://www.faqforge.com/linux/create-tar-gz/
+[3]:http://www.faqforge.com/linux/extract-tar-gz/
+[4]:http://gnuwin32.sourceforge.net/packages/gtar.htm
+[5]:http://www.faqforge.com/linux/tar-command/
\ No newline at end of file
From e1bc4750ebaf10f614f5ac79e61ebb114cf2f297 Mon Sep 17 00:00:00 2001
From: KnightJoker <544133483@qq.com>
Date: Sat, 28 Nov 2015 22:39:01 +0800
Subject: [PATCH 042/160] Translating

---
 ... 
Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md b/sources/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md index d9829e9daa..b3638a61ea 100644 --- a/sources/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md +++ b/sources/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md @@ -1,3 +1,4 @@ +Translating by KnightJoker How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2 ================================================================================ Nginx is free and open source HTTP server and reverse proxy, as well as an mail proxy server for IMAP/POP3. Nginx is high performance web server with rich of features, simple configuration and low memory usage. Originally written by Igor Sysoev on 2002, and until now has been used by a big technology company including Netflix, Github, Cloudflare, WordPress.com etc. 
From 20cc91897584c156ea3fe785ce014c4c4c1d29e3 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Sat, 28 Nov 2015 22:52:23 +0800 Subject: [PATCH 043/160] Update 20151123 How to access Dropbox from the command line in Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...23 How to access Dropbox from the command line in Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151123 How to access Dropbox from the command line in Linux.md b/sources/tech/20151123 How to access Dropbox from the command line in Linux.md index d4d2c8af3a..927a046615 100644 --- a/sources/tech/20151123 How to access Dropbox from the command line in Linux.md +++ b/sources/tech/20151123 How to access Dropbox from the command line in Linux.md @@ -1,3 +1,5 @@ +FSSlc translating + How to access Dropbox from the command line in Linux ================================================================================ Cloud storage is everywhere in today's multi-device environment, where people want to access content across multiple devices wherever they go. Dropbox is the most widely used cloud storage service thanks to its elegant UI and flawless multi-platform compatibility. The popularity of Dropbox has led to a flurry of official or unofficial Dropbox clients that are available across different operating system platforms. 
@@ -94,4 +96,4 @@ via: http://xmodulo.com/access-dropbox-command-line-linux.html [a]:http://xmodulo.com/author/nanni [1]:http://www.andreafabrizi.it/?dropbox_uploader -[2]:https://www.dropbox.com/developers/apps \ No newline at end of file +[2]:https://www.dropbox.com/developers/apps From aa186ea0d700dd245f89c9b50c662a6094194ca5 Mon Sep 17 00:00:00 2001 From: Flowsnow Date: Sun, 29 Nov 2015 01:34:17 +0800 Subject: [PATCH 044/160] Update Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md --- ...age Management with Yum RPM Apt Dpkg Aptitude and Zypper.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md b/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md index 6d0f65223f..af967e18d4 100644 --- a/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md +++ b/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md @@ -1,3 +1,4 @@ +Flowsnow translating... Part 9 - LFCS: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper ================================================================================ Last August, the Linux Foundation announced the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of succeeding at overall operational support for Linux systems. A Linux Foundation Certified Sysadmin has the expertise to ensure effective system support, first-level troubleshooting and monitoring, including finally issue escalation, when needed, to engineering support teams. 
@@ -226,4 +227,4 @@ via: http://www.tecmint.com/linux-package-management/ [2]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/ [3]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/ [4]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ -[5]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ \ No newline at end of file +[5]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ From da05b0d2f6bd33a619fb59b18615452655bf53cc Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sun, 29 Nov 2015 11:47:54 +0800 Subject: [PATCH 045/160] Translated sources/tech/20151123 LNAV--Ncurses based log file viewer.md --- ...123 LNAV--Ncurses based log file viewer.md | 84 ------------------- ...123 LNAV--Ncurses based log file viewer.md | 83 ++++++++++++++++++ 2 files changed, 83 insertions(+), 84 deletions(-) delete mode 100644 sources/tech/20151123 LNAV--Ncurses based log file viewer.md create mode 100644 translated/tech/20151123 LNAV--Ncurses based log file viewer.md diff --git a/sources/tech/20151123 LNAV--Ncurses based log file viewer.md b/sources/tech/20151123 LNAV--Ncurses based log file viewer.md deleted file mode 100644 index b6b6ca3c6d..0000000000 --- a/sources/tech/20151123 LNAV--Ncurses based log file viewer.md +++ /dev/null @@ -1,84 +0,0 @@ -ictlyh Translating -LNAV – Ncurses based log file viewer -================================================================================ -The Logfile Navigator, lnav for short, is a curses-based tool for viewing and analyzing log files. The value added by lnav over text viewers / editors is that it takes advantage of any semantic information that can be gleaned from the log file, such as timestamps and log levels. 
Using this extra semantic information, lnav can do things like: interleaving messages from different files; generate histograms of messages over time; and providing hotkeys for navigating through the file. It is hoped that these features will allow the user to quickly and efficiently zero-in on problems. - -### lnav Features ### - -#### Support for the following log file formats: #### - -Syslog, Apache access log, strace, tcsh history, and generic log files with timestamps. The file format is automatically detected when the file is read in. - -#### Histogram view: #### - -Displays the number of log messages per bucket-of-time. Useful for getting an overview of what was happening over a long period of time. - -#### Filters: #### - -Display only lines that match or do not match a set of regular expressions. Useful for removing extraneous log lines that you are not interested in. - -#### "Live" operation: #### - -Searches are done as you type; new log lines are automatically loaded and searched as they are added; filters apply to lines as they are loaded; and, SQL queries are checked for correctness as you type. - -#### Automatic tailing: #### - -The log file view automatically scrolls down to follow new lines that are added to files. Simply scroll up to lock the view in place and then scroll down to the bottom to resume tailing. - -#### Time-of-day ordering of lines: #### - -The log lines from all the files are loaded and then sorted by time-of-day. Relieves you of having to manually line up log messages from different files. - -#### Syntax highlighting: #### - -Errors and warnings are colored in red and yellow, respectively. Highlights are also applied to: SQL keywords, XML tags, file and line numbers in Java backtraces, and quoted strings. - -#### Navigation: #### - -There are hotkeys for jumping to the next or previous error or warning and moving forward or backward by an amount of time. 
- -#### Use SQL to query logs: #### - -Each log file line is treated as a row in a database that can be queried using SQL. The columns that are available depend on logs file types being viewed. - -#### Command and search history: #### - -Your previously entered commands and searches are saved so you can access them between sessions. - -#### Compressed files: #### - -Compressed log files are automatically detected and uncompressed on the fly. - -### Install lnav on ubuntu 15.10 ### - -Open the terminal and run the following command - - sudo apt-get install lnav - -### Using lnav ### - -If you want to view logs using lnav you can do using the following command by default it shows syslogs - - lnav - -![](http://www.ubuntugeek.com/wp-content/uploads/2015/11/51.png) - -If you want to view specific logs provide the path - -If you want to view CUPS logs run the following command from your terminal - - lnav /var/log/cups - -![](http://www.ubuntugeek.com/wp-content/uploads/2015/11/6.png) - --------------------------------------------------------------------------------- - -via: http://www.ubuntugeek.com/lnav-ncurses-based-log-file-viewer.html - -作者:[ruchi][a] -译者:[zky001](https://github.com/zky001) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.ubuntugeek.com/author/ubuntufix diff --git a/translated/tech/20151123 LNAV--Ncurses based log file viewer.md b/translated/tech/20151123 LNAV--Ncurses based log file viewer.md new file mode 100644 index 0000000000..e1f99eb07c --- /dev/null +++ b/translated/tech/20151123 LNAV--Ncurses based log file viewer.md @@ -0,0 +1,83 @@ +LNAV - 基于 Ncurses 的日志文件阅读器 +================================================================================ +日志文件导航器(Logfile Navigator,简称 lnav),是一个基于 curses 用于查看和分析日志文件的工具。和文本阅读器/编辑器相比, lnav 的好处是它充分利用了可以从日志文件中获取的语义信息,例如时间戳和日志等级。利用这些额外的语义信息, lnav 
可以做这些事情:交错显示来自不同文件的信息;按照时间生成信息直方图;提供在文件中导航的快捷键。希望这些功能可以帮助用户快速有效地定位和解决问题。
+
+### lnav 功能 ###
+
+#### 支持以下日志文件格式: ####
+
+Syslog、Apache 访问日志、strace、tcsh 历史以及常见的带时间戳的日志文件。读入文件的时候会自动检测文件格式。
+
+#### 直方图视图: ####
+
+以时间为桶显示日志信息数量。这对于在一段长时间内大概了解发生了什么非常有用。
+
+#### 过滤器: ####
+
+只显示那些匹配或不匹配一些正则表达式的行。对于移除大量你不感兴趣的日志行非常有用。
+
+#### 即时操作: ####
+
+在你输入的时候就会进行检索;新添加的日志行会被自动加载和搜索;加载行的时候会应用过滤器;另外,还会在你输入 SQL 查询的时候检查正确性。
+
+#### 自动显示后文: ####
+
+日志文件视图会自动往下滚动到新添加到文件中的行。只需要向上滚动就可以锁定当前视图,然后向下滚动到底部恢复显示后文。
+
+#### 按照日期顺序排序行: ####
+
+从所有文件中加载的日志行会按照日期进行排序。使得你不需要手动从不同文件中收集日志信息。
+
+#### 语法高亮: ####
+
+错误和警告会分别用红色和黄色显示。高亮还可用于:SQL 关键字、XML 标签、Java 回溯中的文件名和行号,以及括起来的字符串。
+
+#### 导航: ####
+
+有快捷键用于跳转到下一个或上一个错误或警告,以及按照一定的时间向后或向前移动。
+
+#### 用 SQL 查询日志: ####
+
+每个日志文件行都被当作数据库中可以使用 SQL 查询的一行。可以使用的列取决于查看的日志文件类型。
+
+#### 命令和搜索历史: ####
+
+会自动保存你之前输入的命令和搜索,因此你可以在会话之间使用它们。
+
+#### 压缩文件: ####
+
+会实时自动检测和解压压缩的日志文件。
+
+### 在 Ubuntu 15.10 上安装 lnav ###
+
+打开终端,运行下面的命令:
+
+    sudo apt-get install lnav
+
+### 使用 lnav ###
+
+如果你想使用 lnav 查看日志,可以使用下面的命令,默认它会显示 syslog 日志:
+
+    lnav
+
+![](http://www.ubuntugeek.com/wp-content/uploads/2015/11/51.png)
+
+如果你想查看特定的日志,那么需要指定路径。
+
+如果你想查看 CUPS 日志,在你的终端里运行下面的命令:
+
+    lnav /var/log/cups
+
+![](http://www.ubuntugeek.com/wp-content/uploads/2015/11/6.png)
+
+--------------------------------------------------------------------------------
+
+via: http://www.ubuntugeek.com/lnav-ncurses-based-log-file-viewer.html
+
+作者:[ruchi][a]
+译者:[ictlyh](http://mutouxiaogui.cn/blog/)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.ubuntugeek.com/author/ubuntufix
From 9dd2e0075b98d8167ad1ad9590f7d1675d9da1eb Mon Sep 17 00:00:00 2001
From: wxy 
Date: Sun, 29 Nov 2015 22:43:44 +0800
Subject: [PATCH 046/160] PUB:20151012 How to Setup DockerUI--a Web Interface for Docker

@oska874
---
 ...up DockerUI--a Web Interface for Docker.md | 113 ++++++++++++++++++
 ...up DockerUI--a Web Interface for Docker.md | 111 
----------------- 2 files changed, 113 insertions(+), 111 deletions(-) create mode 100644 published/20151012 How to Setup DockerUI--a Web Interface for Docker.md delete mode 100644 translated/tech/20151012 How to Setup DockerUI--a Web Interface for Docker.md diff --git a/published/20151012 How to Setup DockerUI--a Web Interface for Docker.md b/published/20151012 How to Setup DockerUI--a Web Interface for Docker.md new file mode 100644 index 0000000000..10ead7542e --- /dev/null +++ b/published/20151012 How to Setup DockerUI--a Web Interface for Docker.md @@ -0,0 +1,113 @@ +用浏览器管理 Docker +================================================================================ +Docker 越来越流行了。在一个容器里面而不是虚拟机里运行一个完整的操作系统是一种非常棒的技术和想法。docker 已经通过节省工作时间来拯救了成千上万的系统管理员和开发人员。这是一个开源技术,提供一个平台来把应用程序当作容器来打包、分发、共享和运行,而不用关注主机上运行的操作系统是什么。它没有开发语言、框架或打包系统的限制,并且可以在任何时间、任何地点运行,从小型计算机到高端服务器都可以。运行 docker 容器和管理它们可能会花费一点点努力和时间,所以现在有一款基于 web 的应用程序-DockerUI,可以让管理和运行容器变得很简单。DockerUI 是一个对那些不熟悉 Linux 命令行,但又很想运行容器化程序的人很有帮助的工具。DockerUI 是一个开源的基于 web 的应用程序,它最值得称道的是它华丽的设计和用来运行和管理 docker 的简洁的操作界面。 + +下面会介绍如何在 Linux 上安装配置 DockerUI。 + +### 1. 安装 docker ### + +首先,我们需要安装 docker。我们得感谢 docker 的开发者,让我们可以简单的在主流 linux 发行版上安装 docker。为了安装 docker,我们得在对应的发行版上使用下面的命令。 + +#### Ubuntu/Fedora/CentOS/RHEL/Debian #### + +docker 维护者已经写了一个非常棒的脚本,用它可以在 Ubuntu 15.04/14.10/14.04、 CentOS 6.x/7、 Fedora 22、 RHEL 7 和 Debian 8.x 这几个 linux 发行版上安装 docker。这个脚本可以识别出我们的机器上运行的 linux 的发行版本,然后将需要的源库添加到文件系统、并更新本地的安装源目录,最后安装 docker 及其依赖库。要使用这个脚本安装docker,我们需要在 root 用户或者 sudo 权限下运行如下的命令, + + # curl -sSL https://get.docker.com/ | sh + +#### OpenSuse/SUSE Linux 企业版 #### + +要在运行了 OpenSuse 13.1/13.2 或者 SUSE Linux Enterprise Server 12 的机器上安装 docker,我们只需要简单的执行zypper 命令。运行下面的命令就可以安装最新版本的docker: + + # zypper in docker + +#### ArchLinux #### + +docker 在 ArchLinux 的官方源和社区维护的 AUR 库中可以找到。所以在 ArchLinux 上我们有两种方式来安装 docker。使用官方源安装,需要执行下面的 pacman 命令: + + # pacman -S docker + +如果要从社区源 AUR 安装 docker,需要执行下面的命令: + + # yaourt -S docker-git + +### 2. 
启动 ### + +安装好 docker 之后,我们需要运行 docker 守护进程,然后才能运行并管理 docker 容器。我们需要使用下列命令来确认 docker 守护进程已经安装并运行了。 + +#### 在 SysVinit 上#### + + # service docker start + +#### 在Systemd 上#### + + # systemctl start docker + +### 3. 安装 DockerUI ### + +安装 DockerUI 比安装 docker 要简单很多。我们仅仅需要从 docker 注册库上拉取 dockerui ,然后在容器里面运行。要完成这些,我们只需要简单的执行下面的命令: + + # docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui + +![Starting DockerUI Container](http://blog.linoxide.com/wp-content/uploads/2015/09/starting-dockerui-container.png) + +在上面的命令里,dockerui 使用的默认端口是9000,我们需要使用`-p` 命令映射默认端口。使用`-v` 标志我们可以指定docker 的 socket。如果主机使用了 SELinux 那么就得使用`--privileged` 标志。 + +执行完上面的命令后,我们要检查 DockerUI 容器是否运行了,或者使用下面的命令检查: + + # docker ps + +![Running Docker Containers](http://blog.linoxide.com/wp-content/uploads/2015/09/running-docker-containers.png) + +### 4. 拉取 docker 镜像 ### + +现在我们还不能直接使用 DockerUI 拉取镜像,所以我们需要在命令行下拉取 docker 镜像。要完成这些我们需要执行下面的命令。 + + # docker pull ubuntu + +![Docker Image Pull](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-image-pull.png) + +上面的命令将会从 docker 官方源 [Docker Hub][1]拉取一个标志为 ubuntu 的镜像。类似的我们可以从 Hub 拉取需要的其它镜像。 + +### 4. 
管理 ### + +启动了 DockerUI 容器之后,我们可以用它来执行启动、暂停、终止、删除以及 DockerUI 提供的其它操作 docker 容器的命令。 + +首先,我们需要在 web 浏览器里面打开 dockerui:在浏览器里面输入 http://ip-address:9000 或者 http://mydomain.com:9000,具体要根据你的系统配置。默认情况下登录不需要认证,但是可以配置我们的 web 服务器来要求登录认证。要启动一个容器,我们需要有包含我们要运行的程序的镜像。 + +#### 创建 #### + +创建容器我们需要在 Images 页面里,点击我们想创建的容器的镜像 id。然后点击 `Create` 按钮,接下来我们就会被要求输入创建容器所需要的属性。这些都完成之后,我们需要点击按钮`Create` 完成最终的创建。 + +![Creating Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/10/creating-docker-container.png) + +#### 停止 #### + +要停止一个容器,我们只需要跳转到`Containers` 页面,然后选取要停止的容器。然后在 Action 的子菜单里面按下 Stop 就行了。 + +![Managing Container](http://blog.linoxide.com/wp-content/uploads/2015/10/managing-container.png) + +#### 暂停与恢复 #### + +要暂停一个容器,只需要简单的选取目标容器,然后点击 Pause 就行了。恢复一个容器只需要在 Actions 的子菜单里面点击 Unpause 就行了。 + +#### 删除 #### + +类似于我们上面完成的任务,杀掉或者删除一个容器或镜像也是很简单的。只需要检查、选择容器或镜像,然后点击 Kill 或者 Remove 就行了。 + +### 结论 ### + +DockerUI 使用了 docker 远程 API 提供了一个很棒的管理 docker 容器的 web 界面。它的开发者们完全使用 HTML 和 JS 设计、开发了这个应用。目前这个程序还处于开发中,并且还有大量的工作要完成,所以我们并不推荐将它应用在生产环境。它可以帮助用户简单的完成管理容器和镜像,而且只需要一点点工作。如果想要为 DockerUI 做贡献,可以访问它们的 [Github 仓库][2]。如果有问题、建议、反馈,请写在下面的评论框,这样我们就可以修改或者更新我们的内容。谢谢。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/setup-dockerui-web-interface-docker/ + +作者:[Arun Pyasi][a] +译者:[oska874](https://github.com/oska874) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:https://hub.docker.com/ +[2]:https://github.com/crosbymichael/dockerui/ diff --git a/translated/tech/20151012 How to Setup DockerUI--a Web Interface for Docker.md b/translated/tech/20151012 How to Setup DockerUI--a Web Interface for Docker.md deleted file mode 100644 index 52d63d6ac9..0000000000 --- a/translated/tech/20151012 How to Setup DockerUI--a Web Interface for Docker.md +++ /dev/null @@ -1,111 +0,0 @@ -在浏览器上使用Docker 
-================================================================================ -Docker 越来越流行了。在一个容器里面而不是虚拟机里运行一个完整的操作系统的这种是一个非常棒的技术和想法。docker 已经通过节省工作时间来拯救了千上万的系统管理员和开发人员。这是一个开源技术,提供一个平台来把应用程序当作容器来打包、分发、共享和运行,而不去关注主机上运行的操作系统是什么。它没有开发语言、框架或打包系统的限制,并且可以在任何时间、任何地点运行,从小型计算机到高端服务器都可以。运行docker容器和管理他们可能会花费一点点困难和时间,所以现在有一款基于web 的应用程序-DockerUI,可以让管理和运行容器变得很简单。DockerUI 是一个对那些不熟悉Linux 命令行担忧很想运行容器话程序的人很有帮助。DockerUI 是一个开源的基于web 的应用程序,它最著名的是它华丽的设计和简单的用来运行和管理docker 的简单的操作界面。 - -下面会介绍如何在Linux 上安装配置DockerUI。 - -### 1. 安装docker ### - -首先,我们需要安装docker。我们得感谢docker 的开发者,让我们可以简单的在主流linux 发行版上安装docker。为了安装docker,我们得在对应的发行版上使用下面的命令。 - -#### Ubuntu/Fedora/CentOS/RHEL/Debian #### - -docker 维护者已经写了一个非常棒的脚本,用它可以在Ubuntu 15.04/14.10/14.04, CentOS 6.x/7, Fedora 22, RHEL 7 和Debian 8.x 这几个linux 发行版上安装docker。这个脚本可以识别出我们的机器上运行的linux 的发行版本,然后将需要的源库添加到文件系统、更新本地的安装源目录,最后安装docker 和依赖库。要使用这个脚本安装docker,我们需要在root 用户或者sudo 权限下运行如下的命令, - - # curl -sSL https://get.docker.com/ | sh - -#### OpenSuse/SUSE Linux 企业版 #### - -要在运行了OpenSuse 13.1/13.2 或者 SUSE Linux Enterprise Server 12 的机器上安装docker,我们只需要简单的执行zypper 命令。运行下面的命令就可以安装最新版本的docker: - - # zypper in docker - -#### ArchLinux #### - -docker 存在于ArchLinux 的官方源和社区维护的AUR 库。所以在ArchLinux 上我们有两条路来安装docker。使用官方源安装,需要执行下面的pacman 命令: - - # pacman -S docker - -如果要从社区源 AUR 安装docker,需要执行下面的命令: - - # yaourt -S docker-git - -### 2. 启动 ### - -安装好docker 之后,我们需要运行docker 监护程序,然后再能运行并管理docker 容器。我们需要使用下列命令来确定docker 监护程序已经安装并运行了。 - -#### 在 SysVinit 上#### - - # service docker start - -#### 在Systemd 上#### - - # systemctl start docker - -### 3. 
安装DockerUI ### - -安装DockerUI 比安装docker 要简单很多。我们仅仅需要懂docker 注册表上拉取dockerui ,然后在容器里面运行。要完成这些,我们只需要简单的执行下面的命令: - - # docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui - -![Starting DockerUI Container](http://blog.linoxide.com/wp-content/uploads/2015/09/starting-dockerui-container.png) - -在上面的命令里,dockerui 使用的默认端口是9000,我们需要使用`-p` 命令映射默认端口。使用`-v` 标志我们可以指定docker socket。如果主机使用了SELinux那么就得使用`--privileged` 标志。 - -执行完上面的命令后,我们要检查dockerui 容器是否运行了,或者使用下面的命令检查: - - # docker ps - -![Running Docker Containers](http://blog.linoxide.com/wp-content/uploads/2015/09/running-docker-containers.png) - -### 4. 拉取docker镜像 ### - -现在我们还不能直接使用dockerui 拉取镜像,所以我们需要在命令行下拉取docker 镜像。要完成这些我们需要执行下面的命令。 - - # docker pull ubuntu - -![Docker Image Pull](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-image-pull.png) - -上面的命令将会从docker 官方源[Docker Hub][1]拉取一个标志为ubuntu 的镜像。类似的我们可以从Hub 拉取需要的其它镜像。 - -### 4. 管理 ### - -启动了dockerui 容器之后,我们快乐的用它来执行启动、暂停、终止、删除和其它dockerui 提供的其他用来操作docker 容器的命令。第一,我们需要在web 浏览器里面打开dockerui:在浏览器里面输入http://ip-address:9000 或者 http://mydomain.com:9000,具体要根据你的系统配置。默认情况下登录不需啊哟认证,但是可以配置我们的web 服务器来要求登录认证。要启动一个容器,我们得得到包含我们要运行的程序的景象。 - -#### 创建 #### - -创建容器我们需要在Images 页面,点击我们想创建的容器的镜像id。然后点击`Create` 按钮,接下来我们就会被要求输入创建容器所需要的属性。这些都完成之后,我们需要点击按钮`Create` 完成最终的创建。 - -![Creating Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/10/creating-docker-container.png) - -#### 中止 #### - -要停止一个容器,我们只需要跳转到`Containers` 页面,然后选取要停止的容器。然后再Action 的子菜单里面按下Stop 就行了。 - -![Managing Container](http://blog.linoxide.com/wp-content/uploads/2015/10/managing-container.png) - -#### 暂停与恢复 #### - -要暂停一个容器,只需要简单的选取目标容器,然后点击Pause 就行了。恢复一个容器只需要在Actions 的子菜单里面点击Unpause 就行了。 - -#### 删除 #### - -类似于我们上面完成的任务,杀掉或者删除一个容器或镜像也是很简单的。只需要检查、选择容器或镜像,然后点击Kill 或者Remove 就行了。 - -### 结论 ### - -dockerui 使用了docker 远程API 完成了一个很棒的管理docker 容器的web 界面。它的开发者们已经使用纯HTML 和JS 
设计、开发了这个应用。目前这个程序还处于开发中,并且还有大量的工作要完成,所以我们并不推荐将它应用在生产环境。它可以帮助用户简单的完成管理容器和镜像,而且只需要一点点工作。如果想参与、贡献dockerui,我们可以访问它们的[Github 仓库][2]。如果有问题、建议、反馈,请写在下面的评论框,这样我们就可以修改或者更新我们的内容。谢谢。 - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/setup-dockerui-web-interface-docker/ - -作者:[Arun Pyasi][a] -译者:[oska874](https://github.com/oska874) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ -[1]:https://hub.docker.com/ -[2]:https://github.com/crosbymichael/dockerui/ From 9b4c18c91eacf22737b481fcee20d0bd08f426e7 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 29 Nov 2015 23:12:21 +0800 Subject: [PATCH 047/160] =?UTF-8?q?=E6=9B=B4=E6=96=B0=E6=88=90=E5=91=98?= =?UTF-8?q?=E5=88=97=E8=A1=A8?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- README.md | 188 ++++++++++++++++++++++++++++-------------------------- 1 file changed, 96 insertions(+), 92 deletions(-) diff --git a/README.md b/README.md index 968c8434a1..68b579d9fc 100644 --- a/README.md +++ b/README.md @@ -51,113 +51,117 @@ LCTT的组成 * 2014/12/25 提升runningwater为Core Translators成员。 * 2015/04/19 发起 LFS-BOOK-7.7-systemd 项目。 * 2015/06/09 提升ictlyh和dongfengweixiao为Core Translators成员。 +* 2015/11/10 提升strugglingyouth、FSSlc、Vic020、alim0x为Core Translators成员。 活跃成员 ------------------------------- 目前 TP 活跃成员有: - CORE @wxy, -- CORE @carolinewuyan, - CORE @DeadFire, - CORE @geekpi, - CORE @GOLinux, -- CORE @reinoir, -- CORE @bazz2, -- CORE @zpl1025, - CORE @ictlyh, -- CORE @dongfengweixiao +- CORE @carolinewuyan, +- CORE @strugglingyouth, +- CORE @FSSlc +- CORE @zpl1025, +- CORE @bazz2, +- CORE @Vic020, +- CORE @dongfengweixiao, +- CORE @alim0x, +- Senior @reinoir, - Senior @tinyeyeser, - Senior @vito-L, - Senior @jasminepeng, - Senior @willqian, - Senior @vizv, -- @ZTinoZ, -- @Vic020, -- @runningwater, -- 
@KayGuoWhu, -- @luoxcat, -- @alim0x, -- @2q1w2007, -- @theo-l, -- @FSSlc, -- @su-kaiyao, -- @blueabysm, -- @flsf, -- @martin2011qi, -- @SPccman, -- @wi-cuckoo, -- @Linchenguang, -- @linuhap, -- @crowner, -- @Linux-pdz, -- @H-mudcup, -- @yechunxiao19, -- @woodboow, -- @Stevearzh, -- @disylee, -- @cvsher, -- @wwy-hust, -- @johnhoow, -- @felixonmars, -- @TxmszLou, -- @shipsw, -- @scusjs, -- @wangjiezhe, -- @hyaocuk, -- @MikeCoder, -- @ZhouJ-sh, -- @boredivan, -- @goreliu, -- @l3b2w1, -- @JonathanKang, -- @NearTan, -- @jiajia9linuxer, -- @Love-xuan, -- @coloka, -- @owen-carter, -- @luoyutiantang, -- @JeffDing, -- @icybreaker, -- @tenght, -- @liuaiping, -- @mtunique, -- @rogetfan, -- @nd0104, -- @mr-ping, -- @szrlee, -- @lfzark, -- @CNprober, -- @DongShuaike, -- @ggaaooppeenngg, -- @haimingfg, -- @213edu, -- @Tanete, -- @guodongxiaren, -- @zzlyzq, -- @FineFan, -- @yujianxuechuan, -- @Medusar, -- @shaohaolin, -- @ailurus1991, -- @liaoishere, -- @CHINAANSHE, -- @stduolc, -- @yupmoon, -- @tomatoKiller, -- @zhangboyue, -- @kingname, -- @KevinSJ, -- @zsJacky, -- @willqian, -- @Hao-Ding, -- @JygjHappy, -- @Maclauring, -- @small-Wood, -- @cereuz, -- @fbigun, -- @lijhg, -- @soooogreen, +- runningwater, +- ZTinoZ, +- theo-l, +- luoxcat, +- disylee, +- wi-cuckoo, +- haimingfg, +- KayGuoWhu, +- wwy-hust, +- martin2011qi, +- cvsher, +- su-kaiyao, +- flsf, +- SPccman, +- Stevearzh +- Linchenguang, +- oska874 +- Linux-pdz, +- 2q1w2007, +- felixonmars, +- wyangsun, +- MikeCoder, +- mr-ping, +- xiqingongzi +- H-mudcup, +- zhangboyue, +- goreliu, +- DongShuaike, +- TxmszLou, +- ZhouJ-sh, +- wangjiezhe, +- NearTan, +- icybreaker, +- shipsw, +- johnhoow, +- linuhap, +- boredivan, +- blueabysm, +- liaoishere, +- yechunxiao19, +- l3b2w1, +- XLCYun, +- KevinSJ, +- tenght, +- coloka, +- luoyutiantang, +- yupmoon, +- jiajia9linuxer, +- scusjs, +- tnuoccalanosrep, +- woodboow, +- 1w2b3l, +- crowner, +- mtunique, +- dingdongnigetou, +- CNprober, +- JonathanKang, +- Medusar, +- hyaocuk, +- 
szrlee, +- Xuanwo, +- nd0104, +- xiaoyu33, +- guodongxiaren, +- zzlyzq, +- yujianxuechuan, +- ailurus1991, +- ggaaooppeenngg, +- Ricky-Gong, +- lfzark, +- 213edu, +- Tanete, +- liuaiping, +- jerryling315, +- tomatoKiller, +- stduolc, +- shaohaolin, +- Timeszoro, +- rogetfan, +- FineFan, +- kingname, +- jasminepeng, +- JeffDing, +- CHINAANSHE, +(按提交行数排名前百) LFS 项目活跃成员有: @@ -169,7 +173,7 @@ LFS 项目活跃成员有: - @KevinSJ - @Yuking-net -(更新于2015/06/09,以Github contributors列表排名) +(更新于2015/11/29) 谢谢大家的支持! From c0d808bba46af4d388fb60e53b757220e1abd1c8 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 30 Nov 2015 00:34:19 +0800 Subject: [PATCH 048/160] PUB:20151123 Install Intel Graphics Installer in Ubuntu 15.10 @XLCYun --- ...ntel Graphics Installer in Ubuntu 15.10.md | 46 ++++++++++++++++++ ...ntel Graphics Installer in Ubuntu 15.10.md | 47 ------------------- 2 files changed, 46 insertions(+), 47 deletions(-) create mode 100644 published/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md delete mode 100644 translated/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md diff --git a/published/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md b/published/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md new file mode 100644 index 0000000000..bf6b5c3b11 --- /dev/null +++ b/published/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md @@ -0,0 +1,46 @@ +在 Ubuntu 15.10 上安装 Intel Graphics 安装器 +================================================================================ +![Intel graphics installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel_logo.jpg) + +Intel 最近发布了一个新版本的 Linux Graphics 安装器。在新版本中,将不支持 Ubuntu 15.04,而必须用 Ubuntu 15.10 Wily。 + +> Linux 版 Intel® Graphics 安装器可以让你很容易的为你的 Intel Graphics 硬件安装最新版的图形与视频驱动。它能保证你一直使用最新的增强与优化功能,并能够安装到 Intel Graphics Stack 中,来保证你在你的 Intel 图形硬件下,享受到最佳的用户体验。*现在 Linux 版的 Intel® Graphics 安装器支持最新版的 Ubuntu。* + 
+![intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel-graphics-installer.jpg) + +### 安装 ### + +**1.** 从[这个链接页面][1]中下载该安装器。当前支持 Ubuntu 15.10 的版本是1.2.1版。你可以在**系统设置 -> 详细信息**中检查你的操作系统(32位或64位)的类型。 + +![download-intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/download-intel-graphics-installer.jpg) + +**2.** 一旦下载完成,到下载目录中点击 .deb 安装包,用 Ubuntu 软件中心打开它,然最后点击“安装”按钮。 + +![install-via-software-center](http://ubuntuhandbook.org/wp-content/uploads/2015/11/install-via-software-center.jpg) + +**3.** 为了让系统信任 Intel Graphics 安装器,你需要通过下面的命令来为它添加密钥。 + +用快捷键`Ctrl+Alt+T`或者在 Unity Dash 中的“应用程序启动器”中打开终端。依次粘贴运行下面的命令。 + + wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg -O - | sudo apt-key add - + + wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg-2 -O - | sudo apt-key add - + +![trust-intel](http://ubuntuhandbook.org/wp-content/uploads/2015/11/trust-intel.jpg) + +注意:在运行第一个命令的过程中,如果密钥下载完成后,光标停住不动并且一直闪烁的话,就像上面图片显示的那样,输入你的密码(输入时不会看到什么有变化)然后回车就行了。 + +最后通过 Unity Dash 或应用程序启动器打开 Intel Graphics 安装器。 + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/11/install-intel-graphics-installer-in-ubuntu-15-10/ + +作者:[Ji m][a] +译者:[XLCYun](https://github.com/XLCYun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:https://01.org/linuxgraphics/downloads diff --git a/translated/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md b/translated/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md deleted file mode 100644 index be91927f24..0000000000 --- a/translated/tech/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md +++ /dev/null @@ -1,47 +0,0 @@ -在Ubuntu 15.10上安装Intel图形安装器 
-================================================================================ -![Intel graphics installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel_logo.jpg) - -Intel最近发布了一个新版本的Linux图型安装器。在新版本中,Ubuntu 15.04将不被支持而必须用Ubuntu 15.10 Wily。 - - -> Linux版Intel®图形安装器可以让你很容易的安装最新版的图形与视频驱动。它能保证你一直使用最新的增强与优化功能,并能够安装到Intel图形堆栈中,来保证你在你的Intel图形硬件下,享受到最佳的用户体验。*现在的Linux版的Intel®图形安装器支持最新版的Ubuntu。* - -![intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel-graphics-installer.jpg) - -### 安装 ### - -**1.** 从[链接页面][1]中下载安装器。当前支持Ubuntu 15.10的版本是1.2.1版。你可以在**系统设置 -> 详细信息**中检查你的操作系统(32位或64位)的类型。 - -![download-intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/download-intel-graphics-installer.jpg) - -**2.** 一旦下载完成,到下载目录中点击.deb安装包用Ubuntu软件中心打开它,然最后点击“安装”按钮。 - -![install-via-software-center](http://ubuntuhandbook.org/wp-content/uploads/2015/11/install-via-software-center.jpg) - -**3.** 为了让系统信任Intel图形安装器,你需要通过下面的命令来为它添加钥匙。 - -用快捷键Ctrl+Alt+T或者在Unity Dash中的“应用程序启动器”中打开终端。依次粘贴运行下面的命令。 - - wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg -O - | sudo apt-key add - - - wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg-2 -O - | sudo apt-key add - - -![trust-intel](http://ubuntuhandbook.org/wp-content/uploads/2015/11/trust-intel.jpg) - -注意:在运行第一个命令的过程中,如果钥匙下载完成后光标停住不动并且一直闪烁的话,就像上面图片显示的那样,输入你的密码(输入时不会看到什么有变化)然后回车就行了。 - -最后通过Unity Dash或应用程序启动器打开Intel图形安装器。 - --------------------------------------------------------------------------------- - -via: http://ubuntuhandbook.org/index.php/2015/11/install-intel-graphics-installer-in-ubuntu-15-10/ - -作者:[Ji m][a] -译者:[XLCYun](https://github.com/XLCYun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ubuntuhandbook.org/index.php/about/ -[1]:https://01.org/linuxgraphics/downloads From 9cce4556611d28b5ecd1bf778c15687f6236cb89 Mon Sep 17 
00:00:00 2001
From: wxy
Date: Mon, 30 Nov 2015 00:46:47 +0800
Subject: [PATCH 049/160] PUB:20151123 How to Install NVIDIA 358.16 Driver in
 Ubuntu 15.10 or 14.04

@strugglingyouth

---
 ...IDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)
 rename {translated/tech => published}/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md (67%)

diff --git a/translated/tech/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md b/published/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md
similarity index 67%
rename from translated/tech/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md
rename to published/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md
index 18684f6eee..6ec1bdc1ec 100644
--- a/translated/tech/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md
+++ b/published/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md
@@ -1,17 +1,16 @@
-
 如何在 Ubuntu 15.10,14.04 中安装 NVIDIA 358.16 驱动程序
 ================================================================================
 ![nvidia-logo-1](http://ubuntuhandbook.org/wp-content/uploads/2015/06/nvidia-logo-1.png)
 
-[NVIDIA 358.16][1], NVIDIA 358 系列的第一个稳定版本已经发布并在 358.09 中(测试版)做了一些修正,以及一些小的改进。
+[NVIDIA 358.16][1] —— NVIDIA 358 系列的第一个稳定版本已经发布,对 358.09(测试版)做了一些修正,以及一些小的改进。
 
-NVIDIA 358 增加了一个新的 **nvidia-modeset.ko** 内核模块并配合 nvidia.ko 内核模块工作来显示 GPU 引擎。在以后发布版本中,**nvidia-modeset.ko** 内核驱动程序将被用于基本的模式接口,由内核直接传递管理(DRM)。
+NVIDIA 358 增加了一个新的 **nvidia-modeset.ko** 内核模块,可以配合 nvidia.ko 内核模块工作来调用 GPU 显示引擎。在以后发布版本中,**nvidia-modeset.ko** 内核驱动程序将被用作模式设置接口的基础,该接口由内核的直接渲染管理器(DRM)所提供。
 
-在 OpenGL 驱动中,新的驱动程序也有了新的 GLX 扩展协议,对于分配大量内存也有了一种新的系统内存分配机制。新的 GPU **GeForce 805A** 和 **GeForce GTX 960A** 也被支持了。NVIDIA 358.16 也支持 X.Org 1.18 服务器和 OpenGL 4.3。
+新的驱动程序也有新的 GLX 协议扩展,以及在 OpenGL 驱动中分配大量内存的系统内存分配新机制。新的 GPU **GeForce 805A** 和 **GeForce GTX 960A** 都已得到支持。NVIDIA 358.16 也支持 X.Org 1.18 服务器和 OpenGL 4.3。
 
 ### 如何在 Ubuntu 中安装 NVIDIA 358.16 : ###
 
-> 请不要在生产设备上安装,除非你知道自己在做什么以及如何才能恢复。
+> **请不要在生产设备上安装,除非你知道自己在做什么以及如何才能恢复。**
 
 对于官方的二进制文件,请到 [nvidia.com/object/unix.html][1] 查看。
 
@@ -19,7 +18,7 @@ NVIDIA 358 增加了一个新的 **nvidia-modeset.ko** 内核模块并配合 nvi
 
 **1. 添加 PPA.**
 
-通过按 Ctrl+Alt+T 快捷键来从 Unity 桌面打开终端。当打启动应用后,粘贴下面的命令并按回车键:
+通过按 `Ctrl+Alt+T` 快捷键来从 Unity 桌面打开终端。当启动应用后,粘贴下面的命令并按回车键:
 
     sudo add-apt-repository ppa:graphics-drivers/ppa
 
@@ -35,7 +34,7 @@ NVIDIA 358 增加了一个新的 **nvidia-modeset.ko** 内核模块并配合 nvi
 
     sudo apt-get install nvidia-358 nvidia-settings
 
-### (可选) 卸载: ###
+### (如果需要的话)卸载: ###
 
 开机从 GRUB 菜单进入恢复模式,进入根控制台。然后逐一运行下面的命令:
 
@@ -59,7 +58,7 @@ via: http://ubuntuhandbook.org/index.php/2015/11/install-nvidia-358-16-driver-ub
 
 作者:[Ji m][a]
 译者:[strugglingyouth](https://github.com/strugglingyouth)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From 926598f2f275a7855bc104faa21860c10c7d58ad Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 30 Nov 2015 01:02:49 +0800
Subject: [PATCH 050/160] PUB:20151104 How to Create New File Systems or
 Partitions in the Terminal on Linux

@strugglingyouth

---
 ... or Partitions in the Terminal on Linux.md | 24 ++++++-----
 1 file changed, 10 insertions(+), 14 deletions(-)
 rename {translated/tech => published}/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md (74%)

diff --git a/translated/tech/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md b/published/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md
similarity index 74%
rename from translated/tech/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md
rename to published/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md
index 2948e8de61..cce93c0d02 100644
--- a/translated/tech/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md
+++ b/published/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md
@@ -1,4 +1,3 @@
-
 如何在 Linux 终端下创建新的文件系统/分区
 ================================================================================
 ![](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-feature-image.png)
@@ -13,8 +12,7 @@
 
 ![cfdisk-lsblk](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-lsblk.png)
 
-
-一旦你运行了 `lsblk`,你应该会看到当前系统上每个磁盘的详细列表。看看这个列表,然后找出你想要使用的磁盘。在本文中,我将使用 `sdb` 来进行演示。
+当你运行了 `lsblk`,你应该会看到当前系统上每个磁盘的详细列表。看看这个列表,然后找出你想要使用的磁盘。在本文中,我将使用 `sdb` 来进行演示。
 
 在终端输入这个命令。它会显示一个功能强大的基于终端的分区编辑程序。
 
@@ -26,9 +24,7 @@
 
 当输入此命令后,你将进入分区编辑器中,然后访问你想改变的磁盘。
 
-Since hard drive partitions are different, depending on a user's needs, this part of the guide will go over **how to set up a split Linux home/root system layout**.
-
-由于磁盘分区的不同,这取决于用户的需求,这部分的指南将在 **如何建立一个分布的 Linux home/root 文件分区**。
+由于磁盘分区因用户的需求而不同,这部分的指南将介绍 **如何建立一个分离的 Linux home/root 分区布局**。
 
 首先,需要创建根分区。这需要根据磁盘的字节数来进行分割。我测试的磁盘是 32 GB。
 
@@ -38,7 +34,7 @@
 
 该程序会要求你输入分区大小。一旦你指定好大小后,按 Enter 键。这将被称为根分区(或 /dev/sdb1)。
 
-接下来该创建用户分区(/dev/sdb2)了。你需要在 CFdisk 中再选择一些空闲分区。使用箭头选择 [ NEW ] 选项,然后按 Enter 键。输入你用户分区的大小,然后按 Enter 键来创建它。
+接下来该创建 home 分区(/dev/sdb2)了。你需要在 CFdisk 中再选择一些空闲分区。使用箭头选择 [ NEW ] 选项,然后按 Enter 键。输入你的 home 分区的大小,然后按 Enter 键来创建它。
 
 ![cfdisk-create-home-partition](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-create-home-partition.png)
 
@@ -48,7 +44,7 @@
 
 ![cfdisk-specify-partition-type-swap](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-specify-partition-type-swap.png)
 
-现在,交换分区被创建了,该指定其类型。使用上下箭头来选择它。之后,使用左右箭头选择 [ TYPE ] 。找到 Linux swap 选项,然后按 Enter 键。
+现在,交换分区已经创建了,接下来该指定其类型。使用上下箭头来选择它。之后,使用左右箭头选择 [ TYPE ] 。找到 Linux swap 选项,然后按 Enter 键。
 
 ![cfdisk-write-partition-table](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-write-partition-table.jpg)
 
@@ -56,13 +52,13 @@
 
 ### 使用 mkfs 创建文件系统 ###
 
-有时候,你并不需要一个完整的分区,你只想要创建一个文件系统而已。你可以在终端直接使用 `mkfs` 命令来实现。
+有时候,你并不需要重新对整个磁盘分区,你只想要创建一个文件系统而已。你可以在终端直接使用 `mkfs` 命令来实现。
 
 ![cfdisk-mkfs-list-partitions-lsblk](https://www.maketecheasier.com/assets/uploads/2015/10/cfdisk-mkfs-list-partitions-lsblk.png)
 
-首先,找出你要使用的磁盘。在终端输入 `lsblk` 找出来。它会打印出列表,之后只要找到你想制作文件系统的分区或盘符。
+首先,找出你要使用的磁盘。在终端输入 `lsblk` 找出来。它会打印出列表,之后只要找到你想创建文件系统的分区或盘符。
 
-在这个例子中,我将使用 `/dev/sdb1` 的第一个分区。只对 `/dev/sdb` 使用 mkfs(将会使用整个分区)。
+在这个例子中,我将使用第二个硬盘的 `/dev/sdb1` 作为第一个分区。可以对 `/dev/sdb` 使用 mkfs(这将会使用整个磁盘)。
 
 ![cfdisk-mkfs-make-file-system-ext4](https://www.maketecheasier.com/assets/uploads/2015/10/cfdisk-mkfs-make-file-system-ext4.png)
 
@@ -70,13 +66,13 @@ Since hard drive partitions are different, depending on a user's needs, this p
 
     sudo mkfs.ext4 /dev/sdb1
 
-在终端。应当指出的是,`mkfs.ext4` 可以将你指定的任何文件系统改变。
+在终端。应当指出的是,`mkfs.ext4` 可以换成任何你想要使用的文件系统。
 
 ### 结论 ###
 
 虽然使用图形工具编辑文件系统和分区更容易,但终端可以说是更有效的。终端的加载速度更快,点击几个按钮即可。GParted 和其它工具一样,它也是一个完整的工具。我希望在本教程的帮助下,你会明白如何在终端中高效的编辑文件系统。
 
-你是否更喜欢使用基于终端的方法在 Linux 上编辑分区?为什么或为什么不?在下面告诉我们!
+你是否更喜欢使用基于终端的方法在 Linux 上编辑分区?不管是不是,请在下面告诉我们。
 
@@ -84,7 +80,7 @@ via: https://www.maketecheasier.com/create-file-systems-partitions-terminal-linu
 
 作者:[Derrik Diener][a]
 译者:[strugglingyouth](https://github.com/strugglingyouth)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From 6fbd1e396bfedc13e4e7e913a7bb5f721d5988c3 Mon Sep 17 00:00:00 2001
From: ZTinoZ
Date: Mon, 30 Nov 2015 14:41:47 +0800
Subject: [PATCH 051/160] Update 20151123 How To Install Microsoft Visual
 Studio Code on Linux.md

---
 ...123 How To Install Microsoft Visual Studio Code on Linux.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/sources/tech/20151123 How To Install Microsoft Visual Studio Code on Linux.md b/sources/tech/20151123 How To Install Microsoft Visual Studio Code on Linux.md
index 30257ba0cd..96fba9ff33 100644
--- a/sources/tech/20151123 How To Install Microsoft Visual Studio Code on Linux.md
+++ b/sources/tech/20151123 How To Install Microsoft Visual Studio Code on Linux.md
@@ -1,3 +1,4 @@
+Translating by ZTinoZ
 How To Install Microsoft Visual Studio Code on Linux
 ================================================================================
 Visual Studio code (VScode) is the cross-platform Chromium-based code editor is being open sourced today by Microsoft. How do I install Microsoft Visual Studio Code on a Debian or Ubuntu or Fedora Linux desktop?
@@ -110,4 +111,4 @@ via: http://www.cyberciti.biz/faq/debian-ubuntu-fedora-linux-installing-visual-s 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [1]:https://code.visualstudio.com/Download -[2]:https://code.visualstudio.com/docs \ No newline at end of file +[2]:https://code.visualstudio.com/docs From 5bb769f88fd41422a9a40066d3a1518fe33eee65 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 30 Nov 2015 14:50:21 +0800 Subject: [PATCH 052/160] =?UTF-8?q?=20=E8=B6=85=E6=9C=9F=E5=9B=9E=E6=94=B6?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @icybreaker --- .../20150921 14 tips for teaching open source development.md | 1 - 1 file changed, 1 deletion(-) diff --git a/sources/talk/20150921 14 tips for teaching open source development.md b/sources/talk/20150921 14 tips for teaching open source development.md index bf8212da70..a580f3b776 100644 --- a/sources/talk/20150921 14 tips for teaching open source development.md +++ b/sources/talk/20150921 14 tips for teaching open source development.md @@ -1,4 +1,3 @@ -icybreaker translating... 14 tips for teaching open source development ================================================================================ Academia is an excellent platform for training and preparing the open source developers of tomorrow. In research, we occasionally open source software we write. We do this for two reasons. One, to promote the use of the tools we produce. And two, to learn more about the impact and issues other people face when using them. With this background of writing research software, I was tasked with redesigning the undergraduate software engineering course for second-year students at the University of Bradford. 
From f8d005e49df9e769639f7c1f87e212dbc9ecb64b Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 30 Nov 2015 15:01:18 +0800 Subject: [PATCH 053/160] Revert "Translating by ZTinoZ" --- ...123 How To Install Microsoft Visual Studio Code on Linux.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/sources/tech/20151123 How To Install Microsoft Visual Studio Code on Linux.md b/sources/tech/20151123 How To Install Microsoft Visual Studio Code on Linux.md index 96fba9ff33..30257ba0cd 100644 --- a/sources/tech/20151123 How To Install Microsoft Visual Studio Code on Linux.md +++ b/sources/tech/20151123 How To Install Microsoft Visual Studio Code on Linux.md @@ -1,4 +1,3 @@ -Translating by ZTinoZ How To Install Microsoft Visual Studio Code on Linux ================================================================================ Visual Studio code (VScode) is the cross-platform Chromium-based code editor is being open sourced today by Microsoft. How do I install Microsoft Visual Studio Code on a Debian or Ubuntu or Fedora Linux desktop? 
@@ -111,4 +110,4 @@ via: http://www.cyberciti.biz/faq/debian-ubuntu-fedora-linux-installing-visual-s 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [1]:https://code.visualstudio.com/Download -[2]:https://code.visualstudio.com/docs +[2]:https://code.visualstudio.com/docs \ No newline at end of file From 83715cd4b4a7e27567a2e997507ca3bae3a98515 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 30 Nov 2015 15:04:14 +0800 Subject: [PATCH 054/160] =?UTF-8?q?=E6=92=A4=E9=94=80=E9=87=8D=E5=A4=8D?= =?UTF-8?q?=E6=96=87=E7=AB=A0?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...l Microsoft Visual Studio Code on Linux.md | 113 ------------------ 1 file changed, 113 deletions(-) delete mode 100644 sources/tech/20151123 How To Install Microsoft Visual Studio Code on Linux.md diff --git a/sources/tech/20151123 How To Install Microsoft Visual Studio Code on Linux.md b/sources/tech/20151123 How To Install Microsoft Visual Studio Code on Linux.md deleted file mode 100644 index 30257ba0cd..0000000000 --- a/sources/tech/20151123 How To Install Microsoft Visual Studio Code on Linux.md +++ /dev/null @@ -1,113 +0,0 @@ -How To Install Microsoft Visual Studio Code on Linux -================================================================================ -Visual Studio code (VScode) is the cross-platform Chromium-based code editor is being open sourced today by Microsoft. How do I install Microsoft Visual Studio Code on a Debian or Ubuntu or Fedora Linux desktop? - -Visual Studio supports debugging Linux apps and code editor now open source by Microsoft. It is a preview (beta) version but you can test it and use it on your own Linux based desktop. - -### Why use Visual Studio Code? 
### - -From the project website: - -> Visual Studio Code provides developers with a new choice of developer tool that combines the simplicity and streamlined experience of a code editor with the best of what developers need for their core code-edit-debug cycle. Visual Studio Code is the first code editor, and first cross-platform development tool - supporting OS X, Linux, and Windows - in the Visual Studio family. If you use Unity, ASP.NET 5, NODE.JS or related tool, give it a try. - -### Requirements for Visual Studio Code on Linux ### - -1. Ubuntu Desktop version 14.04 -1. GLIBCXX version 3.4.15 or later -1. GLIBC version 2.15 or later - -The following installation instructions are tested on: - -1. Fedora Linux 22 and 23 -1. Debian Linux 8 -1. Ubuntu Linux 14.04 LTS - -### Download Visual Studio Code ### - -Visit [this page][1] to grab the latest version and save it to ~/Downloads/ folder on Linux desktop: - -![Fig.01: Download Visual Studio Code For Linux](http://s0.cyberciti.org/uploads/faq/2015/11/download-visual-studio-code.jpg) - -Fig.01: Download Visual Studio Code For Linux - -Make a new folder (say $HOME/VSCode) and extract VSCode-linux-x64.zip inside that folder or in /usr/local/ folder. Unzip VSCode-linux64.zip to that folder. - -Make a new folder (say $HOME/VSCode) and extract VSCode-linux-x64.zip inside that folder or in /usr/local/ folder. Unzip VSCode-linux64.zip to that folder. - -### Alternate install method ### - -You can use the wget command to download VScode as follows: - - $ wget 'https://az764295.vo.msecnd.net/public/0.10.1-release/VSCode-linux64.zip' - -Sample outputs: - - --2015-11-18 13:55:23-- https://az764295.vo.msecnd.net/public/0.10.1-release/VSCode-linux64.zip - Resolving az764295.vo.msecnd.net (az764295.vo.msecnd.net)... 93.184.215.200, 2606:2800:11f:179a:1972:2405:35b:459 - Connecting to az764295.vo.msecnd.net (az764295.vo.msecnd.net)|93.184.215.200|:443... connected. - HTTP request sent, awaiting response... 
200 OK - Length: 64638315 (62M) [application/octet-stream] - Saving to: 'VSCode-linux64.zip' - - 100%[======================================>] 64,638,315 84.9MB/s in 0.7s - - 2015-11-18 13:55:23 (84.9 MB/s) - 'VSCode-linux64.zip' saved [64638315/64638315] - -### Install VScode using the command line ### - -Cd to ~/Download/ location, enter: - - $ cd ~/Download/ - $ ls -l - -Sample outputs: - -![Fig.02: VSCode downloaded to my ~/Downloads/ folder](http://s0.cyberciti.org/uploads/faq/2015/11/list-vscode-linux.jpg) - -Fig.02: VSCode downloaded to my ~/Downloads/ folder - -Unzip VSCode-linux64.zip in /usr/local/ directory, enter: - - $ sudo unzip VSCode-linux64.zip -d /usr/local/ - -Cd into /usr/local/ to create the soft-link as follows using the ln command for the Code executable. This is useful to run VSCode from the terminal application: - - $ su - - # cd /usr/local/ - # ls -l - # cd bin/ - # ln -s ../VSCode-linux-x64/Code code - # exit - -Sample session: - -![Fig.03 Create the sym-link with the absolute path to the Code executable](http://s0.cyberciti.org/uploads/faq/2015/11/verify-and-ln-vscode.jpg) - -Fig.03 Create the sym-link with the absolute path to the Code executable - -### How do I use VSCode on Linux? ### - -Open the Terminal app and type the following command: - - $ /usr/local/bin/code - -Sample outputs: - -![Fig.04: VSCode in action on Linux](http://s0.cyberciti.org/uploads/faq/2015/11/vscode-welcome.jpg) - -Fig.04: VSCode in action on Linux - -And, there you have it, the VSCode installed and working correctly on the latest version of Debian, Ubuntu and Fedora Linux. I suggest that you read [getting started pages from Microsoft][2] to understand the core concepts that will make you more productive writing and navigating your code. 
- --------------------------------------------------------------------------------- - -via: http://www.cyberciti.biz/faq/debian-ubuntu-fedora-linux-installing-visual-studio-code/ - -作者:Vivek Gite -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://code.visualstudio.com/Download -[2]:https://code.visualstudio.com/docs \ No newline at end of file From 20a8e3e018c683a63da685c6deef2d3dd87c5716 Mon Sep 17 00:00:00 2001 From: ZTinoZ Date: Mon, 30 Nov 2015 15:12:09 +0800 Subject: [PATCH 055/160] Update 20151123 7 ways hackers can use Wi-Fi against you.md --- .../share/20151123 7 ways hackers can use Wi-Fi against you.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/share/20151123 7 ways hackers can use Wi-Fi against you.md b/sources/share/20151123 7 ways hackers can use Wi-Fi against you.md index cd39f6a5c1..dbf63d5dce 100644 --- a/sources/share/20151123 7 ways hackers can use Wi-Fi against you.md +++ b/sources/share/20151123 7 ways hackers can use Wi-Fi against you.md @@ -1,3 +1,4 @@ +Translating by ZTinoZ 7 ways hackers can use Wi-Fi against you ================================================================================ ![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/intro_title-100626673-orig.jpg) @@ -66,4 +67,4 @@ via: http://www.networkworld.com/article/3003170/mobile-security/7-ways-hackers- [3]:http://news.yahoo.com/blogs/upgrade-your-life/banking-online-not-hacked-182159934.html [4]:http://pocketnow.com/2014/10/15/should-you-leave-your-smartphones-wifi-on-or-turn-it-off [5]:http://www.cnet.com/news/chrome-becoming-tool-in-googles-push-for-encrypted-web/ -[6]:https://twitter.com/JoshAlthuser \ No newline at end of file +[6]:https://twitter.com/JoshAlthuser From 000d5c32bf275002d90206ba44c8b7a855530740 Mon Sep 17 00:00:00 2001 From: ZTinoZ Date: Mon, 30 Nov 2015 
15:13:50 +0800 Subject: [PATCH 056/160] Translating by ZTinoZ --- .../share/20151123 7 ways hackers can use Wi-Fi against you.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/share/20151123 7 ways hackers can use Wi-Fi against you.md b/sources/share/20151123 7 ways hackers can use Wi-Fi against you.md index dbf63d5dce..1cf33a33dc 100644 --- a/sources/share/20151123 7 ways hackers can use Wi-Fi against you.md +++ b/sources/share/20151123 7 ways hackers can use Wi-Fi against you.md @@ -1,4 +1,4 @@ -Translating by ZTinoZ +Translating by ZTinoZ 7 ways hackers can use Wi-Fi against you ================================================================================ ![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/intro_title-100626673-orig.jpg) From 136bf0a9733a5c6784908ce055852048ca8e4ce2 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 30 Nov 2015 16:18:36 +0800 Subject: [PATCH 057/160] =?UTF-8?q?20151130-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Tape Managements Commands For Sysadmins.md | 425 ++++++++++++++++++ 1 file changed, 425 insertions(+) create mode 100644 sources/tech/20151130 Useful Linux and Unix Tape Managements Commands For Sysadmins.md diff --git a/sources/tech/20151130 Useful Linux and Unix Tape Managements Commands For Sysadmins.md b/sources/tech/20151130 Useful Linux and Unix Tape Managements Commands For Sysadmins.md new file mode 100644 index 0000000000..ff0e0219fb --- /dev/null +++ b/sources/tech/20151130 Useful Linux and Unix Tape Managements Commands For Sysadmins.md @@ -0,0 +1,425 @@ +15 Useful Linux and Unix Tape Managements Commands For Sysadmins +================================================================================ +Tape devices should be used on a regular basis only for archiving files or for transferring data from one server to another. 
Usually, tape devices are all hooked up to Unix boxes, and controlled with mt or mtx. You must backup all data to both disks (may be in cloud) and tape device. In this tutorial you will learn about: + +- Tape device names +- Basic commands to manage tape drive +- Basic backup and restore commands + +### Why backup? ### + +A backup plant is important: + +- Ability to recover from disk failure +- Accidental file deletion +- File or file system corruption +- Complete server destruction, including destruction of on-site backups due to fire or other problems. + +You can use tape based archives to backup the whole server and move tapes off-site. + +### Understanding tape file marks and block size ### + +![Fig.01: Tape file marks](http://s0.cyberciti.org/uploads/cms/2015/10/tape-format.jpg) + +Fig.01: Tape file marks + +Each tape device can store multiple tape backup files. Tape backup files are created using cpio, tar, dd, and so on. However, tape device can be opened, written data to, and closed by various program. You can store several backups (tapes) on physical tape. Between each tape file is a "tape file mark". This is used to indicate where one tape file ends and another begins on physical tape. You need to use mt command to positions the tape (winds forward and rewinds and marks). + +#### How data is stored on a tape #### + +![Fig.02: How data is stored on a tape](http://s0.cyberciti.org/uploads/cms/2015/10/how-data-is-stored-on-a-tape.jpg) + +Fig.02: How data is stored on a tape + +All data is stored subsequently in sequential tape archive format using tar. The first tape archive will start on the physical beginning of the tape (tar #0). The next will be tar #1 and so on. + +### Tape device names on Unix ### + +1. /dev/rmt/0 or /dev/rmt/1 or /dev/rmt/[0-127] : Regular tape device name on Unix. The tape is rewound. +1. /dev/rmt/0n : This is know as no rewind i.e. after using tape, leaves the tape in current status for next command. +1. 
/dev/rmt/0b : Use magtape interface i.e. BSD behavior. More-readable by a variety of OS's such as AIX, Windows, Linux, FreeBSD, and more. +1. /dev/rmt/0l : Set density to low. +1. /dev/rmt/0m : Set density to medium. +1. /dev/rmt/0u : Set density to high. +1. /dev/rmt/0c : Set density to compressed. +1. /dev/st[0-9] : Linux specific SCSI tape device name. +1. /dev/sa[0-9] : FreeBSD specific SCSI tape device name. +1. /dev/esa0 : FreeBSD specific SCSI tape device name that eject on close (if capable). + +#### Tape device name examples #### + +- The /dev/rmt/1cn indicate that I'm using unity 1, compressed density and no rewind. +- The /dev/rmt/0hb indicate that I'm using unity 0, high density and BSD behavior. +- The auto rewind SCSI tape device name on Linux : /dev/st0 +- The non-rewind SCSI tape device name on Linux : /dev/nst0 +- The auto rewind SCSI tape device name on FreeBSD: /dev/sa0 +- The non-rewind SCSI tape device name on FreeBSD: /dev/nsa0 + +#### How do I list installed scsi tape devices? #### + +Type the following commands: + + ## Linux (read man pages for more info) ## + lsscsi + lsscsi -g + + ## IBM AIX ## + lsdev -Cc tape + lsdev -Cc adsm + lscfg -vl rmt* + + ## Solaris Unix ## + cfgadm –a + cfgadm -al + luxadm probe + iostat -En + + ## HP-UX Unix ## + ioscan Cf + ioscan -funC tape + ioscan -fnC tape + ioscan -kfC tape + + +Sample outputs from my Linux server: + +![Fig.03: Installed tape devices on Linux server](http://s0.cyberciti.org/uploads/cms/2015/10/linux-find-tape-devices-command.jpg) + +Fig.03: Installed tape devices on Linux server + +### mt command examples ### + +In Linux and Unix-like system, mt command is used to control operations of the tape drive, such as finding status or seeking through files on a tape or writing tape control marks to the tape. You must most of the following command as root user. The syntax is: + + mt -f /tape/device/name operation + +#### Setting up environment #### + +You can set TAPE shell variable. 
This is the pathname of the tape drive. The default (if the variable is unset, but not if it is null) is /dev/nsa0 on FreeBSD. It may be overridden with the -f option passed to the mt command as explained below. + + ## Add to your shell startup file ## + TAPE=/dev/st1 #Linux + TAPE=/dev/rmt/2 #Unix + TAPE=/dev/nsa3 #FreeBSD + export TAPE + +### 1: Display status of the tape/drive ### + + mt status #Use default + mt -f /dev/rmt/0 status #Unix + mt -f /dev/st0 status #Linux + mt -f /dev/nsa0 status #FreeBSD + mt -f /dev/rmt/1 status #Unix unity 1 i.e. tape device no. 1 + +You can use shell loop as follows to poll a system and locate all of its tape drives: + + for d in 0 1 2 3 4 5 + do + mt -f "/dev/rmt/${d}" status + done + +### 2: Rewinds the tape ### + + mt rew + mt rewind + mt -f /dev/mt/0 rewind + mt -f /dev/st0 rewind + +### 3: Eject the tape ### + + mt off + mt offline + mt eject + mt -f /dev/mt/0 off + mt -f /dev/st0 eject + +### 4: Erase the tape (rewind the tape and, if applicable, unload the tape) ### + + mt erase + mt -f /dev/st0 erase #Linux + mt -f /dev/rmt/0 erase #Unix + +### 5: Retensioning a magnetic tape cartridge ### + +If errors occur when a tape is being read, you can retension the tape, clean the tape drive, and then try again as follows: + + mt retension + mt -f /dev/rmt/1 retension #Unix + mt -f /dev/st0 retension #Linux + +### 6: Writes n EOF marks in the current position of tape ### + + mt eof + mt weof + mt -f /dev/st0 eof + +### 7: Forward space count files i.e. jumps n EOF marks ### + +The tape is positioned on the first block of the next file i.e. tape will position on first block of the field (see fig.01): + + mt fsf + mt -f /dev/rmt/0 fsf + mt -f /dev/rmt/1 fsf 1 #go 1 forward file/tape (see fig.01) + +### 8: Backward space count files i.e. rewinds n EOF marks ### + +The tape is positioned on the first block of the next file i.e. 
tape positions after EOF mark (see fig.01): + + mt bsf + mt -f /dev/rmt/1 bsf + mt -f /dev/rmt/1 bsf 1 #go 1 backward file/tape (see fig.01) + +Here is a list of the tape position commands: + + fsf Forward space count files. The tape is positioned on the first block of the next file. + + fsfm Forward space count files. The tape is positioned on the last block of the previous file. + + bsf Backward space count files. The tape is positioned on the last block of the previous file. + + bsfm Backward space count files. The tape is positioned on the first block of the next file. + + asf The tape is positioned at the beginning of the count file. Positioning is done by first rewinding the tape and then spacing forward over count filemarks. + + fsr Forward space count records. + + bsr Backward space count records. + + fss (SCSI tapes) Forward space count setmarks. + + bss (SCSI tapes) Backward space count setmarks. + +### Basic backup commands ### + +Let us see commands to backup and restore files + +### 9: To backup directory (tar format) ### + + tar cvf /dev/rmt/0n /etc + tar cvf /dev/st0 /etc + +### 10: To restore directory (tar format) ### + + tar xvf /dev/rmt/0n -C /path/to/restore + tar xvf /dev/st0 -C /tmp + +### 11: List or check tape contents (tar format) ### + + mt -f /dev/st0 rewind; dd if=/dev/st0 of=- + + ## tar format ## + tar tvf {DEVICE} {Directory-FileName} + tar tvf /dev/st0 + tar tvf /dev/st0 desktop + tar tvf /dev/rmt/0 foo > list.txt + +### 12: Backup partition with dump or ufsdump ### + + ## Unix backup c0t0d0s2 partition ## + ufsdump 0uf /dev/rmt/0 /dev/rdsk/c0t0d0s2 + + ## Linux backup /home partition ## + dump 0uf /dev/nst0 /dev/sda5 + dump 0uf /dev/nst0 /home + + ## FreeBSD backup /usr partition ## + dump -0aL -b64 -f /dev/nsa0 /usr + +### 12: Restore partition with ufsrestore or restore ### + + ## Unix ## + ufsrestore xf /dev/rmt/0 + ## Unix interactive restore ## + ufsrestore if /dev/rmt/0 + + ## Linux ## + restore rf /dev/nst0 + ## Restore 
interactive from the 6th backup on the tape media ## + restore isf 6 /dev/nst0 + + ## FreeBSD restore ufsdump format ## + restore -i -f /dev/nsa0 + +### 13: Start writing at the beginning of the tape (see fig.02) ### + + ## This will overwrite all data on tape ## + mt -f /dev/st1 rewind + + ### Backup home ## + tar cvf /dev/st1 /home + + ## Offline and unload tape ## + mt -f /dev/st0 offline + +To restore from the beginning of the tape: + + mt -f /dev/st0 rewind + tar xvf /dev/st0 + mt -f /dev/st0 offline + +### 14: Start writing after the last tar (see fig.02) ### + + ## This will kee all data written so far ## + mt -f /dev/st1 eom + + ### Backup home ## + tar cvf /dev/st1 /home + + ## Unload ## + mt -f /dev/st0 offline + +### 15: Start writing after tar number 2 (see fig.02) ### + + ## To wrtite after tar number 2 (should be 2+1) + mt -f /dev/st0 asf 3 + tar cvf /dev/st0 /usr + + ## asf equivalent command done using fsf ## + mt -f /dev/sf0 rewind + mt -f /dev/st0 fsf 2 + +To restore tar from tar number 2: + + mt -f /dev/st0 asf 3 + tar xvf /dev/st0 + mt -f /dev/st0 offline + +### How do I verify backup tapes created using tar? ### + +It is important that you do regular full system restorations and service testing, it's the only way to know for sure that the entire system is working correctly. See our [tutorial on verifying tar command tape backups][1] for more information. + +### Sample shell script ### + + #!/bin/bash + # A UNIX / Linux shell script to backup dirs to tape device like /dev/st0 (linux) + # This script make both full and incremental backups. + # You need at two sets of five tapes. Label each tape as Mon, Tue, Wed, Thu and Fri. + # You can run script at midnight or early morning each day using cronjons. + # The operator or sys admin can replace the tape every day after the script has done. + # Script must run as root or configure permission via sudo. 
+ # ------------------------------------------------------------------------- + # Copyright (c) 1999 Vivek Gite + # This script is licensed under GNU GPL version 2.0 or above + # ------------------------------------------------------------------------- + # This script is part of nixCraft shell script collection (NSSC) + # Visit http://bash.cyberciti.biz/ for more information. + # ------------------------------------------------------------------------- + # Last updated on : March-2003 - Added log file support. + # Last updated on : Feb-2007 - Added support for excluding files / dirs. + # ------------------------------------------------------------------------- + LOGBASE=/root/backup/log + + # Backup dirs; do not prefix / + BACKUP_ROOT_DIR="home sales" + + # Get todays day like Mon, Tue and so on + NOW=$(date +"%a") + + # Tape devie name + TAPE="/dev/st0" + + # Exclude file + TAR_ARGS="" + EXCLUDE_CONF=/root/.backup.exclude.conf + + # Backup Log file + LOGFIILE=$LOGBASE/$NOW.backup.log + + # Path to binaries + TAR=/bin/tar + MT=/bin/mt + MKDIR=/bin/mkdir + + # ------------------------------------------------------------------------ + # Excluding files when using tar + # Create a file called $EXCLUDE_CONF using a text editor + # Add files matching patterns such as follows (regex allowed): + # home/vivek/iso + # home/vivek/*.cpp~ + # ------------------------------------------------------------------------ + [ -f $EXCLUDE_CONF ] && TAR_ARGS="-X $EXCLUDE_CONF" + + #### Custom functions ##### + # Make a full backup + full_backup(){ + local old=$(pwd) + cd / + $TAR $TAR_ARGS -cvpf $TAPE $BACKUP_ROOT_DIR + $MT -f $TAPE rewind + $MT -f $TAPE offline + cd $old + } + + # Make a partial backup + partial_backup(){ + local old=$(pwd) + cd / + $TAR $TAR_ARGS -cvpf $TAPE -N "$(date -d '1 day ago')" $BACKUP_ROOT_DIR + $MT -f $TAPE rewind + $MT -f $TAPE offline + cd $old + } + + # Make sure all dirs exits + verify_backup_dirs(){ + local s=0 + for d in $BACKUP_ROOT_DIR + do + if [ ! 
-d /$d ];
+            then
+                echo "Error: /$d directory does not exist!"
+                s=1
+            fi
+        done
+        # If not, just die
+        [ $s -eq 1 ] && exit 1
+    }
+
+    #### Main logic ####
+
+    # Make sure the log dir exists
+    [ ! -d $LOGBASE ] && $MKDIR -p $LOGBASE
+
+    # Verify dirs
+    verify_backup_dirs
+
+    # Okay, let us start the backup procedure:
+    # if it is Monday, make a full backup;
+    # for Tue to Fri, make a partial backup;
+    # on weekends, no backups.
+    case $NOW in
+        Mon) full_backup;;
+        Tue|Wed|Thu|Fri) partial_backup;;
+        *) ;;
+    esac > $LOGFIILE 2>&1
+
+### A note about third-party backup utilities ###
+
+Both Linux and Unix-like systems provide many third-party utilities that you can use to schedule the creation of backups, including tape backups, such as:
+
+- Amanda
+- Bacula
+- rsync
+- duplicity
+- rsnapshot
+
+See also
+
+- Man pages - [mt(1)][2], [mtx(1)][3], [tar(1)][4], [dump(8)][5], [restore(8)][6]
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/hardware/unix-linux-basic-tape-management-commands/
+
+作者:Vivek Gite
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:http://www.cyberciti.biz/faq/unix-verify-tape-backup/
+[2]:http://www.manpager.com/linux/man1/mt.1.html
+[3]:http://www.manpager.com/linux/man1/mtx.1.html
+[4]:http://www.manpager.com/linux/man1/tar.1.html
+[5]:http://www.manpager.com/linux/man8/dump.8.html
+[6]:http://www.manpager.com/linux/man8/restore.8.html
\ No newline at end of file

From a892d62a5ff2793d82f2c9f335c6b49bfd2509cd Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Mon, 30 Nov 2015 16:32:37 +0800
Subject: =?UTF-8?q?20151130-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...0 eSpeak--Text To Speech Tool For Linux.md | 64 +++++++++++++++++++
 1 file changed, 64 insertions(+)
 create mode 100644 
sources/share/20151130 eSpeak--Text To Speech Tool For Linux.md

diff --git a/sources/share/20151130 eSpeak--Text To Speech Tool For Linux.md b/sources/share/20151130 eSpeak--Text To Speech Tool For Linux.md
new file mode 100644
index 0000000000..3fc07db228
--- /dev/null
+++ b/sources/share/20151130 eSpeak--Text To Speech Tool For Linux.md
@@ -0,0 +1,64 @@
+eSpeak: Text To Speech Tool For Linux
+================================================================================
+![Text to speech tool in Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Text-to-speech-Linux.jpg)
+
+[eSpeak][1] is a command line tool for Linux that converts text to speech. It is a compact speech synthesizer, written in C, that supports English and many other languages.
+
+eSpeak reads text from the standard input or from an input file. The voice it generates is nowhere close to a human voice, but eSpeak is still a compact and handy tool if you want to use it in your projects.
+
+Some of the main features of eSpeak are:
+
+- A command line tool for Linux and Windows
+- Speaks text from a file or from stdin
+- Shared library version for use by other programs
+- SAPI5 version for Windows, so it can be used with screen readers and other programs that support the Windows SAPI5 interface
+- Ported to other platforms, including Android, Mac OSX etc.
+- Several voice characteristics to choose from
+- Speech output can be saved as a [.WAV file][2]
+- SSML ([Speech Synthesis Markup Language][3]) is partially supported, along with HTML
+- Tiny in size: the complete program, with language support etc., is under 2 MB
+- Can translate text into phoneme codes, so it could be adapted as a front end for another speech synthesis engine
+- Development tools available for producing and tuning phoneme data
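A few of the features listed above (voice selection, speed control, saving the output to a WAV file) can be combined in a single invocation. Below is a minimal, hedged sketch; the `-v` (voice), `-s` (speed in words per minute) and `-w` (write to WAV) flag names are assumptions based on common eSpeak builds, so check `espeak --help` on your system:

```shell
# Sketch only: flag names (-v, -s, -w) are assumed from common eSpeak builds.
speak_to_wav() {
    # $1 = text to speak, $2 = output WAV file
    if command -v espeak >/dev/null 2>&1; then
        espeak -v en -s 140 -w "$2" "$1" && echo "wrote $2"
    else
        # Report the intended command so the sketch degrades gracefully
        # on machines without eSpeak installed.
        echo "espeak not found; would run: espeak -v en -s 140 -w $2"
    fi
}

speak_to_wav "Hello from eSpeak" hello.wav
```

Either branch prints the name of the target WAV file, so the function behaves predictably whether or not eSpeak is present.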
+
+### Install eSpeak ###
+
+To install eSpeak on an Ubuntu-based system, use the command below in a terminal:
+
+    sudo apt-get install espeak
+
+eSpeak is an old tool, and I presume it should be available in the repositories of other Linux distributions such as Arch Linux, Fedora etc. You can install eSpeak easily using dnf, pacman etc.
+
+To use eSpeak, just run `espeak`, type your text and press Enter to hear it read aloud. Use Ctrl+C to close the running program.
+
+![eSpeak command line](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/eSpeak-example.png)
+
+There are several other options available. You can browse through them in the help section of the program.
+
+### GUI version: Gespeaker ###
+
+If you prefer a GUI over the command line, you can install Gespeaker, which provides a GTK front end to eSpeak.
+
+Use the command below to install Gespeaker:
+
+    sudo apt-get install gespeaker
+
+The interface is straightforward and easy to use. You can explore it all by yourself.
+
+![eSpeak GUI tool for text to speech in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/eSpeak-GUI.png)
+
+While such tools might not be useful for general computing needs, they could be handy if you are working on a project where text-to-speech conversion is required. I'll let you decide how to use this speech synthesizer.
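For project use, the file-reading side of eSpeak lends itself to scripting. The sketch below converts every `.txt` file in a directory to a `.wav` file; the `-f` (read text from a file) and `-w` flags are assumptions based on common eSpeak builds, and when eSpeak is not installed the function only reports the planned conversion:

```shell
# Sketch only: batch text-to-speech; the -f and -w flag names are assumed
# from common eSpeak builds.
txt_to_wav_dir() {
    dir=${1:-.}
    for f in "$dir"/*.txt; do
        [ -e "$f" ] || continue          # directory had no .txt files
        wav="${f%.txt}.wav"
        if command -v espeak >/dev/null 2>&1; then
            espeak -f "$f" -w "$wav"
        fi
        echo "$f -> $wav"                # report each conversion (or plan)
    done
}

# Demo: create one input file, then convert the current directory.
printf 'Hello world\n' > demo.txt
txt_to_wav_dir .
```

The per-file report is printed even when eSpeak is absent, which makes the loop easy to dry-run before pointing it at a real tape of text files.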
+ +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/espeak-text-speech-linux/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://espeak.sourceforge.net/ +[2]:http://en.wikipedia.org/wiki/WAV +[3]:http://en.wikipedia.org/wiki/Speech_Synthesis_Markup_Language \ No newline at end of file From ea8a573ce1372bbb5ea343a2220221063f40344a Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Mon, 30 Nov 2015 22:39:07 +0800 Subject: [PATCH 059/160] [Translated]20151123 How to access Dropbox from the command line in Linux.md --- ... Dropbox from the command line in Linux.md | 99 ------------------- ... Dropbox from the command line in Linux.md | 97 ++++++++++++++++++ 2 files changed, 97 insertions(+), 99 deletions(-) delete mode 100644 sources/tech/20151123 How to access Dropbox from the command line in Linux.md create mode 100644 translated/tech/20151123 How to access Dropbox from the command line in Linux.md diff --git a/sources/tech/20151123 How to access Dropbox from the command line in Linux.md b/sources/tech/20151123 How to access Dropbox from the command line in Linux.md deleted file mode 100644 index 927a046615..0000000000 --- a/sources/tech/20151123 How to access Dropbox from the command line in Linux.md +++ /dev/null @@ -1,99 +0,0 @@ -FSSlc translating - -How to access Dropbox from the command line in Linux -================================================================================ -Cloud storage is everywhere in today's multi-device environment, where people want to access content across multiple devices wherever they go. Dropbox is the most widely used cloud storage service thanks to its elegant UI and flawless multi-platform compatibility. 
The popularity of Dropbox has led to a flurry of official or unofficial Dropbox clients that are available across different operating system platforms. - -Linux has its own share of Dropbox clients: CLI clients as well as GUI-based clients. [Dropbox Uploader][1] is an easy-to-use Dropbox CLI client written in BASH scripting language. In this tutorial, I describe** how to access Dropbox from the command line in Linux by using Dropbox Uploader**. - -### Install and Configure Dropbox Uploader on Linux ### - -To use Dropbox Uploader, download the script and make it executable. - - $ wget https://raw.github.com/andreafabrizi/Dropbox-Uploader/master/dropbox_uploader.sh - $ chmod +x dropbox_uploader.sh - -Make sure that you have installed curl on your system, since Dropbox Uploader runs Dropbox APIs via curl. - -To configure Dropbox Uploader, simply run dropbox_uploader.sh. When you run the script for the first time, it will ask you to grant the script access to your Dropbox account. - - $ ./dropbox_uploader.sh - -![](https://c2.staticflickr.com/6/5739/22860931599_10c08ff15f_c.jpg) - -As instructed above, go to [https://www.dropbox.com/developers/apps][2] on your web browser, and create a new Dropbox app. Fill in the information of the new app as shown below, and enter the app name as generated by Dropbox Uploader. - -![](https://c2.staticflickr.com/6/5745/22932921350_4123d2dbee_c.jpg) - -After you have created a new app, you will see app key/secret on the next page. Make a note of them. - -![](https://c1.staticflickr.com/1/736/22932962610_7db51aa718_c.jpg) - -Enter the app key and secret in the terminal window where dropbox_uploader.sh is running. dropbox_uploader.sh will then generate an oAUTH URL (e.g., https://www.dropbox.com/1/oauth/authorize?oauth_token=XXXXXXXXXXXX). - -![](https://c1.staticflickr.com/1/563/22601635533_423738baed_c.jpg) - -Go to the oAUTH URL generated above on your web browser, and allow access to your Dropbox account. 
- -![](https://c1.staticflickr.com/1/675/23202598606_6110c1a31b_c.jpg) - -This completes Dropbox Uploader configuration. To check whether Dropbox Uploader is successfully authenticated, run the following command. - - $ ./dropbox_uploader.sh info - ----------- - - Dropbox Uploader v0.12 - - > Getting info... - - Name: Dan Nanni - UID: XXXXXXXXXX - Email: my@email_address - Quota: 2048 Mb - Used: 13 Mb - Free: 2034 Mb - -### Dropbox Uploader Examples ### - -To list all contents in the top-level directory: - - $ ./dropbox_uploader.sh list - -To list all contents in a specific folder: - - $ ./dropbox_uploader.sh list Documents/manuals - -To upload a local file to a remote Dropbox folder: - - $ ./dropbox_uploader.sh upload snort.pdf Documents/manuals - -To download a remote file from Dropbox to a local file: - - $ ./dropbox_uploader.sh download Documents/manuals/mysql.pdf ./mysql.pdf - -To download an entire remote folder from Dropbox to a local folder: - - $ ./dropbox_uploader.sh download Documents/manuals ./manuals - -To create a new remote folder on Dropbox: - - $ ./dropbox_uploader.sh mkdir Documents/whitepapers - -To delete an entire remote folder (including all its contents) on Dropbox: - - $ ./dropbox_uploader.sh delete Documents/manuals - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/access-dropbox-command-line-linux.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/nanni -[1]:http://www.andreafabrizi.it/?dropbox_uploader -[2]:https://www.dropbox.com/developers/apps diff --git a/translated/tech/20151123 How to access Dropbox from the command line in Linux.md b/translated/tech/20151123 How to access Dropbox from the command line in Linux.md new file mode 100644 index 0000000000..6c5f73e596 --- /dev/null +++ 
b/translated/tech/20151123 How to access Dropbox from the command line in Linux.md @@ -0,0 +1,97 @@ +Linux 中如何从命令行访问 Dropbox +================================================================================ +在当今这个多设备的环境下,云存储无处不在。无论身处何方,人们都想通过多种设备来从云存储中获取所需的内容。由于优雅的 UI 和完美的跨平台兼容性,Dropbox 已成为最为广泛使用的云存储服务。 Dropbox 的流行已引发了一系列官方或非官方 Dropbox 客户端的出现,它们支持不同的操作系统平台。 + +当然 Linux 平台下也有着自己的 Dropbox 客户端: 既有命令行的,也有图形界面。[Dropbox Uploader][1] 是一个简单易用的 Dropbox 命令行客户端,它是用 BASH 脚本语言所编写的。在这篇教程中,我将描述 **在 Linux 中如何使用 Dropbox Uploader 通过命令行来访问 Dropbox**。 + +### Linux 中安装和配置 Dropbox Uploader ### + +要使用 Dropbox Uploader,你需要下载该脚本并使其可被执行。 + + $ wget https://raw.github.com/andreafabrizi/Dropbox-Uploader/master/dropbox_uploader.sh + $ chmod +x dropbox_uploader.sh + +请确保你已经在系统中安装了 `curl`,因为 Dropbox Uploader 通过 curl 来运行 Dropbox 的 API。 + +要配置 Dropbox Uploader,只需运行 dropbox_uploader.sh 即可。当你第一次运行这个脚本时,它将询问你,以使得它可以访问你的 Dropbox 账户。 + + $ ./dropbox_uploader.sh + +![](https://c2.staticflickr.com/6/5739/22860931599_10c08ff15f_c.jpg) + +如上图所指示的那样,你需要通过浏览器访问 [https://www.dropbox.com/developers/apps][2] 页面,并创建一个新的 Dropbox app。接着像下图那样填入新 app 的相关信息,并输入 app 的名称,它与 Dropbox Uploader 所生成的 app 名称类似。 + +![](https://c2.staticflickr.com/6/5745/22932921350_4123d2dbee_c.jpg) + +在你创建好一个新的 app 之后,你将在下一个页面看到 app key 和 app secret。请记住它们。 + +![](https://c1.staticflickr.com/1/736/22932962610_7db51aa718_c.jpg) + +然后在正运行着 dropbox_uploader.sh 的终端窗口中输入 app key 和 app secret。然后 dropbox_uploader.sh 将产生一个 oAUTH 网址(例如,https://www.dropbox.com/1/oauth/authorize?oauth_token=XXXXXXXXXXXX)。 + +![](https://c1.staticflickr.com/1/563/22601635533_423738baed_c.jpg) + +接着通过浏览器访问那个 oAUTH 网址,并同意访问你的 Dropbox 账户。 + +![](https://c1.staticflickr.com/1/675/23202598606_6110c1a31b_c.jpg) + +这便完成了 Dropbox Uploader 的配置。若要确认 Dropbox Uploader 是否真的被成功地认证了,可以运行下面的命令。 + + $ ./dropbox_uploader.sh info + +---------- + + Dropbox Uploader v0.12 + + > Getting info... 
+ + Name: Dan Nanni + UID: XXXXXXXXXX + Email: my@email_address + Quota: 2048 Mb + Used: 13 Mb + Free: 2034 Mb + +### Dropbox Uploader 示例 ### + +要显示根目录中的所有内容,运行: + + $ ./dropbox_uploader.sh list + +要列出某个特定文件夹中的所有内容,运行: + + $ ./dropbox_uploader.sh list Documents/manuals + +要上传一个本地文件到一个远程的 Dropbox 文件夹,使用: + + $ ./dropbox_uploader.sh upload snort.pdf Documents/manuals + +要从 Dropbox 下载一个远程的文件到本地,使用: + + $ ./dropbox_uploader.sh download Documents/manuals/mysql.pdf ./mysql.pdf + +要从 Dropbox 下载一个完整的远程文件夹到一个本地的文件夹,运行: + + $ ./dropbox_uploader.sh download Documents/manuals ./manuals + +要在 Dropbox 上创建一个新的远程文件夹,使用: + + $ ./dropbox_uploader.sh mkdir Documents/whitepapers + +要完全删除 Dropbox 中某个远程的文件夹(包括它所含的所有内容),运行: + + $ ./dropbox_uploader.sh delete Documents/manuals + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/access-dropbox-command-line-linux.html + +作者:[Dan Nanni][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://www.andreafabrizi.it/?dropbox_uploader +[2]:https://www.dropbox.com/developers/apps From 025d3d1d52d85a312e118d276c4a3852c03724bb Mon Sep 17 00:00:00 2001 From: runningwater Date: Tue, 1 Dec 2015 10:43:08 +0800 Subject: [PATCH 060/160] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E4=B8=AD?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...es 1--HowTo--Use grep Command In Linux or UNIX--Examples.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md index 
b18b40e04c..1db160d73f 100644 --- a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md +++ b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md @@ -1,3 +1,4 @@ +(翻译中 by runningwater) HowTo: Use grep Command In Linux / UNIX – Examples ================================================================================ How do I use grep command on Linux, Apple OS X, and Unix-like operating systems? Can you give me a simple examples of the grep command? @@ -142,7 +143,7 @@ Grep command in action via: http://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/ 作者:Vivek Gite -译者:[译者ID](https://github.com/译者ID) +译者:[runningwater](https://github.com/runningwater) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c4b3f21b2d9772665e71b156f9bba73708249fc4 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 1 Dec 2015 11:58:23 +0800 Subject: [PATCH 061/160] Delete 20151030 80 Linux Monitoring Tools for SysAdmins.md --- ...80 Linux Monitoring Tools for SysAdmins.md | 605 ------------------ 1 file changed, 605 deletions(-) delete mode 100644 sources/share/20151030 80 Linux Monitoring Tools for SysAdmins.md diff --git a/sources/share/20151030 80 Linux Monitoring Tools for SysAdmins.md b/sources/share/20151030 80 Linux Monitoring Tools for SysAdmins.md deleted file mode 100644 index f9384d4635..0000000000 --- a/sources/share/20151030 80 Linux Monitoring Tools for SysAdmins.md +++ /dev/null @@ -1,605 +0,0 @@ - -translation by strugglingyouth -80 Linux Monitoring Tools for SysAdmins -================================================================================ 
-![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-monitoring.jpg) - -The industry is hotting up at the moment, and there are more tools than you can shake a stick at. Here lies the most comprehensive list on the Internet (of Tools). Featuring over 80 ways to your machines. Within this article we outline: - -- Command line tools -- Network related -- System related monitoring -- Log monitoring tools -- Infrastructure monitoring tools - -It’s hard work monitoring and debugging performance problems, but it’s easier with the right tools at the right time. Here are some tools you’ve probably heard of, some you probably haven’t – and when to use them: - -### Top 10 System Monitoring Tools ### - -#### 1. Top #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/top.jpg) - -This is a small tool which is pre-installed in many unix systems. When you want an overview of all the processes or threads running in the system: top is a good tool. You can order these processes on different criteria and the default criteria is CPU. - -#### 2. [htop][1] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/htop.jpg) - -Htop is essentially an enhanced version of top. It’s easier to sort by processes. It’s visually easier to understand and has built in commands for common things you would like to do. Plus it’s fully interactive. - -#### 3. [atop][2] #### - -Atop monitors all processes much like top and htop, unlike top and htop however it has daily logging of the processes for long-term analysis. It also shows resource consumption by all processes. It will also highlight resources that have reached a critical load. - -#### 4. [apachetop][3] #### - -Apachetop monitors the overall performance of your apache webserver. It’s largely based on mytop. It displays current number of reads, writes and the overall number of requests processed. - -#### 5. 
[ftptop][4] #### - -ftptop gives you basic information of all the current ftp connections to your server such as the total amount of sessions, how many are uploading and downloading and who the client is. - -#### 6. [mytop][5] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mytop.jpg) - -mytop is a neat tool for monitoring threads and performance of mysql. It gives you a live look into the database and what queries it’s processing in real time. - -#### 7. [powertop][6] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/powertop.jpg) - -powertop helps you diagnose issues that has to do with power consumption and power management. It can also help you experiment with power management settings to achieve the most efficient settings for your server. You switch tabs with the tab key. - -#### 8. [iotop][7] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iotop.jpg) - -iotop checks the I/O usage information and gives you a top-like interface to that. It displays columns on read and write and each row represents a process. It also displays the percentage of time the process spent while swapping in and while waiting on I/O. - -### Network related monitoring ### - -#### 9. [ntopng][8] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ntopng.jpg) - -ntopng is the next generation of ntop and the tool provides a graphical user interface via the browser for network monitoring. It can do stuff such as: geolocate hosts, get network traffic and show ip traffic distribution and analyze it. - -#### 10. [iftop][9] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iftop.jpg) - -iftop is similar to top, but instead of mainly checking for cpu usage it listens to network traffic on selected network interfaces and displays a table of current usage. 
It can be handy for answering questions such as “Why on earth is my internet connection so slow?!”. - -#### 11. [jnettop][10] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/jnettop.jpg) - -jnettop visualises network traffic in much the same way as iftop does. It also supports customizable text output and a machine-friendly mode to support further analysis. - -12. [bandwidthd][11] - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bandwidthd.jpg) - -BandwidthD tracks usage of TCP/IP network subnets and visualises that in the browser by building a html page with graphs in png. There is a database driven system that supports searching, filtering, multiple sensors and custom reports. - -#### 13. [EtherApe][12] #### - -EtherApe displays network traffic graphically, the more talkative the bigger the node. It either captures live traffic or can read it from a tcpdump. The displayed can also be refined using a network filter with pcap syntax. - -#### 14. [ethtool][13] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ethtool.jpg) - -ethtool is used for displaying and modifying some parameters of the network interface controllers. It can also be used to diagnose Ethernet devices and get more statistics from the devices. - -#### 15. [NetHogs][14] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nethogs.jpg) - -NetHogs breaks down network traffic per protocol or per subnet. It then groups by process. So if there’s a surge in network traffic you can fire up NetHogs and see which process is causing it. - -#### 16. [iptraf][15] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iptraf.jpg) - -iptraf gathers a variety of metrics such as TCP connection packet and byte count, interface statistics and activity indicators, TCP/UDP traffic breakdowns and station packet and byte counts. - -#### 17. 
[ngrep][16] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ngrep.jpg) - -ngrep is grep but for the network layer. It’s pcap aware and will allow to specify extended regular or hexadecimal expressions to match against packets of . - -#### 18. [MRTG][17] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mrtg.jpg) - -MRTG was orginally developed to monitor router traffic, but now it’s able to monitor other network related things as well. It typically collects every five minutes and then generates a html page. It also has the capability of sending warning emails. - -#### 19. [bmon][18] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bmon.jpg) - -Bmon monitors and helps you debug networks. It captures network related statistics and presents it in human friendly way. You can also interact with bmon through curses or through scripting. - -#### 20. traceroute #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/traceroute.jpg) - -Traceroute is a built-in tool for displaying the route and measuring the delay of packets across a network. - -#### 21. [IPTState][19] #### - -IPTState allows you to watch where traffic that crosses your iptables is going and then sort that by different criteria as you please. The tool also allows you to delete states from the table. - -#### 22. [darkstat][20] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/darkstat.jpg) - -Darkstat captures network traffic and calculates statistics about usage. The reports are served over a simple HTTP server and gives you a nice graphical user interface of the graphs. - -#### 23. [vnStat][21] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vnstat.jpg) - -vnStat is a network traffic monitor that uses statistics provided by the kernel which ensures light use of system resources. 
The gathered statistics persists through system reboots. It has color options for the artistic sysadmins. - -#### 24. netstat #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/netstat.jpg) - -Netstat is a built-in tool that displays TCP network connections, routing tables and a number of network interfaces. It’s used to find problems in the network. - -#### 25. ss #### - -Instead of using netstat, it’s however preferable to use ss. The ss command is capable of showing more information than netstat and is actually faster. If you want a summary statistics you can use the command `ss -s`. - -#### 26. [nmap][22] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmap.jpg) - -Nmap allows you to scan your server for open ports or detect which OS is being used. But you could also use this for SQL injection vulnerabilities, network discovery and other means related to penetration testing. - -#### 27. [MTR][23] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mtr.jpg) - -MTR combines the functionality of traceroute and the ping tool into a single network diagnostic tool. When using the tool it will limit the number hops individual packets has to travel while also listening to their expiry. It then repeats this every second. - -#### 28. [Tcpdump][24] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/tcpdump.jpg) - -Tcpdump will output a description of the contents of the packet it just captured which matches the expression that you provided in the command. You can also save the this data for further analysis. - -#### 29. [Justniffer][25] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/justniffer.jpg) - -Justniffer is a tcp packet sniffer. You can choose whether you would like to collect low-level data or high-level data with this sniffer. It also allows you to generate logs in customizable way. 
You could for instance mimic the access log that apache has. - -### System related monitoring ### - -#### 30. [nmon][26] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmon.jpg) - -nmon either outputs the data on screen or saves it in a comma separated file. You can display CPU, memory, network, filesystems, top processes. The data can also be added to a RRD database for further analysis. - -#### 31. [conky][27] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cpulimit.jpg) - -Conky monitors a plethora of different OS stats. It has support for IMAP and POP3 and even support for many popular music players! For the handy person you could extend it with your own scripts or programs using Lua. - -#### 32. [Glances][28] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/glances.jpg) - -Glances monitors your system and aims to present a maximum amount of information in a minimum amount of space. It has the capability to function in a client/server mode as well as monitoring remotely. It also has a web interface. - -#### 33. [saidar][29] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/saidar.jpg) - -Saidar is a very small tool that gives you basic information about your system resources. It displays a full screen of the standard system resources. The emphasis for saidar is being as simple as possible. - -#### 34. [RRDtool][30] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/rrdtool.jpg) - -RRDtool is a tool developed to handle round-robin databases or RRD. RRD aims to handle time-series data like CPU load, temperatures etc. This tool provides a way to extract RRD data in a graphical format. - -#### 35. 
[monit][31] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/monit.jpg) - -Monit has the capability of sending you alerts as well as restarting services if they run into trouble. It’s possible to perform any type of check you could write a script for with monit and it has a web user interface to ease your eyes. - -#### 36. [Linux process explorer][32] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-process-monitor.jpg) - -Linux process explorer is akin to the activity monitor for OSX or the windows equivalent. It aims to be more usable than top or ps. You can view each process and see how much memory usage or CPU it uses. - -#### 37. df #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/df.jpg) - -df is an abbreviation for disk free and is pre-installed program in all unix systems used to display the amount of available disk space for filesystems which the user have access to. - -#### 38. [discus][33] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/discus.jpg) - -Discus is similar to df however it aims to improve df by making it prettier using fancy features as colors, graphs and smart formatting of numbers. - -#### 39. [xosview][34] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/xosview.jpg) - -xosview is a classic system monitoring tool and it gives you a simple overview of all the different parts of the including IRQ. - -#### 40. [Dstat][35] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dstat.jpg) - -Dstat aims to be a replacement for vmstat, iostat, netstat and ifstat. It allows you to view all of your system resources in real-time. The data can then be exported into csv. Most importantly dstat allows for plugins and could thus be extended into areas not yet known to mankind. - -#### 41. 
[Net-SNMP][36] #### - -SNMP is the protocol ‘simple network management protocol’ and the Net-SNMP tool suite helps you collect accurate information about your servers using this protocol. - -#### 42. [incron][37] #### - -Incron allows you to monitor a directory tree and then take action on those changes. If you wanted to copy files to directory ‘b’ once new files appeared in directory ‘a’ that’s exactly what incron does. - -#### 43. [monitorix][38] #### - -Monitorix is lightweight system monitoring tool. It helps you monitor a single machine and gives you a wealth of metrics. It also has a built-in HTTP server to view graphs and a reporting mechanism of all metrics. - -#### 44. vmstat #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vmstat.jpg) - -vmstat or virtual memory statistics is a small built-in tool that monitors and displays a summary about the memory in the machine. - -#### 45. uptime #### - -This small command that quickly gives you information about how long the machine has been running, how many users currently are logged on and the system load average for the past 1, 5 and 15 minutes. - -#### 46. mpstat #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mpstat.jpg) - -mpstat is a built-in tool that monitors cpu usage. The most common command is using `mpstat -P ALL` which gives you the usage of all the cores. You can also get an interval update of the CPU usage. - -#### 47. pmap #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pmap.jpg) - -pmap is a built-in tool that reports the memory map of a process. You can use this command to find out causes of memory bottlenecks. - -#### 48. ps #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ps.jpg) - -The ps command will give you an overview of all the current processes. You can easily select all processes using the command `ps -A` - -#### 49. 
[sar][39] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sar.jpg) - -sar is a part of the sysstat package and helps you to collect, report and save different system metrics. With different commands it will give you CPU, memory and I/O usage among other things. - -#### 50. [collectl][40] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/collectl.jpg) - -Similar to sar collectl collects performance metrics for your machine. By default it shows cpu, network and disk stats but it collects a lot more. The difference to sar is collectl is able to deal with times below 1 second, it can be fed into a plotting tool directly and collectl monitors processes more extensively. - -#### 51. [iostat][41] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iostat.jpg) - -iostat is also part of the sysstat package. This command is used for monitoring system input/output. The reports themselves can be used to change system configurations to better balance input/output load between hard drives in your machine. - -#### 52. free #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/free.jpg) - -This is a built-in command that displays the total amount of free and used physical memory on your machine. It also displays the buffers used by the kernel at that given moment. - -#### 53. /Proc file system #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/procfile.jpg) - -The proc file system gives you a peek into kernel statistics. From these statistics you can get detailed information about the different hardware devices on your machine. Take a look at the [full list of the proc file statistics][42] - -#### 54. [GKrellM][43] #### - -GKrellm is a gui application that monitor the status of your hardware such CPU, main memory, hard disks, network interfaces and many other things. 
It can also monitor and launch a mail reader of your choice. - -#### 55. [Gnome system monitor][44] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/gnome-system-monitor.jpg) - -Gnome system monitor is a basic system monitoring tool that has features looking at process dependencies from a tree view, kill or renice processes and graphs of all server metrics. - -### Log monitoring tools ### - -#### 56. [GoAccess][45] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/goaccess.jpg) - -GoAccess is a real-time web log analyzer which analyzes the access log from either apache, nginx or amazon cloudfront. It’s also possible to output the data into HTML, JSON or CSV. It will give you general statistics, top visitors, 404s, geolocation and many other things. - -#### 57. [Logwatch][46] #### - -Logwatch is a log analysis system. It parses through your system’s logs and creates a report analyzing the areas that you specify. It can give you daily reports with short digests of the activities taking place on your machine. - -#### 58. [Swatch][47] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/swatch.jpg) - -Much like Logwatch Swatch also monitors your logs, but instead of giving reports it watches for regular expression and notifies you via mail or the console when there is a match. It could be used for intruder detection for example. - -#### 59. [MultiTail][48] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/multitail.jpg) - -MultiTail helps you monitor logfiles in multiple windows. You can merge two or more of these logfiles into one. It will also use colors to display the logfiles for easier reading with the help of regular expressions. - -#### System tools #### - -#### 60. 
[acct or psacct][49] #### - -acct or psacct (depending on if you use apt-get or yum) allows you to monitor all the commands a users executes inside the system including CPU and memory time. Once installed you get that summary with the command ‘sa’. - -#### 61. [whowatch][50] #### - -Similar to acct this tool monitors users on your system and allows you to see in real time what commands and processes they are using. It gives you a tree structure of all the processes and so you can see exactly what’s happening. - -#### 62. [strace][51] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/strace.jpg) - -strace is used to diagnose, debug and monitor interactions between processes. The most common thing to do is making strace print a list of system calls made by the program which is useful if the program does not behave as expected. - -#### 63. [DTrace][52] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dtrace.jpg) - -DTrace is the big brother of strace. It dynamically patches live running instructions with instrumentation code. This allows you to do in-depth performance analysis and troubleshooting. However, it’s not for the weak of heart as there is a 1200 book written on the topic. - -#### 64. [webmin][53] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/webmin.jpg) - -Webmin is a web-based system administration tool. It removes the need to manually edit unix configuration files and lets you manage the system remotely if need be. It has a couple of monitoring modules that you can attach to it. - -#### 65. stat #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/stat.jpg) - -Stat is a built-in tool for displaying status information of files and file systems. It will give you information such as when the file was modified, accessed or changed. - -#### 66. 
ifconfig #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ifconfig.jpg) - -ifconfig is a built-in tool used to configure the network interfaces. Behind the scenes network monitor tools use ifconfig to set it into promiscuous mode to capture all packets. You can do it yourself with `ifconfig eth0 promisc` and return to normal mode with `ifconfig eth0 -promisc`. - -#### 67. [ulimit][54] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/unlimit.jpg) - -ulimit is a built-in tool that monitors system resources and keeps a limit so any of the monitored resources don’t go overboard. For instance making a fork bomb where a properly configured ulimit is in place would be totally fine. - -#### 68. [cpulimit][55] #### - -CPUlimit is a small tool that monitors and then limits the CPU usage of a process. It’s particularly useful to make batch jobs not eat up too many CPU cycles. - -#### 69. lshw #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lshw.jpg) - -lshw is a small built-in tool extract detailed information about the hardware configuration of the machine. It can output everything from CPU version and speed to mainboard configuration. - -#### 70. w #### - -W is a built-in command that displays information about the users currently using the machine and their processes. - -#### 71. lsof #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lsof.jpg) - -lsof is a built-in tool that gives you a list of all open files and network connections. From there you can narrow it down to files opened by processes, based on the process name, by a specific user or perhaps kill all processes that belongs to a specific user. - -### Infrastructure monitoring tools ### - -#### 72. Server Density #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/server-density-monitoring.png) - -Our [server monitoring tool][56]! 
It has a web interface that allows you to set alerts and view graphs for all system and network metrics. You can also set up monitoring of websites whether they are up or down. Server Density allows you to set permissions for users and you can extend your monitoring with our plugin infrastructure or api. The service already supports Nagios plugins. - -#### 73. [OpenNMS][57] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/opennms.jpg) - -OpenNMS has four main functional areas: event management and notifications; discovery and provisioning; service monitoring and data collection. It’s designed to be customizable to work in a variety of network environments. - -#### 74. [SysUsage][58] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sysusage.jpg) - -SysUsage monitors your system continuously via Sar and other system commands. It also allows notifications to alarm you once a threshold is reached. SysUsage itself can be run from a centralized place where all the collected statistics are also being stored. It has a web interface where you can view all the stats. - -#### 75. [brainypdm][59] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/brainypdm.jpg) - -brainypdm is a data management and monitoring tool that has the capability to gather data from nagios or another generic source to make graphs. It’s cross-platform, has custom graphs and is web based. - -#### 76. [PCP][60] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pcp.jpg) - -PCP has the capability of collating metrics from multiple hosts and does so efficiently. It also has a plugin framework so you can make it collect specific metrics that is important to you. You can access graph data through either a web interface or a GUI. Good for monitoring large systems. - -#### 77. 
[KDE system guard][61] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/kdesystemguard.jpg) - -This tool is both a system monitor and task manager. You can view server metrics from several machines through the worksheet and if a process needs to be killed or if you need to start a process it can be done within KDE system guard. - -#### 78. [Munin][62] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/munin.jpg) - -Munin is both a network and a system monitoring tool which offers alerts for when metrics go beyond a given threshold. It uses RRDtool to create the graphs and it has web interface to display these graphs. Its emphasis is on plug and play capabilities with a number of plugins available. - -#### 79. [Nagios][63] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nagios.jpg) - -Nagios is system and network monitoring tool that helps you monitor monitor your many servers. It has support for alerting for when things go wrong. It also has many plugins written for the platform. - -#### 80. [Zenoss][64] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zenoss.jpg) - -Zenoss provides a web interface that allows you to monitor all system and network metrics. Moreover it discovers network resources and changes in network configurations. It has alerts for you to take action on and it supports the Nagios plugins. - -#### 81. [Cacti][65] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cacti.jpg) - -(And one for luck!) Cacti is network graphing solution that uses the RRDtool data storage. It allows a user to poll services at predetermined intervals and graph the result. Cacti can be extended to monitor a source of your choice through shell scripts. - -#### 82. 
[Zabbix][66] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zabbix-monitoring.png) - -Zabbix is an open source infrastructure monitoring solution. It can use most databases out there to store the monitoring statistics. The Core is written in C and has a frontend in PHP. If you don’t like installing an agent, Zabbix might be an option for you. - -### Bonus section: ### - -Thanks for your suggestions. It’s an oversight on our part that we’ll have to go back trough and renumber all the headings. In light of that, here’s a short section at the end for some of the Linux monitoring tools recommended by you: - -#### 83. [collectd][67] #### - -Collectd is a Unix daemon that collects all your monitoring statistics. It uses a modular design and plugins to fill in any niche monitoring. This way collectd stays as lightweight and customizable as possible. - -#### 84. [Observium][68] #### - -Observium is an auto-discovering network monitoring platform supporting a wide range of hardware platforms and operating systems. Observium focuses on providing a beautiful and powerful yet simple and intuitive interface to the health and status of your network. - -#### 85. Nload #### - -It’s a command line tool that monitors network throughput. It’s neat because it visualizes the in and and outgoing traffic using two graphs and some additional useful data like total amount of transferred data. You can install it with - - yum install nload - -or - - sudo apt-get install nload - -#### 84. [SmokePing][69] #### - -SmokePing keeps track of the network latencies of your network and it visualises them too. There are a wide range of latency measurement plugins developed for SmokePing. If a GUI is important to you it’s there is an ongoing development to make that happen. - -#### 85. [MobaXterm][70] #### - -If you’re working in windows environment day in and day out. You may feel limited by the terminal Windows provides. 
MobaXterm comes to the rescue and allows you to use many of the terminal commands commonly found in Linux. Which will help you tremendously in your monitoring needs! - -#### 86. [Shinken monitoring][71] #### - -Shinken is a monitoring framework which is a total rewrite of Nagios in python. It aims to enhance flexibility and managing a large environment. While still keeping all your nagios configuration and plugins. - --------------------------------------------------------------------------------- - -via: https://blog.serverdensity.com/80-linux-monitoring-tools-know/ - -作者:[Jonathan Sundqvist][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - - -[a]:https://www.serverdensity.com/ -[1]:http://hisham.hm/htop/ -[2]:http://www.atoptool.nl/ -[3]:https://github.com/JeremyJones/Apachetop -[4]:http://www.proftpd.org/docs/howto/Scoreboard.html -[5]:http://jeremy.zawodny.com/mysql/mytop/ -[6]:https://01.org/powertop -[7]:http://guichaz.free.fr/iotop/ -[8]:http://www.ntop.org/products/ntop/ -[9]:http://www.ex-parrot.com/pdw/iftop/ -[10]:http://jnettop.kubs.info/wiki/ -[11]:http://bandwidthd.sourceforge.net/ -[12]:http://etherape.sourceforge.net/ -[13]:https://www.kernel.org/pub/software/network/ethtool/ -[14]:http://nethogs.sourceforge.net/ -[15]:http://iptraf.seul.org/ -[16]:http://ngrep.sourceforge.net/ -[17]:http://oss.oetiker.ch/mrtg/ -[18]:https://github.com/tgraf/bmon/ -[19]:http://www.phildev.net/iptstate/index.shtml -[20]:https://unix4lyfe.org/darkstat/ -[21]:http://humdi.net/vnstat/ -[22]:http://nmap.org/ -[23]:http://www.bitwizard.nl/mtr/ -[24]:http://www.tcpdump.org/ -[25]:http://justniffer.sourceforge.net/ -[26]:http://nmon.sourceforge.net/pmwiki.php -[27]:http://conky.sourceforge.net/ -[28]:https://github.com/nicolargo/glances -[29]:https://packages.debian.org/sid/utils/saidar -[30]:http://oss.oetiker.ch/rrdtool/ 
-[31]:http://mmonit.com/monit -[32]:http://sourceforge.net/projects/procexp/ -[33]:http://packages.ubuntu.com/lucid/utils/discus -[34]:http://www.pogo.org.uk/~mark/xosview/ -[35]:http://dag.wiee.rs/home-made/dstat/ -[36]:http://www.net-snmp.org/ -[37]:http://inotify.aiken.cz/?section=incron&page=about&lang=en -[38]:http://www.monitorix.org/ -[39]:http://sebastien.godard.pagesperso-orange.fr/ -[40]:http://collectl.sourceforge.net/ -[41]:http://sebastien.godard.pagesperso-orange.fr/ -[42]:http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html -[43]:http://members.dslextreme.com/users/billw/gkrellm/gkrellm.html -[44]:http://freecode.com/projects/gnome-system-monitor -[45]:http://goaccess.io/ -[46]:http://sourceforge.net/projects/logwatch/ -[47]:http://sourceforge.net/projects/swatch/ -[48]:http://www.vanheusden.com/multitail/ -[49]:http://www.gnu.org/software/acct/ -[50]:http://whowatch.sourceforge.net/ -[51]:http://sourceforge.net/projects/strace/ -[52]:http://dtrace.org/blogs/about/ -[53]:http://www.webmin.com/ -[54]:http://ss64.com/bash/ulimit.html -[55]:https://github.com/opsengine/cpulimit -[56]:https://www.serverdensity.com/server-monitoring/ -[57]:http://www.opennms.org/ -[58]:http://sysusage.darold.net/ -[59]:http://sourceforge.net/projects/brainypdm/ -[60]:http://www.pcp.io/ -[61]:https://userbase.kde.org/KSysGuard -[62]:http://munin-monitoring.org/ -[63]:http://www.nagios.org/ -[64]:http://www.zenoss.com/ -[65]:http://www.cacti.net/ -[66]:http://www.zabbix.com/ -[67]:https://collectd.org/ -[68]:http://www.observium.org/ -[69]:http://oss.oetiker.ch/smokeping/ -[70]:http://mobaxterm.mobatek.net/ -[71]:http://www.shinken-monitoring.org/ From 7e54d84a9c03eff22c306c362dee2c2b7e7b1593 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 1 Dec 2015 11:59:36 +0800 Subject: [PATCH 062/160] Create 20151030 80 Linux Monitoring Tools for SysAdmins.md --- ...80 Linux Monitoring Tools for SysAdmins.md | 604 ++++++++++++++++++ 1 file changed, 604 
insertions(+) create mode 100644 translated/share/20151030 80 Linux Monitoring Tools for SysAdmins.md diff --git a/translated/share/20151030 80 Linux Monitoring Tools for SysAdmins.md b/translated/share/20151030 80 Linux Monitoring Tools for SysAdmins.md new file mode 100644 index 0000000000..7c16ca9fc8 --- /dev/null +++ b/translated/share/20151030 80 Linux Monitoring Tools for SysAdmins.md @@ -0,0 +1,604 @@ + +为 Linux 系统管理员准备的80个监控工具 +================================================================================ +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-monitoring.jpg) + +随着行业的不断发展,涌现出的工具比你想象的还要多。这里是一份网上最全面的监控工具列表,收录了超过 80 种监控你的机器的方式。在本文中,我们主要讲述以下方面: + +- 命令行工具 +- 与网络相关的 +- 系统相关的监控工具 +- 日志监控工具 +- 基础设施监控工具 + +监控和调试性能问题非常困难,但用对了工具有时也会很容易。下面这些工具有的你可能听说过,有的可能没有听过,还会说明它们各自适用的场合: + +### 十大系统监控工具 ### + +#### 1. Top #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/top.jpg) + +这是一个被预安装在许多 UNIX 系统中的小工具。当你想要查看在系统中运行的进程或线程时,top 是一个很好的工具。你可以对这些进程以不同的标准进行排序,默认是以 CPU 进行排序的。 + +#### 2. [htop][1] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/htop.jpg) + +htop 实质上是 top 的增强版本。它更容易对进程排序,在视觉上也更容易理解,并且已经内建了许多常用操作。它还是完全可交互的。 + +#### 3. [atop][2] #### + +Atop 和 top,htop 非常相似,它也能监控所有进程,但不同于 top 和 htop 的是,它会记录进程的日志供以后分析。它也能显示所有进程的资源消耗。它还会高亮显示已经达到临界负载的资源。 + +#### 4. [apachetop][3] #### + +Apachetop 会监视 apache 网络服务器的整体性能。它主要是基于 mytop。它会显示当前读写的数量以及已处理请求的总数。 + +#### 5. [ftptop][4] #### + +ftptop 给你提供了当前所有连接到 ftp 服务器的基本信息,如会话总数,正在上传和下载的客户端数量以及客户端信息。 + +#### 6. [mytop][5] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mytop.jpg) + +mytop 是一个很方便的工具,用于监控线程和 mysql 的性能。它能让你实时查看数据库以及正在处理哪些查询。 + +#### 7. [powertop][6] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/powertop.jpg) + +powertop 可以帮助你诊断与电量消耗和电源管理相关的问题。它也可以帮你进行电源管理设置,以实现对你服务器最有效的配置。你可以使用 tab 键进行选项切换。 + +#### 8. 
[iotop][7] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iotop.jpg) + +iotop 用于检查 I/O 的使用情况,并为你提供了一个类似 top 的界面来显示。它按列显示读和写的速率,每行代表一个进程。它还会显示每个进程在交换和等待 I/O 上所花费时间的百分比。 + +### 与网络相关的监控 ### + +#### 9. [ntopng][8] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ntopng.jpg) + +ntopng 是 ntop 的升级版,它提供了一个能使用浏览器进行网络监控的图形用户界面。它还有其他用途,如:定位主机,显示网络流量和 ip 流量分布并能进行分析。 + +#### 10. [iftop][9] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iftop.jpg) + +iftop 类似于 top,但它主要不是检查 cpu 的使用率而是监听网卡的流量,并以表格的形式显示当前的使用量。像“为什么我的网速这么慢呢?!”这样的问题它可以直接回答。 + +#### 11. [jnettop][10] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/jnettop.jpg) + +jnettop 和 iftop 以类似的方式监测网络流量,但显示得更形象。它还支持自定义的文本输出,并能以友好的交互方式来快速分析日志。 + +#### 12. [bandwidthd][11] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bandwidthd.jpg) + +bandwidthd 可以跟踪 TCP/IP 网络子网的使用情况,并能用 png 图片构建一个可在浏览器中查看的 HTML 页面,将其形象化。它有一个数据库驱动系统,支持搜索、过滤、多传感器和自定义报表。 + +#### 13. [EtherApe][12] #### + +EtherApe 以图形化方式显示网络流量,流量越大的节点显示得越大。它可以捕获实时流量信息,也可以从 tcpdump 进行读取。还可以使用具有 pcap 语法的网络过滤器来显示特定信息。 + +#### 14. [ethtool][13] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ethtool.jpg) + +ethtool 用于显示和修改网络接口控制器的一些参数。它也可以用来诊断以太网设备,并获得更多的统计数据。 + +#### 15. [NetHogs][14] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nethogs.jpg) + +NetHogs 不像大多数工具那样按协议或子网来统计网络流量,而是按进程分组计算。所以,当网络流量猛增时,你可以使用 NetHogs 查看是由哪个进程造成的。 + +#### 16. [iptraf][15] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iptraf.jpg) + +iptraf 收集各种指标,如 TCP 连接的数据包和字节数、接口统计和活动指标、TCP/UDP 通信故障以及局域网内各站点的数据包和字节数。 + +#### 17. [ngrep][16] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ngrep.jpg) + +ngrep 就是网络层的 grep。它基于 pcap,允许你指定扩展的正则表达式或十六进制表达式来匹配数据包载荷。 + +#### 18. 
[MRTG][17] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mrtg.jpg) + +MRTG 最初被开发来监控路由器的流量,但现在它也能够监控其他网络相关的东西。它通常每五分钟收集一次数据,然后生成一个 HTML 页面。它还具有发送邮件报警的能力。 + +#### 19. [bmon][18] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bmon.jpg) + +Bmon 能监控并帮助你调试网络。它能捕获网络相关的统计数据,并以友好的方式进行展示。你还可以通过脚本与 bmon 进行交互。 + +#### 20. traceroute #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/traceroute.jpg) + +Traceroute 是一个内置工具,能显示路由并测量数据包在网络中传输的延时。 + +#### 21. [IPTState][19] #### + +IPTState 可以让你观察流量如何经过 iptables,并按照你指定的条件进行排序。该工具还允许你从表中删除状态信息。 + +#### 22. [darkstat][20] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/darkstat.jpg) + +Darkstat 能捕获网络流量并计算统计的数据。该报告需要在浏览器中进行查看,它为你提供了一个非常棒的图形用户界面。 + +#### 23. [vnStat][21] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vnstat.jpg) + +vnStat 是一个网络流量监控工具,它的数据统计是由内核进行提供的,其消耗的系统资源非常少。系统重新启动后,它收集的数据仍然存在。它具有颜色选项供系统管理员使用。 + +#### 24. netstat #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/netstat.jpg) + +netstat 是一个内置的工具,它能显示 TCP 网络连接,路由表和网络接口数量,被用来在网络中查找问题。 + +#### 25. ss #### + +与其使用 netstat,不如使用 ss。ss 命令能够显示的信息比 netstat 更多,也更快。如果你想查看统计结果的总信息,你可以使用命令 `ss -s`。 + +#### 26. [nmap][22] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmap.jpg) + +Nmap 可以扫描你服务器开放的端口并且可以检测正在使用哪个操作系统。你还可以将它用于 SQL 注入漏洞检测、网络发现以及其他渗透测试相关的用途。 + +#### 27. [MTR][23] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mtr.jpg) + +MTR 将 traceroute 和 ping 的功能结合到了一个网络诊断工具中。使用该工具时,它会限制单个数据包要经过的跳数,并监听它们的失效情况,然后每秒重复一次。 + +#### 28. [Tcpdump][24] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/tcpdump.jpg) + +Tcpdump 会捕获并输出与命令中指定表达式相匹配的数据包信息。你还可以将此数据保存下来进一步分析。 + +#### 29. 
[Justniffer][25] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/justniffer.jpg) + +Justniffer 是一个 tcp 数据包嗅探器。你可以用它选择收集低级别的数据还是高级别的数据。它还可以让你以自定义方式生成日志,比如模仿 Apache 的访问日志。 + +### 与系统有关的监控 ### + +#### 30. [nmon][26] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmon.jpg) + +nmon 将数据输出到屏幕上,或将其保存在一个以逗号分隔的文件中。你可以查看 CPU,内存,网络,文件系统,top 进程。数据也可以被添加到 RRD 数据库中用于进一步分析。 + +#### 31. [conky][27] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cpulimit.jpg) + +Conky 能监视许多不同的系统统计数据。它支持 IMAP 和 POP3,甚至支持许多流行的音乐播放器!动手能力强的话,你还可以使用自己的 Lua 脚本或程序来对它进行扩展。 + +#### 32. [Glances][28] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/glances.jpg) + +使用 Glances 监控你的系统,其旨在使用最小的空间为你呈现最多的信息。它可以在客户端/服务器端模式下运行,也有远程监控的能力。它也有一个 Web 界面。 + +#### 33. [saidar][29] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/saidar.jpg) + +Saidar 是一个非常小的工具,为你提供有关系统资源的基础信息。它将系统资源在全屏进行显示。其重点是让 saidar 尽可能简单。 + +#### 34. [RRDtool][30] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/rrdtool.jpg) + +RRDtool 是用来处理 RRD 数据库的工具。RRDtool 旨在处理时间序列数据,如 CPU 负载,温度等。该工具提供了一种方法来提取 RRD 数据并以图形界面显示。 + +#### 35. [monit][31] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/monit.jpg) + +monit 能在服务出现故障时发送警报并重新启动服务。只要你能写出检查脚本,就可以让 monit 执行任何类型的检查,它还有一个 Web 用户界面,让你看起来更省力。 + +#### 36. [Linux process explorer][32] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-process-monitor.jpg) + +Linux process explorer 类似于 OSX 的活动监视器或 Windows 中的对应工具。它的目标是比 top 或 ps 更易用。你可以查看每个进程,了解其内存和 CPU 的使用情况。 + +#### 37. df #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/df.jpg) + +df 是 disk free 的缩写,它是所有 UNIX 系统预装的程序,用来显示用户有访问权限的文件系统的可用磁盘空间。 + +#### 38. 
[discus][33] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/discus.jpg) + +Discus 类似于 df,旨在通过颜色、图形和智能的数字格式化等更漂亮的特性来改进 df。 + +#### 39. [xosview][34] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/xosview.jpg) + +xosview 是一款经典的系统监控工具,它为你提供包括 IRQ 在内的系统各个部分的简单总览。 + +#### 40. [Dstat][35] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dstat.jpg) + +Dstat 旨在替代 vmstat,iostat,netstat 和 ifstat。它可以让你实时查看所有的系统资源。这些数据可以导出为 CSV。最重要的是 dstat 允许使用插件,因此其可以扩展到更多领域。 + +#### 41. [Net-SNMP][36] #### + +SNMP 是“简单网络管理协议”,Net-SNMP 工具套件使用该协议可帮助你收集服务器的准确信息。 + +#### 42. [incron][37] #### + +Incron 允许你监控一个目录树,然后对这些变化采取措施。如果你想将目录‘a’中的新文件复制到目录‘b’,这正是 incron 能做的。 + +#### 43. [monitorix][38] #### + +Monitorix 是一个轻量级的系统监控工具。它可以帮助你监控一台机器,并为你提供丰富的指标。它也有一个内置的 HTTP 服务器,来查看图表和所有指标的报告。 + +#### 44. vmstat #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vmstat.jpg) + +vmstat(virtual memory statistics)是一个小的内置工具,能监控和显示机器的内存。 + +#### 45. uptime #### + +这个小程序能快速显示你机器运行了多久,目前有多少用户登录和系统过去1分钟,5分钟和15分钟的平均负载。 + +#### 46. mpstat #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mpstat.jpg) + +mpstat 是一个内置的工具,能监视 cpu 的使用情况。最常见的用法是 `mpstat -P ALL`,它会给出所有 CPU 核心的使用情况。你也可以让它按固定间隔更新 CPU 的使用情况。 + +#### 47. pmap #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pmap.jpg) + +pmap 是一个内置的工具,报告一个进程的内存映射。你可以使用这个命令来找出内存瓶颈的原因。 + +#### 48. ps #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ps.jpg) + +该命令将给你当前所有进程的概述。你可以使用 `ps -A` 命令轻松查看所有进程。 + +#### 49. [sar][39] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sar.jpg) + +sar 是 sysstat 包的一部分,可以帮助你收集,报告和保存不同的系统指标。使用不同的参数,它可以给出 CPU、内存和 I/O 的使用情况等。 + +#### 50. 
[collectl][40] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/collectl.jpg) + +类似于 sar,collectl 收集你机器的性能指标。默认情况下,显示 cpu,网络和磁盘统计数据,但它实际收集了很多信息。与 sar 不同的是,collectl 能够处理小于 1 秒的时间间隔,其输出可以直接送入绘图工具,并且它对进程的监控更为全面。 + +#### 51. [iostat][41] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iostat.jpg) + +iostat 也是 sysstat 包的一部分。此命令用于监控系统的输入/输出。其报告可以用来进行系统调优,以更好地调节你机器上硬盘的输入/输出负载。 + +#### 52. free #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/free.jpg) + +这是一个内置的命令用于显示你机器上可用的内存大小以及已使用的内存大小。它还可以显示某时刻内核所使用的缓冲区大小。 + +#### 53. /Proc 文件系统 #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/procfile.jpg) + +proc 文件系统可以让你查看内核的统计信息。从这些统计数据可以得到你机器上不同硬件设备的详细信息。看看这个 [proc 文件统计的完整列表][42]。 + +#### 54. [GKrellM][43] #### + +GKrellM 是一个图形应用程序,用于监控你硬件的状态信息,如 CPU、内存、硬盘、网络接口等。它也可以监视并启动你所选择的邮件阅读器。 + +#### 55. [Gnome 系统监控器][44] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/gnome-system-monitor.jpg) + +Gnome 系统监控器是一个基本的系统监控工具,其能通过一个树状结构来查看进程的依赖关系,能杀死及调整进程优先级,还能以图表形式显示所有服务器的指标。 + +### 日志监控工具 ### + +#### 56. [GoAccess][45] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/goaccess.jpg) + +GoAccess 是一个实时的网络日志分析器,它能分析 apache, nginx 和 amazon cloudfront 的访问日志。它也可以将数据输出成 HTML,JSON 或 CSV 格式。它会给你总体统计信息、最活跃的访客、404 页面、访客地理位置等等。 + +#### 57. [Logwatch][46] #### + +Logwatch 是一个日志分析系统。它通过分析系统的日志,并为你所指定的区域创建一个分析报告。它可以每天生成一份报告,简要汇总机器上发生的活动,让你花费更少的时间来分析日志。 + +#### 58. [Swatch][47] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/swatch.jpg) + +像 Logwatch 一样,Swatch 也监控你的日志,但不是给你一个报告,它会匹配你定义的正则表达式,当匹配到后会通过邮件或控制台通知你。它可用于检测入侵者。 + +#### 59. [MultiTail][48] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/multitail.jpg) + +MultiTail 可帮助你在多窗口下监控日志文件。你可以将这些日志文件合并成一个。它还能借助正则表达式用不同的颜色显示日志文件,方便你阅读。 + +#### 系统工具 #### + +#### 60. 
[acct or psacct][49] #### + +acct 也称 psacct(取决于你使用 apt-get 还是 yum)可以监控所有用户执行的命令,包括占用的 CPU 时间和内存。安装完成后,你可以使用命令 `sa` 来查看汇总。 + +#### 61. [whowatch][50] #### + +类似 acct,这个工具监控系统上所有的用户,并允许你实时查看他们正在执行的命令及运行的进程。它将所有进程以树状结构输出,这样你就可以清楚地看到到底发生了什么。 + +#### 62. [strace][51] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/strace.jpg) + +strace 被用于诊断,调试和监控程序之间的相互调用过程。最常见的用法是让 strace 打印程序所发起的系统调用列表,这在程序的行为不符合预期时非常有用。 + +#### 63. [DTrace][52] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dtrace.jpg) + +DTrace 可以说是 strace 的大哥。它将检测代码动态地修补到正在运行的指令上。这让你可以进行深入的性能分析和故障排查。不过它并不适合胆小的人,有一本 1200 页的书专门讲述这一主题。 + +#### 64. [webmin][53] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/webmin.jpg) + +Webmin 是一个基于 Web 的系统管理工具。它使你无需手动编辑 UNIX 配置文件,并允许你远程管理系统。它有一些可以挂载上去的监控模块。 + +#### 65. stat #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/stat.jpg) + +Stat 是一个内置的工具,用于显示文件和文件系统的状态信息。它会显示文件何时被修改、访问或更改。 + +#### 66. ifconfig #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ifconfig.jpg) + +ifconfig 是一个内置的工具用于配置网络接口。大多数网络监控工具背后都使用 ifconfig 将其设置成混杂模式来捕获所有的数据包。你可以手动执行 `ifconfig eth0 promisc` 进入混杂模式,并使用 `ifconfig eth0 -promisc` 返回正常模式。 + +#### 67. [ulimit][54] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/unlimit.jpg) + +ulimit 是一个内置的工具,能监控系统资源并对其设置上限,防止任何被监控的资源超标。比如 fork 炸弹,在正确配置了 ulimit 的机器上就完全掀不起风浪。 + +#### 68. [cpulimit][55] #### + +CPULimit 是一个小工具,用于监控并限制进程对 CPU 的使用率。它在防止批处理作业占用过多 CPU 周期时特别有用。 + +#### 69. lshw #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lshw.jpg) + +lshw 是一个小的内置工具,能提取关于本机硬件配置的详细信息。从 CPU 版本和主频到主板配置都能输出。 + +#### 70. w #### + +w 是一个内置命令,用于显示当前登录用户的信息及他们所运行的进程。 + +#### 71. 
lsof #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lsof.jpg) + +lsof 是一个内置的工具,可让你列出所有打开的文件和网络连接。你可以据此筛选出由某个进程打开的文件、按进程名筛选、按特定用户筛选,或者杀死属于某个用户的所有进程。 + +### 基础架构监控工具 ### + +#### 72. Server Density #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/server-density-monitoring.png) + +我们的 [服务器监控工具][56]!它有一个 web 界面,使你可以进行报警设置并可以通过图表来查看所有系统和网络的指标。你还可以设置对网站的监控,检查它们是在线还是宕机。Server Density 允许你设置用户的权限,你可以通过我们的插件架构或 api 来扩展你的监控。该服务已经支持 Nagios 的插件了。 + +#### 73. [OpenNMS][57] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/opennms.jpg) + +OpenNMS 主要有四个功能区:事件管理和通知、发现和部署、服务监控以及数据收集。其设计目标是可定制,以适应多种网络环境。 + +#### 74. [SysUsage][58] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sysusage.jpg) + +SysUsage 通过 Sar 和其他系统命令持续监控你的系统。一旦达到阈值它也可以进行报警通知。SysUsage 本身可以在一个集中的地方运行,所有收集到的统计数据也存储在那里。它有一个 Web 界面可以让你查看所有的统计数据。 + +#### 75. [brainypdm][59] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/brainypdm.jpg) + +brainypdm 是一个数据管理和监控工具,它能收集来自 nagios 或其它通用数据源的数据并绘制图形。它是跨平台的,可自定义图形,并且基于 Web。 + +#### 76. [PCP][60] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pcp.jpg) + +PCP 可以高效地收集并汇总来自多个主机的指标。它也有一个插件框架,你可以用插件让它收集对你而言重要的特定指标。你可以通过 Web 界面或 GUI 访问图形数据。它比较适合监控大型系统。 + +#### 77. [KDE 系统卫士][61] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/kdesystemguard.jpg) + +这个工具既是一个系统监控器也是一个任务管理器。你可以通过工作表来查看多台机器的服务指标,如果需要杀死一个进程或者启动一个进程,都可以在 KDE 系统卫士中完成。 + +#### 78. [Munin][62] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/munin.jpg) + +Munin 既是一个网络也是系统监控工具,当一个指标超出给定的阈值时它会提供报警机制。它运用 RRDtool 创建图表,并且有 Web 界面来显示这些图表。它强调的是即插即用的能力,并有许多可用的插件。 + +#### 79. [Nagios][63] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nagios.jpg) + +Nagios 是系统和网络监控工具,可帮助你监控多台服务器。当发生错误时它也有报警功能。它的平台也有很多的插件。 + +#### 80. 
[Zenoss][64] ####
+
+![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zenoss.jpg)
+
+Zenoss 提供了一个 Web 界面,使你可以监控所有的系统和网络指标。此外,它能自动发现网络资源和修改网络配置,并且会在你需要采取行动时提醒你。它也支持 Nagios 的插件。
+
+#### 81. [Cacti][65] ####
+
+![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cacti.jpg)
+
+(和上一个一样!) Cacti 是一个网络图形解决方案,其使用 RRDtool 进行数据存储。它允许用户按预定的时间间隔对服务进行轮询,并将结果绘制成图形。Cacti 可以通过 shell 脚本进行扩展,来监控你所选择的数据来源。
+
+#### 82. [Zabbix][66] ####
+
+![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zabbix-monitoring.png)
+
+Zabbix 是一个开源的基础设施监控解决方案。它可以使用多种数据库来存放监控统计信息。其核心用 C 语言编写,Web 前端使用 PHP。如果你不喜欢安装代理程序,Zabbix 或许是一个最好的选择。
+
+### 附加部分: ###
+
+感谢您的建议。这是我们的一个附加部分,由于我们需要重新编排所有的标题,鉴于此,我们在最后增加了这个简短的部分,根据您的建议添加一些 Linux 监控工具:
+
+#### 83. [collectd][67] ####
+
+Collectd 是一个收集所有监控数据的 Unix 守护进程。它采用了模块化设计,并使用插件来填补功能上的空白,这样能使 collectd 保持轻量级并可进行定制。
+
+#### 84. [Observium][68] ####
+
+Observium 是一个自动发现网络的监控平台,支持多种硬件平台和操作系统。Observium 专注于提供一个精美、功能强大、简洁直观的界面来显示网络的健康和状态。
+
+#### 85. Nload ####
+
+这是一个用于监控网络吞吐量的命令行工具。它的展示很简洁,用两张图表以及传输数据总量等一些有用的数据,对进站和出站流量进行可视化。你可以使用如下方法安装它:
+
+    yum install nload
+
+或者
+
+    sudo apt-get install nload
+
+#### 86. [SmokePing][69] ####
+
+SmokePing 可以跟踪你的网络延迟,并对其进行可视化。SmokePing 有许多延迟测量插件。如果图形用户界面对你来说非常重要,现在有一个正在开发中的插件来实现此功能。
+
+#### 87. [MobaXterm][70] ####
+
+如果你整天在 windows 环境下工作,你可能会觉得受 Windows 终端窗口的限制。MobaXterm 正是为此而生,它允许你使用许多类似 Linux 中的终端功能。这将会极大地满足你在监控方面的需求!
+
+#### 88. 
[Shinken monitoring][71] ####
+
+Shinken 是一个监控框架,它是用 Python 对 Nagios 的完全重写。其目的是在保留所有 Nagios 配置和插件的同时,增强灵活性并适应更大规模的环境。
+
+--------------------------------------------------------------------------------
+
+via: https://blog.serverdensity.com/80-linux-monitoring-tools-know/
+
+作者:[Jonathan Sundqvist][a]
+译者:[strugglingyouth](https://github.com/strugglingyouth)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+
+[a]:https://www.serverdensity.com/
+[1]:http://hisham.hm/htop/
+[2]:http://www.atoptool.nl/
+[3]:https://github.com/JeremyJones/Apachetop
+[4]:http://www.proftpd.org/docs/howto/Scoreboard.html
+[5]:http://jeremy.zawodny.com/mysql/mytop/
+[6]:https://01.org/powertop
+[7]:http://guichaz.free.fr/iotop/
+[8]:http://www.ntop.org/products/ntop/
+[9]:http://www.ex-parrot.com/pdw/iftop/
+[10]:http://jnettop.kubs.info/wiki/
+[11]:http://bandwidthd.sourceforge.net/
+[12]:http://etherape.sourceforge.net/
+[13]:https://www.kernel.org/pub/software/network/ethtool/
+[14]:http://nethogs.sourceforge.net/
+[15]:http://iptraf.seul.org/
+[16]:http://ngrep.sourceforge.net/
+[17]:http://oss.oetiker.ch/mrtg/
+[18]:https://github.com/tgraf/bmon/
+[19]:http://www.phildev.net/iptstate/index.shtml
+[20]:https://unix4lyfe.org/darkstat/
+[21]:http://humdi.net/vnstat/
+[22]:http://nmap.org/
+[23]:http://www.bitwizard.nl/mtr/
+[24]:http://www.tcpdump.org/
+[25]:http://justniffer.sourceforge.net/
+[26]:http://nmon.sourceforge.net/pmwiki.php
+[27]:http://conky.sourceforge.net/
+[28]:https://github.com/nicolargo/glances
+[29]:https://packages.debian.org/sid/utils/saidar
+[30]:http://oss.oetiker.ch/rrdtool/
+[31]:http://mmonit.com/monit
+[32]:http://sourceforge.net/projects/procexp/
+[33]:http://packages.ubuntu.com/lucid/utils/discus
+[34]:http://www.pogo.org.uk/~mark/xosview/
+[35]:http://dag.wiee.rs/home-made/dstat/
+[36]:http://www.net-snmp.org/ 
+[37]:http://inotify.aiken.cz/?section=incron&page=about&lang=en +[38]:http://www.monitorix.org/ +[39]:http://sebastien.godard.pagesperso-orange.fr/ +[40]:http://collectl.sourceforge.net/ +[41]:http://sebastien.godard.pagesperso-orange.fr/ +[42]:http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html +[43]:http://members.dslextreme.com/users/billw/gkrellm/gkrellm.html +[44]:http://freecode.com/projects/gnome-system-monitor +[45]:http://goaccess.io/ +[46]:http://sourceforge.net/projects/logwatch/ +[47]:http://sourceforge.net/projects/swatch/ +[48]:http://www.vanheusden.com/multitail/ +[49]:http://www.gnu.org/software/acct/ +[50]:http://whowatch.sourceforge.net/ +[51]:http://sourceforge.net/projects/strace/ +[52]:http://dtrace.org/blogs/about/ +[53]:http://www.webmin.com/ +[54]:http://ss64.com/bash/ulimit.html +[55]:https://github.com/opsengine/cpulimit +[56]:https://www.serverdensity.com/server-monitoring/ +[57]:http://www.opennms.org/ +[58]:http://sysusage.darold.net/ +[59]:http://sourceforge.net/projects/brainypdm/ +[60]:http://www.pcp.io/ +[61]:https://userbase.kde.org/KSysGuard +[62]:http://munin-monitoring.org/ +[63]:http://www.nagios.org/ +[64]:http://www.zenoss.com/ +[65]:http://www.cacti.net/ +[66]:http://www.zabbix.com/ +[67]:https://collectd.org/ +[68]:http://www.observium.org/ +[69]:http://oss.oetiker.ch/smokeping/ +[70]:http://mobaxterm.mobatek.net/ +[71]:http://www.shinken-monitoring.org/ From 1cbef0836a640aebb30360f2d67f563574b2af0e Mon Sep 17 00:00:00 2001 From: runningwater Date: Tue, 1 Dec 2015 12:15:31 +0800 Subject: [PATCH 063/160] =?UTF-8?q?=E4=BB=A3zky001=E6=8F=90=E4=BA=A4?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...grep Command In Linux or UNIX--Examples.md | 152 ------------------ ...grep Command In Linux or UNIX--Examples.md | 143 ++++++++++++++++ 2 files changed, 143 insertions(+), 152 deletions(-) delete mode 100644 sources/tech/Linux or UNIX grep Command Tutorial series/20151127 
Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md create mode 100644 translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md diff --git a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md deleted file mode 100644 index 1db160d73f..0000000000 --- a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md +++ /dev/null @@ -1,152 +0,0 @@ -(翻译中 by runningwater) -HowTo: Use grep Command In Linux / UNIX – Examples -================================================================================ -How do I use grep command on Linux, Apple OS X, and Unix-like operating systems? Can you give me a simple examples of the grep command? - -The grep command is used to search text or searches the given file for lines containing a match to the given strings or words. By default, grep displays the matching lines. Use grep to search for lines of text that match one or many regular expressions, and outputs only the matching lines. grep is considered as one of the most useful commands on Linux and Unix-like operating systems. - -### Did you know? 
### - -The name, "grep", derives from the command used to perform a similar operation, using the Unix/Linux text editor ed: - - g/re/p - -### The grep command syntax ### - -The syntax is as follows: - - grep 'word' filename - grep 'word' file1 file2 file3 - grep 'string1 string2' filename - cat otherfile | grep 'something' - command | grep 'something' - command option1 | grep 'data' - grep --color 'data' fileName - -### How do I use grep command to search a file? ### - -Search /etc/passwd file for boo user, enter: - - $ grep boo /etc/passwd - -Sample outputs: - - foo:x:1000:1000:foo,,,:/home/foo:/bin/ksh - -You can force grep to ignore word case i.e match boo, Boo, BOO and all other combination with the -i option: - - $ grep -i "boo" /etc/passwd - -### Use grep recursively ### - -You can search recursively i.e. read all files under each directory for a string "192.168.1.5" - - $ grep -r "192.168.1.5" /etc/ - -OR - - $ grep -R "192.168.1.5" /etc/ - -Sample outputs: - - /etc/ppp/options:# ms-wins 192.168.1.50 - /etc/ppp/options:# ms-wins 192.168.1.51 - /etc/NetworkManager/system-connections/Wired connection 1:addresses1=192.168.1.5;24;192.168.1.2; - -You will see result for 192.168.1.5 on a separate line preceded by the name of the file (such as /etc/ppp/options) in which it was found. The inclusion of the file names in the output data can be suppressed by using the -h option as follows: - - $ grep -h -R "192.168.1.5" /etc/ - -OR - - $ grep -hR "192.168.1.5" /etc/ - -Sample outputs: - - # ms-wins 192.168.1.50 - # ms-wins 192.168.1.51 - addresses1=192.168.1.5;24;192.168.1.2; - -### Use grep to search words only ### - -When you search for boo, grep will match fooboo, boo123, barfoo35 and more. You can force the grep command to select only those lines containing matches that form whole words i.e. 
match only boo word: - - $ grep -w "boo" file - -### Use grep to search 2 different words ### - -Use the egrep command as follows: - - $ egrep -w 'word1|word2' /path/to/file - -### Count line when words has been matched ### - -The grep can report the number of times that the pattern has been matched for each file using -c (count) option: - - $ grep -c 'word' /path/to/file - -Pass the -n option to precede each line of output with the number of the line in the text file from which it was obtained: - - $ grep -n 'root' /etc/passwd - -Sample outputs: - - 1:root:x:0:0:root:/root:/bin/bash - 1042:rootdoor:x:0:0:rootdoor:/home/rootdoor:/bin/csh - 3319:initrootapp:x:0:0:initrootapp:/home/initroot:/bin/ksh - -### Grep invert match ### - -You can use -v option to print inverts the match; that is, it matches only those lines that do not contain the given word. For example print all line that do not contain the word bar: - - $ grep -v bar /path/to/file - -### UNIX / Linux pipes and grep command ### - -grep command often used with [shell pipes][1]. In this example, show the name of the hard disk devices: - - # dmesg | egrep '(s|h)d[a-z]' - -Display cpu model name: - - # cat /proc/cpuinfo | grep -i 'Model' - -However, above command can be also used as follows without shell pipe: - - # grep -i 'Model' /proc/cpuinfo - -Sample outputs: - - model : 30 - model name : Intel(R) Core(TM) i7 CPU Q 820 @ 1.73GHz - model : 30 - model name : Intel(R) Core(TM) i7 CPU Q 820 @ 1.73GHz - -### How do I list just the names of matching files? 
###
-
-Use the -l option to list file name whose contents mention main():
-
-    $ grep -l 'main' *.c
-
-Finally, you can force grep to display output in colors, enter:
-
-    $ grep --color vivek /etc/passwd
-
-Sample outputs:
-
-![Grep command in action](http://files.cyberciti.biz/uploads/faq/2007/08/grep_command_examples.png)
-
-Grep command in action
-
---------------------------------------------------------------------------------
-
-via: http://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/
-
-作者:Vivek Gite
-译者:[runningwater](https://github.com/runningwater)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-
-[1]:http://bash.cyberciti.biz/guide/Pipes
\ No newline at end of file
diff --git a/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md
new file mode 100644
index 0000000000..b539b9a4a8
--- /dev/null
+++ b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md
@@ -0,0 +1,143 @@
+grep 命令用于在给定的文件中搜索匹配指定字符串或单词的行。默认情况下,grep 会显示匹配到的行。可以使用 grep 搜索匹配一个或多个正则表达式的文本行,并且只输出匹配到的行。grep 被认为是 Linux 和类 Unix 操作系统上最有用的命令之一。
+
+### 你知道吗? ###
+
+“grep” 这个名字,来源于在 Unix/Linux 文本编辑器 ed 中用来执行类似操作的命令:
+
+    g/re/p
+
+### grep 命令语法 ###
+
+语法如下所示:
+
+    grep 'word' filename
+    grep 'word' file1 file2 file3
+    grep 'string1 string2' filename
+    cat otherfile | grep 'something'
+    command | grep 'something'
+    command option1 | grep 'data'
+    grep --color 'data' fileName
+
+### 怎样使用 grep 命令搜索文件 ###
+
+在 /etc/passwd 文件中搜索 boo 用户,输入:
+
+    $ grep boo /etc/passwd
+
+输出样例:
+
+    foo:x:1000:1000:foo,,,:/home/foo:/bin/ksh
+
+可以使用 -i 选项,强制 grep 忽略单词大小写,即匹配 boo、Boo、BOO 以及所有其他的组合:
+
+    $ grep -i "boo" /etc/passwd
+
+### 递归使用 grep ###
+
+可以递归地进行搜索,即
读取每个目录下的所有文件,来搜索字符串 “192.168.1.5”:
+
+    $ grep -r "192.168.1.5" /etc/
+
+或者:
+
+    $ grep -R "192.168.1.5" /etc/
+
+输出样例:
+
+    /etc/ppp/options:# ms-wins 192.168.1.50
+    /etc/ppp/options:# ms-wins 192.168.1.51
+    /etc/NetworkManager/system-connections/Wired connection 1:addresses1=192.168.1.5;24;192.168.1.2;
+
+你会看到 192.168.1.5 的搜索结果单独显示在一行,并且前面带有找到它的文件名(例如 /etc/ppp/options)。可以使用 -h 选项来禁止在输出中包含文件名:
+
+    $ grep -h -R "192.168.1.5" /etc/
+
+或者
+
+    $ grep -hR "192.168.1.5" /etc/
+
+输出样例:
+
+    # ms-wins 192.168.1.50
+    # ms-wins 192.168.1.51
+    addresses1=192.168.1.5;24;192.168.1.2;
+
+### 使用 grep 只搜索单词 ###
+
+当搜索 boo 的时候,grep 也会匹配 fooboo、boo123、barfoo35 等等。可以使用 -w 选项,强制 grep 只选择包含整个单词的匹配行,即只匹配 boo 这个单词:
+
+    $ grep -w "boo" file
+
+### 使用 egrep 搜索两个不同的单词 ###
+
+使用 egrep 命令,如下:
+
+    $ egrep -w 'word1|word2' /path/to/file
+
+### 统计匹配到的行数 ###
+
+grep 可以通过 -c(count)选项,报告每个文件中模式被匹配到的次数:
+
+    $ grep -c 'word' /path/to/file
+
+使用 -n 选项可以在每行输出的前面加上该行在文件中的行号:
+
+    $ grep -n 'root' /etc/passwd
+
+输出样例:
+
+    1:root:x:0:0:root:/root:/bin/bash
+    1042:rootdoor:x:0:0:rootdoor:/home/rootdoor:/bin/csh
+    3319:initrootapp:x:0:0:initrootapp:/home/initroot:/bin/ksh
+
+### 反转匹配 ###
+
+可以使用 -v 选项打印反转的匹配,即只匹配那些不包含给定单词的行。例如,打印所有不包含 bar 这个词的行:
+
+    $ grep -v bar /path/to/file
+
+### UNIX / Linux 管道和 grep 命令 ###
+
+grep 命令经常与 [shell 管道][1] 结合使用。在本例中,显示硬盘设备的名字:
+
+    # dmesg | egrep '(s|h)d[a-z]'
+
+显示 CPU 型号:
+
+    # cat /proc/cpuinfo | grep -i 'Model'
+
+不过,上面的命令也可以不使用管道,如下:
+
+    # grep -i 'Model' /proc/cpuinfo
+
+输出样例:
+
+    model : 30
+    model name : Intel(R) Core(TM) i7 CPU Q 820 @ 1.73GHz
+    model : 30
+    model name : Intel(R) Core(TM) i7 CPU Q 820 @ 1.73GHz
+
+### 怎样只显示匹配到内容的文件名? 
###
+
+使用 -l 选项,可以只列出那些文件内容中包含 main() 的文件名:
+
+    $ grep -l 'main' *.c
+
+最后,可以强制 grep 以彩色输出,输入:
+
+    $ grep --color vivek /etc/passwd
+
+输出样例:
+
+![Grep command in action](http://files.cyberciti.biz/uploads/faq/2007/08/grep_command_examples.png)
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/
+
+作者:Vivek Gite
+译者:[zky001](https://github.com/zky001)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:http://bash.cyberciti.biz/guide/Pipes
\ No newline at end of file
From ba5bfaf18d7d1e34359756f585292b4302b29b59 Mon Sep 17 00:00:00 2001
From: runningwater
Date: Tue, 1 Dec 2015 13:34:52 +0800
Subject: =?UTF-8?q?=E7=BF=BB=E8=AF=91=E4=B8=AD=20by=20runn?=
 =?UTF-8?q?ingwater?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...p Command Tutorial series 2--Regular Expressions In grep.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md
index 506719d8aa..8bac50fe25 100644
--- a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md
+++ b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 
@@ -283,7 +284,7 @@ References: via: http://www.cyberciti.biz/faq/grep-regular-expressions/ 作者:Vivek Gite -译者:[译者ID](https://github.com/译者ID) +译者:[runningwater](https://github.com/runningwater) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file From 9b76b5c997145d0e61f5b233adfd953ca3727836 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 1 Dec 2015 14:46:17 +0800 Subject: [PATCH 065/160] =?UTF-8?q?=E5=BD=92=E6=A1=A3201511?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... data structures and algorithms make you a better developer.md | 0 .../20150827 The Strangest Most Unique Linux Distros.md | 0 ... to switch from NetworkManager to systemd-networkd on Linux.md | 0 ...50909 Superclass--15 of the world's best living programmers.md | 0 ... Basic Hardware Info Using screenfetch and linux_logo Tools.md | 0 .../{ => 201511}/20150921 Configure PXE Server In Ubuntu 14.04.md | 0 .../20150929 A Developer's Journey into Linux Containers.md | 0 .../20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md | 0 ...20151012 Curious about Linux Try Linux Desktop on the Cloud.md | 0 .../{ => 201511}/20151012 How To Use iPhone In Antergos Linux.md | 0 ... to Monitor Stock Prices from Ubuntu Command Line Using Mop.md | 0 .../20151012 How to Setup DockerUI--a Web Interface for Docker.md | 0 .../20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md | 0 ... to find information about built-in kernel modules on Linux.md | 0 .../{ => 201511}/20151012 What is a good IDE for R on Linux.md | 0 ... 
with Answers--How to install Ubuntu desktop behind a proxy.md | 0 ...0151019 Nautilus File Search Is About To Get A Big Power Up.md | 0 .../20151027 How To Install Retro Terminal In Linux.md | 0 published/{ => 201511}/20151027 How To Show Desktop In GNOME 3.md | 0 ...1027 How to Use SSHfs to Mount a Remote Filesystem on Linux.md | 0 ...ate New File Systems or Partitions in the Terminal on Linux.md | 0 ...20151104 Ubuntu Software Centre To Be Replaced in 16.04 LTS.md | 0 ...nage Your To-Do Lists in Ubuntu Using Go For It Application.md | 0 ...s with Answers--How to change default Java version on Linux.md | 0 ...s with Answers--How to find which shell I am using on Linux.md | 0 .../{ => 201511}/20151109 Open Source Alternatives to LastPass.md | 0 ...o set JAVA_HOME environment variable automatically on Linux.md | 0 .../20151117 N1--The Next Generation Open Source Email Client.md | 0 ...ow to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md | 0 .../20151123 Install Intel Graphics Installer in Ubuntu 15.10.md | 0 .../Learn with Linux--Master Your Math with These Linux Apps.md | 0 published/{ => 201511}/LetsEncrypt.md | 0 32 files changed, 0 insertions(+), 0 deletions(-) rename published/{ => 201511}/20150823 How learning data structures and algorithms make you a better developer.md (100%) rename published/{ => 201511}/20150827 The Strangest Most Unique Linux Distros.md (100%) rename published/{ => 201511}/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md (100%) rename published/{ => 201511}/20150909 Superclass--15 of the world's best living programmers.md (100%) rename published/{ => 201511}/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md (100%) rename published/{ => 201511}/20150921 Configure PXE Server In Ubuntu 14.04.md (100%) rename published/{ => 201511}/20150929 A Developer's Journey into Linux Containers.md (100%) rename published/{ => 
201511}/20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md (100%) rename published/{ => 201511}/20151012 Curious about Linux Try Linux Desktop on the Cloud.md (100%) rename published/{ => 201511}/20151012 How To Use iPhone In Antergos Linux.md (100%) rename published/{ => 201511}/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md (100%) rename published/{ => 201511}/20151012 How to Setup DockerUI--a Web Interface for Docker.md (100%) rename published/{ => 201511}/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md (100%) rename published/{ => 201511}/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md (100%) rename published/{ => 201511}/20151012 What is a good IDE for R on Linux.md (100%) rename published/{ => 201511}/20151019 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md (100%) rename published/{ => 201511}/20151019 Nautilus File Search Is About To Get A Big Power Up.md (100%) rename published/{ => 201511}/20151027 How To Install Retro Terminal In Linux.md (100%) rename published/{ => 201511}/20151027 How To Show Desktop In GNOME 3.md (100%) rename published/{ => 201511}/20151027 How to Use SSHfs to Mount a Remote Filesystem on Linux.md (100%) rename published/{ => 201511}/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md (100%) rename published/{ => 201511}/20151104 Ubuntu Software Centre To Be Replaced in 16.04 LTS.md (100%) rename published/{ => 201511}/20151105 How to Manage Your To-Do Lists in Ubuntu Using Go For It Application.md (100%) rename published/{ => 201511}/20151105 Linux FAQs with Answers--How to change default Java version on Linux.md (100%) rename published/{ => 201511}/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md (100%) rename published/{ => 201511}/20151109 Open Source Alternatives to LastPass.md (100%) rename published/{ => 201511}/20151116 Linux FAQs with 
Answers--How to set JAVA_HOME environment variable automatically on Linux.md (100%) rename published/{ => 201511}/20151117 N1--The Next Generation Open Source Email Client.md (100%) rename published/{ => 201511}/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md (100%) rename published/{ => 201511}/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md (100%) rename published/{ => 201511}/Learn with Linux--Master Your Math with These Linux Apps.md (100%) rename published/{ => 201511}/LetsEncrypt.md (100%) diff --git a/published/20150823 How learning data structures and algorithms make you a better developer.md b/published/201511/20150823 How learning data structures and algorithms make you a better developer.md similarity index 100% rename from published/20150823 How learning data structures and algorithms make you a better developer.md rename to published/201511/20150823 How learning data structures and algorithms make you a better developer.md diff --git a/published/20150827 The Strangest Most Unique Linux Distros.md b/published/201511/20150827 The Strangest Most Unique Linux Distros.md similarity index 100% rename from published/20150827 The Strangest Most Unique Linux Distros.md rename to published/201511/20150827 The Strangest Most Unique Linux Distros.md diff --git a/published/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md b/published/201511/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md similarity index 100% rename from published/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md rename to published/201511/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md diff --git a/published/20150909 Superclass--15 of the world's best living programmers.md b/published/201511/20150909 Superclass--15 of the world's best living programmers.md similarity index 100% rename from published/20150909 Superclass--15 of the world's best living 
programmers.md rename to published/201511/20150909 Superclass--15 of the world's best living programmers.md diff --git a/published/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md b/published/201511/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md similarity index 100% rename from published/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md rename to published/201511/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md diff --git a/published/20150921 Configure PXE Server In Ubuntu 14.04.md b/published/201511/20150921 Configure PXE Server In Ubuntu 14.04.md similarity index 100% rename from published/20150921 Configure PXE Server In Ubuntu 14.04.md rename to published/201511/20150921 Configure PXE Server In Ubuntu 14.04.md diff --git a/published/20150929 A Developer's Journey into Linux Containers.md b/published/201511/20150929 A Developer's Journey into Linux Containers.md similarity index 100% rename from published/20150929 A Developer's Journey into Linux Containers.md rename to published/201511/20150929 A Developer's Journey into Linux Containers.md diff --git a/published/20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md b/published/201511/20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md similarity index 100% rename from published/20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md rename to published/201511/20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md diff --git a/published/20151012 Curious about Linux Try Linux Desktop on the Cloud.md b/published/201511/20151012 Curious about Linux Try Linux Desktop on the Cloud.md similarity index 100% rename from published/20151012 Curious about Linux Try Linux Desktop on the Cloud.md rename to published/201511/20151012 Curious about Linux Try Linux Desktop on the Cloud.md 
diff --git a/published/20151012 How To Use iPhone In Antergos Linux.md b/published/201511/20151012 How To Use iPhone In Antergos Linux.md similarity index 100% rename from published/20151012 How To Use iPhone In Antergos Linux.md rename to published/201511/20151012 How To Use iPhone In Antergos Linux.md diff --git a/published/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md b/published/201511/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md similarity index 100% rename from published/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md rename to published/201511/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md diff --git a/published/20151012 How to Setup DockerUI--a Web Interface for Docker.md b/published/201511/20151012 How to Setup DockerUI--a Web Interface for Docker.md similarity index 100% rename from published/20151012 How to Setup DockerUI--a Web Interface for Docker.md rename to published/201511/20151012 How to Setup DockerUI--a Web Interface for Docker.md diff --git a/published/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md b/published/201511/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md similarity index 100% rename from published/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md rename to published/201511/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md diff --git a/published/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md b/published/201511/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md similarity index 100% rename from published/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md rename to published/201511/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md diff --git a/published/20151012 What is a good IDE for 
R on Linux.md b/published/201511/20151012 What is a good IDE for R on Linux.md similarity index 100% rename from published/20151012 What is a good IDE for R on Linux.md rename to published/201511/20151012 What is a good IDE for R on Linux.md diff --git a/published/20151019 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md b/published/201511/20151019 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md similarity index 100% rename from published/20151019 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md rename to published/201511/20151019 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md diff --git a/published/20151019 Nautilus File Search Is About To Get A Big Power Up.md b/published/201511/20151019 Nautilus File Search Is About To Get A Big Power Up.md similarity index 100% rename from published/20151019 Nautilus File Search Is About To Get A Big Power Up.md rename to published/201511/20151019 Nautilus File Search Is About To Get A Big Power Up.md diff --git a/published/20151027 How To Install Retro Terminal In Linux.md b/published/201511/20151027 How To Install Retro Terminal In Linux.md similarity index 100% rename from published/20151027 How To Install Retro Terminal In Linux.md rename to published/201511/20151027 How To Install Retro Terminal In Linux.md diff --git a/published/20151027 How To Show Desktop In GNOME 3.md b/published/201511/20151027 How To Show Desktop In GNOME 3.md similarity index 100% rename from published/20151027 How To Show Desktop In GNOME 3.md rename to published/201511/20151027 How To Show Desktop In GNOME 3.md diff --git a/published/20151027 How to Use SSHfs to Mount a Remote Filesystem on Linux.md b/published/201511/20151027 How to Use SSHfs to Mount a Remote Filesystem on Linux.md similarity index 100% rename from published/20151027 How to Use SSHfs to Mount a Remote Filesystem on Linux.md rename to published/201511/20151027 How to Use SSHfs to 
Mount a Remote Filesystem on Linux.md diff --git a/published/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md b/published/201511/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md similarity index 100% rename from published/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md rename to published/201511/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md diff --git a/published/20151104 Ubuntu Software Centre To Be Replaced in 16.04 LTS.md b/published/201511/20151104 Ubuntu Software Centre To Be Replaced in 16.04 LTS.md similarity index 100% rename from published/20151104 Ubuntu Software Centre To Be Replaced in 16.04 LTS.md rename to published/201511/20151104 Ubuntu Software Centre To Be Replaced in 16.04 LTS.md diff --git a/published/20151105 How to Manage Your To-Do Lists in Ubuntu Using Go For It Application.md b/published/201511/20151105 How to Manage Your To-Do Lists in Ubuntu Using Go For It Application.md similarity index 100% rename from published/20151105 How to Manage Your To-Do Lists in Ubuntu Using Go For It Application.md rename to published/201511/20151105 How to Manage Your To-Do Lists in Ubuntu Using Go For It Application.md diff --git a/published/20151105 Linux FAQs with Answers--How to change default Java version on Linux.md b/published/201511/20151105 Linux FAQs with Answers--How to change default Java version on Linux.md similarity index 100% rename from published/20151105 Linux FAQs with Answers--How to change default Java version on Linux.md rename to published/201511/20151105 Linux FAQs with Answers--How to change default Java version on Linux.md diff --git a/published/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md b/published/201511/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md similarity index 100% rename from published/20151105 Linux FAQs with 
Answers--How to find which shell I am using on Linux.md rename to published/201511/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md diff --git a/published/20151109 Open Source Alternatives to LastPass.md b/published/201511/20151109 Open Source Alternatives to LastPass.md similarity index 100% rename from published/20151109 Open Source Alternatives to LastPass.md rename to published/201511/20151109 Open Source Alternatives to LastPass.md diff --git a/published/20151116 Linux FAQs with Answers--How to set JAVA_HOME environment variable automatically on Linux.md b/published/201511/20151116 Linux FAQs with Answers--How to set JAVA_HOME environment variable automatically on Linux.md similarity index 100% rename from published/20151116 Linux FAQs with Answers--How to set JAVA_HOME environment variable automatically on Linux.md rename to published/201511/20151116 Linux FAQs with Answers--How to set JAVA_HOME environment variable automatically on Linux.md diff --git a/published/20151117 N1--The Next Generation Open Source Email Client.md b/published/201511/20151117 N1--The Next Generation Open Source Email Client.md similarity index 100% rename from published/20151117 N1--The Next Generation Open Source Email Client.md rename to published/201511/20151117 N1--The Next Generation Open Source Email Client.md diff --git a/published/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md b/published/201511/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md similarity index 100% rename from published/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md rename to published/201511/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md diff --git a/published/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md b/published/201511/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md similarity index 100% rename from published/20151123 Install Intel Graphics 
Installer in Ubuntu 15.10.md rename to published/201511/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md diff --git a/published/Learn with Linux--Master Your Math with These Linux Apps.md b/published/201511/Learn with Linux--Master Your Math with These Linux Apps.md similarity index 100% rename from published/Learn with Linux--Master Your Math with These Linux Apps.md rename to published/201511/Learn with Linux--Master Your Math with These Linux Apps.md diff --git a/published/LetsEncrypt.md b/published/201511/LetsEncrypt.md similarity index 100% rename from published/LetsEncrypt.md rename to published/201511/LetsEncrypt.md From 82524b6d174cf8e8cdfed4040eae0f1dbfad271f Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 1 Dec 2015 14:46:45 +0800 Subject: [PATCH 066/160] PUB:20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX @zpl1025 --- ...tory Of Aix HP-UX Solaris BSD And LINUX.md | 101 ++++++++++++++++++ ...tory Of Aix HP-UX Solaris BSD And LINUX.md | 101 ------------------ 2 files changed, 101 insertions(+), 101 deletions(-) create mode 100644 published/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md delete mode 100644 translated/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md diff --git a/published/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md b/published/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md new file mode 100644 index 0000000000..2f6780cdc2 --- /dev/null +++ b/published/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md @@ -0,0 +1,101 @@ +UNIX 家族小史 +================================================================================ +![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/05/linux-712x445.png) + +要记住,当一扇门在你面前关闭的时候,另一扇门就会打开。肯·汤普森([Ken Thompson][1]) 和丹尼斯·里奇([Dennis Richie][2]) 两个人就是这句名言很好的实例。他们俩是**20世纪**最优秀的信息技术专家之二,因为他们创造了最具影响力和创新性的软件之一: **UNIX**。 + +### UNIX 系统诞生于贝尔实验室 ### + +**UNIX** 最开始的名字是 **UNICS** (**UN**iplexed 
**I**nformation and **C**omputing **S**ervice),它有一个大家庭,并不是从石头缝里蹦出来的。UNIX的祖父是 **CTSS** (**C**ompatible **T**ime **S**haring **S**ystem),它的父亲是 **Multics** (**MULT**iplexed **I**nformation and **C**omputing **S**ervice),这个系统能支持大量用户通过交互式分时(timesharing)的方式使用大型机。 + +UNIX 诞生于 **1969** 年,由**肯·汤普森**以及后来加入的**丹尼斯·里奇**共同完成。这两位优秀的研究员和科学家在一个**通用电气 GE**和**麻省理工学院**的合作项目里工作,项目目标是开发一个叫 Multics 的交互式分时系统。 + +Multics 的目标是整合分时技术以及当时其他先进技术,允许用户在远程终端通过电话(拨号)登录到主机,然后可以编辑文档,阅读电子邮件,运行计算器,等等。 + +在之后的五年里,AT&T 公司为 Multics 项目投入了数百万美元。他们购买了 GE-645 大型机,聚集了贝尔实验室的顶级研究人员,例如肯·汤普森、Stuart Feldman、丹尼斯·里奇、道格拉斯·麦克罗伊(M. Douglas McIlroy)、Joseph F. Ossanna 以及 Robert Morris。但是项目目标太过激进,进度严重滞后。最后,AT&T 高层决定放弃这个项目。 + +贝尔实验室的管理层决定停止这个让许多研究人员无比纠结的操作系统上的所有遗留工作。不过要感谢汤普森、里奇和一些其他研究员,他们把老板的命令丢到一边,并继续在实验室里满怀热心地忘我工作,最终孵化出前无古人后无来者的 UNIX。 + +UNIX 的第一声啼哭是在一台 PDP-7 微型机上,它是汤普森测试自己在操作系统设计上的点子的机器,也是汤普森和里奇一起玩 Space and Travel 游戏的模拟器。 + +> “我们想要的不仅是一个优秀的编程环境,而是能围绕这个系统形成团体。按我们自己的经验,通过远程访问和分时主机实现的公共计算,本质上不只是用终端输入程序代替打孔机而已,而是鼓励密切沟通。”丹尼斯·里奇说。 + +UNIX 是第一个靠近理想的系统,在这里程序员可以坐在机器前自由摆弄程序,探索各种可能性并随手测试。在 UNIX 整个生命周期里,它吸引了大量因其他操作系统限制而投身过来的高手做出无私贡献,因此它的功能模型一直保持上升趋势。 + +UNIX 在 1970 年因为 PDP-11/20 获得了首次资金注入,之后正式更名为 UNIX 并支持在 PDP-11/20 上运行。UNIX 第一次投入实际使用是在 1971 年,当时贝尔实验室的专利部门用它来做文字处理。 + +### UNIX 上的 C 语言革命 ### + +丹尼斯·里奇在 1972 年发明了一种叫 “**C**” 的高级编程语言,之后他和肯·汤普森决定用 “C” 重写 UNIX 系统,来支持更好的移植性。他们在那一年里编写和调试了差不多 100,000 行代码。在迁移到 “C” 语言后,系统可移植性非常好,只需要修改一小部分机器相关的代码就可以将 UNIX 移植到其他计算机平台上。 + +UNIX 第一次公开露面是 1973 年丹尼斯·里奇和肯·汤普森在操作系统原理(Operating Systems Principles)上发表的一篇论文,然后 AT&T 发布了 UNIX 系统第 5 版,并授权给教育机构使用,之后在 1975 年第一次以 **$20,000** 的价格授权企业使用 UNIX 第 6 版。应用最广泛的是 1980 年发布的 UNIX 第 7 版,任何人都可以购买授权,只是授权条款非常严格。授权内容包括源代码,以及用 PDP-11 汇编语言写的机器相关内核。总之,各种版本 UNIX 系统完全由它的用户手册确定。 + +### AIX 系统 ### + +在 **1983** 年,**微软**计划开发 **Xenix** 作为 MS-DOS 的多用户版继任者,他们在那一年花了 $8,000 搭建了一台拥有 **512 KB** 内存以及 **10 MB** 硬盘并运行 Xenix 的 Altos 586。而到 1984 年为止,全世界 UNIX System V 第二版的安装数量已经超过了 100,000。在 1986 年发布了包含因特网域名服务的 4.3BSD,而且 **IBM** 宣布 **AIX 系统**的安装数已经超过 250,000。AIX 基于 Unix System V 开发,这套系统拥有 
BSD 风格的根文件系统,是两者的结合。 + +AIX 第一次引入了 **日志文件系统 (JFS)** 以及集成逻辑卷管理器 (Logical Volume Manager ,LVM)。IBM 在 1989 年将 AIX 移植到自己的 RS/6000 平台。2001 年发布的 5L 版是一个突破性的版本,提供了 Linux 友好性以及支持 Power4 服务器的逻辑分区。 + +在 2004 年发布的 AIX 5.3 引入了支持高级电源虚拟化( Advanced Power Virtualization,APV)的虚拟化技术,支持对称多线程、微分区,以及共享处理器池。 + +在 2007 年,IBM 同时发布 AIX 6.1 和 Power6 架构,开始加强自己的虚拟化产品。他们还将高级电源虚拟化重新包装成 PowerVM。 + +这次改进包括被称为 WPARs 的负载分区形式,类似于 Solaris 的 zones/Containers,但是功能更强。 + +### HP-UX 系统 ### + +**惠普 UNIX (Hewlett-Packard’s UNIX,HP-UX)** 源于 System V 第 3 版。这套系统一开始只支持 PA-RISC HP 9000 平台。HP-UX 第 1 版发布于 1984 年。 + +HP-UX 第 9 版引入了 SAM,一个基于字符的图形用户界面 (GUI),用户可以用来管理整个系统。在 1995 年发布的第 10 版,调整了系统文件分布以及目录结构,变得有点类似 AT&T SVR4。 + +第 11 版发布于 1997 年。这是 HP 第一个支持 64 位寻址的版本。不过在 2000 年重新发布成 11i,因为 HP 为特定的信息技术用途,引入了操作环境(operating environments)和分级应用(layered applications)的捆绑组(bundled groups)。 + +在 2001 年发布的 11.20 版宣称支持安腾(Itanium)系统。HP-UX 是第一个使用 ACLs(访问控制列表,Access Control Lists)管理文件权限的 UNIX 系统,也是首先支持内建逻辑卷管理器(Logical Volume Manager)的系统之一。 + +如今,HP-UX 因为 HP 和 Veritas 的合作关系使用了 Veritas 作为主文件系统。 + +HP-UX 目前的最新版本是 11iv3, update 4。 + +### Solaris 系统 ### + +Sun 的 UNIX 版本是 **Solaris**,用来接替 1992 年创建的 **SunOS**。SunOS 一开始基于 BSD(伯克利软件发行版,Berkeley Software Distribution)风格的 UNIX,但是 SunOS 5.0 版以及之后的版本都是基于重新包装为 Solaris 的 Unix System V 第 4 版。 + +SunOS 1.0 版于 1983 年发布,用于支持 Sun-1 和 Sun-2 平台。随后在 1985 年发布了 2.0 版。在 1987 年,Sun 和 AT&T 宣布合作一个项目以 SVR4 为基础将 System V 和 BSD 合并成一个版本。 + +Solaris 2.4 是 Sun 发布的第一个 Sparc/x86 版本。1994 年 11 月份发布的 SunOS 4.1.4 版是最后一个版本。Solaris 7 是首个 64 位 Ultra Sparc 版本,加入了对文件系统元数据记录的原生支持。 + +Solaris 9 发布于 2002 年,支持 Linux 特性以及 Solaris 卷管理器(Solaris Volume Manager)。之后,2005 年发布了 Solaris 10,带来许多创新,比如支持 Solaris Containers,新的 ZFS 文件系统,以及逻辑域(Logical Domains)。 + +目前 Solaris 最新的版本是 第 10 版,最后的更新发布于 2008 年。 + +### Linux ### + +到了 1991 年,用来替代商业操作系统的自由(free)操作系统的需求日渐高涨。因此,**Linus Torvalds** 开始构建一个自由的操作系统,最终成为 **Linux**。Linux 最开始只有一些 “C” 文件,并且使用了阻止商业发行的授权。Linux 是一个类 UNIX 系统但又不尽相同。 + +2015 年发布了基于 GNU Public License (GPL)授权的 3.18 版。IBM 声称有超过 1800 
万行开源代码开源给开发者。 + +如今 GNU Public License 是应用最广泛的自由软件授权方式。根据开源软件原则,这份授权允许个人和企业自由分发、运行、通过拷贝共享、学习,以及修改软件源码。 + +### UNIX vs. Linux:技术概要 ### + +- Linux 鼓励多样性,Linux 的开发人员来自各种背景,有更多不同经验和意见。 +- Linux 比 UNIX 支持更多的平台和架构。 +- UNIX 商业版本的开发人员针对特定目标平台以及用户设计他们的操作系统。 +- **Linux 比 UNIX 有更好的安全性**,更少受病毒或恶意软件攻击。截止到现在,Linux 上大约有 60-100 种病毒,但是没有任何一种还在传播。另一方面,UNIX 上大约有 85-120 种病毒,但是其中有一些还在传播中。 +- 由于 UNIX 命令、工具和元素很少改变,甚至很多接口和命令行参数在后续 UNIX 版本中一直沿用。 +- 有些 Linux 开发项目以自愿为基础进行资助,比如 Debian。其他项目会维护一个和商业 Linux 的社区版,比如 SUSE 的 openSUSE 以及红帽的 Fedora。 +- 传统 UNIX 是纵向扩展,而另一方面 Linux 是横向扩展。 + +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/brief-history-aix-hp-ux-solaris-bsd-linux/ + +作者:[M.el Khamlichi][a] +译者:[zpl1025](https://github.com/zpl1025) +校对:[Caroline](https://github.com/carolinewuyan) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/pirat9/ +[1]:http://www.unixmen.com/ken-thompson-unix-systems-father/ +[2]:http://www.unixmen.com/dennis-m-ritchie-father-c-programming-language/ diff --git a/translated/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md b/translated/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md deleted file mode 100644 index 921f1a57aa..0000000000 --- a/translated/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md +++ /dev/null @@ -1,101 +0,0 @@ -Aix, HP-UX, Solaris, BSD, 和 LINUX 简史 -================================================================================ -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/05/linux-712x445.png) - -要记住,当一扇门在你面前关闭的时候,另一扇门就会打开。[Ken Thompson][1] 和 [Dennis Richie][2] 两个人就是这句名言很好的实例。他们俩是 **20世纪** 最优秀的信息技术专家,因为他们创造了 **UNIX**,最具影响力和创新性的软件之一。 - -### UNIX 系统诞生于贝尔实验室 ### - -**UNIX** 最开始的名字是 **UNICS** (**UN**iplexed **I**nformation and **C**omputing **S**ervice),它有一个大家庭,并不是从石头缝里蹦出来的。UNIX的祖父是 **CTSS** (**C**ompatible **T**ime **S**haring 
**S**ystem),它的父亲是 **Multics** (**MULT**iplexed **I**nformation and **C**omputing **S**ervice),这个系统能支持大量用户通过交互式分时使用大型机。 - -UNIX 诞生于 **1969** 年,由 **Ken Thompson** 以及后来加入的 **Dennis Richie** 共同完成。这两位优秀的研究员和科学家一起在一个**通用电子**和**麻省理工学院**的合作项目里工作,项目目标是开发一个叫 Multics 的交互式分时系统。 - -Multics 的目标是整合分时共享以及当时其他先进技术,允许用户在远程终端通过电话登录到主机,然后可以编辑文档,阅读电子邮件,运行计算器,等等。 - -在之后的五年里,AT&T 公司为 Multics 项目投入了数百万美元。他们购买了 GE-645 大型机,聚集了贝尔实验室的顶级研究人员,例如 Ken Thompson, Stuart Feldman, Dennis Ritchie, M. Douglas McIlroy, Joseph F. Ossanna, 以及 Robert Morris。但是项目目标太过激进,进度严重滞后。最后,AT&T 高层决定放弃这个项目。 - -贝尔实验室的管理层决定停止这个让许多研究人员无比纠结的操作系统上的所有遗留工作。不过要感谢 Thompson,Richie 和一些其他研究员,他们把老板的命令丢到一边,并继续在实验室里满怀热心地忘我工作,最终孵化出前无古人后无来者的 UNIX。 - -UNIX 的第一声啼哭是在一台 PDP-7 微型机上,它是 Thompson 测试自己在操作系统设计上的点子的机器,也是 Thompson 和 Richie 一起玩 Space and Travel 游戏的模拟器。 - -> “我们想要的不仅是一个优秀的编程环境,而是能围绕这个系统形成团体。按我们自己的经验,通过远程访问和分时共享主机实现的公共计算,本质上不只是用终端输入程序代替打孔机而已,而是鼓励密切沟通。”Dennis Richie 说。 - -UNIX 是第一个靠近理想的系统,在这里程序员可以坐在机器前自由摆弄程序,探索各种可能性并随手测试。在 UNIX 整个生命周期里,它吸引了大量因其他操作系统限制而投身过来的高手做出无私贡献,因此它的功能模型一直保持上升趋势。 - -UNIX 在 1970 年因为 PDP-11/20 获得了首次资金注入,之后正式更名为 UNIX 并支持在 PDP-11/20 上运行。UNIX 带来的第一次收获是在 1971 年,贝尔实验室的专利部门配备来做文字处理。 - -### UNIX 上的 C 语言革命 ### - -Dennis Richie 在 1972 年发明了一种叫 “**C**” 的高级编程语言 ,之后他和 Ken Thompson 决定用 “C” 重写 UNIX 系统,来支持更好的移植性。他们在那一年里编写和调试了差不多 100,000 行代码。在使用了 “C” 语言后,系统可移植性非常好,只需要修改一小部分机器相关的代码就可以将 UNIX 移植到其他计算机平台上。 - -UNIX 第一次公开露面是 1973 年 Dennis Ritchie 和 Ken Thompson 在操作系统原理上发表的一篇论文,然后 AT&T 发布了 UNIX 系统第 5 版,并授权给教育机构使用,然后在 1976 年第一次以 **$20.000** 的价格授权企业使用 UNIX 第 6 版。应用最广泛的是 1980 年发布的 UNIX 第 7 版,任何人都可以购买授权,只是授权条款非常有限。授权内容包括源代码,以及用 PDP-11 汇编语言写的及其相关内核。反正,各种版本 UNIX 系统完全由它的用户手册确定。 - -### AIX 系统 ### - -在 **1983** 年,**Microsoft** 计划开发 **Xenix** 作为 MS-DOS 的多用户版继任者,他们在那一年花了 $8,000 搭建了一台拥有 **512 KB** 内存以及 **10 MB**硬盘并运行 Xenix 的 Altos 586。而到 1984 年为止,全世界 UNIX System V 第二版的安装数量已经超过了 100,000 。在 1986 年发布了包含因特网域名服务的 4.3BSD,而且 **IBM** 宣布 **AIX 系统**的安装数已经超过 250,000。AIX 基于 Unix System V 开发,这套系统拥有 BSD 风格的根文件系统,是两者的结合。 - -AIX 第一次引入了 **日志文件系统 (JFS)** 以及集成逻辑卷管理器 
(LVM)。IBM 在 1989 年将 AIX 移植到自己的 RS/6000 平台。2001 年发布的 5L 版是一个突破性的版本,提供了 Linux 友好性以及支持 Power4 服务器的逻辑分区。 - -在 2004 年发布的 AIX 5.3 引入了支持 Advanced Power Virtualization (APV) 的虚拟化技术,支持对称多线程,微分区,以及可分享的处理器池。 - -在 2007 年,IBM 同时发布 AIX 6.1 和 Power6 架构,开始加强自己的虚拟化产品。他们还将 Advanced Power Virtualization 重新包装成 PowerVM。 - -这次改进包括被称为 WPARs 的负载分区形式,类似于 Solaris 的 zones/Containers,但是功能更强。 - -### HP-UX 系统 ### - -**惠普 UNIX (HP-UX)** 源于 System V 第 3 版。这套系统一开始只支持 PA-RISC HP 9000 平台。HP-UX 第 1 版发布于 1984 年。 - -HP-UX 第 9 版引入了 SAM,一个基于字符的图形用户界面 (GUI),用户可以用来管理整个系统。在 1995 年发布的第 10 版,调整了系统文件分布以及目录结构,变得有点类似 AT&T SVR4。 - -第 11 版发布于 1997 年。这是 HP 第一个支持 64 位寻址的版本。不过在 2000 年重新发布成 11i,因为 HP 为特定的信息技术目的,引入了操作环境和分级应用的捆绑组。 - -在 2001 年发布的 11.20 版宣称支持 Itanium 系统。HP-UX 是第一个使用 ACLs(访问控制列表)管理文件权限的 UNIX 系统,也是首先支持内建逻辑卷管理器的系统之一。 - -如今,HP-UX 因为 HP 和 Veritas 的合作关系使用了 Veritas 作为主文件系统。 - -HP-UX 目前的最新版本是 11iv3, update 4。 - -### Solaris 系统 ### - -Sun 的 UNIX 版本是 **Solaris**,用来接替 1992 年创建的 **SunOS**。SunOS 一开始基于 BSD(伯克利软件发行版)风格的 UNIX,但是 SunOS 5.0 版以及之后的版本都是基于重新包装成 Solaris 的 Unix System V 第 4 版。 - -SunOS 1.0 版于 1983 年发布,用于支持 Sun-1 和 Sun-2 平台。随后在 1985 年发布了 2.0 版。在 1987 年,Sun 和 AT&T 宣布合作一个项目以 SVR4 为基础将 System V 和 BSD 合并成一个版本。 - -Solaris 2.4 是 Sun 发布的第一个 Sparc/x86 版本。1994 年 11 月份发布的 SunOS 4.1.4 版是最后一个版本。Solaris 7 是首个 64 位 Ultra Sparc 版本,加入了对文件系统元数据记录的原生支持。 - -Solaris 9 发布于 2002 年,支持 Linux 特性以及 Solaris 卷管理器。之后,2005 年发布了 Solaris 10,带来许多创新,比如支持 Solaris Containers,新的 ZFS 文件系统,以及逻辑域。 - -目前 Solaris 最新的版本是 第 10 版,最后的更新发布于 2008 年。 - -### Linux ### - -到了 1991 年,用来替代商业操作系统的免费系统的需求日渐高涨。因此 **Linus Torvalds** 开始构建一个免费的操作系统,最终成为 **Linux**。Linux 最开始只有一些 “C” 文件,并且使用了阻止商业发行的授权。Linux 是一个类 UNIX 系统但又不尽相同。 - -2015 年 发布了基于 GNU Public License 授权的 3.18 版。IBM 声称有超过 1800 万行开源代码开放给开发者。 - -如今 GNU Public License 是应用最广泛的免费软件授权方式。根据开源软件原则,这份授权允许个人和企业自由分发、运行、通过拷贝共享、学习,以及修改软件源码。 - -### UNIX vs. 
Linux: 技术概要 ### - -- Linux 鼓励多样性,Linux 的开发人员有更广阔的背景,有更多不同经验和意见。 -- Linux 比 UNIX 支持更多的平台和架构。 -- UNIX 商业版本的开发人员会为他们的操作系统考虑特定目标平台以及用户。 -- **Linux 比 UNIX 有更好的安全性**,更少受病毒或恶意软件攻击。Linux 上大约有 60-100 种病毒,但是没有任何一种还在传播。另一方面,UNIX 上大约有 85-120 种病毒,但是其中有一些还在传播中。 -- 通过 UNIX 命令,系统上的工具和元素很少改变,甚至很多接口和命令行参数在后续 UNIX 版本中一直沿用。 -- 有些 Linux 开发项目以自愿为基础进行资助,比如 Debian。其他项目会维护一个和商业 Linux 的社区版,比如 SUSE 的 openSUSE 以及红帽的 Fedora。 -- 传统 UNIX 是纵向扩展,而另一方面 Linux 是横向扩展。 - --------------------------------------------------------------------------------- - -via: http://www.unixmen.com/brief-history-aix-hp-ux-solaris-bsd-linux/ - -作者:[M.el Khamlichi][a] -译者:[zpl1025](https://github.com/zpl1025) -校对:[Caroline](https://github.com/carolinewuyan) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.unixmen.com/author/pirat9/ -[1]:http://www.unixmen.com/ken-thompson-unix-systems-father/ -[2]:http://www.unixmen.com/dennis-m-ritchie-father-c-programming-language/ From 301b0a4ae2a97c9ef1c24849227e988c237314a9 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 1 Dec 2015 14:49:18 +0800 Subject: [PATCH 067/160] =?UTF-8?q?=E6=9B=B4=E6=96=B0?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 68b579d9fc..7fbc7a5f36 100644 --- a/README.md +++ b/README.md @@ -66,6 +66,7 @@ LCTT的组成 - CORE @strugglingyouth, - CORE @FSSlc - CORE @zpl1025, +- CORE @runningwater, - CORE @bazz2, - CORE @Vic020, - CORE @dongfengweixiao, @@ -76,7 +77,6 @@ LCTT的组成 - Senior @jasminepeng, - Senior @willqian, - Senior @vizv, -- runningwater, - ZTinoZ, - theo-l, - luoxcat, From 8cd0148058c12ff66022c5368419d41ef3667e3f Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 1 Dec 2015 15:44:15 +0800 Subject: [PATCH 068/160] =?UTF-8?q?20151201-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 
--- ... The Latest Arduino IDE 1.6.6 in Ubuntu.md | 77 +++++++++++++++++++ 1 file changed, 77 insertions(+) create mode 100644 sources/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md diff --git a/sources/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md b/sources/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md new file mode 100644 index 0000000000..6544ad3919 --- /dev/null +++ b/sources/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md @@ -0,0 +1,77 @@ +How to Install The Latest Arduino IDE 1.6.6 in Ubuntu +================================================================================ +![Install latest Arduino in Ubuntu](http://ubuntuhandbook.org/wp-content/uploads/2015/11/arduino-icon.png) + +> Quick tutorial shows you how to install the latest Arduino IDE, so far version 1.6.6, in all current Ubuntu releases. + +The open-source Arduino IDE has reached the 1.6.6 release recently with lots of changes. The new release has switched to Java 8, which is now both bundled and needed for compiling the IDE. See the [RELEASE NOTE][1] for details. + +![Arduino 1.6.6 in Ubuntu 15.10](http://ubuntuhandbook.org/wp-content/uploads/2015/11/arduino-ubuntu.jpg) + +For those who don’t want to use the old 1.0.5 version available in Software Center, you can always follow below steps to install Arduino in all Ubuntu releases: + +注:下面这个说明下面的代码颜色,这个发布的时候要对照一下原文,写点说明,因为颜色在md里标识不出来 +> **Replace the words in red for future releases** + +**1.** Download the latest packages, **Linux 32-bit or Linux 64-bit**, from the official link below: + +- [www.arduino.cc/en/Main/Software][2] + +Don’t know your OS type? Go and check out System Settings -> Details -> Overview. + +**2.** Open **terminal** from Unity Dash, App Launcher, or via Ctrl+Alt+T keys. 
When it opens, run below commands one by one: + +Navigate to your downloads folder: + + cd ~/Downloads + +![navigate-downloads](http://ubuntuhandbook.org/wp-content/uploads/2015/11/navigate-downloads.jpg) + +Decompress the downloaded archive with tar command: + +注:arduino-1.6.6-*.tar.xz 为红色部分 + tar -xvf arduino-1.6.6-*.tar.xz + +![extract-archive](http://ubuntuhandbook.org/wp-content/uploads/2015/11/extract-archive.jpg) + +Move the result folder to **/opt/** directory for global use: + +注:arduino-1.6.6 为红色部分 + sudo mv arduino-1.6.6 /opt + +![move-opt](http://ubuntuhandbook.org/wp-content/uploads/2015/11/move-opt.jpg) + +**3.** Now the IDE is ready for use with bundled Java. But it would be good to create desktop icon/launcher for the application: + +Navigate to install folder: + +注:arduino-1.6.6 为红色部分 + cd /opt/arduino-1.6.6/ + +Give executable permission to install.sh script in that folder: + + chmod +x install.sh + +Finally run the script to install both desktop shortcut and launcher icon: + + ./install.sh + +In below picture I’ve combined 3 commands into one via “&&”: + +![install-desktop-icon](http://ubuntuhandbook.org/wp-content/uploads/2015/11/install-desktop-icon.jpg) + +Finally, launch Arduino IDE from Unity Dash, Application Launcher, or via Desktop shortcut. 
+ +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/11/install-arduino-ide-1-6-6-ubuntu/ + +作者:[Ji m][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:https://www.arduino.cc/en/Main/ReleaseNotes +[2]:https://www.arduino.cc/en/Main/Software \ No newline at end of file From dbbc3efa6917508dd1db3057c86a86558fe61fe1 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 1 Dec 2015 15:54:33 +0800 Subject: [PATCH 069/160] =?UTF-8?q?20151201-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/talk/20151201 Cinnamon 2.8 Review.md | 87 ++++++++++++++++++++ 1 file changed, 87 insertions(+) create mode 100644 sources/talk/20151201 Cinnamon 2.8 Review.md diff --git a/sources/talk/20151201 Cinnamon 2.8 Review.md b/sources/talk/20151201 Cinnamon 2.8 Review.md new file mode 100644 index 0000000000..0c44eba14f --- /dev/null +++ b/sources/talk/20151201 Cinnamon 2.8 Review.md @@ -0,0 +1,87 @@ +Cinnamon 2.8 Review +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2-8-featured.jpg) + +Other than Gnome and KDE, Cinnamon is another desktop environment that is used by many people. It is made by the same team that produces Linux Mint (and ships with Linux Mint) and can also be installed on several other distributions. The latest version of this DE – Cinnamon 2.8 – was released earlier this month, and it brings a host of bug fixes and improvements as well as some new features. + +I’m going to go over the major improvements made in this release as well as how to update to Cinnamon 2.8 or install it for the first time. 
+ +### Improvements to Applets ### + +There are several improvements to already existing applets for the panel. + +#### Sound Applet #### + +![cinnamon-28-sound-applet](https://www.maketecheasier.com/assets/uploads/2015/11/rsz_cinnamon-28-sound-applet.jpg) + +The Sound applet was revamped and now displays track information as well as the media controls on top of the cover art of the audio file. For music players with seeking support (such as Banshee), a progress bar will be displayed in the same region which you can use to change the position of the audio track. Right-clicking on the applet in the panel will display the options to mute input and output devices. + +#### Power Applet #### + +The Power applet now displays the status of each of the connected batteries and devices using the manufacturer’s data instead of generic names. + +#### Window Thumbnails #### + +![cinnamon-2.8-window-thumbnails](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2.8-window-thumbnails.png) + +Cinnamon 2.8 brings the option to show window thumbnails when hovering over the window list in the panel. You can turn it off if you don’t like it, though. + +#### Workspace Switcher Applet #### + +![cinnamon-2.8-workspace-switcher](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2.8-workspace-switcher.png) + +Adding the Workspace switcher applet to your panel will show you a visual representation of your workspaces with little rectangles embedded inside to show the position of your windows. + +#### System Tray #### + +Cinnamon 2.8 brings support for app indicators in the system tray. You can easily disable this in the settings which will force affected apps to fall back to using status icons instead. + +### Visual Improvements ### + +A host of visual improvements were made in Cinnamon 2.8. The classic and preview Alt + Tab switchers were polished with noticeable improvements, while the Alt + F2 dialog received bug fixes and better auto completion for commands. 
+ +Also, the issue with the traditional animation effect for minimizing windows is now sorted and works with multiple panels. + +### Nemo Improvements ### + +![cinnamon-2.8-nemo](https://www.maketecheasier.com/assets/uploads/2015/11/rsz_cinnamon-28-nemo.jpg) + +The default file manager for Cinnamon also received several bug fixes and has a new “Quick-rename” feature for renaming files and directories. This works by clicking the file or directory twice with a short pause in between to rename the files. + +Nemo also detects issues with thumbnails automatically and prompts you to quickly fix them. + +### Other Notable Improvements ### + +- Applets now reload themselves automatically once they are updated. +- Support for multiple monitors was improved significantly. +- Dialog windows have been improved and now attach themselves to their parent windows. +- HiDPI detection has been improved. +- QT5 applications now look more native and use the default GTK theme. +- Window management and rendering performance has been improved. +- There are various bugfixes. + +### How to Get Cinnamon 2.8 ### + +If you’re running Linux Mint you will get Cinnamon 2.8 as part of the upgrade to Linux Mint 17.3 “Rosa” Cinnamon Edition. The BETA release is already out, so you can grab that if you’d like to get your hands on the new software immediately. + +For Arch users, Cinnamon 2.8 is already in the official Arch repositories, so you can just update your packages and do a system-wide upgrade to get the latest version. + +Finally, for Ubuntu users, you can install or upgrade to Cinnamon 2.8 by running in turn the following commands: + + sudo add-apt-repository -y ppa:moorkai/cinnamon + sudo apt-get update + sudo apt-get install cinnamon + +Have you tried Cinnamon 2.8? What do you think of it? 
+ +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/cinnamon-2-8-review/ + +作者:[Ayo Isaiah][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/ayoisaiah/ \ No newline at end of file From ccab0d95067d9880eaee1e3c8d8388b4efd5b16b Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 1 Dec 2015 16:01:54 +0800 Subject: [PATCH 070/160] =?UTF-8?q?20151201-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...t email client with encrypted passwords.md | 138 ++++++++++++++++++ 1 file changed, 138 insertions(+) create mode 100644 sources/tech/20151201 How to use Mutt email client with encrypted passwords.md diff --git a/sources/tech/20151201 How to use Mutt email client with encrypted passwords.md b/sources/tech/20151201 How to use Mutt email client with encrypted passwords.md new file mode 100644 index 0000000000..5d7414588c --- /dev/null +++ b/sources/tech/20151201 How to use Mutt email client with encrypted passwords.md @@ -0,0 +1,138 @@ +How to use Mutt email client with encrypted passwords +================================================================================ +Mutt is an open-source email client written for Linux/UNIX terminal environment. Together with [Alpine][1], Mutt has the most devoted followers among Linux command-line enthusiasts, and for good reasons. Think of anything you expect from an email client, and Mutt has it: multi-protocol support (e.g., POP3, IMAP and SMTP), S/MIME and PGP/GPG integration, threaded conversation, color coding, customizable macros/keybindings, and so on. 
Besides, terminal-based Mutt is a lightweight alternative for accessing emails compared to bulky web browser-based (e.g., Gmail, Ymail) or GUI-based email clients (e.g., Thunderbird, MS Outlook). + +When you want to use Mutt to access or send emails via corporate SMTP/IMAP servers or replace web mail services, one concern you may have is how to protect your email credentials (e.g., SMTP/IMAP passwords) stored in a plain-text Mutt configuration file (~/.muttrc). + +For those who are security-conscious, there is actually an easy way to **encrypt Mutt configuration** to prevent such risk. In this tutorial, I describe how you can encrypt sensitive Mutt configuration such as SMTP/IMAP passwords using GnuPG (GPG), an open-source implementation of OpenPGP. + +### Step One (Optional): Create GPG Key ### + +Since we are going to use GPG to encrypt Mutt configuration, the first step is to create a GPG key (public/private keypair) if you don't have one. If you do, skip this step. + +To create a new GPG key, type the following. + + $ gpg --gen-key + +Choose the key type (RSA), keysize (2048 bits), and expiration date (0: no expiration). When prompted for a user ID, type your name (Dan Nanni) and email address (myemail@email.com) to be associated with the private/public keypair. Finally, type a passphrase to protect your private key. + +![](https://c2.staticflickr.com/6/5726/22808727824_7735f11157_c.jpg) + +Generating a GPG key requires a lot of random bytes for entropy, so make sure to perform some random actions on your system (e.g., type on a keyboard, move a mouse or read/write a disk) during key generation. Depending on keysize, it may take a few minutes or more to generate a GPG key. + +![](https://c1.staticflickr.com/1/644/23328597612_6ac5a29944_c.jpg) + +### Step Two: Encrypt Sensitive Mutt Configuration ### + +Next, create a new text file in ~/.mutt directory, and put in the file any sensitive Mutt configuration you want to hide. 
In this example, I specify SMTP/IMAP passwords. + + $ mkdir ~/.mutt + $ vi ~/.mutt/password + +---------- + + set smtp_pass="XXXXXXX" + set imap_pass="XXXXXXX" + +Now encrypt this file with gpg using your public key as follows. + + $ gpg -r myemail@email.com -e ~/.mutt/password + +This will create ~/.mutt/password.gpg, which is a GPG-encrypted version of the original file. + +Go ahead and remove ~/.mutt/password, leaving only the GPG-encrypted version. + +### Step Three: Create Full Mutt Configuration ### + +Now that you have encrypted sensitive Mutt configuration in a separate file, you can specify the rest of your Mutt configuration in ~/.muttrc. Then add the following line at the end of ~/.muttrc. + + source "gpg -d ~/.mutt/password.gpg |" + +This line will decrypt ~/.mutt/password.gpg when you launch Mutt, and apply the decrypted content to your Mutt configuration. + +The following shows an example of full Mutt configuration which allows you to access Gmail with Mutt, without revealing your SMTP/IMAP passwords. Replace yourgmailaccount with your Gmail ID. + + set from = "yourgmailaccount@gmail.com" + set realname = "Your Name" + set smtp_url = "smtp://yourgmailaccount@smtp.gmail.com:587/" + set imap_user = "yourgmailaccount@gmail.com" + set folder = "imaps://imap.gmail.com:993" + set spoolfile = "+INBOX" + set postponed = "+[Google Mail]/Drafts" + set trash = "+[Google Mail]/Trash" + set header_cache =~/.mutt/cache/headers + set message_cachedir =~/.mutt/cache/bodies + set certificate_file =~/.mutt/certificates + set move = no + set imap_keepalive = 900 + + # encrypted IMAP/SMTP passwords + source "gpg -d ~/.mutt/password.gpg |" + +### Step Four (Optional): Configure GPG-agent ### + +At this point, you will be able to use Mutt with encrypted IMAP/SMTP passwords. However, every time you launch Mutt, you will first be prompted to enter a GPG passphrase in order to decrypt IMAP/SMTP passwords using your private key. 
+ +![](https://c2.staticflickr.com/6/5667/23437064775_20c874940f_c.jpg) + +If you want to avoid such GPG passphrase prompts, you can set up gpg-agent. Running as a daemon, gpg-agent securely caches your GPG passphrase, so that gpg automatically obtains your GPG passphrase from gpg-agent without you typing it manually. If you are using Linux desktop, you can use desktop-specific ways to configure something equivalent to gpg-agent, for example, gnome-keyring-daemon for GNOME desktop. + +You can install gpg-agent on Debian-based systems with: + +$ sudo apt-get install gpg-agent + +gpg-agent comes pre-installed on Red Hat based systems. + +Now add the following to your .bashrc file. + + envfile="$HOME/.gnupg/gpg-agent.env" + if [[ -e "$envfile" ]] && kill -0 $(grep GPG_AGENT_INFO "$envfile" | cut -d: -f 2) 2>/dev/null; then + eval "$(cat "$envfile")" + else + eval "$(gpg-agent --daemon --allow-preset-passphrase --write-env-file "$envfile")" + fi + export GPG_AGENT_INFO + +Reload .bashrc, or simply log out and log back in. + + $ source ~/.bashrc + +Now confirm that GPG_AGENT_INFO environment variable is set properly. + + $ echo $GPG_AGENT_INFO + +---------- + + /tmp/gpg-0SKJw8/S.gpg-agent:942:1 + +Also, when you type gpg-agent command, you should see the following message. + + $ gpg-agent + +---------- + + gpg-agent: gpg-agent running and available + +Once gpg-agent is up and running, it will cache your GPG passphrase the first time you type it at the passphrase prompt. Subsequently when you launch Mutt multiple times, you won't be prompted for a GPG passphrase (as long as gpg-agent is up and the cache entry does not expire). + +![](https://c1.staticflickr.com/1/664/22809928093_3be57698ce_c.jpg) + +### Conclusion ### + +In this tutorial, I presented a way to encrypt sensitive Mutt configuration such as SMTP/IMAP passwords using GnuPG. 
Note that if you want to use GnuPG within Mutt to encrypt or sign your email message, you can refer to the [official guide][2] on using GPG with Mutt. + +If you know of any security tips for using Mutt, feel free to share it. + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/mutt-email-client-encrypted-passwords.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://xmodulo.com/gmail-command-line-linux-alpine.html +[2]:http://dev.mutt.org/trac/wiki/MuttGuide/UseGPG \ No newline at end of file From 335f3ce78f8e85275ca0b80dfda292e24c1e8d5e Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 1 Dec 2015 18:18:49 +0800 Subject: [PATCH 071/160] translating --- ...1 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md b/sources/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md index 6544ad3919..fdfb154aa6 100644 --- a/sources/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md +++ b/sources/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md @@ -1,3 +1,5 @@ +Translating + How to Install The Latest Arduino IDE 1.6.6 in Ubuntu ================================================================================ ![Install latest Arduino in Ubuntu](http://ubuntuhandbook.org/wp-content/uploads/2015/11/arduino-icon.png) @@ -74,4 +76,4 @@ via: http://ubuntuhandbook.org/index.php/2015/11/install-arduino-ide-1-6-6-ubunt [a]:http://ubuntuhandbook.org/index.php/about/ [1]:https://www.arduino.cc/en/Main/ReleaseNotes -[2]:https://www.arduino.cc/en/Main/Software \ No newline at end of file +[2]:https://www.arduino.cc/en/Main/Software From 
33e17c80a45338d053e891afb65b2f660ca7e000 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 1 Dec 2015 18:40:52 +0800 Subject: [PATCH 072/160] translated --- ... The Latest Arduino IDE 1.6.6 in Ubuntu.md | 35 +++++++++---------- 1 file changed, 17 insertions(+), 18 deletions(-) rename {sources => translated}/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md (55%) diff --git a/sources/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md b/translated/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md similarity index 55% rename from sources/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md rename to translated/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md index fdfb154aa6..668b9c3a80 100644 --- a/sources/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md +++ b/translated/tech/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md @@ -1,68 +1,67 @@ -Translating - -How to Install The Latest Arduino IDE 1.6.6 in Ubuntu +如何在Ubuntu中安装最新的Arduino IDE 1.6.6 ================================================================================ ![Install latest Arduino in Ubuntu](http://ubuntuhandbook.org/wp-content/uploads/2015/11/arduino-icon.png) -> Quick tutorial shows you how to the latest Arduino IDE, so far its version 1.6.6, in all current Ubuntu releases. +> 本篇教程会教你如何在目前所有的Ubuntu发行版中安装最新的 Arduino IDE,目前最新的版本为1.6.6。 -The open-source Arduino IDE has reached the 1.6.6 release recently with lots of changes. The new release has switched to Java 8, which is now both bundled and needed for compiling the IDE. See the [RELEASE NOTE][1] for details.
+开源的Arduino IDE发布了1.6.6,并带来了很多的改变。新的发布已经切换到Java 8,它与IDE绑定并且在编译时需要它。具体见[RELEASE NOTE][1]。 ![Arduino 1.6.6 in Ubuntu 15.10](http://ubuntuhandbook.org/wp-content/uploads/2015/11/arduino-ubuntu.jpg) -For those who don’t want to use the old 1.0.5 version available in Software Center, you can always follow below steps to install Arduino in all Ubuntu releases: +对于那些不想使用软件中心的1.0.5旧版本的人而言,你可以使用下面的步骤在所有的Ubuntu发行版中安装Arduino。 注:下面这个说明下面的代码颜色,这个发布的时候要对照一下原文,写点说明,因为颜色在md里标识不出来 -> **Replace the words in red for future releases** **1.** Download the latest packages, **Linux 32-bit or Linux 64-bit**, from the official link below: +**1.** 从下面的官方链接下载最新的包 **Linux 32-bit 或者 Linux 64-bit**。 - [www.arduino.cc/en/Main/Software][2] -Don’t know your OS type? Go and check out System Settings -> Details -> Overview. +不知道你系统的类型?进入系统设置->详细->概览。 **2.** Open **terminal** from Unity Dash, App Launcher, or via Ctrl+Alt+T keys. When it opens, run below commands one by one: +**2.** 从Unity Dash、App Launcher或者Ctrl+Alt+T打开终端。打开后,一个个运行下面的命令: -Navigate to your downloads folder: +进入下载文件夹: cd ~/Downloads ![navigate-downloads](http://ubuntuhandbook.org/wp-content/uploads/2015/11/navigate-downloads.jpg) -Decompress the downloaded archive with tar command: +使用tar命令解压 注:arduino-1.6.6-*.tar.xz 为红色部分 tar -xvf arduino-1.6.6-*.tar.xz ![extract-archive](http://ubuntuhandbook.org/wp-content/uploads/2015/11/extract-archive.jpg) -Move the result folder to **/opt/** directory for global use: +将解压后的文件移动到**/opt/**下: 注:arduino-1.6.6 为红色部分 sudo mv arduino-1.6.6 /opt ![move-opt](http://ubuntuhandbook.org/wp-content/uploads/2015/11/move-opt.jpg) **3.** Now the IDE is ready for use with bundled Java.
But it would be good to create desktop icon/launcher for the application: +**3.** 现在IDE已经与最新的Java绑定使用了。但是最好为程序设置一个桌面图标/启动方式: -Navigate to install folder: +进入安装目录 注:arduino-1.6.6 为红色部分 cd /opt/arduino-1.6.6/ -Give executable permission to install.sh script in that folder: +在这个目录给install.sh可执行权限 chmod +x install.sh -Finally run the script to install both desktop shortcut and launcher icon: +最后运行脚本同时安装桌面快捷方式和启动图标: ./install.sh -In below picture I’ve combined 3 commands into one via “&&”: +下图中我用“&&”同时运行这三个命令: ![install-desktop-icon](http://ubuntuhandbook.org/wp-content/uploads/2015/11/install-desktop-icon.jpg) -Finally, launch Arduino IDE from Unity Dash, Application Launcher, or via Desktop shorcut. +最后从Unity Dash、程序启动器或者桌面快捷方式运行Arduino IDE。 -------------------------------------------------------------------------------- From 48b50dc0b4978074ff98df66adbc0926ac980fa7 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 1 Dec 2015 18:45:51 +0800 Subject: [PATCH 073/160] Update 20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md --- ...s--How to remove trailing whitespaces in a file on Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md b/sources/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md index 84c04e7436..e7c63a47c4 100644 --- a/sources/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md +++ b/sources/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md @@ -1,3 +1,4 @@ +translation by strugglingyouth Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux ================================================================================ > Question: I have a text file in which I need to remove all trailing whitespsaces (e.g.,
spaces and tabs) in each line for formatting purpose. Is there a quick and easy Linux command line tool I can use for this? @@ -50,4 +51,4 @@ via: http://ask.xmodulo.com/remove-trailing-whitespaces-linux.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file +[a]:http://ask.xmodulo.com/author/nanni From 43ce9400bb4b6cf44a794ae704c4c654f5c9c6c3 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 2 Dec 2015 00:35:14 +0800 Subject: [PATCH 074/160] Delete 20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md --- ...trailing whitespaces in a file on Linux.md | 54 ------------------- 1 file changed, 54 deletions(-) delete mode 100644 sources/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md diff --git a/sources/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md b/sources/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md deleted file mode 100644 index e7c63a47c4..0000000000 --- a/sources/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md +++ /dev/null @@ -1,54 +0,0 @@ -translation by strugglingyouth -Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux -================================================================================ -> Question: I have a text file in which I need to remove all trailing whitespsaces (e.g., spaces and tabs) in each line for formatting purpose. Is there a quick and easy Linux command line tool I can use for this? - -When you are writing code for your program, you must understand that there are standard coding styles to follow. 
For example, "trailing whitespaces" are typically considered evil because when they get into a code repository for revision control, they can cause a lot of problems and confusion (e.g., "false diffs"). Many IDEs and text editors are capable of highlighting and automatically trimming trailing whitepsaces at the end of each line. - -Here are a few ways to **remove trailing whitespaces in Linux command-line environment**. - -### Method One ### - -A simple command line approach to remove unwanted whitespaces is via sed. - -The following command deletes all spaces and tabs at the end of each line in input.java. - - $ sed -i 's/[[:space:]]*$//' input.java - -If there are multiple files that need trailing whitespaces removed, you can use a combination of find and sed. For example, the following command deletes trailing whitespaces in all *.java files recursively found in the current directory as well as all its sub-directories. - - $ find . -name "*.java" -type f -print0 | xargs -0 sed -i 's/[[:space:]]*$//' - -### Method Two ### - -Vim text editor is able to highlight and trim whitespaces in a file as well. - -To highlight all trailing whitespaces in a file, open the file with Vim editor and enable text highlighting by typing the following in Vim command line mode. - - :set hlsearch - -Then search for trailing whitespaces by typing: - - /\s\+$ - -This will show all trailing spaces and tabs found throughout the file. - -![](https://c1.staticflickr.com/1/757/23198657732_bc40e757b4_b.jpg) - -Then to clean up all trailing whitespaces in a file with Vim, type the following Vim command. - - :%s/\s\+$// - -This command means substituting all whitespace characters found at the end of the line (\s\+$) with no character. 
- -------------------------------------------------------------------------------- -via: http://ask.xmodulo.com/remove-trailing-whitespaces-linux.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ask.xmodulo.com/author/nanni From 35d59d293cd41e12f1e508a1d9ef08c6b890a3b6 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 2 Dec 2015 00:38:36 +0800 Subject: [PATCH 075/160] Create 20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md --- ...trailing whitespaces in a file on Linux.md | 60 +++++++++++++++++++ 1 file changed, 60 insertions(+) create mode 100644 translated/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md diff --git a/translated/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md b/translated/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md new file mode 100644 index 0000000000..f7644bcabb --- /dev/null +++ b/translated/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md @@ -0,0 +1,60 @@ + +如何在 Ubuntu 16.04,15.10,14.04 中安装 GIMP 2.8.16 +================================================================================ ![GIMP 2.8.16](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-icon.png) +GIMP 图像编辑器 2.8.16 版本在其20岁生日时发布了。下面介绍如何在 Ubuntu 16.04、Ubuntu 15.10、Ubuntu 14.04、Ubuntu 12.04 及其衍生版本(如 Linux Mint 17.x/13、Elementary OS Freya)中安装或升级 GIMP。 +GIMP 2.8.16 支持 OpenRaster 文件中的图层组,修复了 PSD 文件的图层组支持,带来了各种用户界面改进,修复了 OS X 上的构建问题,还有许多新的变化。请阅读 [官方声明][1]。 + ![GIMP image editor 2.8,16](http://ubuntuhandbook.org/wp-content/uploads/2014/08/gimp-2-8-14.jpg) + +### 如何安装或升级: ### + +多亏了 Otto Meier,[Ubuntu PPA][2] 中最新的 GIMP 包可用于当前所有的 Ubuntu 版本和其衍生版。 + +**1. 
添加 GIMP PPA** 从 Unity Dash 中打开终端,或通过 Ctrl+Alt+T 快捷键。在它打开后,粘贴下面的命令并回车: sudo add-apt-repository ppa:otto-kesselgulasch/gimp ![add GIMP PPA](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-ppa.jpg) 输入你的密码,密码不会在终端显示,然后回车继续。 **2. 安装或升级编辑器** 在添加了 PPA 后,启动 **Software Updater**(在 Mint 中是 Software Manager)。检查更新后,你将看到 GIMP 的更新列表。点击 “Install Now” 进行升级。 ![upgrade-gimp2816](http://ubuntuhandbook.org/wp-content/uploads/2015/11/upgrade-gimp2816.jpg) 对于那些喜欢使用 Linux 命令行的人,可以按顺序执行下面的命令,刷新仓库的缓存然后安装 GIMP: sudo apt-get update sudo apt-get install gimp **3. (可选的) 卸载** 如果你想卸载或降级 GIMP 图像编辑器,可以从软件中心直接删除它,或者按顺序运行下面的命令来清除 PPA 并降级软件: sudo apt-get install ppa-purge sudo ppa-purge ppa:otto-kesselgulasch/gimp 就这样。玩得愉快! -------------------------------------------------------------------------------- via: http://ubuntuhandbook.org/index.php/2015/11/how-to-install-gimp-2-8-16-in-ubuntu-16-04-15-10-14-04/ 作者:[Ji m][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://ubuntuhandbook.org/index.php/about/ [1]:http://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/ [2]:https://launchpad.net/~otto-kesselgulasch/+archive/ubuntu/gimp From bfba7ad3400da4879bb90de7b7a0dadaa29881e4 Mon Sep 17 00:00:00 2001 From: ivo wang Date: Wed, 2 Dec 2015 00:52:25 +0800 Subject: [PATCH 076/160] Update 20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md --- ...tihomed ISC DHCP Server on Debian Linux.md | 91 +++++++++++++++++-- 1 file changed, 85 insertions(+), 6 deletions(-) diff --git a/sources/tech/20150410 How to Install 
and Configure Multihomed ISC DHCP Server on Debian Linux.md +++ b/sources/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md @@ -1,159 +1,238 @@ How to Install and Configure Multihomed ISC DHCP Server on Debian Linux +在 Debian Linux 上安装和配置多宿主 ISC DHCP 服务器 ================================================================================ Dynamic Host Control Protocol (DHCP) offers an expedited method for network administrators to provide network layer addressing to hosts on a constantly changing, or dynamic, network. One of the most common server utilities to offer DHCP functionality is ISC DHCP Server. The goal of this service is to provide hosts with the necessary network information to be able to communicate on the networks in which the host is connected. Information that is typically served by this service can include: DNS server information, network address (IP), subnet mask, default gateway information, hostname, and much more. +动态主机控制协议(DHCP)给网络管理员提供一种便捷的方式,为不断变化的网络主机或是动态网络提供网络层地址。其中最常用的DHCP服务工具是 ISC DHCP Server。DHCP服务的目的是给主机提供必要的网络信息,以便能够和其他连接在网络中的主机互相通信。DHCP服务一般包括以下信息:DNS服务器信息,网络地址(IP),子网掩码,默认网关信息,主机名等等。 This tutorial will cover ISC-DHCP-Server version 4.2.4 on a Debian 7.7 server that will manage multiple virtual local area networks (VLAN) but can very easily be applied to a single network setup as well. +本节教程介绍4.2.4版的ISC-DHCP-Server如何在Debian 7.7上管理多个虚拟局域网(VLAN),但它也可以很容易地用于单一网络。 The test network that this server was setup on has traditionally relied on a Cisco router to manage the DHCP address leases. The network currently has 12 VLANs needing to be managed by one centralized server. By moving this responsibility to a dedicated server, the router can regain resources for more important tasks such as routing, access control lists, traffic inspection, and network address translation. 
+测试用的网络以前是通过思科路由器以传统的方式来管理DHCP租约地址的,目前该网络有12个VLAN需要由一台集中式服务器来管理。把DHCP这个责任转移到一台专用服务器上,路由器就可以回收资源,用在更重要的任务上,比如路由寻址、访问控制列表、流量监测以及网络地址转换等。 The other benefit to moving DHCP to a dedicated server will, in a later guide, involve setting up Dynamic Domain Name Service (DDNS) so that new host’s host-names will be added to the DNS system when the host requests a DHCP address from the server. - +将DHCP服务迁移到专用服务器的另一个好处(后续教程会讲到)是可以搭建动态域名服务(DDNS),这样当主机向服务器请求DHCP地址的时候,新主机的主机名将被添加到DNS系统里面。 ### Step 1: Installing and Configuring ISC DHCP Server ### +### 第一步:安装和配置 ISC DHCP Server ### 1. To start the process of creating this multi-homed server, the ISC software needs to be installed via the Debian repositories using the ‘apt‘ utility. As with all tutorials, root or sudo access is assumed. Please make the appropriate modifications to the following commands. +1. 创建这个多宿主服务器的过程中,需要用apt工具来安装Debian软件仓库中的ISC软件。与其他教程一样,需要使用root或者sudo访问权限。请适当的修改以使用下面的命令。(译者注:下面中括号里面是注释,使用的时候请删除,#表示使用root权限) - - # apt-get install isc-dhcp-server [安装 ISC DHCP Server 软件] - # dpkg --get-selections isc-dhcp-server [确认软件已经成功安装] - # dpkg -s isc-dhcp-server [用另一种方式确认成功安装] ![Install ISC DHCP Server in Debian](http://www.tecmint.com/wp-content/uploads/2015/04/Install-ISC-DHCP-Server.jpg) 2. Now that the server software is confirmed installed, it is now necessary to configure the server with the network information that it will need to hand out. At the bare minimum, the administrator needs to know the following information for a basic DHCP scope: +2. 
现在已经确认服务软件安装完毕,接下来需要为服务器配置它将要分发的网络信息。作为管理员,你最起码应该了解以下基本的DHCP信息: - The network addresses +- 网络地址 - The subnet masks +- 子网掩码 - The range of addresses to be dynamically assigned +- 动态分配的地址范围 Other useful information to have the server dynamically assign includes: +其他可以让服务器动态分配的有用信息包括: - Default gateway +- 默认网关 - DNS server IP addresses +- DNS服务器IP地址 - The Domain Name +- 域名 - Host name +- 主机名 - Network Broadcast addresses +- 网络广播地址 These are merely a few of the many options that the ISC DHCP server can handle. To get a complete list as well as a description of each option, enter the following command after installing the package: +这些只是ISC DHCP服务器能处理的众多选项中的一小部分。要查看所有选项及其描述,请在安装好软件后输入以下命令: # man dhcpd.conf 3. Once the administrator has concluded all the necessary information for this server to hand out it is time to configure the DHCP server as well as the necessary pools. Before creating any pools or server configurations though, the DHCP service must be configured to listen on one of the server’s interfaces. +3. 一旦管理员确定了这台服务器需要分发出去的必要信息,就可以配置DHCP服务器和必要的地址池了。在配置任何地址池或服务器配置之前,必须先把DHCP服务配置为监听这台服务器上的一个接口。 + On this particular server, a NIC team has been setup and DHCP will listen on the teamed interfaces which were given the name `'bond0'`. Be sure to make the appropriate changes given the server and environment in which everything is being configured. The defaults in this file are okay for this tutorial. +在这台特定的服务器上,已经配置好了网卡绑定,DHCP将监听名为`'bond0'`的绑定接口。请根据你的服务器和网络环境做适当的修改。这个文件里的默认配置对本教程来说已经足够。 ![Configure ISC DHCP Network](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DHCP-Network.jpg) This line will instruct the DHCP service to listen for DHCP traffic on the specified interface(s). At this point, it is time to modify the main configuration file to enable the DHCP pools on the necessary networks. The main configuration file is located at /etc/dhcp/dhcpd.conf. 
Open the file with a text editor to begin: +这行指定的是DHCP服务监听接口(一个或多个)上的DHCP流量。接下来需要修改主配置文件,在所需的网络上启用DHCP地址池。主配置文件位于/etc/dhcp/dhcpd.conf。用文本编辑器打开这个文件: # nano /etc/dhcp/dhcpd.conf This file is the configuration for the DHCP server specific options as well as all of the pools/hosts one wishes to configure. The top of the file starts of with a ‘ddns-update-style‘ clause and for this tutorial it will remain set to ‘none‘ however in a future article, Dynamic DNS will be covered and ISC-DHCP-Server will be integrated with BIND9 to enable host name to IP address updates. +这个配置文件包含DHCP服务器的各种选项,以及所有想要配置的地址池/主机。文件顶部有‘ddns-update-style‘这样一句,在本教程中它保持设置为‘none‘。在以后介绍动态DNS的教程中,ISC-DHCP-Server 将与 BIND9 整合,以实现主机名到IP地址的更新。 4. The next section is typically the area where and administrator can configure global network settings such as the DNS domain name, default lease time for IP addresses, subnet-masks, and much more. Again to know more about all the options be sure to read the man page for the dhcpd.conf file. +4. 接下来的部分是管理员配置全局网络设置的区域,如DNS域名、IP地址的默认租约时间、子网掩码等等。想更多地了解所有的选项,请阅读dhcpd.conf的man手册,命令如下: + # man dhcpd.conf For this server install, there were a couple of global network options that were configured at the top of the configuration file so that they wouldn’t have to be implemented in every single pool created. +对于这台服务器,我们在配置文件顶部配置了一些全局网络设置,这样就不用到每个地址池中单独去设置了。 + ![Configure ISC DDNS](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DDNS.png) Lets take a moment to explain some of these options. While they are configured globally in this example, all of them can be configured on a per pool basis as well. 
+下面我们花一点时间来解释一下这些选项。虽然在本例中它们是全局配置的,但每一项也都可以针对单独的地址池来配置。 - option domain-name “comptech.local”; – All hosts that this DHCP server hosts, will be a member of the DNS domain name “comptech.local” +- option domain-name “comptech.local”; – 所有使用这台DHCP服务器的主机,将成为DNS域名“comptech.local”的一员 - option domain-name-servers 172.27.10.6; DHCP will hand out DNS server IP of 172.27.10.6 to all of the hosts on all of the networks it is configured to host. +- option domain-name-servers 172.27.10.6; DHCP 将向所有配置好的网络主机分发DNS服务器地址172.27.10.6 - option subnet-mask 255.255.255.0; – The subnet mask handed out to every network will be a 255.255.255.0 or a /24 +- option subnet-mask 255.255.255.0; – 分发给每个网络的子网掩码都是255.255.255.0(即 /24) - default-lease-time 3600; – This is the time in seconds that a lease will automatically be valid. The host can re-request the same lease if time runs out or if the host is done with the lease, they can hand the address back early. +- default-lease-time 3600; – 租约自动生效的时间,单位是秒。租约到期后主机可以重新申请同一租约;如果主机不再需要这个租约,也可以提前交回地址。 - max-lease-time 86400; – This is the maximum amount of time in seconds a lease can be held by a host. +- max-lease-time 86400; – 这是一台主机能够持有租约的最长时间,单位为秒。 - ping-check true; – This is an extra test to ensure that the address the server wants to assign out isn’t in use by another host on the network already. +- ping-check true; – 这是一个额外的测试,以确保服务器想要分配的地址没有被网络内另一台主机使用。 - ping-timeout; – This is how long in second the server will wait for a response to a ping before assuming the address isn’t in use. +- ping-timeout; – 服务器在判定一个地址未被使用之前,等待ping响应的时长,单位是秒。 - ignore client-updates; For now this option is irrelevant since DDNS has been disabled earlier in the configuration file but when DDNS is operating, this option will ignore a hosts to request to update its host-name in DNS. +- ignore client-updates; 目前这个选项无关紧要,因为前面已在配置文件中禁用了DDNS;但当DDNS运行时,此选项会忽略主机在DNS中更新其主机名的请求。 
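+把上面解释的全局选项组合起来,dhcpd.conf 顶部大致会是下面这样一个片段(这只是按上文数值拼出的示例草稿:域名和DNS地址沿用上文的comptech.local和172.27.10.6,其中ping-timeout的取值2只是一个假设,请按实际环境调整):

    option domain-name "comptech.local";
    option domain-name-servers 172.27.10.6;
    option subnet-mask 255.255.255.0;
    default-lease-time 3600;
    max-lease-time 86400;
    ping-check true;
    ping-timeout 2;
    ignore client-updates;

+修改配置文件之后,一般可以先用 dhcpd 自带的语法检查(例如运行 dhcpd -t)确认文件没有语法错误,再重启服务。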
This line means that if this server is to be the server that hands out addresses for the networks configured in this file, then uncomment the authoritative stanza. +5. 文件中的下面一行是权威DHCP所在行。这行的意义是如果服务器是为文件中所配置的网络分发地址的服务器,那么就取消注释权威节(uncomment the authoritative stanza.)。 This server will be the only authority on all the networks it manages so the global authoritative stanza was un-commented by removing the ‘#’ in front of the keyword authoritative. +通过去掉关键字authoritative 前面的‘#’,取消注释全局权威节。那么该服务器将是它管理网络里面的唯一权威。 ![Enable ISC Authoritative](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-authoritative.png) Enable ISC Authoritative +开启 ISC Authoritative By default the server is assumed to NOT be an authority on the network. The rationale behind this is security. If someone unknowingly configures the DHCP server improperly or on a network they shouldn’t, it could cause serious connectivity issues. This line can also be used on a per network basis. This means that if the server is not the entire network’s DHCP server, the authoritative line can instead be used on a per network basis rather than in the global configuration as seen in the above screen-shot. +默认情况下服务器被假定为不是网络上的权威。这样做的理由是为了安全。如果有人因为不了解DHCP服务的配置导致配置不当或在一个不该出现的网络里面,这都将会造成非常严重的重连接问题。这行还可用在每个网络中单独使用。这意味着如果该服务器不是整个网络的DHCP服务器,authoritative行可以用在每个单独的网络中,而不是像上面截图显示中的全局配置那样。 6. The next step is to configure all of the DHCP pools/networks that this server will manage. For brevities sake, this guide will only walk through one of the pools configured. The administrator will need to have gathered all of the necessary network information (ie domain name, network addresses, how many addresses can be handed out, etc). +6. 
下一步是配置这台服务器将要管理的所有DHCP地址池/网络。简短起见,本教程只演示配置其中一个地址池。管理员需要事先收集好所有必要的网络信息(比如域名、网络地址、有多少地址能够被分发等等)。 For this pool the following information was obtained from the network administrator: network id of 172.27.60.0, subnet mask of 255.255.255.0 or a /24, the default gateway for the subnet is 172.27.60.1, and a broadcast address of 172.27.60.255. This information is important to building the appropriate network stanza in the dhcpd.conf file. Without further ado, let’s open the configuration file again using a text editor and then add the new network to the server. This must be done with root/sudo! +以下是这个地址池所用到的、由网络管理员收集整理的信息:网络ID 172.27.60.0,子网掩码 255.255.255.0(即 /24),子网默认网关 172.27.60.1,广播地址 172.27.60.255。 # nano /etc/dhcp/dhcpd.conf ![Configure DHCP Pools and Networks](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-network.png) Configure DHCP Pools and Networks +配置DHCP的地址池和网络 This is the sample created to hand out IP addresses to a network that is used for the creation of VMWare virtual practice servers. The first line indicates the network as well as the subnet mask for that network. Then inside the brackets are all the options that the DHCP server should provide to hosts on this network. +这个例子是为用VMWare创建的虚拟实验服务器所在的网络分配IP地址。第一行标明的是该网络的网络地址及其子网掩码。大括号里面是DHCP服务器应该提供给当前网络上主机的所有选项。 The first stanza, range 172.27.60.50 172.27.60.254;, is the range of dynamically assignable addresses that the DHCP server can hand out to hosts on this network. Notice that the first 49 addresses aren’t in the pool and can be assigned statically to hosts if needed. +第一节,range 172.27.60.50 172.27.60.254;,是DHCP服务在这个网络上能够给主机动态分发的地址范围。注意前49个地址不在地址池内,如果需要,可以静态分配给主机。 + The second stanza, option routers 172.27.60.1; , hands out the default gateway address for all hosts on this network. +第二节,option routers 172.27.60.1;,给网络里面所有的主机分发默认网关地址。 + The last stanza, option broadcast-address 172.27.60.255;, indicates what the network’s broadcast address. This address SHOULD NOT be a part of the range stanza as the broadcast address can’t be assigned to a host. 
最后一节,option broadcast-address 172.27.60.255;,说明当前网络的广播地址。这个地址不能被包含在要分发的地址范围内,因为广播地址不能分配给一台主机。 Some pointers, be sure to always end the option lines with a semi-colon (;) and always make sure each network created is enclosed in curly braces { }. +有几点需要注意:每个选项行的结尾必须用分号(;)结束,每个创建的网络都必须包含在大括号{ }里面。 7. If there are more networks to create, continue creating them with their appropriate options and then save the text file. Once all configurations have been completed, the ISC-DHCP-Server process will need to be restarted in order to apply the new changes. This can be accomplished with the following command: +7. 如果还要创建更多的网络,就继续为它们配置相应的选项,然后保存文件。一旦配置完成,ISC-DHCP-Server进程需要重启来使新的更改生效。可以通过下面的命令来完成: # service isc-dhcp-server restart This will restart the DHCP service and then the administrator can check to see if the server is ready for DHCP requests several different ways. The easiest is to simply see if the server is listening on port 67 via the [lsof command][1]: +这条命令将重启DHCP服务,之后管理员能够用几种不同的方式来检查服务器是否已经可以处理DHCP请求。最简单的方法是通过lsof命令[1]来查看服务器是否在监听67端口,命令如下: + # lsof -i :67 ![Check DHCP Listening Port](http://www.tecmint.com/wp-content/uploads/2015/04/lsof.png) Check DHCP Listening Port +检查DHCP监听端口 This output indicates that the DHCPD (DHCP Server daemon) is running and listening on port 67. Port 67 in this output was actually converted to ‘bootps‘ due to a port number mapping for port 67 in /etc/services file. +这里的输出表明DHCPD(DHCP服务守护进程)正在运行并且监听67端口。由于/etc/services文件中67端口的端口映射,输出中的67端口实际上被显示成了“bootps”。 + This is very common on most systems. At this point, the server should be ready for network connectivity and can be confirmed by connecting a machine to the network and having it request a DHCP address from the server. +这在大多数系统中都很常见。此时,服务器应该已经为处理网络连接做好了准备,可以通过将一台主机接入网络并让它向服务器请求DHCP地址来验证服务是否正常。 + ### Step 2: Testing Client Connectivity ### +### 第二步:测试客户端连接 ### 8. 
Most systems now-a-days are using Network Manager to maintain network connections and as such the device should be pre-configured to pull DHCP when the interface is active. +8. 现在许多系统使用网络管理器(Network Manager)来维护网络连接,因此设备应该已预先配置为:只要接口处于活跃状态,就会自动获取DHCP地址。 + However on machines that aren’t using Network Manager, it may be necessary to manually attempt to pull a DHCP address. The next few steps will show how to do this as well as how to see whether the server is handing out addresses. +然而当一台设备没有使用网络管理器时,可能需要手动获取DHCP地址。下面的几步将展示如何做到这一点,以及如何查看服务器是否在分发地址。 + The ‘[ifconfig][2]‘ utility can be used to check an interface’s configuration. The machine used to test the DHCP server only has one network adapter and it is called ‘eth0‘. + ‘[ifconfig][2]‘工具能够用来检查接口的配置。用来测试DHCP服务器的这台设备只有一个网络适配器(网卡),名为‘eth0‘。 + # ifconfig eth0 ![Check Network Interface IP Address](http://www.tecmint.com/wp-content/uploads/2015/04/No-ip.png) Check Network Interface IP Address +检查网络接口IP地址 From this output, this machine currently doesn’t have an IPv4 address, great! Let’s instruct this machine to reach out to the DHCP server and request an address. This machine has the DHCP client utility known as ‘dhclient‘ installed. The DHCP client utility may very from system to system. +从输出结果上看,这台设备目前没有IPv4地址,很好,这样便于测试。让这台设备连接到DHCP服务器并发出一个请求。这台设备上安装了一个名为‘dhclient‘的DHCP客户端工具。这个客户端工具会因系统不同而不同。 # dhclient eth0 ![Request IP Address from DHCP](http://www.tecmint.com/wp-content/uploads/2015/04/IP.png) Request IP Address from DHCP +从DHCP请求IP地址 Now the `'inet addr:'` field shows an IPv4 address that falls within the scope of what was configured for the 172.27.60.0 network. Also notice that the proper broadcast address was handed out as well as subnet mask for this network. +现在 `'inet addr:'` 字段显示了属于172.27.60.0网络地址范围内的IPv4地址。另外要注意:该网络还分发了正确的广播地址和子网掩码。 + Things are looking promising but let’s check the server to see if it was actually the place where this machine received this new IP address. 
To accomplish this task, the server’s system log file will be consulted. While the entire log file may contain hundreds of thousands of entries, only a few are necessary for confirming that the server is working properly. Rather than using a full text editor, this time a utility known as ‘tail‘ will be used to only show the last few lines of the log file. +情况看起来不错,但还需要检查一下服务器,确认这台设备的新IP地址确实是它分发的。我们参照服务器的日志文件来完成这个任务。虽然整个日志文件可能包含几十万条记录,但只需要其中几条就可以确定服务器是否正常工作。这里我们使用一个名为‘tail‘的工具,它只显示日志文件的最后几行,而不必用一个完整的文本编辑器去查看日志文件。命令如下: + # tail /var/log/syslog ![Check DHCP Logs](http://www.tecmint.com/wp-content/uploads/2015/04/DHCP-Log.png) Check DHCP Logs +检查DHCP日志文件 Voila! The server recorded handing out an address to this host (HRTDEBXENSRV). It is a safe assumption at this point that the server is working as intended and handing out the appropriate addresses for the networks that it is an authority. At this point the DHCP server is up and running. Configure the other networks, troubleshoot, and secure as necessary. +OK!服务器记录表明它分发了一个地址给这台主机(HRTDEBXENSRV)。可以认为服务器按预期运行,为它所管理(充当权威)的网络分发了合适的网络地址。到这里DHCP服务器已经搭建成功并运行起来了。接下来可以按需配置其他网络、排查故障并做好安全加固。 + Enjoy the newly functioning ISC-DHCP-Server and tune in later for more Debian tutorials. In the not too distant future there will be an article on Bind9 and DDNS that will tie into this article. 
+尽情享用这台刚刚搭建好的 ISC-DHCP-Server 吧,以后还会有更多的Debian教程。不久之后会有一篇关于Bind9和DDNS的文章,与本文相互衔接。 -------------------------------------------------------------------------------- via: http://www.tecmint.com/install-and-configure-multihomed-isc-dhcp-server-on-debian-linux/ 作者:[Rob Turner][a] -译者:[译者ID](https://github.com/译者ID) +译者:[ivo-wang](https://github.com/ivo-wang) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/robturner/ [1]:http://www.tecmint.com/10-lsof-command-examples-in-linux/ -[2]:http://www.tecmint.com/ifconfig-command-examples/ \ No newline at end of file +[2]:http://www.tecmint.com/ifconfig-command-examples/ From b4298a2bb67499c265dee5b7db50c8123b67ead4 Mon Sep 17 00:00:00 2001 From: ivo wang Date: Wed, 2 Dec 2015 00:55:12 +0800 Subject: [PATCH 077/160] Delete 20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md --- ...tihomed ISC DHCP Server on Debian Linux.md | 238 ------------------ 1 file changed, 238 deletions(-) delete mode 100644 sources/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md diff --git a/sources/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md b/sources/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md deleted file mode 100644 index 8fb67f0697..0000000000 --- a/sources/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md +++ /dev/null @@ -1,238 +0,0 @@ -How to Install and Configure Multihomed ISC DHCP Server on Debian Linux -在 Debian Linux 上安装和配置多宿主 ISC DHCP 服务器 -================================================================================ -Dynamic Host Control Protocol (DHCP) offers an expedited method for network administrators to provide network layer addressing to hosts on a constantly changing, or dynamic, network. 
One of the most common server utilities to offer DHCP functionality is ISC DHCP Server. The goal of this service is to provide hosts with the necessary network information to be able to communicate on the networks in which the host is connected. Information that is typically served by this service can include: DNS server information, network address (IP), subnet mask, default gateway information, hostname, and much more.
-动态主机控制协议(DHCP)为网络管理员提供了一种便捷的方式,为不断变化的动态网络中的主机提供网络层地址。最常用的提供DHCP功能的服务器工具之一是 ISC DHCP Server。DHCP服务的目的是给主机提供必要的网络信息,以便其能够与所连接网络中的其他主机互相通信。DHCP服务提供的信息一般包括:DNS服务器信息、网络地址(IP)、子网掩码、默认网关信息、主机名等等。
-
-This tutorial will cover ISC-DHCP-Server version 4.2.4 on a Debian 7.7 server that will manage multiple virtual local area networks (VLAN) but can very easily be applied to a single network setup as well.
-本教程介绍如何在Debian 7.7上使用4.2.4版的ISC-DHCP-Server管理多个虚拟局域网(VLAN),它也可以非常容易地用于单一网络。
-
-The test network that this server was setup on has traditionally relied on a Cisco router to manage the DHCP address leases. The network currently has 12 VLANs needing to be managed by one centralized server. By moving this responsibility to a dedicated server, the router can regain resources for more important tasks such as routing, access control lists, traffic inspection, and network address translation.
-搭建这台服务器的测试网络过去一直依赖思科路由器来管理DHCP地址租约。该网络目前有12个VLAN,需要由一台集中式服务器来管理。把这项职责转移到专用服务器上之后,路由器可以把资源用到更重要的任务上,比如路由、访问控制列表、流量检测以及网络地址转换等。
-
-The other benefit to moving DHCP to a dedicated server will, in a later guide, involve setting up Dynamic Domain Name Service (DDNS) so that new hosts’ host-names will be added to the DNS system when the host requests a DHCP address from the server.
-将DHCP服务移到专用服务器的另一个好处(在后续的教程中会讲到)是可以搭建动态域名服务(DDNS),这样当主机向服务器请求DHCP地址时,新主机的主机名就会被添加到DNS系统里面。
-### Step 1: Installing and Configuring ISC DHCP Server ###
-### 安装和配置 ISC DHCP Server ###
-
-1. 
To start the process of creating this multi-homed server, the ISC software needs to be installed via the Debian repositories using the ‘apt‘ utility. As with all tutorials, root or sudo access is assumed. Please make the appropriate modifications to the following commands. -1. 创建这个多宿主服务器的过程中,需要用apt工具来安装Debian软件仓库中的ISC软件。与其他教程一样需要使用root或者sudo访问权限。请适当的修改以使用下面的命令。(译者注:下面中括号里面是注释,使用的时候请删除,#表示使用的root权限) - - - # apt-get install isc-dhcp-server [安装 the ISC DHCP Server 软件] - # dpkg --get-selections isc-dhcp-server [确认软件已经成功安装] - # dpkg -s isc-dhcp-server [用另一种方式确认成功安装] - -![Install ISC DHCP Server in Debian](http://www.tecmint.com/wp-content/uploads/2015/04/Install-ISC-DHCP-Server.jpg) - -2. Now that the server software is confirmed installed, it is now necessary to configure the server with the network information that it will need to hand out. At the bare minimum, the administrator needs to know the following information for a basic DHCP scope: - -2. 现在已经确认服务软件安装完毕,现在需要配置服务器,它将分发网络信息。作为管理员你最起码应该了解DHCP的信息如下: -- The network addresses -- 网络地址 -- The subnet masks -- 子网掩码 -- The range of addresses to be dynamically assigned -- 动态分配的地址范围 - -Other useful information to have the server dynamically assign includes: - -其他一些使服务器动态分配的有用信息包括: -- Default gateway -- 默认网关 -- DNS server IP addresses -- DNS服务器IP地址 -- The Domain Name -- 域名 -- Host name -- 主机名 -- Network Broadcast addresses -- 网络广播地址 - -These are merely a few of the many options that the ISC DHCP server can handle. To get a complete list as well as a description of each option, enter the following command after installing the package: - -这只是很少一部分能让ISC DHCP server处理的选项。完整的查看所有选项及其描述需要在安装好软件后输入以下命令: - # man dhcpd.conf - -3. Once the administrator has concluded all the necessary information for this server to hand out it is time to configure the DHCP server as well as the necessary pools. Before creating any pools or server configurations though, the DHCP service must be configured to listen on one of the server’s interfaces. 
-
-3. 一旦管理员已经确定了这台服务器需要分发出去的必要信息,那么就该配置DHCP服务器和必要的地址池了。不过在创建任何地址池或服务器配置之前,必须先配置DHCP服务,让它监听这台服务器上面的一个接口。
-
-On this particular server, a NIC team has been setup and DHCP will listen on the teamed interfaces which were given the name `'bond0'`. Be sure to make the appropriate changes given the server and environment in which everything is being configured. The defaults in this file are okay for this tutorial.
-
-在这台特定的服务器上,已经设置好了网卡绑定(NIC team),DHCP将监听名为`'bond0'`的绑定接口。请根据所配置的服务器和环境做相应的修改。对于本教程来说,这个文件中的默认值就可以了。
-![Configure ISC DHCP Network](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DHCP-Network.jpg)
-
-This line will instruct the DHCP service to listen for DHCP traffic on the specified interface(s). At this point, it is time to modify the main configuration file to enable the DHCP pools on the necessary networks. The main configuration file is located at /etc/dhcp/dhcpd.conf. Open the file with a text editor to begin:
-
-这行指定DHCP服务在指定接口(一个或多个)上监听DHCP流量。现在该修改主配置文件,在需要的网络上启用DHCP地址池了。主配置文件位于/etc/dhcp/dhcpd.conf。用文本编辑器打开这个文件:
- # nano /etc/dhcp/dhcpd.conf
-
-This file is the configuration for the DHCP server specific options as well as all of the pools/hosts one wishes to configure. The top of the file starts off with a ‘ddns-update-style‘ clause and for this tutorial it will remain set to ‘none‘; however, in a future article, Dynamic DNS will be covered and ISC-DHCP-Server will be integrated with BIND9 to enable host name to IP address updates.
-
-这个文件用于配置DHCP服务器的具体选项,以及所有想要配置的地址池/主机。文件顶部以‘ddns-update-style‘子句开头,在本教程中它保持为‘none‘;在以后的文章中会介绍动态DNS,届时ISC-DHCP-Server将与BIND9集成,以便能将主机名更新到IP地址。
-
-4. The next section is typically the area where an administrator can configure global network settings such as the DNS domain name, default lease time for IP addresses, subnet-masks, and much more. Again, to know more about all the options be sure to read the man page for the dhcpd.conf file.
-
-4. 
接下来的部分是管理员配置全局网络设置的地方,比如DNS域名、IP地址的默认租约时间、子网掩码等等。同样,想更多地了解所有的选项,请阅读dhcpd.conf文件的man手册,命令如下:
-
- # man dhcpd.conf
-
-For this server install, there were a couple of global network options that were configured at the top of the configuration file so that they wouldn’t have to be implemented in every single pool created.
-
-对于这台服务器,我们在配置文件顶部配置了一些全局网络选项,这样就不用在创建的每个地址池中单独设置了。
-
-![Configure ISC DDNS](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DDNS.png)
-
-Let’s take a moment to explain some of these options. While they are configured globally in this example, all of them can be configured on a per pool basis as well.
-下面我们花一点时间来解释其中的一些选项。虽然在本例中它们是全局配置,但每个选项也都可以针对单个地址池来配置。
-
-- option domain-name “comptech.local”; – All hosts that this DHCP server hosts will be a member of the DNS domain name “comptech.local”
-- option domain-name “comptech.local”; – 所有使用这台DHCP服务器的主机,都将成为DNS域名“comptech.local”的成员
-- option domain-name-servers 172.27.10.6; – DHCP will hand out DNS server IP of 172.27.10.6 to all of the hosts on all of the networks it is configured to host.
-- option domain-name-servers 172.27.10.6; – DHCP将向它所管理的所有网络上的全部主机分发DNS服务器地址172.27.10.6
-- option subnet-mask 255.255.255.0; – The subnet mask handed out to every network will be 255.255.255.0 or a /24
-- option subnet-mask 255.255.255.0; – 分发给每个网络的子网掩码都是255.255.255.0,即 /24
-- default-lease-time 3600; – This is the time in seconds that a lease will automatically be valid. The host can re-request the same lease if time runs out, or if the host is done with the lease, it can hand the address back early.
-- default-lease-time 3600; – 租约自动生效的时间,单位是秒。租约到期后主机可以重新请求同一租约;如果主机不再需要该租约,也可以提前交回地址。
-- max-lease-time 86400; – This is the maximum amount of time in seconds a lease can be held by a host.
-- max-lease-time 86400; – 一台主机能够持有租约的最长时间,单位为秒。
-- ping-check true; – This is an extra test to ensure that the address the server wants to assign out isn’t already in use by another host on the network. 
-- ping-check true; – 这是一项额外的测试,以确保服务器想要分配出去的地址没有被网络上的另一台主机占用。
-- ping-timeout; – This is how long in seconds the server will wait for a response to a ping before assuming the address isn’t in use.
-- ping-timeout; – 服务器在认定某个地址未被使用之前,等待ping响应的时长,单位为秒。
-- ignore client-updates; – For now this option is irrelevant since DDNS has been disabled earlier in the configuration file, but when DDNS is operating, this option will ignore a host’s request to update its host-name in DNS.
-- ignore client-updates; – 目前这个选项无关紧要,因为DDNS已在配置文件前面被禁用;但当DDNS运行时,此选项会使服务器忽略主机更新其DNS主机名的请求。
-
-5. The next line in this file is the authoritative DHCP server line. This line means that if this server is to be the server that hands out addresses for the networks configured in this file, then uncomment the authoritative stanza.
-5. 文件中的下一行是权威(authoritative)DHCP服务器行。这行的意义是:如果这台服务器就是为文件中所配置的网络分发地址的服务器,那么就取消注释authoritative一节。
-
-This server will be the only authority on all the networks it manages so the global authoritative stanza was un-commented by removing the ‘#’ in front of the keyword authoritative.
-
-这台服务器将是它所管理的所有网络上的唯一权威,因此通过去掉关键字authoritative前面的‘#’,取消注释了全局authoritative一节。
-![Enable ISC Authoritative](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-authoritative.png)
-Enable ISC Authoritative
-开启 ISC Authoritative
-
-By default the server is assumed to NOT be an authority on the network. The rationale behind this is security. If someone unknowingly configures the DHCP server improperly or on a network they shouldn’t, it could cause serious connectivity issues. This line can also be used on a per network basis. This means that if the server is not the entire network’s DHCP server, the authoritative line can instead be used on a per network basis rather than in the global configuration as seen in the above screen-shot. 
-默认情况下,服务器被假定为不是网络上的权威。这样做是出于安全考虑:如果有人在不知情的情况下配置不当,或者把DHCP服务器配置到了不该出现的网络里,都可能造成非常严重的连接问题。authoritative这行也可以针对单个网络使用。也就是说,如果该服务器不是整个网络的DHCP服务器,authoritative行可以在单个网络中使用,而不是像上面截图中那样放在全局配置里。
-
-6. The next step is to configure all of the DHCP pools/networks that this server will manage. For brevity’s sake, this guide will only walk through one of the pools configured. The administrator will need to have gathered all of the necessary network information (ie domain name, network addresses, how many addresses can be handed out, etc).
-
-6. 下一步是配置这台服务器将要管理的所有DHCP地址池/网络。简短起见,本教程只演示其中一个地址池的配置过程。管理员需要先收集好所有必要的网络信息(比如域名、网络地址、可以分发多少地址等等)。
-For this pool the following information was obtained from the network administrator: network id of 172.27.60.0, subnet mask of 255.255.255.0 or a /24, the default gateway for the subnet is 172.27.60.1, and a broadcast address of 172.27.60.255.
-This information is important to building the appropriate network stanza in the dhcpd.conf file. Without further ado, let’s open the configuration file again using a text editor and then add the new network to the server. This must be done with root/sudo!
-
-以下是网络管理员收集整理的这个地址池要用到的信息:网络ID 172.27.60.0,子网掩码255.255.255.0(即 /24),子网的默认网关172.27.60.1,广播地址172.27.60.255。这些信息对于在dhcpd.conf文件中构建相应的network节非常重要。废话不多说,让我们再次用文本编辑器打开配置文件,把新网络添加进去。这一步必须用root/sudo权限来完成!
- # nano /etc/dhcp/dhcpd.conf
-
-![Configure DHCP Pools and Networks](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-network.png)
-Configure DHCP Pools and Networks
-配置DHCP的地址池和网络
-
-This is the sample created to hand out IP addresses to a network that is used for the creation of VMWare virtual practice servers. The first line indicates the network as well as the subnet mask for that network. Then inside the brackets are all the options that the DHCP server should provide to hosts on this network.
-
-这个示例用于给一个网络分配IP地址,该网络用来创建VMWare虚拟实验服务器。第一行标明了该网络的网络地址及其子网掩码。大括号里面是DHCP服务器应该提供给这个网络上主机的所有选项。
-
-The first stanza, range 172.27.60.50 172.27.60.254;, is the range of dynamically assignable addresses that the DHCP server can hand out to hosts on this network. 
Notice that the first 49 addresses aren’t in the pool and can be assigned statically to hosts if needed.
-
-第一节,range 172.27.60.50 172.27.60.254;,是DHCP服务在这个网络上能够动态分发给主机的地址范围。注意,前49个地址不在地址池内,如有需要可以静态分配给主机。
-
-The second stanza, option routers 172.27.60.1;, hands out the default gateway address for all hosts on this network.
-
-第二节,option routers 172.27.60.1;,给这个网络里的所有主机分发默认网关地址。
-
-The last stanza, option broadcast-address 172.27.60.255;, indicates the network’s broadcast address. This address SHOULD NOT be a part of the range stanza as the broadcast address can’t be assigned to a host.
-
-最后一节,option broadcast-address 172.27.60.255;,说明该网络的广播地址。这个地址不应该被包含在range节的地址范围内,因为广播地址不能分配给主机。
-Some pointers: be sure to always end the option lines with a semi-colon (;) and always make sure each network created is enclosed in curly braces { }.
-
-几点提示:每个选项行的结尾必须用分号(;)结束,并且确保创建的每个网络都包含在大括号{ }里面。
-
-7. If there are more networks to create, continue creating them with their appropriate options and then save the text file. Once all configurations have been completed, the ISC-DHCP-Server process will need to be restarted in order to apply the new changes. This can be accomplished with the following command:
-
-7. 如果还有更多网络要创建,就继续用相应的选项创建它们,然后保存文本文件。所有配置完成后,需要重启ISC-DHCP-Server进程来使新的更改生效。可以通过下面的命令来完成:
- # service isc-dhcp-server restart
-
-This will restart the DHCP service and then the administrator can check to see if the server is ready for DHCP requests in several different ways. The easiest is to simply see if the server is listening on port 67 via the [lsof command][1]:
-
-这条命令会重启DHCP服务,然后管理员能够用几种不同的方式检查服务器是否已经可以处理DHCP请求。最简单的方法是通过[lsof命令][1]查看服务器是否在监听67端口,命令如下:
-
- # lsof -i :67
-
-![Check DHCP Listening Port](http://www.tecmint.com/wp-content/uploads/2015/04/lsof.png)
-Check DHCP Listening Port
-检查DHCP监听端口
-
-This output indicates that the DHCPD (DHCP Server daemon) is running and listening on port 67. 
Port 67 in this output was actually converted to ‘bootps‘ due to a port number mapping for port 67 in the /etc/services file.
-
-这里的输出表明DHCPD(DHCP服务守护进程)正在运行并且监听67端口。由于/etc/services文件中对67端口做了端口号映射,输出中的67端口实际上显示成了‘bootps‘。
-
-This is very common on most systems. At this point, the server should be ready for network connectivity and can be confirmed by connecting a machine to the network and having it request a DHCP address from the server.
-
-这在大多数系统中都很常见。此时,服务器应该已经为网络连接做好准备,可以把一台主机接入网络,让它向服务器请求DHCP地址来验证服务是否正常。
-
-### Step 2: Testing Client Connectivity ###
-
-### 测试客户端连接 ###
-8. Most systems now-a-days are using Network Manager to maintain network connections and as such the device should be pre-configured to pull DHCP when the interface is active.
-
-8. 如今大多数系统使用网络管理器(Network Manager)来维护网络连接,因此设备应该已预先配置好,在接口激活时自动获取DHCP地址。
-
-However on machines that aren’t using Network Manager, it may be necessary to manually attempt to pull a DHCP address. The next few steps will show how to do this as well as how to see whether the server is handing out addresses.
-
-然而在没有使用网络管理器的机器上,可能需要手动获取DHCP地址。下面几步将展示如何做到这一点,以及如何查看服务器是否在分发地址。
-
-The ‘[ifconfig][2]‘ utility can be used to check an interface’s configuration. The machine used to test the DHCP server only has one network adapter and it is called ‘eth0‘.
-
-‘[ifconfig][2]‘工具能够用来检查接口配置。用来测试DHCP服务器的这台机器只有一个网络适配器(网卡),名为‘eth0‘。
-
- # ifconfig eth0
-
-![Check Network Interface IP Address](http://www.tecmint.com/wp-content/uploads/2015/04/No-ip.png)
-Check Network Interface IP Address
-检查网络接口IP地址
-
-From this output, this machine currently doesn’t have an IPv4 address, great! Let’s instruct this machine to reach out to the DHCP server and request an address. This machine has the DHCP client utility known as ‘dhclient‘ installed. The DHCP client utility may vary from system to system. 
- -从输出结果上看,这台设备目前没有一个IPv4地址,很好这样便于测试。让这台设备连接到DHCP服务器并发出一个请求。这台设备上安装了一个名为‘dhclient‘ 的DHCP客户端工具。这个客户端工具会因为系统不同而不同。 - # dhclient eth0 - -![Request IP Address from DHCP](http://www.tecmint.com/wp-content/uploads/2015/04/IP.png) -Request IP Address from DHCP -从DHCP请求IP地址 - -Now the `'inet addr:'` field shows an IPv4 address that falls within the scope of what was configured for the 172.27.60.0 network. Also notice that the proper broadcast address was handed out as well as subnet mask for this network. - -现在 `'inet addr:'` 字段显示了属于172.27.60.0网络地址范围内的IPv4地址。另外要注意:目前该网络配置了正确的子网掩码并且分发了广播地址。 - -Things are looking promising but let’s check the server to see if it was actually the place where this machine received this new IP address. To accomplish this task, the server’s system log file will be consulted. While the entire log file may contain hundreds of thousands of entries, only a few are necessary for confirming that the server is working properly. Rather than using a full text editor, this time a utility known as ‘tail‘ will be used to only show the last few lines of the log file. - -看起来还不错,让我们来测试一下,看看它是不是这台设备收到新IP地址的地方。我们参照服务器的日志文件来完成这个任务。虽然这个日志的内容有几十万条,但是里面只有几条是用来确定服务器是否正常工作的。这里我们使用一个工具‘tail’,它只显示日志文件的最后几行,而不是使用一个完整的文本编辑器去查看日志文件。命令如下: - - # tail /var/log/syslog - -![Check DHCP Logs](http://www.tecmint.com/wp-content/uploads/2015/04/DHCP-Log.png) -Check DHCP Logs -检查DHCP日志文件 - -Voila! The server recorded handing out an address to this host (HRTDEBXENSRV). It is a safe assumption at this point that the server is working as intended and handing out the appropriate addresses for the networks that it is an authority. At this point the DHCP server is up and running. Configure the other networks, troubleshoot, and secure as necessary. - -OK!服务器记录表明它分发了一个地址给这台主机(HRTDEBXENSRV)。服务器按预期运行,给它充当权威的网络分发适合的网络地址。到这里DHCP服务器搭建成功并且运行起来了。配置其他的网络,排查故障,确保安全。 - -Enjoy the newly functioning ISC-DHCP-Server and tune in later for more Debian tutorials. 
In the not too distant future there will be an article on Bind9 and DDNS that will tie into this article. - -更多新的 ISC-DHCP-Server 的功能在以后的Debian教程中会被提及。不久以后将写一篇关于Bind9和DDNS的教程,插入到这篇文章里面。 --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/install-and-configure-multihomed-isc-dhcp-server-on-debian-linux/ - -作者:[Rob Turner][a] -译者:[ivo-wang](https://github.com/ivo-wang) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/robturner/ -[1]:http://www.tecmint.com/10-lsof-command-examples-in-linux/ -[2]:http://www.tecmint.com/ifconfig-command-examples/ From d213569c9b0691f9853c407534d30d19a43f778e Mon Sep 17 00:00:00 2001 From: ivo wang Date: Wed, 2 Dec 2015 00:58:23 +0800 Subject: [PATCH 078/160] Create How to Install and Configure Multihomed ISC DHCP Server on Debian Linux --- ...Multihomed ISC DHCP Server on Debian Linux | 238 ++++++++++++++++++ 1 file changed, 238 insertions(+) create mode 100644 translated/tech/How to Install and Configure Multihomed ISC DHCP Server on Debian Linux diff --git a/translated/tech/How to Install and Configure Multihomed ISC DHCP Server on Debian Linux b/translated/tech/How to Install and Configure Multihomed ISC DHCP Server on Debian Linux new file mode 100644 index 0000000000..8fb67f0697 --- /dev/null +++ b/translated/tech/How to Install and Configure Multihomed ISC DHCP Server on Debian Linux @@ -0,0 +1,238 @@ +How to Install and Configure Multihomed ISC DHCP Server on Debian Linux +debian linux上安装配置 ISC DHCP Server +================================================================================ +Dynamic Host Control Protocol (DHCP) offers an expedited method for network administrators to provide network layer addressing to hosts on a constantly changing, or dynamic, network. 
One of the most common server utilities to offer DHCP functionality is ISC DHCP Server. The goal of this service is to provide hosts with the necessary network information to be able to communicate on the networks in which the host is connected. Information that is typically served by this service can include: DNS server information, network address (IP), subnet mask, default gateway information, hostname, and much more.
+动态主机控制协议(DHCP)为网络管理员提供了一种便捷的方式,为不断变化的动态网络中的主机提供网络层地址。最常用的提供DHCP功能的服务器工具之一是 ISC DHCP Server。DHCP服务的目的是给主机提供必要的网络信息,以便其能够与所连接网络中的其他主机互相通信。DHCP服务提供的信息一般包括:DNS服务器信息、网络地址(IP)、子网掩码、默认网关信息、主机名等等。
+
+This tutorial will cover ISC-DHCP-Server version 4.2.4 on a Debian 7.7 server that will manage multiple virtual local area networks (VLAN) but can very easily be applied to a single network setup as well.
+本教程介绍如何在Debian 7.7上使用4.2.4版的ISC-DHCP-Server管理多个虚拟局域网(VLAN),它也可以非常容易地用于单一网络。
+
+The test network that this server was setup on has traditionally relied on a Cisco router to manage the DHCP address leases. The network currently has 12 VLANs needing to be managed by one centralized server. By moving this responsibility to a dedicated server, the router can regain resources for more important tasks such as routing, access control lists, traffic inspection, and network address translation.
+搭建这台服务器的测试网络过去一直依赖思科路由器来管理DHCP地址租约。该网络目前有12个VLAN,需要由一台集中式服务器来管理。把这项职责转移到专用服务器上之后,路由器可以把资源用到更重要的任务上,比如路由、访问控制列表、流量检测以及网络地址转换等。
+
+The other benefit to moving DHCP to a dedicated server will, in a later guide, involve setting up Dynamic Domain Name Service (DDNS) so that new hosts’ host-names will be added to the DNS system when the host requests a DHCP address from the server.
+将DHCP服务移到专用服务器的另一个好处(在后续的教程中会讲到)是可以搭建动态域名服务(DDNS),这样当主机向服务器请求DHCP地址时,新主机的主机名就会被添加到DNS系统里面。
+### Step 1: Installing and Configuring ISC DHCP Server ###
+### 安装和配置 ISC DHCP Server ###
+
+1. 
To start the process of creating this multi-homed server, the ISC software needs to be installed via the Debian repositories using the ‘apt‘ utility. As with all tutorials, root or sudo access is assumed. Please make the appropriate modifications to the following commands. +1. 创建这个多宿主服务器的过程中,需要用apt工具来安装Debian软件仓库中的ISC软件。与其他教程一样需要使用root或者sudo访问权限。请适当的修改以使用下面的命令。(译者注:下面中括号里面是注释,使用的时候请删除,#表示使用的root权限) + + + # apt-get install isc-dhcp-server [安装 the ISC DHCP Server 软件] + # dpkg --get-selections isc-dhcp-server [确认软件已经成功安装] + # dpkg -s isc-dhcp-server [用另一种方式确认成功安装] + +![Install ISC DHCP Server in Debian](http://www.tecmint.com/wp-content/uploads/2015/04/Install-ISC-DHCP-Server.jpg) + +2. Now that the server software is confirmed installed, it is now necessary to configure the server with the network information that it will need to hand out. At the bare minimum, the administrator needs to know the following information for a basic DHCP scope: + +2. 现在已经确认服务软件安装完毕,现在需要配置服务器,它将分发网络信息。作为管理员你最起码应该了解DHCP的信息如下: +- The network addresses +- 网络地址 +- The subnet masks +- 子网掩码 +- The range of addresses to be dynamically assigned +- 动态分配的地址范围 + +Other useful information to have the server dynamically assign includes: + +其他一些使服务器动态分配的有用信息包括: +- Default gateway +- 默认网关 +- DNS server IP addresses +- DNS服务器IP地址 +- The Domain Name +- 域名 +- Host name +- 主机名 +- Network Broadcast addresses +- 网络广播地址 + +These are merely a few of the many options that the ISC DHCP server can handle. To get a complete list as well as a description of each option, enter the following command after installing the package: + +这只是很少一部分能让ISC DHCP server处理的选项。完整的查看所有选项及其描述需要在安装好软件后输入以下命令: + # man dhcpd.conf + +3. Once the administrator has concluded all the necessary information for this server to hand out it is time to configure the DHCP server as well as the necessary pools. Before creating any pools or server configurations though, the DHCP service must be configured to listen on one of the server’s interfaces. 
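补充一点:在 Debian 上,dhcpd 要监听的接口通常是在 /etc/default/isc-dhcp-server 文件中通过 INTERFACES 变量指定的。下面是一个简单的配置示意(以本文后面用到的 bond0 接口为例,请按你自己的环境修改):

```
# /etc/default/isc-dhcp-server
# dhcpd 将只在这里列出的接口上监听 DHCP 请求
INTERFACES="bond0"
```

修改该文件后需要重启 isc-dhcp-server 服务才会生效。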
+
+3. 一旦管理员已经确定了这台服务器需要分发出去的必要信息,那么就该配置DHCP服务器和必要的地址池了。不过在创建任何地址池或服务器配置之前,必须先配置DHCP服务,让它监听这台服务器上面的一个接口。
+
+On this particular server, a NIC team has been setup and DHCP will listen on the teamed interfaces which were given the name `'bond0'`. Be sure to make the appropriate changes given the server and environment in which everything is being configured. The defaults in this file are okay for this tutorial.
+
+在这台特定的服务器上,已经设置好了网卡绑定(NIC team),DHCP将监听名为`'bond0'`的绑定接口。请根据所配置的服务器和环境做相应的修改。对于本教程来说,这个文件中的默认值就可以了。
+![Configure ISC DHCP Network](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DHCP-Network.jpg)
+
+This line will instruct the DHCP service to listen for DHCP traffic on the specified interface(s). At this point, it is time to modify the main configuration file to enable the DHCP pools on the necessary networks. The main configuration file is located at /etc/dhcp/dhcpd.conf. Open the file with a text editor to begin:
+
+这行指定DHCP服务在指定接口(一个或多个)上监听DHCP流量。现在该修改主配置文件,在需要的网络上启用DHCP地址池了。主配置文件位于/etc/dhcp/dhcpd.conf。用文本编辑器打开这个文件:
+ # nano /etc/dhcp/dhcpd.conf
+
+This file is the configuration for the DHCP server specific options as well as all of the pools/hosts one wishes to configure. The top of the file starts off with a ‘ddns-update-style‘ clause and for this tutorial it will remain set to ‘none‘; however, in a future article, Dynamic DNS will be covered and ISC-DHCP-Server will be integrated with BIND9 to enable host name to IP address updates.
+
+这个文件用于配置DHCP服务器的具体选项,以及所有想要配置的地址池/主机。文件顶部以‘ddns-update-style‘子句开头,在本教程中它保持为‘none‘;在以后的文章中会介绍动态DNS,届时ISC-DHCP-Server将与BIND9集成,以便能将主机名更新到IP地址。
+
+4. The next section is typically the area where an administrator can configure global network settings such as the DNS domain name, default lease time for IP addresses, subnet-masks, and much more. Again, to know more about all the options be sure to read the man page for the dhcpd.conf file.
+
+4. 
接下来的部分是管理员配置全局网络设置的地方,比如DNS域名、IP地址的默认租约时间、子网掩码等等。同样,想更多地了解所有的选项,请阅读dhcpd.conf文件的man手册,命令如下:
+
+ # man dhcpd.conf
+
+For this server install, there were a couple of global network options that were configured at the top of the configuration file so that they wouldn’t have to be implemented in every single pool created.
+
+对于这台服务器,我们在配置文件顶部配置了一些全局网络选项,这样就不用在创建的每个地址池中单独设置了。
+
+![Configure ISC DDNS](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DDNS.png)
+
+Let’s take a moment to explain some of these options. While they are configured globally in this example, all of them can be configured on a per pool basis as well.
+下面我们花一点时间来解释其中的一些选项。虽然在本例中它们是全局配置,但每个选项也都可以针对单个地址池来配置。
+
+- option domain-name “comptech.local”; – All hosts that this DHCP server hosts will be a member of the DNS domain name “comptech.local”
+- option domain-name “comptech.local”; – 所有使用这台DHCP服务器的主机,都将成为DNS域名“comptech.local”的成员
+- option domain-name-servers 172.27.10.6; – DHCP will hand out DNS server IP of 172.27.10.6 to all of the hosts on all of the networks it is configured to host.
+- option domain-name-servers 172.27.10.6; – DHCP将向它所管理的所有网络上的全部主机分发DNS服务器地址172.27.10.6
+- option subnet-mask 255.255.255.0; – The subnet mask handed out to every network will be 255.255.255.0 or a /24
+- option subnet-mask 255.255.255.0; – 分发给每个网络的子网掩码都是255.255.255.0,即 /24
+- default-lease-time 3600; – This is the time in seconds that a lease will automatically be valid. The host can re-request the same lease if time runs out, or if the host is done with the lease, it can hand the address back early.
+- default-lease-time 3600; – 租约自动生效的时间,单位是秒。租约到期后主机可以重新请求同一租约;如果主机不再需要该租约,也可以提前交回地址。
+- max-lease-time 86400; – This is the maximum amount of time in seconds a lease can be held by a host.
+- max-lease-time 86400; – 一台主机能够持有租约的最长时间,单位为秒。
+- ping-check true; – This is an extra test to ensure that the address the server wants to assign out isn’t already in use by another host on the network. 
+- ping-check true; – 这是一项额外的测试,以确保服务器想要分配出去的地址没有被网络上的另一台主机占用。
+- ping-timeout; – This is how long in seconds the server will wait for a response to a ping before assuming the address isn’t in use.
+- ping-timeout; – 服务器在认定某个地址未被使用之前,等待ping响应的时长,单位为秒。
+- ignore client-updates; – For now this option is irrelevant since DDNS has been disabled earlier in the configuration file, but when DDNS is operating, this option will ignore a host’s request to update its host-name in DNS.
+- ignore client-updates; – 目前这个选项无关紧要,因为DDNS已在配置文件前面被禁用;但当DDNS运行时,此选项会使服务器忽略主机更新其DNS主机名的请求。
+
+5. The next line in this file is the authoritative DHCP server line. This line means that if this server is to be the server that hands out addresses for the networks configured in this file, then uncomment the authoritative stanza.
+5. 文件中的下一行是权威(authoritative)DHCP服务器行。这行的意义是:如果这台服务器就是为文件中所配置的网络分发地址的服务器,那么就取消注释authoritative一节。
+
+This server will be the only authority on all the networks it manages so the global authoritative stanza was un-commented by removing the ‘#’ in front of the keyword authoritative.
+
+这台服务器将是它所管理的所有网络上的唯一权威,因此通过去掉关键字authoritative前面的‘#’,取消注释了全局authoritative一节。
+![Enable ISC Authoritative](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-authoritative.png)
+Enable ISC Authoritative
+开启 ISC Authoritative
+
+By default the server is assumed to NOT be an authority on the network. The rationale behind this is security. If someone unknowingly configures the DHCP server improperly or on a network they shouldn’t, it could cause serious connectivity issues. This line can also be used on a per network basis. This means that if the server is not the entire network’s DHCP server, the authoritative line can instead be used on a per network basis rather than in the global configuration as seen in the above screen-shot. 
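authoritative 既可以全局声明,也可以只在单个网络中声明。下面是一个简化的 dhcpd.conf 片段示意(其中的网段和地址范围均为假设的示例值,并非本文实际使用的网络):

```
# 全局声明:本服务器是它所管理的所有网络的权威(本教程采用这种方式)
authoritative;

# 或者,只把某一个子网声明为权威(假设的示例网段):
subnet 10.10.10.0 netmask 255.255.255.0 {
    authoritative;
    range 10.10.10.50 10.10.10.254;
}
```

两种写法不要混淆:全局声明适用于服务器管理的全部网络,子网内的声明只对该子网生效。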
+默认情况下,服务器被假定为不是网络上的权威。这样做是出于安全考虑:如果有人在不知情的情况下配置不当,或者把DHCP服务器配置到了不该出现的网络里,都可能造成非常严重的连接问题。authoritative这行也可以针对单个网络使用。也就是说,如果该服务器不是整个网络的DHCP服务器,authoritative行可以在单个网络中使用,而不是像上面截图中那样放在全局配置里。
+
+6. The next step is to configure all of the DHCP pools/networks that this server will manage. For brevity’s sake, this guide will only walk through one of the pools configured. The administrator will need to have gathered all of the necessary network information (ie domain name, network addresses, how many addresses can be handed out, etc).
+
+6. 下一步是配置这台服务器将要管理的所有DHCP地址池/网络。简短起见,本教程只演示其中一个地址池的配置过程。管理员需要先收集好所有必要的网络信息(比如域名、网络地址、可以分发多少地址等等)。
+For this pool the following information was obtained from the network administrator: network id of 172.27.60.0, subnet mask of 255.255.255.0 or a /24, the default gateway for the subnet is 172.27.60.1, and a broadcast address of 172.27.60.255.
+This information is important to building the appropriate network stanza in the dhcpd.conf file. Without further ado, let’s open the configuration file again using a text editor and then add the new network to the server. This must be done with root/sudo!
+
+以下是网络管理员收集整理的这个地址池要用到的信息:网络ID 172.27.60.0,子网掩码255.255.255.0(即 /24),子网的默认网关172.27.60.1,广播地址172.27.60.255。这些信息对于在dhcpd.conf文件中构建相应的network节非常重要。废话不多说,让我们再次用文本编辑器打开配置文件,把新网络添加进去。这一步必须用root/sudo权限来完成!
+ # nano /etc/dhcp/dhcpd.conf
+
+![Configure DHCP Pools and Networks](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-network.png)
+Configure DHCP Pools and Networks
+配置DHCP的地址池和网络
+
+This is the sample created to hand out IP addresses to a network that is used for the creation of VMWare virtual practice servers. The first line indicates the network as well as the subnet mask for that network. Then inside the brackets are all the options that the DHCP server should provide to hosts on this network.
+
+这个示例用于给一个网络分配IP地址,该网络用来创建VMWare虚拟实验服务器。第一行标明了该网络的网络地址及其子网掩码。大括号里面是DHCP服务器应该提供给这个网络上主机的所有选项。
+
+The first stanza, range 172.27.60.50 172.27.60.254;, is the range of dynamically assignable addresses that the DHCP server can hand out to hosts on this network. 
Notice that the first 49 addresses aren’t in the pool and can be assigned statically to hosts if needed.
+
+第一节,range 172.27.60.50 172.27.60.254;,是DHCP服务在这个网络上能够动态分发给主机的地址范围。注意,前49个地址不在地址池内,如有需要可以静态分配给主机。
+
+The second stanza, option routers 172.27.60.1;, hands out the default gateway address for all hosts on this network.
+
+第二节,option routers 172.27.60.1;,给这个网络里的所有主机分发默认网关地址。
+
+The last stanza, option broadcast-address 172.27.60.255;, indicates the network’s broadcast address. This address SHOULD NOT be a part of the range stanza as the broadcast address can’t be assigned to a host.
+
+最后一节,option broadcast-address 172.27.60.255;,说明该网络的广播地址。这个地址不应该被包含在range节的地址范围内,因为广播地址不能分配给主机。
+Some pointers: be sure to always end the option lines with a semi-colon (;) and always make sure each network created is enclosed in curly braces { }.
+
+几点提示:每个选项行的结尾必须用分号(;)结束,并且确保创建的每个网络都包含在大括号{ }里面。
+
+7. If there are more networks to create, continue creating them with their appropriate options and then save the text file. Once all configurations have been completed, the ISC-DHCP-Server process will need to be restarted in order to apply the new changes. This can be accomplished with the following command:
+
+7. 如果还有更多网络要创建,就继续用相应的选项创建它们,然后保存文本文件。所有配置完成后,需要重启ISC-DHCP-Server进程来使新的更改生效。可以通过下面的命令来完成:
+ # service isc-dhcp-server restart
+
+This will restart the DHCP service and then the administrator can check to see if the server is ready for DHCP requests in several different ways. The easiest is to simply see if the server is listening on port 67 via the [lsof command][1]:
+
+这条命令会重启DHCP服务,然后管理员能够用几种不同的方式检查服务器是否已经可以处理DHCP请求。最简单的方法是通过[lsof命令][1]查看服务器是否在监听67端口,命令如下:
+
+ # lsof -i :67
+
+![Check DHCP Listening Port](http://www.tecmint.com/wp-content/uploads/2015/04/lsof.png)
+Check DHCP Listening Port
+检查DHCP监听端口
+
+This output indicates that the DHCPD (DHCP Server daemon) is running and listening on port 67. 
Port 67 in this output was actually converted to ‘bootps‘ due to a port number mapping for port 67 in the /etc/services file.
+
+这里的输出表明DHCPD(DHCP服务守护进程)正在运行并且监听67端口。由于/etc/services文件中67端口的端口映射,输出中的67端口实际上被转换成了“bootps”。
+
+This is very common on most systems. At this point, the server should be ready for network connectivity and can be confirmed by connecting a machine to the network and having it request a DHCP address from the server.
+
+在大多数系统中这是非常普遍的。此时,服务器应该已经为网络连接做好准备,可以通过将一台主机接入网络,让它向服务器请求DHCP地址来验证服务是否正常。
+
+### Step 2: Testing Client Connectivity ###
+
+### 第二步:测试客户端连接 ###
+
+8. Most systems nowadays are using Network Manager to maintain network connections and as such the device should be pre-configured to pull DHCP when the interface is active.
+
+8. 现在许多系统使用网络管理器(Network Manager)来维护网络连接状态,因此当接口处于活跃状态时,设备应该已经被预先配置为通过DHCP获取地址。
+
+However on machines that aren’t using Network Manager, it may be necessary to manually attempt to pull a DHCP address. The next few steps will show how to do this as well as how to see whether the server is handing out addresses.
+
+然而在没有使用网络管理器的设备上,可能需要手动尝试获取DHCP地址。下面的几步将展示如何做到这一点,以及如何查看服务器是否在分发地址。
+
+The ‘[ifconfig][2]‘ utility can be used to check an interface’s configuration. The machine used to test the DHCP server only has one network adapter and it is called ‘eth0‘.
+
+‘[ifconfig][2]‘工具能够用来检查接口的配置。这台用来测试DHCP服务器的设备只有一个网络适配器(网卡),名为‘eth0‘。
+
+    # ifconfig eth0
+
+![Check Network Interface IP Address](http://www.tecmint.com/wp-content/uploads/2015/04/No-ip.png)
+Check Network Interface IP Address
+检查网络接口IP地址
+
+From this output, this machine currently doesn’t have an IPv4 address, great! Let’s instruct this machine to reach out to the DHCP server and request an address. This machine has the DHCP client utility known as ‘dhclient‘ installed. The DHCP client utility may vary from system to system.
+
+从输出结果上看,这台设备目前没有IPv4地址,很好,这样便于测试。让这台设备连接到DHCP服务器并发出一个请求。这台设备上安装了名为‘dhclient‘的DHCP客户端工具。具体的客户端工具会因系统不同而不同。
+    # dhclient eth0
+
+![Request IP Address from DHCP](http://www.tecmint.com/wp-content/uploads/2015/04/IP.png)
+Request IP Address from DHCP
+从DHCP请求IP地址
+
+Now the `'inet addr:'` field shows an IPv4 address that falls within the scope of what was configured for the 172.27.60.0 network. Also notice that the proper broadcast address was handed out as well as the subnet mask for this network.
+
+现在 `'inet addr:'` 字段显示了属于172.27.60.0网络地址范围内的IPv4地址。另外要注意,该网络还分发了正确的广播地址和子网掩码。
+
+Things are looking promising but let’s check the server to see if it was actually the place where this machine received this new IP address. To accomplish this task, the server’s system log file will be consulted. While the entire log file may contain hundreds of thousands of entries, only a few are necessary for confirming that the server is working properly. Rather than using a full text editor, this time a utility known as ‘tail‘ will be used to only show the last few lines of the log file.
+
+看起来很不错,不过让我们再检查一下服务器,确认这个新IP地址确实是它分发的。要完成这个任务,需要查阅服务器的系统日志文件。虽然整个日志文件可能有几十万条记录,但是只需要其中几条就能确认服务器是否正常工作。这里我们使用‘tail‘工具,它只显示日志文件的最后几行,而不必用一个完整的文本编辑器去查看整个日志文件。命令如下:
+
+    # tail /var/log/syslog
+
+![Check DHCP Logs](http://www.tecmint.com/wp-content/uploads/2015/04/DHCP-Log.png)
+Check DHCP Logs
+检查DHCP日志文件
+
+Voila! The server recorded handing out an address to this host (HRTDEBXENSRV). It is a safe assumption at this point that the server is working as intended and handing out the appropriate addresses for the networks for which it is an authority. At this point the DHCP server is up and running. Configure the other networks, troubleshoot, and secure as necessary.
+
+OK!服务器记录表明它分发了一个地址给这台主机(HRTDEBXENSRV)。现在可以认定服务器按预期工作,为它所管理(充当权威)的网络分发了合适的地址。到这里DHCP服务器就搭建成功并运行起来了。接下来可以按需配置其他的网络、排查故障、加强安全。
+
+Enjoy the newly functioning ISC-DHCP-Server and tune in later for more Debian tutorials.
In the not too distant future there will be an article on Bind9 and DDNS that will tie into this article. + +更多新的 ISC-DHCP-Server 的功能在以后的Debian教程中会被提及。不久以后将写一篇关于Bind9和DDNS的教程,插入到这篇文章里面。 +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/install-and-configure-multihomed-isc-dhcp-server-on-debian-linux/ + +作者:[Rob Turner][a] +译者:[ivo-wang](https://github.com/ivo-wang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/robturner/ +[1]:http://www.tecmint.com/10-lsof-command-examples-in-linux/ +[2]:http://www.tecmint.com/ifconfig-command-examples/ From aa181f5e40c147e90a2e99105d63dc0820515b04 Mon Sep 17 00:00:00 2001 From: ivo wang Date: Wed, 2 Dec 2015 00:59:44 +0800 Subject: [PATCH 079/160] Rename How to Install and Configure Multihomed ISC DHCP Server on Debian Linux to 20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md --- ...l and Configure Multihomed ISC DHCP Server on Debian Linux.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename translated/tech/{How to Install and Configure Multihomed ISC DHCP Server on Debian Linux => 20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md} (100%) diff --git a/translated/tech/How to Install and Configure Multihomed ISC DHCP Server on Debian Linux b/translated/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md similarity index 100% rename from translated/tech/How to Install and Configure Multihomed ISC DHCP Server on Debian Linux rename to translated/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md From b0e82b8011cd4be511214353d91546bfbb8fa83c Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 2 Dec 2015 08:32:45 +0800 Subject: [PATCH 080/160] Rename 20151126 Linux FAQs with 
Answers--How to remove trailing whitespaces in a file on Linux.md to 20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md --- ...w to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename translated/tech/{20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md => 20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md} (100%) diff --git a/translated/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md b/translated/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md similarity index 100% rename from translated/tech/20151126 Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux.md rename to translated/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md From 9722476d3f2c3e5183a0369b253ffa7505796942 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 2 Dec 2015 08:33:21 +0800 Subject: [PATCH 081/160] Delete 20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md --- ....8.16 in Ubuntu 16.04 or 15.10 or 14.04.md | 60 ------------------- 1 file changed, 60 deletions(-) delete mode 100644 sources/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md diff --git a/sources/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md b/sources/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md deleted file mode 100644 index 8465520fc5..0000000000 --- a/sources/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md +++ /dev/null @@ -1,60 +0,0 @@ -translation by strugglingyouth -How to Install GIMP 2.8.16 in Ubuntu 16.04, 15.10, 14.04 -================================================================================ -![GIMP 2.8.16](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-icon.png) - 
-GIMP image editor 2.8.16 was released on its 20th birthday. Here’s how to install or upgrade in Ubuntu 16.04, Ubuntu 15.10, Ubuntu 14.04, Ubuntu 12.04 and their derivatives, e.g., Linux Mint 17.x/13, Elementary OS Freya. - -GIMP 2.8.16 features support for layer groups in OpenRaster files, fixes for layer groups support in PSD, various user inrterface improvements, OSX build system fixes, translation updates, and more changes. Read the [official announcement][1]. - -![GIMP image editor 2.8,16](http://ubuntuhandbook.org/wp-content/uploads/2014/08/gimp-2-8-14.jpg) - -### How to Install or Upgrade: ### - -Thanks to Otto Meier, an [Ubuntu PPA][2] with latest GIMP packages is available for all current Ubuntu releases and derivatives. - -**1. Add GIMP PPA** - -Open terminal from Unity Dash, App launcher, or via Ctrl+Alt+T shortcut key. When it opens, paste below command and hit Enter: - - sudo add-apt-repository ppa:otto-kesselgulasch/gimp - -![add GIMP PPA](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-ppa.jpg) - -Type in your password when it asks, no visual feedback so just type in mind, and hit enter to continue. - -**2. Install or Upgrade the editor.** - -After added the PPA, launch **Software Updater** (or Software Manager in Mint). After checking for updates, you’ll see GIMP in the update list. Click “Install Now” to upgrade it. - -![upgrade-gimp2816](http://ubuntuhandbook.org/wp-content/uploads/2015/11/upgrade-gimp2816.jpg) - -For those who prefer Linux commands, run below commands one by one to refresh your repository caches and install GIMP: - - sudo apt-get update - - sudo apt-get install gimp - -**3. (Optional) Uninstall.** - -Just in case you want to uninstall or downgrade GIMP image editor. Use Software Center to remove it, or run below commands one by one to purge PPA as well as downgrade the software: - - sudo apt-get install ppa-purge - - sudo ppa-purge ppa:otto-kesselgulasch/gimp - -That’s it. Enjoy! 
- --------------------------------------------------------------------------------- - -via: http://ubuntuhandbook.org/index.php/2015/11/how-to-install-gimp-2-8-16-in-ubuntu-16-04-15-10-14-04/ - -作者:[Ji m][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ubuntuhandbook.org/index.php/about/ -[1]:http://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/ -[2]:https://launchpad.net/~otto-kesselgulasch/+archive/ubuntu/gimp From c53445a869b7bbe515c5ce6d9df45e19641e92ec Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 1 Dec 2015 16:19:11 +0800 Subject: [PATCH 082/160] =?UTF-8?q?20151201-4=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ur Ubuntu or Linux Mint with SystemBack.md | 40 ++++++++ ... Port Scanning With netcat [nc] Command.md | 96 +++++++++++++++++++ 2 files changed, 136 insertions(+) create mode 100644 sources/tech/20151201 Backup (System Restore Point) your Ubuntu or Linux Mint with SystemBack.md create mode 100644 sources/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md diff --git a/sources/tech/20151201 Backup (System Restore Point) your Ubuntu or Linux Mint with SystemBack.md b/sources/tech/20151201 Backup (System Restore Point) your Ubuntu or Linux Mint with SystemBack.md new file mode 100644 index 0000000000..98193a8f72 --- /dev/null +++ b/sources/tech/20151201 Backup (System Restore Point) your Ubuntu or Linux Mint with SystemBack.md @@ -0,0 +1,40 @@ +Backup (System Restore Point) your Ubuntu/Linux Mint with SystemBack +================================================================================ +System Restore is must have feature for any OS that allows the user to revert their computer's state (including system files, installed applications, and system settings) to that of a previous point in time, 
which can be used to recover from system malfunctions or other problems.
+Sometimes installing a program or driver can make your OS go to a blank screen. System Restore can return your PC's system files and programs to a time when everything was working fine, potentially preventing hours of troubleshooting headaches. It won't affect your documents, pictures, or other data.
+[Systemback][1] is a simple system backup and restore application with extra features. It makes it easy to create backups of system and user configuration files. In case of problems you can easily restore the previous state of the system. There are extra features like system copying, system installation and Live system creation.
+
+Screenshots
+
+![systemback](http://2.bp.blogspot.com/-2UPS3yl3LHw/VlilgtGAlvI/AAAAAAAAGts/ueRaAghXNvc/s1600/systemback-1.jpg)
+
+![systemback](http://2.bp.blogspot.com/-7djBLbGenxE/Vlilgk-FZHI/AAAAAAAAGtk/2PVNKlaPO-c/s1600/systemback-2.jpg)
+
+![](http://3.bp.blogspot.com/-beZYwKrsT4o/VlilgpThziI/AAAAAAAAGto/cwsghXFNGRA/s1600/systemback-3.jpg)
+
+![](http://1.bp.blogspot.com/-t_gmcoQZrvM/VlilhLP--TI/AAAAAAAAGt0/GWBg6bGeeaI/s1600/systemback-5.jpg)
+
+**Note**: Using System Restore will not restore documents, music, emails, or personal files of any kind. Depending on your perspective, this is both a positive and a negative feature. The bad news is that it won't restore that accidentally deleted file you wish you could get back, though a file recovery program might solve that problem.
+If no restore point exists on your computer, System Restore has nothing to revert to, so the tool won't work for you. If you're trying to recover from a major problem, you'll need to move on to another troubleshooting step.
+
+>>> Available for Ubuntu 15.10 Wily/16.04/15.04 Vivid/14.04 Trusty/Linux Mint 17.x/other Ubuntu derivatives
+To install the SystemBack application in Ubuntu/Linux Mint, open a Terminal (press Ctrl+Alt+T) and copy the following commands into the Terminal:
+
+Terminal Commands:
+
+    sudo add-apt-repository ppa:nemh/systemback
+    sudo apt-get update
+    sudo apt-get install systemback
+
+That's it.
+
+--------------------------------------------------------------------------------
+
+via: http://www.noobslab.com/2015/11/backup-system-restore-point-your.html
+
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:https://launchpad.net/systemback \ No newline at end of file
diff --git a/sources/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md b/sources/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md
new file mode 100644
index 0000000000..1358968910
--- /dev/null
+++ b/sources/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md
@@ -0,0 +1,96 @@
+Linux and Unix Port Scanning With netcat [nc] Command
+================================================================================
+How do I find out which ports are open on my own server? How do I run port scanning using the nc command instead of [the nmap command on a Linux or Unix-like][1] system?
+
+The nmap (“Network Mapper”) is an open source tool for network exploration and security auditing. If nmap is not installed and you do not wish to use all of nmap's options, you can use the netcat/nc command for scanning ports. This may be useful to find out which ports are open and running services on a target machine. You can use the [nmap command for port scanning][2] too.
+
+### How do I use nc for port scanning on Linux, UNIX and Windows servers? ###
+
+If nmap is not installed, try the nc / netcat command as follows.
The -z flag can be used to tell nc to report open ports, rather than initiate a connection. Run the nc command with the -z flag. You need to specify the host name / IP along with the port range to limit and speed up the operation:
+
+    ## syntax ##
+    nc -z -v {host-name-here} {port-range-here}
+    nc -z -v host-name-here ssh
+    nc -z -v host-name-here 22
+    nc -w 1 -z -v server-name-here port-Number-here
+
+    ## scan 1 to 1023 ports ##
+    nc -zv vip-1.vsnl.nixcraft.in 1-1023
+
+Sample outputs:
+
+    Connection to localhost 25 port [tcp/smtp] succeeded!
+    Connection to vip-1.vsnl.nixcraft.in 25 port [tcp/smtp] succeeded!
+    Connection to vip-1.vsnl.nixcraft.in 80 port [tcp/http] succeeded!
+    Connection to vip-1.vsnl.nixcraft.in 143 port [tcp/imap] succeeded!
+    Connection to vip-1.vsnl.nixcraft.in 199 port [tcp/smux] succeeded!
+    Connection to vip-1.vsnl.nixcraft.in 783 port [tcp/*] succeeded!
+    Connection to vip-1.vsnl.nixcraft.in 904 port [tcp/vmware-authd] succeeded!
+    Connection to vip-1.vsnl.nixcraft.in 993 port [tcp/imaps] succeeded!
+
+You can scan an individual port too:
+
+    nc -zv v.txvip1 443
+    nc -zv v.txvip1 80
+    nc -zv v.txvip1 22
+    nc -zv v.txvip1 21
+    nc -zv v.txvip1 smtp
+    nc -zvn v.txvip1 ftp
+
+    ## really fast scanner with 1 timeout value ##
+    netcat -v -z -n -w 1 v.txvip1 1-1023
+
+Sample outputs:
+
+![Fig.01: Linux/Unix: Use Netcat to Establish and Test TCP and UDP Connections on a Server](http://s0.cyberciti.org/uploads/faq/2007/07/scan-with-nc.jpg)
+
+Fig.01: Linux/Unix: Use Netcat to Establish and Test TCP and UDP Connections on a Server
+
+Where,
+
+1. -z : Port scanning mode i.e. zero I/O mode.
+1. -v : Be verbose [use twice, -vv, to be more verbose].
+1. -n : Use numeric-only IP addresses i.e. do not use DNS to resolve IP addresses.
+1. -w 1 : Set the time out value to 1 second.
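Taken together, these flags can be wrapped in a small reusable helper. The sketch below is illustrative only: the `scan_ports` function name is made up for this example, the commented host is a placeholder, and you should only scan hosts you are authorized to probe.

```shell
#!/bin/sh
# scan_ports: report open TCP ports on a host using nc (illustrative helper).
# Usage: scan_ports <host> <first-port> <last-port>
scan_ports() {
    host=${1:-}; first=${2:-}; last=${3:-}
    if [ -z "$host" ] || [ -z "$first" ] || [ -z "$last" ]; then
        echo "usage: scan_ports host first-port last-port" >&2
        return 1
    fi
    port=$first
    while [ "$port" -le "$last" ]; do
        # -z: zero I/O mode (just check the port); -w 1: one-second timeout
        if nc -z -w 1 "$host" "$port" 2>/dev/null; then
            echo "$host:$port open"
        fi
        port=$((port + 1))
    done
}

# Example run (placeholder host; only scan machines you own):
# scan_ports 192.168.1.254 1 1023
```

Discarding stderr keeps the output down to just the open ports, which makes the helper easy to feed into other scripts.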
+ +More examples: + + $ netcat -z -vv www.cyberciti.biz http + www.cyberciti.biz [75.126.153.206] 80 (http) open + sent 0, rcvd 0 + $ netcat -z -vv google.com https + DNS fwd/rev mismatch: google.com != maa03s16-in-f2.1e100.net + DNS fwd/rev mismatch: google.com != maa03s16-in-f6.1e100.net + DNS fwd/rev mismatch: google.com != maa03s16-in-f5.1e100.net + DNS fwd/rev mismatch: google.com != maa03s16-in-f3.1e100.net + DNS fwd/rev mismatch: google.com != maa03s16-in-f8.1e100.net + DNS fwd/rev mismatch: google.com != maa03s16-in-f0.1e100.net + DNS fwd/rev mismatch: google.com != maa03s16-in-f7.1e100.net + DNS fwd/rev mismatch: google.com != maa03s16-in-f4.1e100.net + google.com [74.125.236.162] 443 (https) open + sent 0, rcvd 0 + $ netcat -v -z -n -w 1 192.168.1.254 1-1023 + (UNKNOWN) [192.168.1.254] 989 (ftps-data) open + (UNKNOWN) [192.168.1.254] 443 (https) open + (UNKNOWN) [192.168.1.254] 53 (domain) open + +See also + +- [Scanning network for open ports with the nmap command][3] for more info. 
+- Man pages - [nc(1)][4], [nmap(1)][5] + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/faq/linux-port-scanning/ + +作者:Vivek Gite +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.cyberciti.biz/networking/nmap-command-examples-tutorials/ +[2]:http://www.cyberciti.biz/tips/linux-scanning-network-for-open-ports.html +[3]:http://www.cyberciti.biz/networking/nmap-command-examples-tutorials/ +[4]:http://www.manpager.com/linux/man1/nc.1.html +[5]:http://www.manpager.com/linux/man1/nmap.1.html \ No newline at end of file From 6aec0ceb4b364ceb20f1c65cb568063119bfcb44 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 2 Dec 2015 16:10:59 +0800 Subject: [PATCH 083/160] =?UTF-8?q?20151202-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20151202 KDE vs GNOME vs XFCE Desktop.md | 53 +++++++ ... to up- and download files on the shell.md | 146 ++++++++++++++++++ 2 files changed, 199 insertions(+) create mode 100644 sources/talk/20151202 KDE vs GNOME vs XFCE Desktop.md create mode 100644 sources/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md diff --git a/sources/talk/20151202 KDE vs GNOME vs XFCE Desktop.md b/sources/talk/20151202 KDE vs GNOME vs XFCE Desktop.md new file mode 100644 index 0000000000..5cfbc31ace --- /dev/null +++ b/sources/talk/20151202 KDE vs GNOME vs XFCE Desktop.md @@ -0,0 +1,53 @@ +KDE vs GNOME vs XFCE Desktop +================================================================================ +![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2013/07/300px-Xfce_logo.svg_.png) + +Over many years, many people spent a long time with Linux desktop using either KDE or GNOME. 
These two environments have grown over the years, and each of these desktops has continued to expand its user base. A sleeper among desktop environments, for example, has been XFCE, as it offers more robustness than LXDE, which lacks much of XFCE’s polish in its default configuration. XFCE provides all the benefits users enjoyed in GNOME 2, but with a lightweight experience that made it a hit on older computers.
+
+### The Desktop Theming ###
+
+After a fresh installation, XFCE will be a bit boring and lack a certain visual attractiveness. Don’t misunderstand my words here: XFCE still has a nice looking desktop, but it may seem like plain vanilla to most people who are new to the XFCE desktop environment. The good news is that installing a new theme in XFCE is a reasonably easy process: find an XFCE theme that appeals to you, then extract it to the proper directory. From this point, XFCE provides an important tool, located under Appearance, that helps the user select the chosen theme easily through the graphical user interface (GUI). No other tools are required here, and if the user follows the above directions, it will be quite simple for everyone who cares to give it a try.
+
+On the GNOME desktop, the user should follow a similar approach. The key difference is that users have to download and install the GNOME Tweak Tool before proceeding with anything else. It is not a huge barrier by any means, but it is a valid oversight when the user considers that XFCE does not require any tweak tool in order to install and activate new desktop themes.
Under GNOME, and especially after installing the Tweak Tool mentioned above, you will also need to make sure that you have the User Themes extension installed.
+
+The same as with XFCE, the user will want to search for and download the theme which most appeals to him. Then, the user can return to the GNOME Tweak Tool and click on the Appearance option on the left side of the tool. Then, the user can simply look at the bottom of the page and click on the file browse button to the right of Shell Theme. The user can then browse to the zipped folder and click open. If this process completed successfully, the user will see an alert telling him that the theme was installed without any problems. From this point, the user can simply use the pull down menu to select the theme he wants to use. The same as with XFCE, the process of theme activation is very easy; however, the need to download a non-included application in order to use a new theme leaves much to be desired.
+
+Finally, there is the process of KDE desktop theming. The same as with XFCE, there is no need to install any extra tools to make it work. This is one area where XFCE has to concede to KDE. Not only is installing themes in KDE accomplished entirely within the graphical user interface, it is even possible to click on the (Get New Themes) button, and the user will be able to locate, view, and install new themes automatically.
+
+However, it should be noted that KDE is a bit more robust desktop environment compared to XFCE. Therefore, it is reasonable to see why such extra functionalities could be missing from desktops which are mainly designed to be minimalist.
+ +### MATE is not Lightweight Desktop ### + +Before continuing with the comparison between the XFCE, the GNOME 3 and the KDE, it should be clear for experts that we can’t touch the MATE desktop as an option in the comparison. MATE can be considered as the GNOME 2 desktop’s next incarnation, but it’s not mainly marketed to be a lightweight or fast desktop. But instead of that, its primary goal is to be more traditional and comfortable desktop environment where the users can feel right at their home to use it. + +On the other hand, the XFCE comes with a completely other goal set. The XFCE offers its users a more lightweight and yet still visually appealing desktop experience. Then, for everyone who points out that MATE is a lightweight desktop too, it isn’t really targeting that lightweight desktop crowd. Both options may be dressed up for looking quite attractive with the proper theme installed. + +### The Navigation of Desktop ### + +The XFCE honestly offers an obvious navigation which is out of the box. Anyone who is used to the traditional Windows or the GNOME 2/MATE desktop experience will be going to have the ability to navigate around the new XFCE installation without any kind of help. Straight away, adding the applets to panel is still very obvious. The same as with locating installed applications, just use the launcher and simply click on any desired application. With an exception of LXDE and MATE, there is no other desktop that can make the navigation that simple. What can be even better is that fact which the control panel is very easy to use, that is a really big benefit to everyone who is new to the desktop environment. If the user prefer older methods to use his desktop, then GNOME is not an option. With the hot corners as well as the no minimize button, plus the other application layout method, it’ll take the most newcomers getting easily used to it. 
If the user is coming from, for example, a Windows environment, then he is going to be put off by the inability to add applets to the top of his workspace with a mere right-click. Instead, this is handled by using extensions. Installing extensions in GNOME is brain-dead easy, thanks to the easy to use (on/off) toggle switches located on GNOME’s extensions page. Sadly, users have to know to actually visit that page to enjoy this functionality.
+
+On the other side, GNOME shares the desire to provide a straightforward and easy to use control panel. Many of you may think that this is not a big deal, but it is really something that I find commendable and worth mentioning. KDE offers its users a somewhat more traditional desktop experience, through familiar launchers as well as the ability to get to the software in a more familiar way for those coming from a Windows desktop. Adding widgets or applets to the KDE desktop is an easy matter of just right-clicking on the bottom of the desktop. The only problem with KDE’s approach is that, as with many things KDE, the features which users are actually looking for are hidden. KDE users might berate my opinion for this, but I still stand by my statement.
+
+In order to add a widget, just right-click on “my panel” to see the panel options, but not as an immediate method to install widgets. You will not actually see the Add Widgets option until you select Panel Options, then Add Widgets. This is not a big deal to me, but for some users it becomes an unnecessary tidbit of confusion. To make things more convoluted, after the users manage to locate the Widgets area, they discover a brand new term called “Activities”. It is in the same area as the Widgets, yet it is somehow its own area as to what it does.
+
+Now don’t misunderstand me, the Activities feature in KDE is totally great and actually valued. But from a usability standpoint, I think it would be better suited under another menu option so as not to confuse newbies. The user is welcome to differ, but testing this with newbies for extended periods of time proves the point over and over again. The rant against the Activities placement aside, the KDE approach to adding new widgets is really great. The same as with the KDE themes, the user can browse through and install widgets automatically via the provided graphical user interface. It is a fantastic bit of functionality, and it should be celebrated as such. The control panel of KDE is not as easy to use as the user might like it to be, yet it is clear that this is something that they are still working on.
+
+### So, the XFCE is the best desktop, right? ###
+
+I actually run GNOME, KDE, and XFCE on my computers in my office and home. I also have some older machines running OpenBox and LXDE. Each desktop experience can offer something useful to me and helps me to use each machine as I see fit. For me, I have a soft spot in my heart for XFCE, as it is one of the desktop environments I stuck with for years. But for this article, I am writing it on my daily use computer, which is in fact running GNOME.
+
+The main idea here is that I still feel that XFCE provides a slightly better user experience for users who are looking for a stable, traditional, and easy to understand desktop environment. You are also welcome to share your opinion with us in the comments section.
+
+--------------------------------------------------------------------------------
+
+via: http://www.unixmen.com/kde-vs-gnome-vs-xfce-desktop/
+
+作者:[M.el Khamlichi][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.unixmen.com/author/pirat9/ \ No newline at end of file
diff --git a/sources/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md b/sources/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md
new file mode 100644
index 0000000000..54b69555c4
--- /dev/null
+++ b/sources/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md
@@ -0,0 +1,146 @@
+How to use the Linux ftp command to up- and download files on the shell
+================================================================================
+In this tutorial, I will explain how to use the Linux ftp command on the shell. I will show you how to connect to an FTP server, up- and download files and create directories. While there are many nice desktop FTP clients available, the FTP command is still useful when you work remotely on a server over an SSH session and e.g. want to fetch a backup file from your FTP storage.
+
+### Step 1: Establishing an FTP connection ###
+
+To connect to an FTP server, we have to type '**ftp**' in the terminal window, followed by the domain name 'domain.com' or the IP address of the FTP server.
+
+#### Examples: ####
+
+    ftp domain.com
+
+    ftp 192.168.0.1
+
+    ftp user@ftpdomain.com
+
+**Note: for this example we used an anonymous server.**
+
+Replace the IP and domain in the above examples with the IP address or domain of your FTP server.
+
+![The FTP login.](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/ftpanonymous.png)
+
+### Step 2: Login with User and Password ###
+
+Most FTP server logins are password protected, so the server will ask us for a '**username**' and a '**password**'.
+
+If you connect to a so-called anonymous FTP server, then try to use "anonymous" as the user name and an empty password:
+
+    Name: anonymous
+
+    Password:
+
+The terminal will return a message like this when you are logged in successfully:
+
+    230 Login successful.
+    Remote system type is UNIX.
+    Using binary mode to transfer files.
+    ftp>
+
+![Successful FTP login.](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/login.png)
+
+### Step 3: Working with Directories ###
+
+The commands to list, move and create folders on an FTP server are almost the same as we would use locally on our computer: ls to list files, cd to change directories, mkdir to create directories...
+
+#### Listing directories with security settings: ####
+
+    ftp> ls
+
+The server will return:
+
+    200 PORT command successful. Consider using PASV.
+    150 Here comes the directory listing.
+    directory list
+    ....
+    ....
+    226 Directory send OK.
+
+![List directories](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/listing.png)
+
+#### Changing Directories: ####
+
+To change the directory we can type:
+
+    ftp> cd directory
+
+The server will return:
+
+    250 Directory successfully changed.
+
+![Change a directory in FTP.](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/directory.png)
+
+### Step 4: Downloading files with FTP ###
+
+Before downloading a file, we should set the local ftp file download directory by using the 'lcd' command:
+
+    lcd /home/user/yourdirectoryname
+
+If you don't specify the download directory, the file will be downloaded to the current directory where you were at the time you started the FTP session.
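Sessions like this one can also be driven from a script, which is handy for unattended transfers such as fetching a nightly backup from a cron job. The following is only a sketch: the host, the anonymous credentials and the file name are illustrative placeholders, and the command list is built in a variable first so it can be reviewed before anything is sent. The flags -i (no per-file prompts), -n (no auto-login, so our own user command is used) and -v (show server replies) are the usual choices for scripted runs:

```shell
#!/bin/sh
# Non-interactive FTP download sketch; all values below are placeholders.
HOST="ftp.example.com"
FTP_USER="anonymous"
FTP_PASS="guest"

# The same commands we would otherwise type at the ftp> prompt:
ftp_cmds=$(cat <<EOF
user $FTP_USER $FTP_PASS
lcd /tmp
get backup.tar.gz
bye
EOF
)

echo "$ftp_cmds"    # review the session script before running it
# Uncomment to actually perform the transfer:
# printf '%s\n' "$ftp_cmds" | ftp -inv "$HOST"
```

Because a real password would end up inside the script, restrict its permissions (e.g. chmod 600) or keep the credentials in a ~/.netrc file instead.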
+
+Now we can use the 'get' command to download a file; the usage is:
+
+    get file
+
+The file will be downloaded to the directory previously set with the 'lcd' command.
+
+The server will return a message like this:
+
+    local: file remote: file
+    200 PORT command successful. Consider using PASV.
+    150 Opening BINARY mode data connection for file (xxx bytes).
+    226 File send OK.
+    XXX bytes received in x.xx secs (x.xxx MB/s).
+
+![Download a file with FTP.](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/gettingfile.png)
+
+To download several files we can use wildcards. In this example I will download all files with the .xls file extension.
+
+    mget *.xls
+
+### Step 5: Uploading Files with FTP ###
+
+We can upload files that are in the local directory from which we made the FTP connection.
+
+To upload a file, we can use the 'put' command.
+
+    put file
+
+When the file that you want to upload is not in the local directory, you can use the absolute path starting with "/" as well:
+
+    put /path/file
+
+To upload several files we can use the mput command, similar to the mget example from above:
+
+    mput *.xls
+
+### Step 6: Closing the FTP connection ###
+
+Once we have finished our FTP work, we should close the connection for security reasons. There are three commands that we can use to close the connection:
+
+    bye
+
+    exit
+
+    quit
+
+Any of them will disconnect our PC from the FTP server and will return:
+
+    221 Goodbye
+
+![](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/goodbye.png)
+
+If you need some additional help, once you are connected to the FTP server, type 'help' and it will show you all the available FTP commands.
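All of the steps above can also be scripted: the classic ftp client reads commands from standard input, and 'ftp -n' suppresses auto-login so the 'user' command can supply credentials itself. The sketch below only generates such a command sequence; the credentials and paths are placeholders for illustration, not a real server.

```shell
#!/bin/sh
# Sketch: generate the command sequence for a non-interactive ftp session.
# The credentials and local directory are placeholders for illustration.
ftp_batch_commands() {
    user=$1; pass=$2; localdir=$3
    printf 'user %s %s\n' "$user" "$pass"   # log in (ftp -n skips auto-login)
    printf 'lcd %s\n' "$localdir"           # local download directory (Step 4)
    printf 'binary\n'                       # binary transfer mode
    printf 'mget *.xls\n'                   # fetch all .xls files (Step 4)
    printf 'bye\n'                          # close the connection (Step 6)
}

# Print the generated session script; to actually run it you would pipe it
# into the client, e.g.:
#   ftp_batch_commands anonymous guest /tmp/dl | ftp -n ftp.example.com
ftp_batch_commands anonymous guest /tmp/dl
```

If mget then prompts for each file, adding a 'prompt' command to the generated list toggles interactive prompting off.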
+
+![](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/helpwindow.png)
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/how-to-use-ftp-on-the-linux-shell/
+
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
\ No newline at end of file

From 8c790538aeb08d65a53ed1aba7138654683216b4 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Wed, 2 Dec 2015 16:26:52 +0800
Subject: [PATCH 084/160] =?UTF-8?q?20151202-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ... do after installing openSUSE Leap 42.1.md | 108 ++++++++++++++++++
 1 file changed, 108 insertions(+)
 create mode 100644 sources/tech/20151202 8 things to do after installing openSUSE Leap 42.1.md

diff --git a/sources/tech/20151202 8 things to do after installing openSUSE Leap 42.1.md b/sources/tech/20151202 8 things to do after installing openSUSE Leap 42.1.md
new file mode 100644
index 0000000000..bbd79c19a3
--- /dev/null
+++ b/sources/tech/20151202 8 things to do after installing openSUSE Leap 42.1.md
@@ -0,0 +1,108 @@
+8 things to do after installing openSUSE Leap 42.1
+================================================================================
+![Credit: Metropolitan Transportation/Flickr](http://images.techhive.com/images/article/2015/11/things-to-do-100626947-primary.idge.jpg)
+Credit: [Metropolitan Transportation/Flickr][1]
+
+> You've installed openSUSE on your PC. Here's what to do next.
+
+[openSUSE Leap is indeed a huge leap][2], allowing users to run a distro that has the same DNA as SUSE Linux Enterprise. Like any other operating system, some work is needed to get it set up for optimal use.
+
+Following are some of the things that I did after installing openSUSE Leap on my PC (these are not applicable for server installations). None of them are mandatory, and you may be fine with the basic install. But if you need more out of your openSUSE Leap, follow me.
+
+### 1. Adding Packman repository ###
+
+Due to software patents and licences, openSUSE, like many Linux distributions, doesn't offer many applications, codecs, and drivers through official repositories (repos). Instead, these are made available through third-party or community repos. The first and most important repository is 'Packman'. Since these repos are not enabled by default, we have to add them. You can do so either using YaST (one of the gems of openSUSE) or on the command line (instructions below).
+
+![o42 yast repo](http://images.techhive.com/images/article/2015/11/o42-yast-repo-100626952-large970.idge.png)
+Adding Packman repositories.
+
+Using YaST, go to the Software Repositories section. Click on the 'Add' button and select 'Community Repositories.' Click 'Next.' And once the repos are loaded, select the Packman Repository. Click 'OK,' then import the trusted GnuPG key by clicking on the 'Trust' button.
+
+Or, using the terminal you can add and enable the Packman repo using the following command:
+
+    zypper ar -f -n packman http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_42.1/ packman
+
+Once the repo is added, you have access to many more packages. To install any application or package, open YaST Software Manager, search for the package and install it.
+
+### 2. Install VLC ###
+
+VLC is the Swiss Army knife of media players and can play virtually any media file. You can install VLC from YaST Software Manager or from software.opensuse.org. You will need to install two packages: vlc and vlc-codecs.
+
+If using the terminal, run the following command:
+
+    sudo zypper install vlc vlc-codecs
+
+### 3.
Install Handbrake ###
+
+If you need to transcode or convert your video files from one format to another, [Handbrake is the tool for you][3]. Handbrake is available through the repositories we enabled, so just search for it in YaST and install it.
+
+If you are using the terminal, run the following command:
+
+    sudo zypper install handbrake-cli handbrake-gtk
+
+(Pro tip: VLC can also transcode audio and video files.)
+
+### 4. Install Chrome ###
+
+OpenSUSE comes with Firefox as the default browser. But since Firefox isn't capable of playing restricted media such as Netflix, I recommend installing Chrome. This takes some extra work. First you need to import the trusted key from Google. Open the terminal app and run the 'wget' command to download the key:
+
+    wget https://dl.google.com/linux/linux_signing_key.pub
+
+Then import the key:
+
+    sudo rpm --import linux_signing_key.pub
+
+Now head over to the [Google Chrome website][4] and download the 64-bit .rpm file. Once downloaded run the following command to install the browser:
+
+    sudo zypper install /PATH_OF_GOOGLE_CHROME.rpm
+
+### 5. Install Nvidia drivers ###
+
+OpenSUSE Leap will work out of the box even if you have Nvidia or ATI graphics cards. However, if you do need the proprietary drivers for gaming or any other purpose, you can install such drivers, but some extra work is needed.
+
+First you need to add the Nvidia repositories; it's the same procedure we used to add Packman repositories using YaST. The only difference is that you will choose Nvidia from the Community Repositories section. Once it's added, go to **Software Management > Extras** and select 'Extras/Install All Matching Recommended Packages'.
+
+![o42 nvidia](http://images.techhive.com/images/article/2015/11/o42-nvidia-100626950-large.idge.png)
+
+It will open a dialogue box showing all the packages it's going to install; click OK and follow the instructions.
You can also run the following command after adding the Nvidia repository to install the needed Nvidia drivers:
+
+    sudo zypper inr
+
+(Note: I have never used AMD/ATI cards so I have no experience with them.)
+
+### 6. Install media codecs ###
+
+Once you have VLC installed you won't need to install media codecs, but if you are using other apps for media playback you will need to install such codecs. Some developers have written scripts/tools which make it a much easier process. Just go to [this page][5] and install the entire pack by clicking on the appropriate button. It will open YaST and install the packages automatically (of course, you will have to give the root password and trust the GnuPG key, as usual).
+
+### 7. Install your preferred email client ###
+
+OpenSUSE comes with Kmail or Evolution, depending on the Desktop Environment you installed on the system. I run Plasma, which comes with Kmail, and this email client leaves a lot to be desired. I suggest trying Thunderbird or Evolution mail. All major email clients are available through official repositories. You can also check my [handpicked list of the best email clients for Linux][6].
+
+### 8. Enable Samba services from Firewall ###
+
+OpenSUSE offers a much more secure system out of the box, compared to other distributions. But it also requires a little bit more work for a new user. If you are using the Samba protocol to share files within your local network then you will have to allow that service from the Firewall.
+
+![o42 firewall](http://images.techhive.com/images/article/2015/11/o42-firewall-100626948-large970.idge.png)
+Allow Samba Client and Server from Firewall settings.
+
+Open YaST and search for Firewall. Once in Firewall settings, go to 'Allowed Services' where you will see a drop down list under 'Service to allow.' Select 'Samba Client,' then click 'Add.' Do the same with the 'Samba Server' option.
Once both are added, click 'Next,' then click 'Finish,' and now you will be able to share folders from your openSUSE system and also access other machines over the local network. + +That's pretty much all that I did on my new openSUSE system to set it up just the way I like it. If you have any questions, please feel free to ask in the comments below. + +-------------------------------------------------------------------------------- + +via: http://www.itworld.com/article/3003865/open-source-tools/8-things-to-do-after-installing-opensuse-leap-421.html + +作者:[Swapnil Bhartiya][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.itworld.com/author/Swapnil-Bhartiya/ +[1]:https://www.flickr.com/photos/mtaphotos/11200079265/ +[2]:https://www.linux.com/news/software/applications/865760-opensuse-leap-421-review-the-most-mature-linux-distribution +[3]:https://www.linux.com/learn/tutorials/857788-how-to-convert-videos-in-linux-using-the-command-line +[4]:https://www.google.com/intl/en/chrome/browser/desktop/index.html#brand=CHMB&utm_campaign=en&utm_source=en-ha-na-us-sk&utm_medium=ha +[5]:http://opensuse-community.org/ +[6]:http://www.itworld.com/article/2875981/the-5-best-open-source-email-clients-for-linux.html \ No newline at end of file From 65404bd425548d5a65d95292c40d5d5c1ff1063f Mon Sep 17 00:00:00 2001 From: KS Date: Wed, 2 Dec 2015 17:42:07 +0800 Subject: [PATCH 085/160] Update 20151201 How to use Mutt email client with encrypted passwords.md --- ...01 How to use Mutt email client with encrypted passwords.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151201 How to use Mutt email client with encrypted passwords.md b/sources/tech/20151201 How to use Mutt email client with encrypted passwords.md index 5d7414588c..758fe7c8b2 100644 --- a/sources/tech/20151201 How to use Mutt email client with 
encrypted passwords.md +++ b/sources/tech/20151201 How to use Mutt email client with encrypted passwords.md @@ -1,3 +1,4 @@ +wyangsun translating How to use Mutt email client with encrypted passwords ================================================================================ Mutt is an open-source email client written for Linux/UNIX terminal environment. Together with [Alpine][1], Mutt has the most devoted followers among Linux command-line enthusiasts, and for good reasons. Think of anything you expect from an email client, and Mutt has it: multi-protocol support (e.g., POP3, IMAP and SMTP), S/MIME and PGP/GPG integration, threaded conversation, color coding, customizable macros/keybindings, and so on. Besides, terminal-based Mutt is a lightweight alternative for accessing emails compared to bulky web browser-based (e.g., Gmail, Ymail) or GUI-based email clients (e.g., Thunderbird, MS Outlook). @@ -135,4 +136,4 @@ via: http://xmodulo.com/mutt-email-client-encrypted-passwords.html [a]:http://xmodulo.com/author/nanni [1]:http://xmodulo.com/gmail-command-line-linux-alpine.html -[2]:http://dev.mutt.org/trac/wiki/MuttGuide/UseGPG \ No newline at end of file +[2]:http://dev.mutt.org/trac/wiki/MuttGuide/UseGPG From 486ffcf4aa6bc7ab449374b95c593614353617b5 Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 2 Dec 2015 17:44:12 +0800 Subject: [PATCH 086/160] =?UTF-8?q?20151202-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20151202 A new Mindcraft moment.md | 43 +++++++++++++++++++ 1 file changed, 43 insertions(+) create mode 100644 sources/tech/20151202 A new Mindcraft moment.md diff --git a/sources/tech/20151202 A new Mindcraft moment.md b/sources/tech/20151202 A new Mindcraft moment.md new file mode 100644 index 0000000000..79930e8202 --- /dev/null +++ b/sources/tech/20151202 A new Mindcraft moment.md @@ -0,0 +1,43 @@ +A new Mindcraft moment? 
+======================= + +Credit:Jonathan Corbet + +It is not often that Linux kernel development attracts the attention of a mainstream newspaper like The Washington Post; lengthy features on the kernel community's approach to security are even more uncommon. So when just such a feature hit the net, it attracted a lot of attention. This article has gotten mixed reactions, with many seeing it as a direct attack on Linux. The motivations behind the article are hard to know, but history suggests that we may look back on it as having given us a much-needed push in a direction we should have been going for some time. + +Think back, a moment, to the dim and distant past — April 1999, to be specific. An analyst company named Mindcraft issued a report showing that Windows NT greatly outperformed Red Hat Linux 5.2 and Apache for web-server workloads. The outcry from the Linux community, including from a very young LWN, was swift and strong. The report was a piece of Microsoft-funded FUD trying to cut off an emerging threat to its world-domination plans. The Linux system had been deliberately configured for poor performance. The hardware chosen was not well supported by Linux at the time. And so on. + +Once people calmed down a bit, though, one other fact came clear: the Mindcraft folks, whatever their motivations, had a point. Linux did, indeed, have performance problems that were reasonably well understood even at the time. The community then did what it does best: we sat down and fixed the problems. The scheduler got exclusive wakeups, for example, to put an end to thethundering-herd problem in the acceptance of connection requests. Numerous other little problems were fixed. Within a year or so, the kernel's performance on this kind of workload had improved considerably. + +The Mindcraft report, in other words, was a much-needed kick in the rear that got the community to deal with issues that had been neglected until then. 
+
+The Washington Post article seems clearly slanted toward a negative view of the Linux kernel and its contributors. It freely mixes kernel problems with other issues (the AshleyMadison.com break-in, for example) that were not kernel vulnerabilities at all. The fact that vendors seem to have little interest in getting security fixes to their customers is danced around like a huge elephant in the room. There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true, but it should not be allowed to overshadow the simple fact that the article has a valid point.
+
+We do a reasonable job of finding and fixing bugs. Problems, whether they are security-related or not, are patched quickly, and the stable-update mechanism makes those patches available to kernel users. Compared to a lot of programs out there (free and proprietary alike), the kernel is quite well supported. But pointing at our ability to fix bugs is missing a crucial point: fixing security bugs is, in the end, a game of whack-a-mole. There will always be more moles, some of which we will not know about (and will thus be unable to whack) for a long time after they are discovered and exploited by attackers. These bugs leave our users vulnerable, even if the commercial side of Linux did a perfect job of getting fixes to users — which it decidedly does not.
+
+The point that developers concerned about security have been trying to make for a while is that fixing bugs is not enough. We must instead realize that we will never fix them all and focus on making bugs harder to exploit. That means restricting access to information about the kernel, making it impossible for the kernel to execute code in user-space memory, instrumenting the kernel to detect integer overflows, and all the other things laid out in Kees Cook's Kernel Summit talk at the end of October.
Many of these techniques are well understood and have been adopted by other operating systems; others will require innovation on our part. But, if we want to adequately defend our users from attackers, these changes need to be made. + +Why hasn't the kernel adopted these technologies already? The Washington Post article puts the blame firmly on the development community, and on Linus Torvalds in particular. The culture of the kernel community prioritizes performance and functionality over security and is unwilling to make compromises if they are needed to improve the security of the kernel. There is some truth to this claim; the good news is that attitudes appear to be shifting as the scope of the problem becomes clear. Kees's talk was well received, and it clearly got developers thinking and talking about the issues. + +The point that has been missed is that we do not just have a case of Linus fending off useful security patches. There simply are not many such patches circulating in the kernel community. In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream. Getting any large, intrusive patch set merged requires working with the kernel community, making the case for the changes, splitting the changes into reviewable pieces, dealing with review comments, and so on. It can be tiresome and frustrating, but it's how the kernel works, and it clearly results in a more generally useful, more maintainable kernel in the long run. + +Almost nobody is doing that work to get new security technologies into the kernel. One might cite a "chilling effect" from the hostile reaction such patches can receive, but that is an inadequate answer: developers have managed to merge many changes over the years despite a difficult initial reaction. Few security developers are even trying. + +Why aren't they trying? One fairly obvious answer is that almost nobody is being paid to try. 
Almost all of the work going into the kernel is done by paid developers and has been for many years. The areas that companies see fit to support get a lot of work and are well advanced in the kernel. The areas that companies think are not their problem are rather less so. The difficulties in getting support for realtime development are a clear case in point. Other areas, such as documentation, tend to languish as well. Security is clearly one of those areas. There are a lot of reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies. + +There are signs that things might be changing a bit. More developers are showing interest in security-related issues, though commercial support for their work is still less than it should be. The reaction against security-related changes might be less knee-jerk negative than it used to be. Efforts like the Kernel Self Protection Project are starting to work on integrating existing security technologies into the kernel. + +We have a long way to go, but, with some support and the right mindset, a lot of progress can be made in a short time. The kernel community can do amazing things when it sets its mind to it. With luck, the Washington Post article will help to provide the needed impetus for that sort of setting of mind. History suggests that we will eventually see this moment as a turning point, when we were finally embarrassed into doing work that has clearly needed doing for a while. Linux should not have a substandard security story for much longer. 
+ +--------------------------- + +via: https://lwn.net/Articles/663474/ + +作者:Jonathan Corbet + +译者:[译者ID](https://github.com/译者ID) + +校对:[校对者ID](https://github.com/校对者ID) + + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9f46ee570208be4a4da6123098c760442fb3a1e9 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 2 Dec 2015 23:18:14 +0800 Subject: [PATCH 087/160] PUB:20151123 LNAV--Ncurses based log file viewer @ictlyh --- ...123 LNAV--Ncurses based log file viewer.md | 20 +++++++++---------- 1 file changed, 9 insertions(+), 11 deletions(-) rename {translated/tech => published}/20151123 LNAV--Ncurses based log file viewer.md (63%) diff --git a/translated/tech/20151123 LNAV--Ncurses based log file viewer.md b/published/20151123 LNAV--Ncurses based log file viewer.md similarity index 63% rename from translated/tech/20151123 LNAV--Ncurses based log file viewer.md rename to published/20151123 LNAV--Ncurses based log file viewer.md index e1f99eb07c..d51ebe8e76 100644 --- a/translated/tech/20151123 LNAV--Ncurses based log file viewer.md +++ b/published/20151123 LNAV--Ncurses based log file viewer.md @@ -1,6 +1,6 @@ -LNAV - 基于 Ncurses 的日志文件阅读器 +LNAV:基于 Ncurses 的日志文件阅读器 ================================================================================ -日志文件导航器(Logfile Navigator,简称 lnav),是一个基于 curses 用于查看和分析日志文件的工具。和文本阅读器/编辑器相比, lnav 的好处是它充分利用了可以从日志文件中获取的语义信息,例如时间戳和日志等级。利用这些额外的语义信息, lnav 可以处理类似事情:来自不同文件的交错信息;按照时间生成信息直方图;提供在文件中导航的关键字。它希望使用这些功能可以使得用户可以快速有效地定位和解决问题。 +日志文件导航器(Logfile Navigator,简称 lnav),是一个基于 curses 的,用于查看和分析日志文件的工具。和文本阅读器/编辑器相比, lnav 的好处是它充分利用了可以从日志文件中获取的语义信息,例如时间戳和日志等级。利用这些额外的语义信息, lnav 可以处理像这样的事情:来自不同文件的交错的信息;按照时间生成信息直方图;支持在文件中导航的快捷键。它希望使用这些功能可以使得用户可以快速有效地定位和解决问题。 ### lnav 功能 ### @@ -10,15 +10,15 @@ Syslog、Apache 访问日志、strace、tcsh 历史以及常见的带时间戳 #### 直方图视图: #### -以时间为桶显示日志信息数量。这对于在一段长时间内大概了解发生了什么非常有用。 +以时间区划来显示日志信息数量。这对于大概了解在一长段时间内发生了什么非常有用。 #### 过滤器: #### 只显示那些匹配或不匹配一些正则表达式的行。对于移除大量你不感兴趣的日志行非常有用。 -#### 及时操作: #### +#### 
即时操作: #### -在你输入到时候会同时完成检索;当添加新日志行的时候回自动加载和搜索;加载行的时候会应用过滤器;另外,还会在你输入 SQL 查询的时候检查正确性。 +在你输入到时候会同时完成检索;当添加了新日志行的时候会自动加载和搜索;加载行的时候会应用过滤器;另外,还会在你输入 SQL 查询的时候检查其正确性。 #### 自动显示后文: #### @@ -34,11 +34,11 @@ Syslog、Apache 访问日志、strace、tcsh 历史以及常见的带时间戳 #### 导航: #### -有快捷键用于跳转到下一个或上一个错误或警告,按照一定的时间向后或向前移动。 +有快捷键用于跳转到下一个或上一个错误或警告,按照指定的时间向后或向前翻页。 #### 用 SQL 查询日志: #### -每个日志文件行都被认为是数据库中可以使用 SQL 查询的一行。可以使用的列取决于查看的日志文件类型。 +每个日志文件行都相当于数据库中的一行,可以使用 SQL 进行查询。可以使用的列取决于查看的日志文件类型。 #### 命令和搜索历史: #### @@ -62,9 +62,7 @@ Syslog、Apache 访问日志、strace、tcsh 历史以及常见的带时间戳 ![](http://www.ubuntugeek.com/wp-content/uploads/2015/11/51.png) -如果你想查看特定的日志,那么需要指定路径 - -如果你想看 CPU 日志,在你的终端里运行下面的命令 +如果你想查看特定的日志,那么需要指定路径。如果你想看 CPU 日志,在你的终端里运行下面的命令 lnav /var/log/cups @@ -76,7 +74,7 @@ via: http://www.ubuntugeek.com/lnav-ncurses-based-log-file-viewer.html 作者:[ruchi][a] 译者:[ictlyh](http://mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 3d58709c5a4aefb00fb73d1b819bb09db63b3684 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Thu, 3 Dec 2015 00:06:59 +0800 Subject: [PATCH 088/160] Update 20151201 Linux and Unix Port Scanning With netcat [nc] Command.md --- ...01 Linux and Unix Port Scanning With netcat [nc] Command.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md b/sources/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md index 1358968910..f4019db6eb 100644 --- a/sources/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md +++ b/sources/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md @@ -1,3 +1,4 @@ +translation by strugglingyouth Linux and Unix Port Scanning With netcat [nc] Command ================================================================================ How do I find out which ports are 
opened on my own server? How do I run port scanning using the nc command instead of [the nmap command on a Linux or Unix-like][1] systems?
@@ -93,4 +94,4 @@ via: http://www.cyberciti.biz/faq/linux-port-scanning/
 [2]:http://www.cyberciti.biz/tips/linux-scanning-network-for-open-ports.html
 [3]:http://www.cyberciti.biz/networking/nmap-command-examples-tutorials/
 [4]:http://www.manpager.com/linux/man1/nc.1.html
-[5]:http://www.manpager.com/linux/man1/nmap.1.html
\ No newline at end of file
+[5]:http://www.manpager.com/linux/man1/nmap.1.html

From 700146203af6d5dcfe1a46757e70289009f527b1 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Thu, 3 Dec 2015 02:00:25 +0800
Subject: Create 20151203 Getting started with Docker by Dockerizing this Blog.md

---
 ...ed with Docker by Dockerizing this Blog.md | 375 ++++++++++++++++++
 1 file changed, 375 insertions(+)
 create mode 100644 sources/tech/20151203 Getting started with Docker by Dockerizing this Blog.md

diff --git a/sources/tech/20151203 Getting started with Docker by Dockerizing this Blog.md b/sources/tech/20151203 Getting started with Docker by Dockerizing this Blog.md
new file mode 100644
index 0000000000..1f69a4adba
--- /dev/null
+++ b/sources/tech/20151203 Getting started with Docker by Dockerizing this Blog.md
@@ -0,0 +1,375 @@
+Getting started with Docker by Dockerizing this Blog
+======================
+>This article covers the basic concepts of Docker and how to Dockerize an application by creating a custom Dockerfile.
+>Written by Benjamin Cane on 2015-12-01 10:00:00
+
+Docker is an interesting technology that over the past 2 years has gone from an idea to being used by organizations all over the world to deploy applications. In today's article I am going to cover how to get started with Docker by "Dockerizing" an existing application. The application in question is actually this very blog!
What is Docker
+============
+
+Before we dive into learning the basics of Docker, let's first understand what Docker is and why it is so popular. Docker is an operating system container management tool that allows you to easily manage and deploy applications by making it easy to package them within operating system containers.
+
+### Containers vs. Virtual Machines
+
+Containers may not be as familiar as virtual machines, but they are another method to provide Operating System Virtualization. However, they differ quite a bit from standard virtual machines.
+
+Standard virtual machines generally include a full Operating System, OS Packages and eventually an Application or two. This is made possible by a Hypervisor which provides hardware virtualization to the virtual machine. This allows for a single server to run many standalone operating systems as virtual guests.
+
+Containers are similar to virtual machines in that they allow a single server to run multiple operating environments; these environments, however, are not full operating systems. Containers generally only include the necessary OS Packages and Applications. They do not generally contain a full operating system or hardware virtualization. This also means that containers have a smaller overhead than traditional virtual machines.
+
+Containers and Virtual Machines are often seen as conflicting technology; however, this is often a misunderstanding. Virtual Machines are a way to take a physical server and provide a fully functional operating environment that shares those physical resources with other virtual machines. A Container is generally used to isolate a running process within a single host to ensure that the isolated processes cannot interact with other processes within that same system. In fact, containers are closer to BSD Jails and chroot'ed processes than full virtual machines.
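On a Linux host you can see the kernel building blocks behind this for yourself. The short sketch below (an illustration added here, not part of the original walkthrough) lists the namespaces your current shell process belongs to; a container runtime starts its processes with a fresh set of these:

```shell
#!/bin/sh
# Containers use kernel namespaces, not hardware virtualization. Every
# process already belongs to a set of namespaces; list the current one's:
if [ -d /proc/self/ns ]; then
    for ns in /proc/self/ns/*; do
        printf '%s\n' "${ns##*/}"   # e.g. ipc, mnt, net, pid, uts ...
    done
else
    echo 'namespace information not exposed by this kernel'
fi
```

Each entry is one dimension of isolation (mount points, network stack, process IDs, hostname) that a container can have decoupled from the host.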
### What Docker provides on top of containers
+
+Docker itself is not a container runtime environment; in fact Docker is actually container technology agnostic, with efforts planned for Docker to support Solaris Zones and BSD Jails. What Docker provides is a method of managing, packaging, and deploying containers. While these types of functions may exist to some degree for virtual machines, they traditionally have not existed for most container solutions, and the ones that existed were not as easy to use or fully featured as Docker.
+
+Now that we know what Docker is, let's start learning how Docker works by first installing Docker and deploying a public pre-built container.
+
+## Starting with Installation
+
+As Docker is not installed by default, step 1 will be to install the Docker package; since our example system is running Ubuntu 14.04 we will do this using the Apt package manager.
+
+    # apt-get install docker.io
+    Reading package lists... Done
+    Building dependency tree
+    Reading state information... Done
+    The following extra packages will be installed:
+      aufs-tools cgroup-lite git git-man liberror-perl
+    Suggested packages:
+      btrfs-tools debootstrap lxc rinse git-daemon-run git-daemon-sysvinit git-doc
+      git-el git-email git-gui gitk gitweb git-arch git-bzr git-cvs git-mediawiki
+      git-svn
+    The following NEW packages will be installed:
+      aufs-tools cgroup-lite docker.io git git-man liberror-perl
+    0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
+    Need to get 7,553 kB of archives.
+    After this operation, 46.6 MB of additional disk space will be used.
+    Do you want to continue? [Y/n] y
+
+To check if any containers are running we can execute the docker command using the ps option.
+
+    # docker ps
+    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
+
+The ps function of the docker command works similarly to the Linux ps command. It will show available Docker containers and their current status.
Since we have not started any Docker containers yet, the command shows no running containers.
+
+## Deploying a pre-built nginx Docker container
+
+One of my favorite features of Docker is the ability to deploy a pre-built container in the same way you would deploy a package with yum or apt-get. To explain this better let's deploy a pre-built container running the nginx web server. We can do this by executing the docker command again, however, this time with the run option.
+
+    # docker run -d nginx
+    Unable to find image 'nginx' locally
+    Pulling repository nginx
+    5c82215b03d1: Download complete
+    e2a4fb18da48: Download complete
+    58016a5acc80: Download complete
+    657abfa43d82: Download complete
+    dcb2fe003d16: Download complete
+    c79a417d7c6f: Download complete
+    abb90243122c: Download complete
+    d6137c9e2964: Download complete
+    85e566ddc7ef: Download complete
+    69f100eb42b5: Download complete
+    cd720b803060: Download complete
+    7cc81e9a118a: Download complete
+
+The run function of the docker command tells Docker to find a specified Docker image and start a container running that image. By default, Docker containers run in the foreground, meaning when you execute docker run your shell will be bound to the container's console and the process running within the container. In order to launch this Docker container in the background I included the -d (detach) flag.
+
+By executing docker ps again we can see the nginx container running.
+
+    # docker ps
+    CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                  NAMES
+    f6d31ab01fc9        nginx:latest        nginx -g 'daemon off   4 seconds ago       Up 3 seconds        443/tcp, 80/tcp        desperate_lalande
+
+In the above output we can see the running container desperate_lalande and that this container has been built from the nginx:latest image.
+
+## Docker Images
+
+Images are one of Docker's key features and are similar to virtual machine images. Like virtual machine images, a Docker image is a container that has been saved and packaged.
Docker however, doesn't just stop with the ability to create images. Docker also includes the ability to distribute those images via Docker repositories which are a similar concept to package repositories. This is what gives Docker the ability to deploy an image like you would deploy a package with yum. To get a better understanding of how this works let's look back at the output of the docker run execution. + +# docker run -d nginx +Unable to find image 'nginx' locally +The first message we see is that docker could not find an image named nginx locally. The reason we see this message is that when we executed docker run we told Docker to startup a container, a container based on an image named nginx. Since Docker is starting a container based on a specified image it needs to first find that image. Before checking any remote repository Docker first checks locally to see if there is a local image with the specified name. + +Since this system is brand new there is no Docker image with the name nginx, which means Docker will need to download it from a Docker repository. + +Pulling repository nginx +5c82215b03d1: Download complete +e2a4fb18da48: Download complete +58016a5acc80: Download complete +657abfa43d82: Download complete +dcb2fe003d16: Download complete +c79a417d7c6f: Download complete +abb90243122c: Download complete +d6137c9e2964: Download complete +85e566ddc7ef: Download complete +69f100eb42b5: Download complete +cd720b803060: Download complete +7cc81e9a118a: Download complete +This is exactly what the second part of the output is showing us. By default, Docker uses the Docker Hub repository, which is a repository service that Docker (the company) runs. + +Like GitHub, Docker Hub is free for public repositories but requires a subscription for private repositories. It is possible however, to deploy your own Docker repository, in fact it is as easy as docker run registry. For this article we will not be deploying a custom registry service. 
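Although this article does not deploy a custom registry, the general shape of doing so can be sketched in a few commands. The following is an illustrative command sketch only, not a step performed in this article; the port 5000 default and the localhost:5000/nginx repository name are assumptions based on the registry image's documented behavior:

```
# docker run -d -p 5000:5000 registry
# docker tag nginx localhost:5000/nginx
# docker push localhost:5000/nginx
```

In a sketch like this the push would go to the local registry service rather than Docker Hub, because the repository name is prefixed with the registry's host and port.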
+ +## Stopping and Removing the Container +Before moving on to building a custom Docker container let's first clean up our Docker environment. We will do this by stopping the container from earlier and removing it. + +To start a container we executed docker with the run option; in order to stop this same container we simply need to execute docker with the kill option, specifying the container name. + +# docker kill desperate_lalande +desperate_lalande +If we execute docker ps again we will see that the container is no longer running. + +# docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +However, at this point we have only stopped the container; while it may no longer be running it still exists. By default, docker ps will only show running containers; if we add the -a (all) flag it will show all containers, running or not. + +# docker ps -a +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +f6d31ab01fc9 5c82215b03d1 nginx -g 'daemon off 4 weeks ago Exited (-1) About a minute ago desperate_lalande +In order to fully remove the container we can use the docker command with the rm option. + +# docker rm desperate_lalande +desperate_lalande +While this container has been removed, we still have an nginx image available. If we were to re-run docker run -d nginx again the container would be started without having to fetch the nginx image again. This is because Docker already has a saved copy on our local system. + +To see a full list of local images we can simply run the docker command with the images option. + +# docker images +REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE +nginx latest 9fab4090484a 5 days ago 132.8 MB +## Building our own custom image +At this point we have used a few basic Docker commands to start, stop and remove a common pre-built image. In order to "Dockerize" this blog however, we are going to have to build our own Docker image and that means creating a Dockerfile.
+ +With most virtual machine environments if you wish to create an image of a machine you need to first create a new virtual machine, install the OS, install the application and then finally convert it to a template or image. With Docker however, these steps are automated via a Dockerfile. A Dockerfile is a way of providing build instructions to Docker for the creation of a custom image. In this section we are going to build a custom Dockerfile that can be used to deploy this blog. + +### Understanding the Application +Before we can jump into creating a Dockerfile we first need to understand what is required to deploy this blog. + +The blog itself is actually static HTML pages generated by a custom static site generator that I wrote, named hamerkop. The generator is very simple and more about getting the job done for this blog specifically. All the code and source files for this blog are available via a public GitHub repository. In order to deploy this blog we simply need to grab the contents of the GitHub repository, install Python along with some Python modules and execute the hamerkop application. To serve the generated content we will use nginx, which means we will also need nginx to be installed. + +So far this should be a pretty simple Dockerfile, but it will show us quite a bit of the Dockerfile syntax. To get started we can clone the GitHub repository and create a Dockerfile with our favorite editor (vi in my case). + +# git clone https://github.com/madflojo/blog.git +Cloning into 'blog'... +remote: Counting objects: 622, done. +remote: Total 622 (delta 0), reused 0 (delta 0), pack-reused 622 +Receiving objects: 100% (622/622), 14.80 MiB | 1.06 MiB/s, done. +Resolving deltas: 100% (242/242), done. +Checking connectivity... done. +# cd blog/ +# vi Dockerfile +### FROM - Inheriting a Docker image +The first instruction of a Dockerfile is the FROM instruction. This is used to specify an existing Docker image to use as our base image.
This basically provides us with a way to inherit another Docker image. In this case we will be starting with the same nginx image we were using before, if we wanted to start with a blank slate we could use the Ubuntu Docker image by specifying ubuntu:latest. + +## Dockerfile that generates an instance of http://bencane.com + +FROM nginx:latest +MAINTAINER Benjamin Cane +In addition to the FROM instruction, I also included a MAINTAINER instruction which is used to show the Author of the Dockerfile. + +As Docker supports using # as a comment marker, I will be using this syntax quite a bit to explain the sections of this Dockerfile. + +### Running a test build +Since we inherited the nginx Docker image our current Dockerfile also inherited all the instructions within the Dockerfile used to build that nginx image. What this means is even at this point we are able to build a Docker image from this Dockerfile and run a container from that image. The resulting image will essentially be the same as the nginx image but we will run through a build of this Dockerfile now and a few more times as we go to help explain the Docker build process. + +In order to start the build from a Dockerfile we can simply execute the docker command with the build option. + +# docker build -t blog /root/blog +Sending build context to Docker daemon 23.6 MB +Sending build context to Docker daemon +Step 0 : FROM nginx:latest + ---> 9fab4090484a +Step 1 : MAINTAINER Benjamin Cane + ---> Running in c97f36450343 + ---> 60a44f78d194 +Removing intermediate container c97f36450343 +Successfully built 60a44f78d194 +In the above example I used the -t (tag) flag to "tag" the image as "blog". This essentially allows us to name the image, without specifying a tag the image would only be callable via an Image ID that Docker assigns. In this case the Image ID is 60a44f78d194 which we can see from the docker command's build success message. + +In addition to the -t flag, I also specified the directory /root/blog. 
This directory is the "build directory", which is the directory that contains the Dockerfile and any other files necessary to build this container. + +Now that we have run through a successful build, let's start customizing this image. + +### Using RUN to execute apt-get +The static site generator used to generate the HTML pages is written in Python and because of this the first custom task we should perform within this Dockerfile is to install Python. To install the Python package we will use the Apt package manager. This means we will need to specify within the Dockerfile that apt-get update and apt-get install python-dev are executed; we can do this with the RUN instruction. + +## Dockerfile that generates an instance of http://bencane.com + +FROM nginx:latest +MAINTAINER Benjamin Cane + +## Install python and pip +RUN apt-get update +RUN apt-get install -y python-dev python-pip +In the above we are simply using the RUN instruction to tell Docker that when it builds this image it will need to execute the specified apt-get commands. The interesting part of this is that these commands are only executed within the context of this container. What this means is even though python-dev and python-pip are being installed within the container, they are not being installed for the host itself. Or to put it more simply: within the container the pip command will execute; outside the container, the pip command does not exist. + +It is also important to note that the Docker build process does not accept user input during the build. This means that any commands being executed by the RUN instruction must complete without user input. This adds a bit of complexity to the build process as many applications require user input during installation. For our example, none of the commands executed by RUN require user input. + +### Installing Python modules +With Python installed we now need to install some Python modules.
To do this outside of Docker, we would generally use the pip command and reference a file within the blog's Git repository named requirements.txt. In an earlier step we used the git command to "clone" the blog's GitHub repository to the /root/blog directory; this directory also happens to be the directory in which we created the Dockerfile. This is important as it means the contents of the Git repository are accessible to Docker during the build process. + +When executing a build, Docker will set the context of the build to the specified "build directory". This means that any files within that directory and below can be used during the build process; files outside of that directory (outside of the build context) are inaccessible. + +In order to install the required Python modules we will need to copy the requirements.txt file from the build directory into the container. We can do this using the COPY instruction within the Dockerfile. + +## Dockerfile that generates an instance of http://bencane.com + +FROM nginx:latest +MAINTAINER Benjamin Cane + +## Install python and pip +RUN apt-get update +RUN apt-get install -y python-dev python-pip + +## Create a directory for required files +RUN mkdir -p /build/ + +## Add requirements file and run pip +COPY requirements.txt /build/ +RUN pip install -r /build/requirements.txt +Within the Dockerfile we added 3 instructions. The first instruction uses RUN to create a /build/ directory within the container. This directory will be used to copy any application files needed to generate the static HTML pages. The second instruction is the COPY instruction which copies the requirements.txt file from the "build directory" (/root/blog) into the /build directory within the container. The third uses the RUN instruction to execute the pip command, installing all the modules specified within the requirements.txt file. + +COPY is an important instruction to understand when building custom images.
Without specifically copying the file within the Dockerfile this Docker image would not contain the requirements.txt file. With Docker containers everything is isolated, unless specifically executed within a Dockerfile a container is not likely to include required dependencies. + +### Re-running a build +Now that we have a few customization tasks for Docker to perform let's try another build of the blog image again. + +# docker build -t blog /root/blog +Sending build context to Docker daemon 19.52 MB +Sending build context to Docker daemon +Step 0 : FROM nginx:latest + ---> 9fab4090484a +Step 1 : MAINTAINER Benjamin Cane + ---> Using cache + ---> 8e0f1899d1eb +Step 2 : RUN apt-get update + ---> Using cache + ---> 78b36ef1a1a2 +Step 3 : RUN apt-get install -y python-dev python-pip + ---> Using cache + ---> ef4f9382658a +Step 4 : RUN mkdir -p /build/ + ---> Running in bde05cf1e8fe + ---> f4b66e09fa61 +Removing intermediate container bde05cf1e8fe +Step 5 : COPY requirements.txt /build/ + ---> cef11c3fb97c +Removing intermediate container 9aa8ff43f4b0 +Step 6 : RUN pip install -r /build/requirements.txt + ---> Running in c50b15ddd8b1 +Downloading/unpacking jinja2 (from -r /build/requirements.txt (line 1)) +Downloading/unpacking PyYaml (from -r /build/requirements.txt (line 2)) + +Successfully installed jinja2 PyYaml mistune markdown MarkupSafe +Cleaning up... + ---> abab55c20962 +Removing intermediate container c50b15ddd8b1 +Successfully built abab55c20962 +From the above build output we can see the build was successful, but we can also see another interesting message; ---> Using cache. What this message is telling us is that Docker was able to use its build cache during the build of this image. + +#### Docker build cache + +When Docker is building an image, it doesn't just build a single image; it actually builds multiple images throughout the build processes. In fact we can see from the above output that after each "Step" Docker is creating a new image. 
+ + Step 5 : COPY requirements.txt /build/ + ---> cef11c3fb97c +The last line from the above snippet is actually Docker informing us of the creation of a new image; it does this by printing the Image ID, cef11c3fb97c. The useful thing about this approach is that Docker is able to use these images as cache during subsequent builds of the blog image. This is useful because it allows Docker to speed up the build process for new builds of the same container. If we look at the example above we can actually see that rather than installing the python-dev and python-pip packages again, Docker was able to use a cached image. However, since Docker was unable to find a build that executed the mkdir command, each subsequent step was executed. + +The Docker build cache is a bit of a gift and a curse; the reason for this is that the decision to use cache or to rerun the instruction is made within a very narrow scope. For example, if there was a change to the requirements.txt file Docker would detect this change during the build and start fresh from that point forward. It does this because it can view the contents of the requirements.txt file. The execution of the apt-get commands, however, is another story. If the Apt repository that provides the Python packages were to contain a newer version of the python-pip package, Docker would not be able to detect the change and would simply use the build cache. This means that an older package may be installed. While this may not be a major issue for the python-pip package it could be a problem if the installation was caching a package with a known vulnerability. + +For this reason it is useful to periodically rebuild the image without using Docker's cache. To do this you can simply specify --no-cache=True when executing a Docker build. + +### Deploying the rest of the blog +With the Python packages and modules installed this leaves us at the point of copying the required application files and running the hamerkop application.
To do this we will simply use more COPY and RUN instructions. + +## Dockerfile that generates an instance of http://bencane.com + +FROM nginx:latest +MAINTAINER Benjamin Cane + +## Install python and pip +RUN apt-get update +RUN apt-get install -y python-dev python-pip + +## Create a directory for required files +RUN mkdir -p /build/ + +## Add requirements file and run pip +COPY requirements.txt /build/ +RUN pip install -r /build/requirements.txt + +## Add blog code and required files +COPY static /build/static +COPY templates /build/templates +COPY hamerkop /build/ +COPY config.yml /build/ +COPY articles /build/articles + +## Run Generator +RUN /build/hamerkop -c /build/config.yml +Now that we have the rest of the build instructions, let's run through another build and verify that the image builds successfully. + +# docker build -t blog /root/blog/ +Sending build context to Docker daemon 19.52 MB +Sending build context to Docker daemon +Step 0 : FROM nginx:latest + ---> 9fab4090484a +Step 1 : MAINTAINER Benjamin Cane + ---> Using cache + ---> 8e0f1899d1eb +Step 2 : RUN apt-get update + ---> Using cache + ---> 78b36ef1a1a2 +Step 3 : RUN apt-get install -y python-dev python-pip + ---> Using cache + ---> ef4f9382658a +Step 4 : RUN mkdir -p /build/ + ---> Using cache + ---> f4b66e09fa61 +Step 5 : COPY requirements.txt /build/ + ---> Using cache + ---> cef11c3fb97c +Step 6 : RUN pip install -r /build/requirements.txt + ---> Using cache + ---> abab55c20962 +Step 7 : COPY static /build/static + ---> 15cb91531038 +Removing intermediate container d478b42b7906 +Step 8 : COPY templates /build/templates + ---> ecded5d1a52e +Removing intermediate container ac2390607e9f +Step 9 : COPY hamerkop /build/ + ---> 59efd1ca1771 +Removing intermediate container b5fbf7e817b7 +Step 10 : COPY config.yml /build/ + ---> bfa3db6c05b7 +Removing intermediate container 1aebef300933 +Step 11 : COPY articles /build/articles + ---> 6b61cc9dde27 +Removing intermediate container be78d0eb1213 +Step 12
: RUN /build/hamerkop -c /build/config.yml + ---> Running in fbc0b5e574c5 +Successfully created file /usr/share/nginx/html//2011/06/25/checking-the-number-of-lwp-threads-in-linux +Successfully created file /usr/share/nginx/html//2011/06/checking-the-number-of-lwp-threads-in-linux + +Successfully created file /usr/share/nginx/html//archive.html +Successfully created file /usr/share/nginx/html//sitemap.xml + ---> 3b25263113e1 +Removing intermediate container fbc0b5e574c5 +Successfully built 3b25263113e1 +### Running a custom container +With a successful build we can now start our custom container by running the docker command with the run option, similar to how we started the nginx container earlier. + +# docker run -d -p 80:80 --name=blog blog +5f6c7a2217dcdc0da8af05225c4d1294e3e6bb28a41ea898a1c63fb821989ba1 +Once again the -d (detach) flag was used to tell Docker to run the container in the background. However, there are also two new flags. The first new flag is --name, which is used to give the container a user specified name. In the earlier example we did not specify a name and because of that Docker randomly generated one. The second new flag is -p, this flag allows users to map a port from the host machine to a port within the container. + +The base nginx image we used exposes port 80 for the HTTP service. By default, ports bound within a Docker container are not bound on the host system as a whole. In order for external systems to access ports exposed within a container the ports must be mapped from a host port to a container port using the -p flag. The command above maps port 80 from the host, to port 80 within the container. If we wished to map port 8080 from the host, to port 80 within the container we could do so by specifying the ports in the following syntax -p 8080:80. + +From the above command it appears that our container was started successfully, we can verify this by executing docker ps. 
+ +# docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +d264c7ef92bd blog:latest nginx -g 'daemon off 3 seconds ago Up 3 seconds 443/tcp, 0.0.0.0:80->80/tcp blog +## Wrapping up + +At this point we now have a running custom Docker container. While we touched on a few Dockerfile instructions within this article we have yet to discuss all the instructions. For a full list of Dockerfile instructions you can check out Docker's reference page, which explains the instructions very well. + +Another good resource is their Dockerfile Best Practices page which contains quite a few best practices for building custom Dockerfiles. Some of these tips are very useful such as strategically ordering the commands within the Dockerfile. In the above examples our Dockerfile has the COPY instruction for the articles directory as the last COPY instruction. The reason for this is that the articles directory will change quite often. It's best to put instructions that will change often at the lowest point possible within the Dockerfile to optimize steps that can be cached. + +In this article we covered how to start a pre-built container and how to build, then deploy a custom container. While there is quite a bit to learn about Docker this article should give you a good idea on how to get started. Of course, as always if you think there is anything that should be added drop it in the comments below.
From 1d74e9bd5bda27b4937d2bc1f8346b4f45931c3f Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 3 Dec 2015 06:40:27 +0800 Subject: [PATCH 090/160] PUB:20151109 How to Set Up AWStats On Ubuntu Server @strugglingyouth --- ...151109 How to Set Up AWStats On Ubuntu Server.md | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) rename {translated/tech => published}/20151109 How to Set Up AWStats On Ubuntu Server.md (84%) diff --git a/translated/tech/20151109 How to Set Up AWStats On Ubuntu Server.md b/published/20151109 How to Set Up AWStats On Ubuntu Server.md similarity index 84% rename from translated/tech/20151109 How to Set Up AWStats On Ubuntu Server.md rename to published/20151109 How to Set Up AWStats On Ubuntu Server.md index 11bfdde3ab..7bea4e40d8 100644 --- a/translated/tech/20151109 How to Set Up AWStats On Ubuntu Server.md +++ b/published/20151109 How to Set Up AWStats On Ubuntu Server.md @@ -1,16 +1,14 @@ - 如何在 Ubuntu 服务器中配置 AWStats ================================================================================ ![](https://www.maketecheasier.com/assets/uploads/2015/10/Apache_awstats_featured.jpg) - -AWStats 是一个开源的网站分析报告工具,自带网络,流媒体,FTP 或邮件服务器统计图。此日志分析器以 CGI 或命令行方式进行工作,并在网页中以图表的形式尽可能的显示你日志中所有的信息。它采用的是部分信息文件,以便能够频繁并快速处理大量的日志文件。它支持绝大多数 Web 服务器日志文件格式,包括 Apache,IIS 等。 +AWStats 是一个开源的网站分析报告工具,可以生成强大的网站、流媒体、FTP 或邮件服务器的访问统计图。此日志分析器以 CGI 或命令行方式进行工作,并在网页中以图表的形式尽可能的显示你日志中所有的信息。它可以“部分”读取信息文件,以便能够频繁并快速处理大量的日志文件。它支持绝大多数 Web 服务器日志文件格式,包括 Apache,IIS 等。 本文将帮助你在 Ubuntu 上安装配置 AWStats。 ### 安装 AWStats 包 ### -默认情况下,AWStats 的包在 Ubuntu 仓库中。 +默认情况下,AWStats 的包可以在 Ubuntu 仓库中找到。 可以通过运行下面的命令来安装: @@ -18,7 +16,7 @@ AWStats 是一个开源的网站分析报告工具,自带网络,流媒体,FT 接下来,你需要启用 Apache 的 CGI 模块。 -运行以下命令来启动: +运行以下命令来启动 CGI: sudo a2enmod cgi @@ -38,7 +36,7 @@ AWStats 是一个开源的网站分析报告工具,自带网络,流媒体,FT sudo nano /etc/awstats/awstats.test.com.conf -像下面这样修改下: +像下面这样修改一下: # Change to Apache log file, by default it's /var/log/apache2/access.log LogFile="/var/log/apache2/access.log" @@ -73,6 +71,7 @@ AWStats 
是一个开源的网站分析报告工具,自带网络,流媒体,FT ### 测试 AWStats ### 现在,您可以通过访问 url “http://your-server-ip/cgi-bin/awstats.pl?config=test.com.” 来查看 AWStats 的页面。 + 它的页面像下面这样: ![awstats_page](https://www.maketecheasier.com/assets/uploads/2015/10/awstats_page.jpg) @@ -101,7 +100,7 @@ via: https://www.maketecheasier.com/set-up-awstats-ubuntu/ 作者:[Hitesh Jethva][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 7669b8bb43f56e6b679a0d084266ac2a88dfbd8f Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 3 Dec 2015 06:52:08 +0800 Subject: [PATCH 091/160] PUB:20151125 The tar command explained @ictlyh --- .../20151125 The tar command explained.md | 34 +++++++++++-------- 1 file changed, 20 insertions(+), 14 deletions(-) rename {translated/tech => published}/20151125 The tar command explained.md (75%) diff --git a/translated/tech/20151125 The tar command explained.md b/published/20151125 The tar command explained.md similarity index 75% rename from translated/tech/20151125 The tar command explained.md rename to published/20151125 The tar command explained.md index a3d19ac34f..22244bf89c 100644 --- a/translated/tech/20151125 The tar command explained.md +++ b/published/20151125 The tar command explained.md @@ -1,16 +1,16 @@ -tar 命令详解 +tar 命令使用介绍 ================================================================================ -Linux [tar][1] 命令是归档或分发文件时的强大武器。GNU tar 归档包可以包含多个文件和目录,还能保留权限,它还支持多种压缩格式。Tar 表示 "**T**ape **Ar**chiver",这是一种 POSIX 标准。 +Linux [tar][1] 命令是归档或分发文件时的强大武器。GNU tar 归档包可以包含多个文件和目录,还能保留其文件权限,它还支持多种压缩格式。Tar 表示 "**T**ape **Ar**chiver",这种格式是 POSIX 标准。 ### Tar 文件格式 ### -tar 压缩等级简介。 +tar 压缩等级简介: - **无压缩** 没有压缩的文件用 .tar 结尾。 - **Gzip 压缩** Gzip 格式是 tar 使用最广泛的压缩格式,它能快速压缩和提取文件。用 gzip 压缩的文件通常用 .tar.gz 或 .tgz 结尾。这里有一些如何[创建][2]和[解压][3] tar.gz 文件的例子。 -- **Bzip2 压缩** 和 Gzip格式相比 Bzip2 提供了更好的压缩比。创建压缩文件也比较慢,通常采用 .tar.bz2 结尾。 +- 
**Bzip2 压缩** 和 Gzip 格式相比 Bzip2 提供了更好的压缩比。创建压缩文件也比较慢,通常采用 .tar.bz2 结尾。 - **Lzip(LAMA)压缩** Lizp 压缩结合了 Gzip 快速的优势,以及和 Bzip2 类似(甚至更好) 的压缩率。尽管有这些好处,这个格式并没有得到广泛使用。 -- **Lzop 压缩** 这个压缩选项也许是 tar 最快的压缩格式,它的压缩率和 gzip 类似,也没有广泛使用。 +- **Lzop 压缩** 这个压缩选项也许是 tar 最快的压缩格式,它的压缩率和 gzip 类似,但也没有广泛使用。 常见的格式是 tar.gz 和 tar.bz2。如果你想快速压缩,那么就是用 gzip。如果归档文件大小比较重要,就是用 tar.bz2。 @@ -59,11 +59,13 @@ tar 命令在 Windows 也可以使用,你可以从 Gunwin 项目[http://gnuwin - **[p]** 这个选项表示 “preserve”,它指示 tar 在归档文件中保留文件属主和权限信息。 - **[c]** 表示创建。要创建文件时不能缺少这个选项。 - **[z]** z 选项启用 gzip 压缩。 -- **[f]** file 选项告诉 tar 创建一个归档文件。如果没有这个选项 tar 会把输出发送到 stdout。 +- **[f]** file 选项告诉 tar 创建一个归档文件。如果没有这个选项 tar 会把输出发送到标准输出( LCTT 译注:如果没有指定,标准输出默认是屏幕,显然你不会想在屏幕上显示一堆乱码,通常你可以用管道符号送到其它程序去)。 -#### Tar 命令事例 #### +#### Tar 命令示例 #### -**事例 1: 备份 /etc 目录** 创建 /etc 配置目录的一个备份。备份保存在 root 目录。 +**示例 1: 备份 /etc 目录** + +创建 /etc 配置目录的一个备份。备份保存在 root 目录。 tar pczvf /root/etc.tar.gz /etc @@ -71,19 +73,23 @@ tar 命令在 Windows 也可以使用,你可以从 Gunwin 项目[http://gnuwin 要以 root 用户运行命令确保 /etc 中的所有文件都会被包含在备份中。这次,我在命令中添加了 [v] 选项。这个选项表示 verbose,它告诉 tar 显示所有被包含到归档文件中的文件名。 -**事例 2: 备份你的 /home 目录** 创建你的 home 目录的备份。备份会被保存到 /backup 目录。 +**示例 2: 备份你的 /home 目录** + +创建你的 home 目录的备份。备份会被保存到 /backup 目录。 tar czf /backup/myuser.tar.gz /home/myuser 用你的用户名替换 myuser。这个命令中,我省略了 [p] 选项,也就不会保存权限。 -**事例 3: 基于文件的 MySQL 数据库备份** 在大部分 Linux 发行版中,MySQL 数据库保存在 /var/lib/mysql。你可以使用下面的命令检查: +**示例 3: 基于文件的 MySQL 数据库备份** + +在大部分 Linux 发行版中,MySQL 数据库保存在 /var/lib/mysql。你可以使用下面的命令来查看: ls /var/lib/mysql ![使用 tar 基于文件备份 MySQL](https://www.howtoforge.com/images/linux-tar-command/big/tar_backup_mysql.png) -用 tar 备份 MySQL 文件时为了保持一致性,首先停用数据库服务器。备份会被写到 /backup 目录。 +用 tar 备份 MySQL 数据文件时为了保持数据一致性,首先停用数据库服务器。备份会被写到 /backup 目录。 1) 创建 backup 目录 @@ -108,10 +114,10 @@ tar 命令在 Windows 也可以使用,你可以从 Gunwin 项目[http://gnuwin #### tar 命令选项解释 #### - **[x]** x 表示提取,提取 tar 文件时这个命令不可缺少。 -- **[z]** z 选项告诉 tar 要解压的归档文件时 gzip 格式。 +- **[z]** z 选项告诉 tar 要解压的归档文件是 gzip 格式。 - **[f]** 该选项告诉 tar 从一个文件中读取归档内容,本例中是 myarchive.tar.gz。 -上面的 tar 
命令会安静地提取 tar.gz 文件,它只会显示错误信息。如果你想要看提取了哪些文件,那么添加 “v” 选项。 +上面的 tar 命令会安静地提取 tar.gz 文件,除非有错误信息。如果你想要看提取了哪些文件,那么添加 “v” 选项。 tar xzvf myarchive.tar.gz @@ -125,7 +131,7 @@ via: https://www.howtoforge.com/tutorial/linux-tar-command/ 作者:[howtoforge][a] 译者:[ictlyh](http://mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 943120754f7a3e9637299e80ea234cc77c2803e8 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 3 Dec 2015 07:51:38 +0800 Subject: [PATCH 092/160] PUB:20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04 @strugglingyouth --- ....8.16 in Ubuntu 16.04 or 15.10 or 14.04.md | 21 +++++++++---------- 1 file changed, 10 insertions(+), 11 deletions(-) rename {translated/tech => published}/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md (66%) diff --git a/translated/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md b/published/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md similarity index 66% rename from translated/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md rename to published/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md index f7644bcabb..7c2e304403 100644 --- a/translated/tech/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md +++ b/published/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md @@ -1,11 +1,10 @@ - 如何在 Ubuntu 16.04,15.10,14.04 中安装 GIMP 2.8.16 ================================================================================ ![GIMP 2.8.16](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-icon.png) -GIMP 图像编辑器 2.8.16 版本在其20岁生日时发布了。下面是如何安装或升级 GIMP 在 Ubuntu 16.04, Ubuntu 15.10, Ubuntu 14.04, Ubuntu 12.04 及其衍生版本中,如,Linux Mint 17.x/13, Elementary OS Freya。 +GIMP 图像编辑器 2.8.16 版本在其20岁生日时发布了。下面是如何安装或升级 GIMP 在 Ubuntu 16.04, 
Ubuntu 15.10, Ubuntu 14.04, Ubuntu 12.04 及其衍生版本中,如 Linux Mint 17.x/13, Elementary OS Freya。 -GIMP 2.8.16 支持层组在 OpenRaster 文件中,修复了在 PSD 中的层组支持以及各种用户接口改进,OSX 系统修复构建并有许多新的变化。请阅读 [官方声明][1]。 +GIMP 2.8.16 支持 OpenRaster 文件中的层组,修复了 PSD 中的层组支持以及各种用户界面改进,修复了 OSX 上的构建系统,以及更多新的变化。请阅读 [官方声明][1]。 ![GIMP image editor 2.8,16](http://ubuntuhandbook.org/wp-content/uploads/2014/08/gimp-2-8-14.jpg) @@ -15,21 +14,21 @@ GIMP 2.8.16 支持层组在 OpenRaster 文件中,修复了在 PSD 中的层组 **1. 添加 GIMP PPA** -从 Unity Dash 中打开终端,或通过 Ctrl+Alt+T 快捷键。在它打开它后,粘贴下面的命令并回车: +从 Unity Dash 中打开终端,或通过 Ctrl+Alt+T 快捷键打开。在它打开它后,粘贴下面的命令并回车: sudo add-apt-repository ppa:otto-kesselgulasch/gimp ![add GIMP PPA](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-ppa.jpg) -输入你的密码,密码不会在终端显示,然后回车继续。 +输入你的密码,密码不会在终端显示,然后回车继续。 **2. 安装或升级编辑器** -在添加了 PPA 后,启动 **Software Updater**(在 Mint 中是 Software Manager)。检查更新后,你将看到 GIMP 的更新列表。点击 “Install Now” 进行升级。 +在添加了 PPA 后,启动 **Software Updater**(在 Mint 中是 Software Manager)。检查更新后,你将看到 GIMP 的更新列表。点击 “Install Now” 进行升级。 ![upgrade-gimp2816](http://ubuntuhandbook.org/wp-content/uploads/2015/11/upgrade-gimp2816.jpg) -对于那些喜欢 Linux 命令的,按顺序执行下面的命令,刷新仓库的缓存然后安装 GIMP: +对于那些喜欢 Linux 命令的,按顺序执行下面的命令,刷新仓库的缓存然后安装 GIMP: sudo apt-get update @@ -37,13 +36,13 @@ GIMP 2.8.16 支持层组在 OpenRaster 文件中,修复了在 PSD 中的层组 **3. (可选的) 卸载** -如果你想卸载或降级 GIMP 图像编辑器。从软件中心直接删除它,或者按顺序运行下面的命令来将 PPA 清除并降级软件: +如果你想卸载或降级 GIMP 图像编辑器。从软件中心直接删除它,或者按顺序运行下面的命令来将 PPA 清除并降级软件: sudo apt-get install ppa-purge sudo ppa-purge ppa:otto-kesselgulasch/gimp -就这样。玩的愉快! +就这样。玩的愉快! 
-------------------------------------------------------------------------------- @@ -51,8 +50,8 @@ via: http://ubuntuhandbook.org/index.php/2015/11/how-to-install-gimp-2-8-16-in-u 作者:[Ji m][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - +校对:[wxy](https://github.com/wxy) + 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://ubuntuhandbook.org/index.php/about/ From 7cd69840281c4048c21d822fcdfeebc9a3f2c376 Mon Sep 17 00:00:00 2001 From: ivo wang Date: Thu, 3 Dec 2015 11:29:59 +0800 Subject: [PATCH 093/160] Update 20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 校对来里面的一些错误,修改了一些话的说法 --- ...tihomed ISC DHCP Server on Debian Linux.md | 131 ++++++++++++------ 1 file changed, 86 insertions(+), 45 deletions(-) diff --git a/translated/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md b/translated/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md index 8fb67f0697..fc45af4db0 100644 --- a/translated/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md +++ b/translated/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md @@ -2,21 +2,28 @@ How to Install and Configure Multihomed ISC DHCP Server on Debian Linux debian linux上安装配置 ISC DHCP Server ================================================================================ Dynamic Host Control Protocol (DHCP) offers an expedited method for network administrators to provide network layer addressing to hosts on a constantly changing, or dynamic, network. One of the most common server utilities to offer DHCP functionality is ISC DHCP Server. 
The goal of this service is to provide hosts with the necessary network information to be able to communicate on the networks in which the host is connected. Information that is typically served by this service can include: DNS server information, network address (IP), subnet mask, default gateway information, hostname, and much more. -动态主机控制协议(DHCP)给网络管理员提供一种便捷的方式,为不断变化的网络主机或是动态网络提供网络层地址。其中最常用的DHCP服务工具是 ISC DHCP Server。DHCP服务的目的是给给主机提供必要的网络信息以便能够和其他连接在网络中的主机互相通信。DHCP服务一般包括以下信息:DNS服务器信息,网络地址(IP),子网掩码,默认网关信息,主机名等等。 + +动态主机控制协议(DHCP)给网络管理员提供一种便捷的方式,为不断变化的网络主机或是动态网络提供网络层地址。其中最常用的DHCP服务工具是 ISC DHCP Server。DHCP服务的目的是给主机提供必要的网络信息以便能够和其他连接在网络中的主机互相通信。DHCP服务一般包括以下信息:DNS服务器信息,网络地址(IP),子网掩码,默认网关信息,主机名等等。 This tutorial will cover ISC-DHCP-Server version 4.2.4 on a Debian 7.7 server that will manage multiple virtual local area networks (VLAN) but can very easily be applied to a single network setup as well. -本节教程介绍4.2.4版的ISC-DHCP-Server如何在Debian7.7上管理多个虚拟局域网(VLAN),但是它也可以非常简单的用于单一网络。 + +本教程介绍4.2.4版的ISC-DHCP-Server如何在Debian7.7上管理多个虚拟局域网(VLAN),它也可以非常容易地配置用于单一网络。 The test network that this server was setup on has traditionally relied on a Cisco router to manage the DHCP address leases. The network currently has 12 VLANs needing to be managed by one centralized server. By moving this responsibility to a dedicated server, the router can regain resources for more important tasks such as routing, access control lists, traffic inspection, and network address translation.
-测试用的网络是通过思科路由器依赖传统的方式来管DHCP租约地址,目前有12个VLANs需要通过路由器的集中式服务器来管理。把DHCP这个责任转移到一个专用的服务器上面,路由器可以回收资源去用到更重要的任务上,比如路由寻址,访问控制列表,流量监测以及网络地址转换等。 + +测试用的网络是通过思科路由器使用传统的方式来管理DHCP租约地址的,目前有12个VLANs需要由一台集中式服务器来管理。把DHCP的任务转移到一个专用的服务器上面,路由器可以收回相应的资源,把资源用到更重要的任务上,比如路由寻址,访问控制列表,流量监测以及网络地址转换等。 The other benefit to moving DHCP to a dedicated server will, in a later guide, involve setting up Dynamic Domain Name Service (DDNS) so that new host’s host-names will be added to the DNS system when the host requests a DHCP address from the server. -另一个将DHCP服务移动到专用服务器的好处,后续会讲到,建立动态域名服务器(DDNS)这样当主机从服务器请求DHCP地址的时候,新主机的主机名将被添加到DNS系统里面。 + +另一个将DHCP服务转移到专用服务器的好处,以后会讲到,它可以建立动态域名服务(DDNS),这样当主机从服务器请求DHCP地址的时候,新主机的主机名将被添加到DNS系统里面。 + ### Step 1: Installing and Configuring ISC DHCP Server ### + ### 安装和配置 ISC DHCP Server ### 1. To start the process of creating this multi-homed server, the ISC software needs to be installed via the Debian repositories using the ‘apt‘ utility. As with all tutorials, root or sudo access is assumed. Please make the appropriate modifications to the following commands. -1. 创建这个多宿主服务器的过程中,需要用apt工具来安装Debian软件仓库中的ISC软件。与其他教程一样需要使用root或者sudo访问权限。请适当的修改以使用下面的命令。(译者注:下面中括号里面是注释,使用的时候请删除,#表示使用的root权限) + +1. 使用apt工具安装Debian软件仓库中的ISC软件,来创建这个多宿主服务器。与其他教程一样需要使用root或者sudo访问权限。请适当的修改,以便使用下面的命令。(译者注:下面中括号里面是注释,使用的时候请删除,#表示使用root权限) # apt-get install isc-dhcp-server [安装 ISC DHCP Server 软件] @@ -27,7 +34,7 @@ The other benefit to moving DHCP to a dedicated server will, in a later guide, i 2. Now that the server software is confirmed installed, it is now necessary to configure the server with the network information that it will need to hand out. At the bare minimum, the administrator needs to know the following information for a basic DHCP scope: -2. 现在已经确认服务软件安装完毕,现在需要配置服务器,它将分发网络信息。作为管理员你最起码应该了解DHCP的信息如下: +2.
确认服务软件已经安装完成,现在需要用一些网络信息来配置服务器,这样服务器才能够根据我们的需要来分发网络信息。作为管理员最起码需要了解的DHCP信息如下: - The network addresses - 网络地址 - The subnet masks - 子网掩码 - The range of addresses to be dynamically assigned - 动态分配的地址范围 Other useful information to have the server dynamically assign includes: -其他一些使服务器动态分配的有用信息包括: +其他一些服务器动态分配的有用信息包括: - Default gateway - 默认网关 - DNS server IP addresses - DNS服务器IP地址 - The Domain Name - 域名 - Host name - 主机名 - Network Broadcast addresses - 网络广播地址 These are merely a few of the many options that the ISC DHCP server can handle. To get a complete list as well as a description of each option, enter the following command after installing the package: -这只是很少一部分能让ISC DHCP server处理的选项。完整的查看所有选项及其描述需要在安装好软件后输入以下命令: +这只是能让ISC DHCP server处理的选项中非常少的一部分。如果你想查看所有选项及其描述,需要在安装好软件后输入以下命令: # man dhcpd.conf 3. Once the administrator has concluded all the necessary information for this server to hand out it is time to configure the DHCP server as well as the necessary pools. Before creating any pools or server configurations though, the DHCP service must be configured to listen on one of the server’s interfaces. -3. 一旦管理员已经确定了这台服务器需要分发出去的必要信息,那么是时候配置它和分配必要的地址池了。在配置任何地址池或服务器配置之前,DHCP服务必须配置好,来监听这台服务器上面的一个接口。 +3. 一旦管理员已经确定了这台服务器需要分发的必要信息,那么是时候配置服务器并且分配必要的地址池了。在配置任何地址池或服务器配置之前,DHCP服务必须配置好,来侦听这台服务器上面的一个接口。 On this particular server, a NIC team has been setup and DHCP will listen on the teamed interfaces which were given the name `'bond0'`. Be sure to make the appropriate changes given the server and environment in which everything is being configured. The defaults in this file are okay for this tutorial.
-在这台特定的服务器上,一个网卡设置好后,DHCP会监听命名为`'bond0'`的接口。请适确保适当的更改服务器以及网络环境。下面默认的配置都是针对于本次教程的。 + +在这台特定的服务器上,设置好网卡后,DHCP会侦听名为`'bond0'`的接口。请根据你的实际情况来更改服务器以及网络环境。下面的配置都是针对本教程的。 + ![Configure ISC DHCP Network](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DHCP-Network.jpg) This line will instruct the DHCP service to listen for DHCP traffic on the specified interface(s). At this point, it is time to modify the main configuration file to enable the DHCP pools on the necessary networks. The main configuration file is located at /etc/dhcp/dhcpd.conf. Open the file with a text editor to begin: -这行指定的是DHCP服务监听接口(一个或多个)上的DHCP流量。修改主要的配置文件分派DHCP池配置在所需要的网络上。主要的配置文件的位置在/etc/dhcp/dhcpd.conf。用文本编辑器打开这个文件 +这行指定的是DHCP服务侦听接口(一个或多个)上的DHCP流量。修改主要的配置文件,分配适合的DHCP地址池到所需要的网络上。配置文件的位置是/etc/dhcp/dhcpd.conf。用文本编辑器打开这个文件: # nano /etc/dhcp/dhcpd.conf This file is the configuration for the DHCP server specific options as well as all of the pools/hosts one wishes to configure. The top of the file starts of with a ‘ddns-update-style‘ clause and for this tutorial it will remain set to ‘none‘ however in a future article, Dynamic DNS will be covered and ISC-DHCP-Server will be integrated with BIND9 to enable host name to IP address updates. -这个配置文件可以配置我们想的地址池/主机。文件顶部有‘ddns-update-style‘这样一句,在本教程中它设置为‘none‘。在以后的教程中动态DNS, ISC-DHCP-Server 将被整合到 BIND9为了能够使主机名更新到IP地址。 +这个配置文件可以配置我们所需要的地址池/主机。文件顶部有‘ddns-update-style‘这样一句,在本教程中它设置为‘none‘。在以后的教程中会介绍动态DNS,届时 ISC-DHCP-Server 将与 BIND9 整合,实现主机名到IP地址的更新。 4. The next section is typically the area where and administrator can configure global network settings such as the DNS domain name, default lease time for IP addresses, subnet-masks, and much more. Again to know more about all the options be sure to read the man page for the dhcpd.conf file. -4. 接下来的部分 是管理员配置全局网络设置,如DNS域名,默认的租约时间,IP地址,子网的掩码,以及更多的区域。想更多地了解所有的选项,阅读man手册dhcpd.conf文件,命令如下: +4.
接下来的部分是管理员配置全局网络设置的地方,如DNS域名,默认的租约时间,IP地址,子网掩码等等。如果你想了解所有的选项,请阅读man手册中的dhcpd.conf文件,命令如下: # man dhcpd.conf For this server install, there were a couple of global network options that were configured at the top of the configuration file so that they wouldn’t have to be implemented in every single pool created. -对于这台服务器,我们需要在顶部配置一些全局网络设置,这腰就不用到每个地址池中单独去设置了。 +对于这台服务器,我们需要在顶部配置一些全局网络设置,这样就不用到每个地址池中去单独设置了。 ![Configure ISC DDNS](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DDNS.png) Lets take a moment to explain some of these options. While they are configured globally in this example, all of them can be configured on a per pool basis as well. -下面我们花一点时间来解释一下这些选项,在本例中虽然它们是一些全局,但是也可以为每一个地址池配置。 + +我们花一点时间来解释一下这些选项,在本教程中虽然它们是一些全局设置,但是也可以单独为某一个地址池进行配置。 - option domain-name “comptech.local”; – All hosts that this DHCP server hosts, will be a member of the DNS domain name “comptech.local” -- option domain-name “comptech.local”; – 所有使用这台DHCP服务器的主机,将成为DNS域名“comptech.local”的一员 + +- option domain-name “comptech.local”; – 所有使用这台DHCP服务器的主机,都将成为DNS域名为“comptech.local”的一员 + - option domain-name-servers 172.27.10.6; DHCP will hand out DNS server IP of 172.27.10.6 to all of the hosts on all of the networks it is configured to host. + -- option domain-name-servers 172.27.10.6; DHCP 将向所有配置好的网络主机分发DNS服务器地址172.27.10.6 + +- option domain-name-servers 172.27.10.6; DHCP向所有配置这台DHCP服务器的网络主机分发DNS服务器地址为172.27.10.6 + - option subnet-mask 255.255.255.0; – The subnet mask handed out to every network will be a 255.255.255.0 or a /24 + - option subnet-mask 255.255.255.0; – 分派到每一个网络的子网掩码都是 255.255.255.0(即 /24) -- default-lease-time 3600; – This is the time in seconds that a lease will automatically be valid. The host can re-request the same lease if time runs out or if the host is done with the lease, they can hand the address back early.
-- default-lease-time 3600; – 有效的地址租约时间,单位是秒。如果租约时间耗尽主机可以重新申请租约。如果租约完成那么相应的地址也将被尽快回收。 + +- default-lease-time 3600; – This is the time in seconds that a lease will automatically be valid. The host can re-request the same lease if time runs out or if the host +is done with the lease, they can hand the address back early. + +- default-lease-time 3600; – 默认有效的地址租约时间(单位是秒)。如果租约时间耗尽,那么主机可以重新申请租约。如果主机用完了这个租约,也可以提前把地址交回。 + - max-lease-time 86400; – This is the maximum amount of time in seconds a lease can be held by a host. + -- max-lease-time 86400; – 这是一个主机最大的租约时间,单位为秒。 + +- max-lease-time 86400; – 这是一台主机最大的租约时间(单位为秒)。 + - ping-check true; – This is an extra test to ensure that the address the server wants to assign out isn’t in use by another host on the network already. + -- ping-check true; – 这是一个额外的测试,以确保服务器分配出的地址不是一个网络内另一台主机已使用的网络地址。 + +- ping-check true; – 这是一个额外的测试,以确保服务器分发出的网络地址不是当前网络中另一台主机已使用的网络地址。 + - ping-timeout; – This is how long in second the server will wait for a response to a ping before assuming the address isn’t in use. + -- ping-timeout; – 假设地址以前没有使用,用这个来检测2个ping值回应之间的时间长度。 + +- ping-timeout; – 服务器在认定某个地址未被使用之前,等待 ping 响应的时长(单位为秒)。 + - ignore client-updates; For now this option is irrelevant since DDNS has been disabled earlier in the configuration file but when DDNS is operating, this option will ignore a hosts to request to update its host-name in DNS. + -- ignore client-updates; 现在这个选项是可以忽略的,因为DDNS在前面已在配置文件中被禁用,但是当DDNS运行,此选项会忽略一个主机请求的DNS更新其主机名。 + +- ignore client-updates; 现在这个选项是可以忽略的,因为DDNS在前面的配置文件中已经被禁用,但是当DDNS运行时,这个选项会忽略更新其DNS主机名的请求。 5. The next line in this file is the authoritative DHCP server line. This line means that if this server is to be the server that hands out addresses for the networks configured in this file, then uncomment the authoritative stanza. -5. 文件中的下面一行是权威DHCP所在行。这行的意义是如果服务器是为文件中所配置的网络分发地址的服务器,那么就取消注释权威节(uncomment the authoritative stanza.)。 +5.
文件中下面一行是权威DHCP所在行。这行的意义是如果服务器是为文件中所配置的网络分发地址的服务器,那么取消注释权威配置段(authoritative stanza)来实现。 This server will be the only authority on all the networks it manages so the global authoritative stanza was un-commented by removing the ‘#’ in front of the keyword authoritative. -通过去掉关键字authoritative 前面的‘#’,取消注释全局权威节。那么该服务器将是它管理网络里面的唯一权威。 +通过去掉关键字authoritative 前面的‘#’,取消注释全局权威配置段。这台服务器将是它所管理网络里面的唯一权威。 + ![Enable ISC Authoritative](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-authoritative.png) Enable ISC Authoritative + 开启 ISC Authoritative By default the server is assumed to NOT be an authority on the network. The rationale behind this is security. If someone unknowingly configures the DHCP server improperly or on a network they shouldn’t, it could cause serious connectivity issues. This line can also be used on a per network basis. This means that if the server is not the entire network’s DHCP server, the authoritative line can instead be used on a per network basis rather than in the global configuration as seen in the above screen-shot. -默认情况下服务器被假定为不是网络上的权威。这样做的理由是为了安全。如果有人因为不了解DHCP服务的配置导致配置不当或在一个不该出现的网络里面,这都将会造成非常严重的重连接问题。这行还可用在每个网络中单独使用。这意味着如果该服务器不是整个网络的DHCP服务器,authoritative行可以用在每个单独的网络中,而不是像上面截图显示中的全局配置那样。 + +默认情况下服务器被假定为不是网络上的权威。之所以这样做是出于安全考虑。如果有人因为不了解DHCP服务的配置,导致配置不当或配置到一个不该出现的网络里面,这都将带来非常严重的连接问题。这行还可用在每个网络中单独配置使用。也就是说如果这台服务器不是整个网络的DHCP服务器,authoritative行可以用在每个单独的网络中,而不是像上面截图中那样的全局配置。 6. The next step is to configure all of the DHCP pools/networks that this server will manage. For brevities sake, this guide will only walk through one of the pools configured. The administrator will need to have gathered all of the necessary network information (ie domain name, network addresses, how many addresses can be handed out, etc). -6. 下一步是配置所有的服务器将要管理的DHCP地址池/网络。简短起见,这个教程将只配置地址池。作为管理员需要收集所有必要的网络信息(比如域名,网络地址,有多少地址能够被分发等等) +6.
这一步是配置服务器将要管理的所有DHCP地址池/网络。简短起见,本教程只配置了地址池。作为管理员需要收集一些必要的网络信息(比如域名,网络地址,有多少地址能够被分发等等) + For this pool the following information was obtained from the network administrator: network id of 172.27.60.0, subnet mask of 255.255.255.0 or a /24, the default gateway for the subnet is 172.27.60.1, and a broadcast address of 172.27.60.255. + +以下这个地址池所用到的信息都是管理员收集整理的:网络id 172.27.60.0, 子网掩码 255.255.255.0(即 /24), 默认子网网关172.27.60.1,广播地址 172.27.60.255 + This information is important to building the appropriate network stanza in the dhcpd.conf file. Without further ado, let’s open the configuration file again using a text editor and then add the new network to the server. This must be done with root/sudo! -以下这个地址池所用到的信息都是管理员收集整理的:网络id 172.27.60.0, 子网掩码 255.255.255.0 or a /24, 默认网关172.27.60.1,广播地址 172.27.60.255. +以上这些信息对于在dhcpd.conf文件中构建合适的网络配置段非常重要。下面使用文本编辑器再次打开配置文件,把新的网络添加进去,这里我们需要使用root或sudo访问权限。 + # nano /etc/dhcp/dhcpd.conf ![Configure DHCP Pools and Networks](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-network.png) Configure DHCP Pools and Networks + 配置DHCP的地址池和网络 This is the sample created to hand out IP addresses to a network that is used for the creation of VMWare virtual practice servers. The first line indicates the network as well as the subnet mask for that network. Then inside the brackets are all the options that the DHCP server should provide to hosts on this network. -这例子是分配IP地址给用VMWare创建的虚拟服务器。第一行标明是该网络的子网掩码。括号里面是DHCP服务器应该提供给当前网络上主机的所有选项。 +当前这个例子是给用VMWare创建的虚拟服务器分配IP地址。第一行给出的是该网络以及该网络的子网掩码。括号里面的内容是DHCP服务器应该提供给网络上面主机的所有选项。 The first stanza, range 172.27.60.50 172.27.60.254;, is the range of dynamically assignable addresses that the DHCP server can hand out to hosts on this network. Notice that the first 49 addresses aren’t in the pool and can be assigned statically to hosts if needed.
-第一节, range 172.27.60.50 172.27.60.254;是DHCP服务在这个网络上能够给主机动态分发的地址范围。 +第一节, range 172.27.60.50 172.27.60.254;这一行显示的是,DHCP服务在这个网络上能够给主机动态分发的地址范围。注意前49个地址不在这个地址池内,如果需要,可以把它们静态分配给主机。 The second stanza, option routers 172.27.60.1; , hands out the default gateway address for all hosts on this network. -第二节,option routers 172.27.60.1;给网络里面所有的主机分发默认网关地址。 +第二节,option routers 172.27.60.1;这里给出的是向该网络里面所有主机分发的默认网关地址。 The last stanza, option broadcast-address 172.27.60.255;, indicates what the network’s broadcast address. This address SHOULD NOT be a part of the range stanza as the broadcast address can’t be assigned to a host. -最后一节, option broadcast-address 172.27.60.255;,说明当前网络的广播地址。这个地址不能被包含在要分发放的地址范围内,因为广播地址不能分配到一个主机上面。 +最后一节, option broadcast-address 172.27.60.255;,显示当前网络的广播地址。这个地址不能被包含在要分发的地址范围内,因为广播地址不能分配到一个主机上面。 + Some pointers, be sure to always end the option lines with a semi-colon (;) and always make sure each network created is enclosed in curly braces { }. 必须要强调的是每行的结尾必须要用(;)来结束,所有创建的网络必须要在{}里面。 7. If there are more networks to create, continue creating them with their appropriate options and then save the text file. Once all configurations have been completed, the ISC-DHCP-Server process will need to be restarted in order to apply the new changes. This can be accomplished with the following command: -7. 如果是创建多个网络,持续的创建它们的相应选项最终保存文本文件。一旦配置完成, ISC-DHCP-Server进程需要重启来使新的更改生效。可以通过下面的命令来完成: +7. 如果要创建多个网络,就连续创建它们及相应的选项,然后保存文本文件即可。全部配置完成以后,需要重启ISC-DHCP-Server进程来使新的更改生效。重启进程可以通过下面的命令来完成: # service isc-dhcp-server restart This will restart the DHCP service and then the administrator can check to see if the server is ready for DHCP requests several different ways.
The easiest is to simply see if the server is listening on port 67 via the [lsof command][1]: -这条命令将重启DHCP服务,管理员能够使用几种不同的方式来检查服务器是否已经可以处理dhcp请求。最简单的方法是通过lsof命令[1]来查看服务器是否在监听67端口,命令如下: +这条命令将重启DHCP服务,管理员能够使用几种不同的方式来检查服务器是否已经可以处理dhcp请求。最简单的方法是通过lsof命令[1]来查看服务器是否在侦听67端口,命令如下: # lsof -i :67 ![Check DHCP Listening Port](http://www.tecmint.com/wp-content/uploads/2015/04/lsof.png) Check DHCP Listening Port -检查DHCP监听端口 + +检查DHCP侦听端口 This output indicates that the DHCPD (DHCP Server daemon) is running and listening on port 67. Port 67 in this output was actually converted to ‘bootps‘ due to a port number mapping for port 67 in /etc/services file. -这里的输出表明DHCPD(DHCP服务守护进程)正在运行并且监听67端口。由于/etc/services文件中67端口的端口映射,输出中的67端口实际上被转换成了“bootps”。 +这里输出的结果表明DHCPD(DHCP服务守护进程)正在运行并且侦听67端口。由于/etc/services文件中对67端口做了端口号映射,所以输出中的67端口实际上显示成了“bootps”。 This is very common on most systems. At this point, the server should be ready for network connectivity and can be confirmed by connecting a machine to the network and having it request a DHCP address from the server. -在大多数的系统中这是非常普遍的,此时,服务器应该已经为网络连接做好准备,可以通过将一台主机接入网络请求DHCP地址来验证服务是否正常。 +在大多数的系统中这是非常常见的,现在服务器应该已经为网络连接做好准备,我们可以将一台主机接入网络请求DHCP地址来验证服务是否正常。 ### Step 2: Testing Client Connectivity ### ### 测试客户端连接 ### + 8. Most systems now-a-days are using Network Manager to maintain network connections and as such the device should be pre-configured to pull DHCP when the interface is active. -8. 现在许多系统使用网络管理器来维护网络连接状态,因此这个装置应该预先配置,当接口是活跃的时候来获取DHCP。 +8. 现在许多系统使用网络管理器来维护网络连接状态,因此这个设备应该是预先配置好的,只要对应的接口处于活跃状态就能够获取DHCP地址。 However on machines that aren’t using Network Manager, it may be necessary to manually attempt to pull a DHCP address. The next few steps will show how to do this as well as how to see whether the server is handing out addresses. -然而当一台设备无法使用网络管理器时,它可能需要手动获取DHCP地址。下面的几步将展示如何做到这一点以及如何查看服务器是否分发地址。 +然而当一台设备无法使用网络管理器时,它可能需要手动获取DHCP地址。下面的几步将演示怎样手动获取以及如何查看服务器是否已经按需要分发地址。 The ‘[ifconfig][2]‘ utility can be used to check an interface’s configuration.
The machine used to test the DHCP server only has one network adapter and it is called ‘eth0‘. - ‘[ifconfig][2]‘工具能够用来检查接口配置。这台设备被用来监测DHCP服务它只有一个网络适配器(网卡),这块网卡被命名为‘eth0‘。 + ‘[ifconfig][2]‘工具能够用来检查接口的配置。这台用来测试DHCP服务器的设备只有一个网络适配器(网卡),这块网卡被命名为‘eth0‘。 # ifconfig eth0 ![Check Network Interface IP Address](http://www.tecmint.com/wp-content/uploads/2015/04/No-ip.png) Check Network Interface IP Address + 检查网络接口IP地址 From this output, this machine currently doesn’t have an IPv4 address, great! Let’s instruct this machine to reach out to the DHCP server and request an address. This machine has the DHCP client utility known as ‘dhclient‘ installed. The DHCP client utility may very from system to system. -从输出结果上看,这台设备目前没有一个IPv4地址,很好这样便于测试。让这台设备连接到DHCP服务器并发出一个请求。这台设备上安装了一个名为‘dhclient‘ 的DHCP客户端工具。这个客户端工具会因为系统不同而不同。 +从输出结果上看,这台设备目前没有IPv4地址,这样正好便于测试。我们把这台设备连接到DHCP服务器并发出一个请求。这台设备上已经安装了一个名为‘dhclient‘ 的DHCP客户端工具。因为操作系统各不相同,所以这个客户端软件也是互不一样的。 # dhclient eth0 ![Request IP Address from DHCP](http://www.tecmint.com/wp-content/uploads/2015/04/IP.png) Request IP Address from DHCP + 从DHCP请求IP地址 Now the `'inet addr:'` field shows an IPv4 address that falls within the scope of what was configured for the 172.27.60.0 network. Also notice that the proper broadcast address was handed out as well as subnet mask for this network. -现在 `'inet addr:'` 字段显示了属于172.27.60.0网络地址范围内的IPv4地址。另外要注意:目前该网络配置了正确的子网掩码并且分发了广播地址。 +当前 `'inet addr:'` 字段中显示了属于172.27.60.0网络地址范围内的IPv4地址。值得欣慰的是当前网络还配置了正确的子网掩码并且分发了广播地址。 Things are looking promising but let’s check the server to see if it was actually the place where this machine received this new IP address. To accomplish this task, the server’s system log file will be consulted. While the entire log file may contain hundreds of thousands of entries, only a few are necessary for confirming that the server is working properly. Rather than using a full text editor, this time a utility known as ‘tail‘ will be used to only show the last few lines of the log file.
-看起来还不错,让我们来测试一下,看看它是不是这台设备收到新IP地址的地方。我们参照服务器的日志文件来完成这个任务。虽然这个日志的内容有几十万条,但是里面只有几条是用来确定服务器是否正常工作的。这里我们使用一个工具‘tail’,它只显示日志文件的最后几行,而不是使用一个完整的文本编辑器去查看日志文件。命令如下: +到这里看起来还都不错,让我们来测试一下,看看这台设备收到新IP地址是不是由服务器发出的。这里我们参照服务器的日志文件来完成这个任务。虽然这个日志的内容有几十万条,但是里面只有几条是用来确定服务器是否正常工作的。这里我们使用一个工具‘tail’,它只显示日志文件的最后几行,这样我们就可以不用拿一个文本编辑器去查看所有的日志文件了。命令如下: # tail /var/log/syslog ![Check DHCP Logs](http://www.tecmint.com/wp-content/uploads/2015/04/DHCP-Log.png) Check DHCP Logs + 检查DHCP日志文件 Voila! The server recorded handing out an address to this host (HRTDEBXENSRV). It is a safe assumption at this point that the server is working as intended and handing out the appropriate addresses for the networks that it is an authority. At this point the DHCP server is up and running. Configure the other networks, troubleshoot, and secure as necessary. -OK!服务器记录表明它分发了一个地址给这台主机(HRTDEBXENSRV)。服务器按预期运行,给它充当权威的网络分发适合的网络地址。到这里DHCP服务器搭建成功并且运行起来了。配置其他的网络,排查故障,确保安全。 +OK!服务器记录表明它分发了一个地址给这台主机(HRTDEBXENSRV)。服务器按预期运行,为它所管辖(充当权威)的网络分发了合适的网络地址。至此DHCP服务器搭建成功并且运行。如果有需要你可以继续配置其他的网络,排查故障,确保安全。 Enjoy the newly functioning ISC-DHCP-Server and tune in later for more Debian tutorials. In the not too distant future there will be an article on Bind9 and DDNS that will tie into this article.
-更多新的 ISC-DHCP-Server 的功能在以后的Debian教程中会被提及。不久以后将写一篇关于Bind9和DDNS的教程,插入到这篇文章里面。 +在以后的Debian教程中我会讲一些新的 ISC-DHCP-Server 功能。不久以后还会有一篇关于Bind9和DDNS的教程,与本文相互衔接。 -------------------------------------------------------------------------------- via: http://www.tecmint.com/install-and-configure-multihomed-isc-dhcp-server-on-debian-linux/ From f3573b61793a1f1ec05a59ff158246c3c38a6cb6 Mon Sep 17 00:00:00 2001 From: ivo wang Date: Thu, 3 Dec 2015 12:13:05 +0800 Subject: [PATCH 094/160] Update 20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md delete english --- ...tihomed ISC DHCP Server on Debian Linux.md | 115 ------------------ 1 file changed, 115 deletions(-) diff --git a/translated/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md b/translated/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md index fc45af4db0..5dcea06611 100644 --- a/translated/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md +++ b/translated/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md @@ -1,28 +1,15 @@ -How to Install and Configure Multihomed ISC DHCP Server on Debian Linux debian linux上安装配置 ISC DHCP Server ================================================================================ -Dynamic Host Control Protocol (DHCP) offers an expedited method for network administrators to provide network layer addressing to hosts on a constantly changing, or dynamic, network. One of the most common server utilities to offer DHCP functionality is ISC DHCP Server.
动态主机控制协议(DHCP)给网络管理员提供一种便捷的方式,为不断变化的网络主机或是动态网络提供网络层地址。其中最常用的DHCP服务工具是 ISC DHCP Server。DHCP服务的目的是给主机提供必要的网络信息以便能够和其他连接在网络中的主机互相通信。DHCP服务一般包括以下信息:DNS服务器信息,网络地址(IP),子网掩码,默认网关信息,主机名等等。 -This tutorial will cover ISC-DHCP-Server version 4.2.4 on a Debian 7.7 server that will manage multiple virtual local area networks (VLAN) but can very easily be applied to a single network setup as well. - 本教程介绍4.2.4版的ISC-DHCP-Server如何在Debian7.7上管理多个虚拟局域网(VLAN),它也可以非常容易地配置用于单一网络。 -The test network that this server was setup on has traditionally relied on a Cisco router to manage the DHCP address leases. The network currently has 12 VLANs needing to be managed by one centralized server. By moving this responsibility to a dedicated server, the router can regain resources for more important tasks such as routing, access control lists, traffic inspection, and network address translation. - 测试用的网络是通过思科路由器使用传统的方式来管理DHCP租约地址的,目前有12个VLANs需要由一台集中式服务器来管理。把DHCP的任务转移到一个专用的服务器上面,路由器可以收回相应的资源,把资源用到更重要的任务上,比如路由寻址,访问控制列表,流量监测以及网络地址转换等。 -The other benefit to moving DHCP to a dedicated server will, in a later guide, involve setting up Dynamic Domain Name Service (DDNS) so that new host’s host-names will be added to the DNS system when the host requests a DHCP address from the server. - 另一个将DHCP服务转移到专用服务器的好处,以后会讲到,它可以建立动态域名服务(DDNS),这样当主机从服务器请求DHCP地址的时候,新主机的主机名将被添加到DNS系统里面。 -### Step 1: Installing and Configuring ISC DHCP Server ### - ### 安装和配置 ISC DHCP Server ### -1. To start the process of creating this multi-homed server, the ISC software needs to be installed via the Debian repositories using the ‘apt‘ utility. As with all tutorials, root or sudo access is assumed. Please make the appropriate modifications to the following commands. - 1.
使用apt工具安装Debian软件仓库中的ISC软件,来创建这个多宿主服务器。与其他教程一样需要使用root或者sudo访问权限。请适当的修改,以便使用下面的命令。(译者注:下面中括号里面是注释,使用的时候请删除,#表示使用root权限) @@ -32,237 +19,135 @@ The other benefit to moving DHCP to a dedicated server will, in a later guide, i # apt-get install isc-dhcp-server [安装 ISC DHCP Server 软件] ![Install ISC DHCP Server in Debian](http://www.tecmint.com/wp-content/uploads/2015/04/Install-ISC-DHCP-Server.jpg) -2. Now that the server software is confirmed installed, it is now necessary to configure the server with the network information that it will need to hand out. At the bare minimum, the administrator needs to know the following information for a basic DHCP scope: - 2. 确认服务软件已经安装完成,现在需要用一些网络信息来配置服务器,这样服务器才能够根据我们的需要来分发网络信息。作为管理员最起码需要了解的DHCP信息如下: -- The network addresses - 网络地址 -- The subnet masks - 子网掩码 -- The range of addresses to be dynamically assigned - 动态分配的地址范围 -Other useful information to have the server dynamically assign includes: - 其他一些服务器动态分配的有用信息包括: -- Default gateway - 默认网关 -- DNS server IP addresses - DNS服务器IP地址 -- The Domain Name - 域名 -- Host name - 主机名 -- Network Broadcast addresses - 网络广播地址 -These are merely a few of the many options that the ISC DHCP server can handle. To get a complete list as well as a description of each option, enter the following command after installing the package: 这只是能让ISC DHCP server处理的选项中非常少的一部分。如果你想查看所有选项及其描述,需要在安装好软件后输入以下命令: # man dhcpd.conf -3. Once the administrator has concluded all the necessary information for this server to hand out it is time to configure the DHCP server as well as the necessary pools. Before creating any pools or server configurations though, the DHCP service must be configured to listen on one of the server’s interfaces. - 3. 一旦管理员已经确定了这台服务器需要分发的必要信息,那么是时候配置服务器并且分配必要的地址池了。在配置任何地址池或服务器配置之前,DHCP服务必须配置好,来侦听这台服务器上面的一个接口。 -On this particular server, a NIC team has been setup and DHCP will listen on the teamed interfaces which were given the name `'bond0'`.
Be sure to make the appropriate changes given the server and environment in which everything is being configured. The defaults in this file are okay for this tutorial. - 在这台特定的服务器上,设置好网卡后,DHCP会侦听名为`'bond0'`的接口。请根据你的实际情况来更改服务器以及网络环境。下面的配置都是针对本教程的。 ![Configure ISC DHCP Network](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DHCP-Network.jpg) -This line will instruct the DHCP service to listen for DHCP traffic on the specified interface(s). At this point, it is time to modify the main configuration file to enable the DHCP pools on the necessary networks. The main configuration file is located at /etc/dhcp/dhcpd.conf. Open the file with a text editor to begin: - 这行指定的是DHCP服务侦听接口(一个或多个)上的DHCP流量。修改主要的配置文件,分配适合的DHCP地址池到所需要的网络上。配置文件的位置是/etc/dhcp/dhcpd.conf。用文本编辑器打开这个文件: # nano /etc/dhcp/dhcpd.conf -This file is the configuration for the DHCP server specific options as well as all of the pools/hosts one wishes to configure. The top of the file starts of with a ‘ddns-update-style‘ clause and for this tutorial it will remain set to ‘none‘ however in a future article, Dynamic DNS will be covered and ISC-DHCP-Server will be integrated with BIND9 to enable host name to IP address updates. - 这个配置文件可以配置我们所需要的地址池/主机。文件顶部有‘ddns-update-style‘这样一句,在本教程中它设置为‘none‘。在以后的教程中会介绍动态DNS,届时 ISC-DHCP-Server 将与 BIND9 整合,实现主机名到IP地址的更新。 -4. The next section is typically the area where and administrator can configure global network settings such as the DNS domain name, default lease time for IP addresses, subnet-masks, and much more. Again to know more about all the options be sure to read the man page for the dhcpd.conf file. - 4. 接下来的部分是管理员配置全局网络设置的地方,如DNS域名,默认的租约时间,IP地址,子网掩码等等。如果你想了解所有的选项,请阅读man手册中的dhcpd.conf文件,命令如下: # man dhcpd.conf -For this server install, there were a couple of global network options that were configured at the top of the configuration file so that they wouldn’t have to be implemented in every single pool created.
对于这台服务器,我们需要在顶部配置一些全局网络设置,这样就不用到每个地址池中去单独设置了。 ![Configure ISC DDNS](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DDNS.png) -Lets take a moment to explain some of these options. While they are configured globally in this example, all of them can be configured on a per pool basis as well. 我们花一点时间来解释一下这些选项,在本教程中虽然它们是一些全局设置,但是也可以单独为某一个地址池进行配置。 -- option domain-name “comptech.local”; – All hosts that this DHCP server hosts, will be a member of the DNS domain name “comptech.local” - option domain-name “comptech.local”; – 所有使用这台DHCP服务器的主机,都将成为DNS域名为“comptech.local”的一员 -- option domain-name-servers 172.27.10.6; DHCP will hand out DNS server IP of 172.27.10.6 to all of the hosts on all of the networks it is configured to host. - option domain-name-servers 172.27.10.6; DHCP向所有配置这台DHCP服务器的网络主机分发DNS服务器地址为172.27.10.6 -- option subnet-mask 255.255.255.0; – The subnet mask handed out to every network will be a 255.255.255.0 or a /24 - option subnet-mask 255.255.255.0; – 分派到每一个网络的子网掩码都是 255.255.255.0(即 /24) -- default-lease-time 3600; – This is the time in seconds that a lease will automatically be valid. The host can re-request the same lease if time runs out or if the host is done with the lease, they can hand the address back early. - default-lease-time 3600; – 默认有效的地址租约时间(单位是秒)。如果租约时间耗尽,那么主机可以重新申请租约。如果主机用完了这个租约,也可以提前把地址交回。 -- max-lease-time 86400; – This is the maximum amount of time in seconds a lease can be held by a host. - max-lease-time 86400; – 这是一台主机最大的租约时间(单位为秒)。 -- ping-check true; – This is an extra test to ensure that the address the server wants to assign out isn’t in use by another host on the network already. - ping-check true; – 这是一个额外的测试,以确保服务器分发出的网络地址不是当前网络中另一台主机已使用的网络地址。 -- ping-timeout; – This is how long in second the server will wait for a response to a ping before assuming the address isn’t in use.
- ping-timeout; – 服务器在认定某个地址未被使用之前,等待 ping 响应的时长(单位为秒)。 -- ignore client-updates; For now this option is irrelevant since DDNS has been disabled earlier in the configuration file but when DDNS is operating, this option will ignore a hosts to request to update its host-name in DNS. - ignore client-updates; 现在这个选项是可以忽略的,因为DDNS在前面的配置文件中已经被禁用,但是当DDNS运行时,这个选项会忽略更新其DNS主机名的请求。 -5. The next line in this file is the authoritative DHCP server line. This line means that if this server is to be the server that hands out addresses for the networks configured in this file, then uncomment the authoritative stanza. - 5. 文件中下面一行是权威DHCP所在行。这行的意义是如果服务器是为文件中所配置的网络分发地址的服务器,那么取消注释权威配置段(authoritative stanza)来实现。 -This server will be the only authority on all the networks it manages so the global authoritative stanza was un-commented by removing the ‘#’ in front of the keyword authoritative. - 通过去掉关键字authoritative 前面的‘#’,取消注释全局权威配置段。这台服务器将是它所管理网络里面的唯一权威。 ![Enable ISC Authoritative](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-authoritative.png) -Enable ISC Authoritative - 开启 ISC Authoritative -By default the server is assumed to NOT be an authority on the network. The rationale behind this is security. If someone unknowingly configures the DHCP server improperly or on a network they shouldn’t, it could cause serious connectivity issues. This line can also be used on a per network basis. This means that if the server is not the entire network’s DHCP server, the authoritative line can instead be used on a per network basis rather than in the global configuration as seen in the above screen-shot. - 默认情况下服务器被假定为不是网络上的权威。之所以这样做是出于安全考虑。如果有人因为不了解DHCP服务的配置,导致配置不当或配置到一个不该出现的网络里面,这都将带来非常严重的连接问题。这行还可用在每个网络中单独配置使用。也就是说如果这台服务器不是整个网络的DHCP服务器,authoritative行可以用在每个单独的网络中,而不是像上面截图中那样的全局配置。 -6. The next step is to configure all of the DHCP pools/networks that this server will manage. For brevities sake, this guide will only walk through one of the pools configured.
The administrator will need to have gathered all of the necessary network information (ie domain name, network addresses, how many addresses can be handed out, etc).
- 6. 这一步是配置服务器将要管理的所有DHCP地址池/网络。简短起见,本教程只演示配置其中的一个地址池。管理员需要事先收集好所有必要的网络信息(比如域名、网络地址、可以分发多少地址等等)
-For this pool the following information was obtained from the network administrator: network id of 172.27.60.0, subnet mask of 255.255.255.0 or a /24, the default gateway for the subnet is 172.27.60.1, and a broadcast address of 172.27.60.255.
- 以下是这个地址池要用到的、从网络管理员那里获得的信息:网络id 172.27.60.0,子网掩码 255.255.255.0(即 /24),默认子网网关 172.27.60.1,广播地址 172.27.60.255
-This information is important to building the appropriate network stanza in the dhcpd.conf file. Without further ado, let’s open the configuration file again using a text editor and then add the new network to the server. This must be done with root/sudo!
- 这些信息对于在dhcpd.conf文件中构建正确的网络小节非常重要。话不多说,让我们再次用文本编辑器打开配置文件,把新的网络添加进去,这一步需要使用root或sudo访问权限!
 # nano /etc/dhcp/dhcpd.conf
![Configure DHCP Pools and Networks](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-network.png)
-Configure DHCP Pools and Networks
- 配置DHCP的地址池和网络
-This is the sample created to hand out IP addresses to a network that is used for the creation of VMWare virtual practice servers. The first line indicates the network as well as the subnet mask for that network. Then inside the brackets are all the options that the DHCP server should provide to hosts on this network.
- 这个例子是给用VMWare创建的虚拟实验服务器的网络分配IP地址。第一行标明了该网络以及该网络的子网掩码。大括号里面的内容是DHCP服务器应该提供给这个网络上主机的所有选项。
-The first stanza, range 172.27.60.50 172.27.60.254;, is the range of dynamically assignable addresses that the DHCP server can hand out to hosts on this network. Notice that the first 49 addresses aren’t in the pool and can be assigned statically to hosts if needed.
- 第一节,range 172.27.60.50 172.27.60.254;这一行是DHCP服务在这个网络上能够给主机动态分发的地址范围。注意,前49个地址不在这个地址池内,如果有需要,可以把它们静态地分配给主机。
-The second stanza, option routers 172.27.60.1; , hands out the default gateway address for all hosts on this network.
- 第二节,option routers 172.27.60.1;这一行给网络里面的所有主机分发默认网关地址。
-The last stanza, option broadcast-address 172.27.60.255;, indicates what the network’s broadcast address.
- 最后一节,option broadcast-address 172.27.60.255;,指明当前网络的广播地址。这个地址不能被包含在range小节的地址范围内,因为广播地址不能分配给某一台主机。
-Some pointers, be sure to always end the option lines with a semi-colon (;) and always make sure each network created is enclosed in curly braces { }.
- 需要注意的是:每个选项行的结尾都必须用分号(;)来结束,并且每个创建的网络都必须包含在大括号{ }里面。
-7. If there are more networks to create, continue creating them with their appropriate options and then save the text file. Once all configurations have been completed, the ISC-DHCP-Server process will need to be restarted in order to apply the new changes. This can be accomplished with the following command:
- 7. 如果还要创建其他的网络,就继续为它们配置相应的选项,然后保存文本文件即可。所有配置完成以后,需要重启ISC-DHCP-Server进程来使新的更改生效。重启进程可以通过下面的命令来完成:
 # service isc-dhcp-server restart
-This will restart the DHCP service and then the administrator can check to see if the server is ready for DHCP requests several different ways. The easiest is to simply see if the server is listening on port 67 via the [lsof command][1]:
- 这条命令将重启DHCP服务,之后管理员能够用几种不同的方式来检查服务器是否已经可以处理DHCP请求。最简单的方法是通过[lsof命令][1]来查看服务器是否在侦听67端口,命令如下:
 # lsof -i :67
![Check DHCP Listening Port](http://www.tecmint.com/wp-content/uploads/2015/04/lsof.png)
-Check DHCP Listening Port
- 检查DHCP侦听端口
-This output indicates that the DHCPD (DHCP Server daemon) is running and listening on port 67. Port 67 in this output was actually converted to ‘bootps‘ due to a port number mapping for port 67 in /etc/services file.
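把上文描述的 range、option routers 和 option broadcast-address 三个小节组合起来,这个网络在 dhcpd.conf 中的文本形式大致如下(原文这里仅提供了截图,取值沿用上文给出的网络信息,仅作示意):

```
subnet 172.27.60.0 netmask 255.255.255.0 {
    # 可动态分配的地址范围,前49个地址保留用于静态分配
    range 172.27.60.50 172.27.60.254;
    # 默认网关
    option routers 172.27.60.1;
    # 广播地址,不能包含在 range 之内
    option broadcast-address 172.27.60.255;
}
```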
- 这里输出的结果表明DHCPD(DHCP服务守护进程)正在运行并且侦听67端口。由于/etc/services文件中对67端口做了端口号映射,所以输出中的67端口实际上显示成了“bootps”。
-This is very common on most systems. At this point, the server should be ready for network connectivity and can be confirmed by connecting a machine to the network and having it request a DHCP address from the server.
- 这在大多数系统中是非常常见的。到这里,服务器应该已经为处理网络连接做好了准备,我们可以将一台主机接入网络,让它向服务器请求DHCP地址来验证服务是否正常。
-### Step 2: Testing Client Connectivity ###
- ### 步骤二:测试客户端连接 ###
-8. Most systems now-a-days are using Network Manager to maintain network connections and as such the device should be pre-configured to pull DHCP when the interface is active.
- 8. 现在许多系统使用网络管理器来维护网络连接状态,因此这些设备应该是预先配置好的,只要对应的接口处于活跃状态,就能够自动获取DHCP地址。
-However on machines that aren’t using Network Manager, it may be necessary to manually attempt to pull a DHCP address. The next few steps will show how to do this as well as how to see whether the server is handing out addresses.
- 然而对于没有使用网络管理器的设备,可能就需要手动获取DHCP地址了。下面的几步将演示怎样手动获取,以及如何查看服务器是否已经按需要分发了地址。
-The ‘[ifconfig][2]‘ utility can be used to check an interface’s configuration. The machine used to test the DHCP server only has one network adapter and it is called ‘eth0‘.
- ‘[ifconfig][2]‘工具能够用来检查接口的配置。这台用来测试DHCP服务器的设备只有一个网络适配器(网卡),这块网卡被命名为‘eth0‘。
 # ifconfig eth0
![Check Network Interface IP Address](http://www.tecmint.com/wp-content/uploads/2015/04/No-ip.png)
-Check Network Interface IP Address
- 检查网络接口IP地址
-From this output, this machine currently doesn’t have an IPv4 address, great! Let’s instruct this machine to reach out to the DHCP server and request an address. This machine has the DHCP client utility known as ‘dhclient‘ installed. The DHCP client utility may very from system to system.
- 从输出结果上看,这台设备目前没IPv4地址,这样很好便于测试。我们把这台设备连接到DHCP服务器并发出一个请求。这台设备上已经安装了一个名为‘dhclient‘ 的DHCP客户端工具。因为操作系统各不相同,所以这个客户端软件也是互不一样的。 # dhclient eth0 ![Request IP Address from DHCP](http://www.tecmint.com/wp-content/uploads/2015/04/IP.png) -Request IP Address from DHCP - 从DHCP请求IP地址 -Now the `'inet addr:'` field shows an IPv4 address that falls within the scope of what was configured for the 172.27.60.0 network. Also notice that the proper broadcast address was handed out as well as subnet mask for this network. - 当前 `'inet addr:'` 字段中显示了属于172.27.60.0网络地址范围内的IPv4地址。值得欣慰的是当前网络还配置了正确的子网掩码并且分发了广播地址。 -Things are looking promising but let’s check the server to see if it was actually the place where this machine received this new IP address. To accomplish this task, the server’s system log file will be consulted. While the entire log file may contain hundreds of thousands of entries, only a few are necessary for confirming that the server is working properly. Rather than using a full text editor, this time a utility known as ‘tail‘ will be used to only show the last few lines of the log file. - 到这里看起来还都不错,让我们来测试一下,看看这台设备收到新IP地址是不是由服务器发出的。这里我们参照服务器的日志文件来完成这个任务。虽然这个日志的内容有几十万条,但是里面只有几条是用来确定服务器是否正常工作的。这里我们使用一个工具‘tail’,它只显示日志文件的最后几行,这样我们就可以不用拿一个文本编辑器去查看所有的日志文件了。命令如下: # tail /var/log/syslog ![Check DHCP Logs](http://www.tecmint.com/wp-content/uploads/2015/04/DHCP-Log.png) -Check DHCP Logs - 检查DHCP日志文件 -Voila! The server recorded handing out an address to this host (HRTDEBXENSRV). It is a safe assumption at this point that the server is working as intended and handing out the appropriate addresses for the networks that it is an authority. At this point the DHCP server is up and running. Configure the other networks, troubleshoot, and secure as necessary. - OK!服务器记录表明它分发了一个地址给这台主机(HRTDEBXENSRV)。服务器按预期运行,给它充当权威的网络分发适合的网络地址。至此DHCP服务器搭建成功并且运行。如果有需要你可以继续配置其他的网络,排查故障,确保安全。 -Enjoy the newly functioning ISC-DHCP-Server and tune in later for more Debian tutorials. 
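上面我们是用 tail 人工查看日志来确认租约发放的;如果想在脚本里自动做同样的检查,可以参考下面这个小例子。注意其中的日志行只是一个假设的示例(不同版本 dhcpd 写入 syslog 的 DHCPACK 行格式可能略有差异),主机名沿用原文中的 HRTDEBXENSRV:

```shell
# 假设的 dhcpd DHCPACK 日志行,实际格式因 dhcpd 版本而异
logline='Apr 20 10:04:17 debian dhcpd: DHCPACK on 172.27.60.50 to 00:0c:29:ab:cd:ef (HRTDEBXENSRV) via eth0'

# 提取 "on" 后面的租约地址
leased_ip=$(echo "$logline" | awk '{for (i = 1; i <= NF; i++) if ($i == "on") print $(i + 1)}')
# 提取括号中的客户端主机名
client=$(echo "$logline" | sed -n 's/.*(\(.*\)).*/\1/p')

echo "leased ${leased_ip} to ${client}"
```

实际使用时,可以把 logline 换成 `grep DHCPACK /var/log/syslog | tail -n 1` 的输出。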
In the not too distant future there will be an article on Bind9 and DDNS that will tie into this article.
- 好好享受这台刚刚搭建完成的 ISC-DHCP-Server 吧,请继续关注后续的 Debian 教程。在不久的将来,我还会写一篇关于 Bind9 和 DDNS 的教程,与这篇文章相衔接。
--------------------------------------------------------------------------------
From 2f2456862999a63d9d057c577c6d6d5391f085cf Mon Sep 17 00:00:00 2001
From: ivo wang
Date: Thu, 3 Dec 2015 12:39:01 +0800
Subject: [PATCH 095/160] Update 20150806 Installation Guide for Puppet on Ubuntu 15.04.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

认领这篇

---
 .../20150806 Installation Guide for Puppet on Ubuntu 15.04.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md
index ae8df117ef..59a243f0e5 100644
--- a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md
+++ b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md
@@ -1,3 +1,4 @@
+Translating by ivowang
 Installation Guide for Puppet on Ubuntu 15.04
 ================================================================================
 Hi everyone, today in this article we'll learn how to install puppet to manage your server infrastructure running ubuntu 15.04. Puppet is an open source software configuration management tool which is developed and maintained by Puppet Labs that allows us to automate the provisioning, configuration and management of a server infrastructure. Whether we're managing just a few servers or thousands of physical and virtual machines to orchestration and reporting, puppet automates tasks that system administrators often do manually which frees up time and mental space so sysadmins can work on improving other aspects of your overall setup. It ensures consistency, reliability and stability of the automated jobs processed.
It facilitates closer collaboration between sysadmins and developers, enabling more efficient delivery of cleaner, better-designed code. Puppet is available in two solutions configuration management and data center automation. They are **puppet open source and puppet enterprise**. Puppet open source is a flexible, customizable solution available under the Apache 2.0 license, designed to help system administrators automate the many repetitive tasks they regularly perform. Whereas puppet enterprise edition is a proven commercial solution for diverse enterprise IT environments which lets us get all the benefits of open source puppet, plus puppet apps, commercial-only enhancements, supported modules and integrations, and the assurance of a fully supported platform. Puppet uses SSL certificates to authenticate communication between master and agent nodes.
From 51745ee9d4b8681289855fdfe511b017df5d9093 Mon Sep 17 00:00:00 2001
From: KS
Date: Thu, 3 Dec 2015 19:06:12 +0800
Subject: [PATCH 096/160] Create 20151201 How to use Mutt email client with encrypted passwords.md
---
 ...t email client with encrypted passwords.md | 138 ++++++++++++++++++
 1 file changed, 138 insertions(+)
 create mode 100644 translated/tech/20151201 How to use Mutt email client with encrypted passwords.md

diff --git a/translated/tech/20151201 How to use Mutt email client with encrypted passwords.md b/translated/tech/20151201 How to use Mutt email client with encrypted passwords.md
new file mode 100644
index 0000000000..1e8a032a04
--- /dev/null
+++ b/translated/tech/20151201 How to use Mutt email client with encrypted passwords.md
@@ -0,0 +1,138 @@
+如何让Mutt邮件客户端使用加密的密码
+================================================================================
+Mutt是一个开源的、工作在Linux/UNIX终端环境下的邮件客户端。和[Alpine][1]一样,Mutt在Linux命令行爱好者中拥有最忠实的追随者,而这是有充分理由的。想想你对邮件客户端所期待的功能,Mutt都有:多协议支持(如 POP3、IMAP 和 SMTP)、S/MIME 和 PGP/GPG 集成、会话线索、颜色编码、可定制的宏/快捷键,等等。另外,基于命令行的Mutt,相比笨重的web浏览器邮箱(如:Gmail,Ymail)或图形界面邮件客户端(如:Thunderbird,MS
Outlook)来说,是一种轻量的电子邮件访问方式。
+
+当你想使用Mutt通过公司的SMTP/IMAP服务器收发邮件,或者用它取代网页邮件服务时,你可能会关心一个问题:如何保护存储在纯文本Mutt配置文件(~/.muttrc)中的邮件凭据(如:SMTP/IMAP密码)。
+
+对于注重安全的人来说,确实有一个简单的方法可以**加密Mutt配置文件**来防范这种风险。在这个教程中,我将介绍如何使用GnuPG(GPG,一个开源的OpenPGP实现)来加密SMTP/IMAP密码这样的Mutt敏感配置。
+
+### 第一步(可选):创建GPG密钥 ###
+
+因为我们将要使用GPG加密Mutt配置文件,如果你还没有GPG密钥,第一步就是创建一个GPG密钥(公钥/私钥对)。如果已经有了,可以跳过这一步。
+
+要创建一个新的GPG密钥,输入下面的命令。
+
+ $ gpg --gen-key
+
+选择密钥类型(RSA)、密钥长度(2048 bits)和过期时间(0,永不过期)。当出现用户ID提示时,输入要与这对密钥相关联的名字(Dan Nanni)和邮箱地址(myemail@email.com)。最后,输入一个密码来保护你的私钥。
+
+![](https://c2.staticflickr.com/6/5726/22808727824_7735f11157_c.jpg)
+
+生成GPG密钥需要大量的随机字节作为熵,所以在生成密钥期间,确保在你的系统上执行一些随机行为(如:敲键盘、移动鼠标或者读写磁盘)。根据密钥长度不同,生成GPG密钥可能要花几分钟或更长时间。
+
+![](https://c1.staticflickr.com/1/644/23328597612_6ac5a29944_c.jpg)
+
+### 第二步:加密Mutt敏感配置 ###
+
+下一步,在~/.mutt目录创建一个新的文本文件,然后把你想隐藏的Mutt敏感配置放进去。这个例子里,我指定了SMTP/IMAP密码。
+
+ $ mkdir ~/.mutt
+ $ vi ~/.mutt/password
+
+----------
+
+ set smtp_pass="XXXXXXX"
+ set imap_pass="XXXXXXX"
+
+现在像下面这样,用你的公钥通过gpg加密这个文件。
+
+ $ gpg -r myemail@email.com -e ~/.mutt/password
+
+这将创建~/.mutt/password.gpg,它是原始文件经GPG加密后的版本。
+
+接着删除~/.mutt/password,只保留GPG加密的版本。
+
+### 第三步:创建完整的Mutt配置文件 ###
+
+既然已经把Mutt敏感配置加密存放在单独的文件里,你就可以在~/.muttrc里指定其余的Mutt配置,然后在~/.muttrc末尾增加下面这一行。
+
+ source "gpg -d ~/.mutt/password.gpg |"
+
+当你启动Mutt时,这一行会解密~/.mutt/password.gpg,然后把解密出来的内容应用到你的Mutt配置中。
+
+下面展示一个完整的Mutt配置例子,它可以让你在不暴露SMTP/IMAP密码的情况下,用Mutt访问Gmail。请把其中的yourgmailaccount替换成你自己的Gmail ID。
+
+ set from = "yourgmailaccount@gmail.com"
+ set realname = "Your Name"
+ set smtp_url = "smtp://yourgmailaccount@smtp.gmail.com:587/"
+ set imap_user = "yourgmailaccount@gmail.com"
+ set folder = "imaps://imap.gmail.com:993"
+ set spoolfile = "+INBOX"
+ set postponed = "+[Google Mail]/Drafts"
+ set trash = "+[Google Mail]/Trash"
+ set header_cache =~/.mutt/cache/headers
+ set message_cachedir =~/.mutt/cache/bodies
+ set certificate_file =~/.mutt/certificates
+ set move = no
+ set imap_keepalive = 900
+
+ # encrypted IMAP/SMTP passwords
+ source "gpg -d ~/.mutt/password.gpg |"
+
+### 
第四步(可选):配置GPG代理 ###
+
+这时候,你就可以使用加密了IMAP/SMTP密码的Mutt了。然而,每次运行Mutt时,你都要先输入一个GPG密码,用你的私钥来解密IMAP/SMTP密码。
+
+![](https://c2.staticflickr.com/6/5667/23437064775_20c874940f_c.jpg)
+
+如果你想避免这样的GPG密码提示,可以部署gpg-agent。gpg-agent以后台守护进程的方式运行,安全地缓存你的GPG密码,gpg无需你手工输入,就能自动从gpg-agent获得你的GPG密码。如果你正在使用Linux桌面,也可以使用特定于桌面的方式来配置与gpg-agent等价的功能,例如GNOME桌面的gnome-keyring-daemon。
+
+在基于Debian的系统上,你可以这样安装gpg-agent:
+
+$ sudo apt-get install gpg-agent
+
+在基于Red Hat的系统上,gpg-agent是预装好的。
+
+现在把下面这些内容增加到你的.bashrc文件里。
+
+ envfile="$HOME/.gnupg/gpg-agent.env"
+ if [[ -e "$envfile" ]] && kill -0 $(grep GPG_AGENT_INFO "$envfile" | cut -d: -f 2) 2>/dev/null; then
+ eval "$(cat "$envfile")"
+ else
+ eval "$(gpg-agent --daemon --allow-preset-passphrase --write-env-file "$envfile")"
+ fi
+ export GPG_AGENT_INFO
+
+重新加载.bashrc,或者直接登出再重新登录。
+
+ $ source ~/.bashrc
+
+现在确认GPG_AGENT_INFO环境变量已经设置妥当。
+
+ $ echo $GPG_AGENT_INFO
+
+----------
+
+ /tmp/gpg-0SKJw8/S.gpg-agent:942:1
+
+并且,当你输入gpg-agent命令时,应该能看到下面的信息。
+
+ $ gpg-agent
+
+----------
+
+ gpg-agent: gpg-agent running and available
+
+gpg-agent启动运行后,它会在你第一次输入密码时缓存你的GPG密码。之后无论你运行多少次Mutt,都不会再被提示输入GPG密码(只要gpg-agent还在运行,并且缓存条目没有过期)。
+
+![](https://c1.staticflickr.com/1/664/22809928093_3be57698ce_c.jpg)
+
+### 结论 ###
+
+在这个教程里,我介绍了如何使用GnuPG来加密SMTP/IMAP密码这样的Mutt敏感配置。注意,如果你想在Mutt里使用GnuPG来加密或签名你的邮件,可以参考在Mutt中使用GPG的[官方指南][2]。
+
+如果你知道任何使用Mutt的安全技巧,欢迎随时分享。
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/mutt-email-client-encrypted-passwords.html
+
+作者:[Dan Nanni][a]
+译者:[wyangsun](https://github.com/wyangsun)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/nanni
+[1]:http://xmodulo.com/gmail-command-line-linux-alpine.html
+[2]:http://dev.mutt.org/trac/wiki/MuttGuide/UseGPG
From bfb7cbeb1c975026b52e1afad877170ed875d0e2 Mon Sep 17 00:00:00 2001
From: KS
Date: Thu, 3 Dec 2015 19:06:20 +0800
Subject: [PATCH 097/160] Delete 20151201 How to
use Mutt email client with encrypted passwords.md --- ...t email client with encrypted passwords.md | 139 ------------------ 1 file changed, 139 deletions(-) delete mode 100644 sources/tech/20151201 How to use Mutt email client with encrypted passwords.md diff --git a/sources/tech/20151201 How to use Mutt email client with encrypted passwords.md b/sources/tech/20151201 How to use Mutt email client with encrypted passwords.md deleted file mode 100644 index 758fe7c8b2..0000000000 --- a/sources/tech/20151201 How to use Mutt email client with encrypted passwords.md +++ /dev/null @@ -1,139 +0,0 @@ -wyangsun translating -How to use Mutt email client with encrypted passwords -================================================================================ -Mutt is an open-source email client written for Linux/UNIX terminal environment. Together with [Alpine][1], Mutt has the most devoted followers among Linux command-line enthusiasts, and for good reasons. Think of anything you expect from an email client, and Mutt has it: multi-protocol support (e.g., POP3, IMAP and SMTP), S/MIME and PGP/GPG integration, threaded conversation, color coding, customizable macros/keybindings, and so on. Besides, terminal-based Mutt is a lightweight alternative for accessing emails compared to bulky web browser-based (e.g., Gmail, Ymail) or GUI-based email clients (e.g., Thunderbird, MS Outlook). - -When you want to use Mutt to access or send emails via corporate SMTP/IMAP servers or replace web mail services, one concern you may have is how to protect your email credentials (e.g., SMTP/IMAP passwords) stored in a plain-text Mutt configuration file (~/.muttrc). - -For those who are security-conscious, there is actually an easy way to **encrypt Mutt configuration** to prevent such risk. In this tutorial, I describe how you can encrypt sensitive Mutt configuration such as SMTP/IMAP passwords using GnuPG (GPG), an open-source implementation of OpenPGP. 
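At a glance, the whole scheme walked through below boils down to a handful of commands. The paths and the e-mail address follow the examples used in this article; treat this as an outline of the steps rather than something to paste blindly (key generation in particular is interactive):

```
# 1. Create a GPG keypair (interactive prompts follow)
gpg --gen-key

# 2. Put the sensitive settings in a separate file and encrypt it with your public key
mkdir -p ~/.mutt
gpg -r myemail@email.com -e ~/.mutt/password    # produces ~/.mutt/password.gpg
rm ~/.mutt/password

# 3. Have Mutt decrypt it at startup by adding this line to ~/.muttrc:
#    source "gpg -d ~/.mutt/password.gpg |"
```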
- -### Step One (Optional): Create GPG Key ### - -Since we are going to use GPG to encrypt Mutt configuration, the first step is to create a GPG key (public/private keypair) if you don't have one. If you do, skip this step. - -To create a new GPG key, type the following. - - $ gpg --gen-key - -Choose the key type (RSA), keysize (2048 bits), and expiration date (0: no expiration). When prompted for a user ID, type your name (Dan Nanni) and email address (myemail@email.com) to be associated with the private/public keypair. Finally, type a passphrase to protect your private key. - -![](https://c2.staticflickr.com/6/5726/22808727824_7735f11157_c.jpg) - -Generating a GPG key requires a lot of random bytes for entropy, so make sure to perform some random actions on your system (e.g., type on a keyboard, move a mouse or read/write a disk) during key generation. Depending on keysize, it may take a few minutes or more to generate a GPG key. - -![](https://c1.staticflickr.com/1/644/23328597612_6ac5a29944_c.jpg) - -### Step Two: Encrypt Sensitive Mutt Configuration ### - -Next, create a new text file in ~/.mutt directory, and put in the file any sensitive Mutt configuration you want to hide. In this example, I specify SMTP/IMAP passwords. - - $ mkdir ~/.mutt - $ vi ~/.mutt/password - ----------- - - set smtp_pass="XXXXXXX" - set imap_pass="XXXXXXX" - -Now encrypt this file with gpg using your public key as follows. - - $ gpg -r myemail@email.com -e ~/.mutt/password - -This will create ~/.mutt/password.gpg, which is a GPG-encrypted version of the original file. - -Go ahead and remove ~/.mutt/password, leaving only the GPG-encrypted version. - -### Step Three: Create Full Mutt Configuration ### - -Now that you have encrypted sensitive Mutt configuration in a separate file, you can specify the rest of your Mutt configuration in ~/.muttrc. Then add the following line at the end of ~/.muttrc. 
- - source "gpg -d ~/.mutt/password.gpg |" - -This line will decrypt ~/.mutt/password.gpg when you launch Mutt, and apply the decrypted content to your Mutt configuration. - -The following shows an example of full Mutt configuration which allows you to access Gmail with Mutt, without revealing your SMTP/IMAP passwords. Replace yourgmailaccount with your Gmail ID. - - set from = "yourgmailaccount@gmail.com" - set realname = "Your Name" - set smtp_url = "smtp://yourgmailaccount@smtp.gmail.com:587/" - set imap_user = "yourgmailaccount@gmail.com" - set folder = "imaps://imap.gmail.com:993" - set spoolfile = "+INBOX" - set postponed = "+[Google Mail]/Drafts" - set trash = "+[Google Mail]/Trash" - set header_cache =~/.mutt/cache/headers - set message_cachedir =~/.mutt/cache/bodies - set certificate_file =~/.mutt/certificates - set move = no - set imap_keepalive = 900 - - # encrypted IMAP/SMTP passwords - source "gpg -d ~/.mutt/password.gpg |" - -### Step Four (Optional): Configure GPG-agent ### - -At this point, you will be able to use Mutt with encrypted IMAP/SMTP passwords. However, every time you launch Mutt, you will first be prompted to enter a GPG passphrase in order to decrypt IMAP/SMTP passwords using your private key. - -![](https://c2.staticflickr.com/6/5667/23437064775_20c874940f_c.jpg) - -If you want to avoid such GPG passphrase prompts, you can set up gpg-agent. Running as a daemon, gpg-agent securely caches your GPG passphrase, so that gpg automatically obtains your GPG passphrase from gpg-agent without you typing it manually. If you are using Linux desktop, you can use desktop-specific ways to configure something equivalent to gpg-agent, for example, gnome-keyring-daemon for GNOME desktop. - -You can install gpg-agent on Debian-based systems with: - -$ sudo apt-get install gpg-agent - -gpg-agent comes pre-installed on Red Hat based systems. - -Now add the following to your .bashrc file. 
- - envfile="$HOME/.gnupg/gpg-agent.env" - if [[ -e "$envfile" ]] && kill -0 $(grep GPG_AGENT_INFO "$envfile" | cut -d: -f 2) 2>/dev/null; then - eval "$(cat "$envfile")" - else - eval "$(gpg-agent --daemon --allow-preset-passphrase --write-env-file "$envfile")" - fi - export GPG_AGENT_INFO - -Reload .bashrc, or simply log out and log back in. - - $ source ~/.bashrc - -Now confirm that GPG_AGENT_INFO environment variable is set properly. - - $ echo $GPG_AGENT_INFO - ----------- - - /tmp/gpg-0SKJw8/S.gpg-agent:942:1 - -Also, when you type gpg-agent command, you should see the following message. - - $ gpg-agent - ----------- - - gpg-agent: gpg-agent running and available - -Once gpg-agent is up and running, it will cache your GPG passphrase the first time you type it at the passphrase prompt. Subsequently when you launch Mutt multiple times, you won't be prompted for a GPG passphrase (as long as gpg-agent is up and the cache entry does not expire). - -![](https://c1.staticflickr.com/1/664/22809928093_3be57698ce_c.jpg) - -### Conclusion ### - -In this tutorial, I presented a way to encrypt sensitive Mutt configuration such as SMTP/IMAP passwords using GnuPG. Note that if you want to use GnuPG within Mutt to encrypt or sign your email message, you can refer to the [official guide][2] on using GPG with Mutt. - -If you know of any security tips for using Mutt, feel free to share it. 
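One small addition that may help when debugging the gpg-agent setup above: the GPG_AGENT_INFO value has the form `socket_path:agent_pid:protocol_version`, and it can be taken apart with `cut`, just as the .bashrc snippet does with `cut -d: -f 2`. A minimal sketch (the sample value is copied from the article's output; a real session will differ):

```shell
# Example value copied from the article; a real session will differ.
GPG_AGENT_INFO="/tmp/gpg-0SKJw8/S.gpg-agent:942:1"

# The value has the form <socket path>:<agent pid>:<protocol version>
agent_socket=$(echo "$GPG_AGENT_INFO" | cut -d: -f1)
agent_pid=$(echo "$GPG_AGENT_INFO" | cut -d: -f2)

echo "socket=${agent_socket} pid=${agent_pid}"
```

The pid field is what the .bashrc snippet feeds to `kill -0` to test whether a cached agent is still alive.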
- --------------------------------------------------------------------------------- - -via: http://xmodulo.com/mutt-email-client-encrypted-passwords.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/nanni -[1]:http://xmodulo.com/gmail-command-line-linux-alpine.html -[2]:http://dev.mutt.org/trac/wiki/MuttGuide/UseGPG From 0e53967f551df6f4bb889e8b606a1fc6ef3bbfc9 Mon Sep 17 00:00:00 2001 From: Bestony Date: Fri, 4 Dec 2015 05:15:42 +0800 Subject: [PATCH 098/160] =?UTF-8?q?=E6=96=B0=E9=97=BB=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Entering Public Beta --- .../Let's Encrypt:Entering Public Beta.md | 44 +++++++++++++++++++ 1 file changed, 44 insertions(+) create mode 100644 sources/news/Let's Encrypt:Entering Public Beta.md diff --git a/sources/news/Let's Encrypt:Entering Public Beta.md b/sources/news/Let's Encrypt:Entering Public Beta.md new file mode 100644 index 0000000000..5c5c61ebbd --- /dev/null +++ b/sources/news/Let's Encrypt:Entering Public Beta.md @@ -0,0 +1,44 @@ +Let's Encrypt:Entering Public Beta +================================================================================ +We’re happy to announce that Let’s Encrypt has entered Public Beta. Invitations are no longer needed in order to get free +certificates from Let’s Encrypt. + +It’s time for the Web to take a big step forward in terms of security and privacy. We want to see HTTPS become the default. +Let’s Encrypt was built to enable that by making it as easy as possible to get and manage certificates. + +We’d like to thank everyone who participated in the Limited Beta. Let’s Encrypt issued over 26,000 certificates during the +Limited Beta period. 
This allowed us to gain valuable insight into how our systems perform, and to be confident about moving
+to Public Beta.
+
+We’d also like to thank all of our [sponsors][1] for their support. We’re happy to have announced earlier today that
+[Facebook is our newest Gold sponsor][2].
+
+We have more work to do before we’re comfortable dropping the beta label entirely, particularly on the client experience.
+Automation is a cornerstone of our strategy, and we need to make sure that the client works smoothly and reliably on a
+wide range of platforms. We’ll be monitoring feedback from users closely, and making improvements as quickly as possible.
+
+Instructions for getting a certificate with the [Let’s Encrypt client][3] can be found [here][4].
+
+[Let’s Encrypt Community Support][5] is an invaluable resource for our community; we strongly recommend making use of the
+site if you have any questions about Let’s Encrypt.
+
+Let’s Encrypt depends on support from a wide variety of individuals and organizations. Please consider [getting involved][6],
+and if your company or organization would like to sponsor Let’s Encrypt please email us at [sponsor@letsencrypt.org][7].
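For readers who want to try it straight away: with the client checked out from the repository linked above, a typical certificate request using the webroot plugin looks roughly like the following. The domain and webroot path are placeholders, and the exact options your setup needs are covered in the linked documentation:

```
# Obtain a certificate for a site served out of /var/www/example
./letsencrypt-auto certonly --webroot -w /var/www/example -d example.com -d www.example.com
```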
+-------------------------------------------------------------------------------- + +via: https://letsencrypt.org/2015/12/03/entering-public-beta.html + +作者:[Josh Aas][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://letsencrypt.org/2015/12/03/entering-public-beta.html +[1]:https://letsencrypt.org/sponsors/ +[2]:https://letsencrypt.org/2015/12/03/facebook-sponsorship.html +[3]:https://github.com/letsencrypt/letsencrypt +[4]:https://letsencrypt.readthedocs.org/en/latest/ +[5]:https://community.letsencrypt.org/ +[6]:https://letsencrypt.org/getinvolved/ +[7]:mailto:sponsor@letsencrypt.org From 0281425e3eb0434448c307cb5c54ddb3e6902e61 Mon Sep 17 00:00:00 2001 From: bazz2 Date: Fri, 4 Dec 2015 09:16:19 +0800 Subject: [PATCH 099/160] [translating]Why did you start using Linux --- sources/talk/20150820 Why did you start using Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/talk/20150820 Why did you start using Linux.md b/sources/talk/20150820 Why did you start using Linux.md index 5fb6a8d4fe..58b89e1f74 100644 --- a/sources/talk/20150820 Why did you start using Linux.md +++ b/sources/talk/20150820 Why did you start using Linux.md @@ -1,3 +1,4 @@ +[bazz2222] Why did you start using Linux? ================================================================================ > In today's open source roundup: What got you started with Linux? Plus: IBM's Linux only Mainframe. 
And why you should skip Windows 10 and go with Linux
From 2aaec293dda665b83b371100d8768800856a99c3 Mon Sep 17 00:00:00 2001
From: ivo wang
Date: Fri, 4 Dec 2015 10:53:37 +0800
Subject: [PATCH 100/160] Create Installation Guide for Puppet on Ubuntu 15.04
---
 ...tallation Guide for Puppet on Ubuntu 15.04 | 411 ++++++++++++++++++
 1 file changed, 411 insertions(+)
 create mode 100644 translated/tech/Installation Guide for Puppet on Ubuntu 15.04

diff --git a/translated/tech/Installation Guide for Puppet on Ubuntu 15.04 b/translated/tech/Installation Guide for Puppet on Ubuntu 15.04
new file mode 100644
index 0000000000..70bdb46ad9
--- /dev/null
+++ b/translated/tech/Installation Guide for Puppet on Ubuntu 15.04
@@ -0,0 +1,411 @@
+如何在Ubuntu 15.04中安装puppet
+================================================================================
+
+大家好,本教程将教大家如何在ubuntu 15.04上面安装puppet,用它来管理你的服务器基础环境。puppet是由puppet实验室(Puppet Labs)开发并维护的一款开源的配置管理软件,它能够帮我们自动化地供给、配置和管理服务器的基础环境。不管我们管理的是几台服务器,还是数以千计的物理机和虚拟机,puppet都能够将系统管理员经常手动完成的任务自动化,使管理员从繁琐的手动调整中解放出来,腾出时间和精力去改进整体部署的其他方面。它能够确保自动化作业的一致性、可靠性以及稳定性,也让管理员和开发者走得更近,从而更高效地交付更加简洁、清晰、设计良好的代码。puppet针对配置管理和数据中心自动化提供了两个解决方案,分别是**puppet开源版**和**puppet企业版**。puppet开源版是一个基于Apache 2.0许可证的灵活、可定制的解决方案,设计初衷是帮助系统管理员将那些经常进行的重复性工作自动化。而puppet企业版是一个面向多样化企业IT环境的成熟的商业解决方案,它除了拥有开源版本的所有优势以外,还提供puppet apps、只有商业版才有的增强功能、受支持的模块和集成,以及一个有完整支持保障的平台。Puppet使用SSL证书来认证主控端(master)与代理节点(agent)之间的通信。
+
+本教程将要介绍如何在ubuntu 15.04的主控端和代理节点上安装开源版的puppet。在这里,我们用一台服务器作为主控端(master),管理和控制其余作为puppet代理节点(agent)的服务器,这些代理节点将依据主控端来进行配置。在ubuntu 15.04上只需要简单的几步就能安装配置好puppet,用它来管理我们的服务器基础环境非常方便。(译者注:puppet采用C/S架构,所以至少必须有一台作为服务端,其他作为客户端处理)
+### 1. 设置主机 ###
+
+在本教程里,我们将使用两台运行ubuntu 15.04 "Vivid Vervet"的主机,一台作为服务端,另一台作为puppet的代理节点。这就是我们将用到的服务器基础环境。
+
+puppet服务器 IP:44.55.88.6,主机名:puppetmaster
+puppet代理节点 IP:45.55.86.39,主机名:puppetnode
+
+我们要在代理节点和服务器这两台机器的hosts文件里面都添加上相应的条目,使用root或是sudo访问权限来编辑/etc/hosts文件,命令如下:
+ # nano /etc/hosts
+
+ 45.55.88.6 puppetmaster.example.com puppetmaster
+ 45.55.86.39 puppetnode.example.com puppetnode
+
+注意,puppet服务端必须使用8140端口来运行,所以请务必保证开启8140端口。
+
+### 2. 用NTP更新时间 ###
+
+puppet代理节点所使用的系统时间必须要准确,这样可以避免代理证书出现问题。如果时间有差异,那么证书将过期失效,所以服务器与代理节点的系统时间必须互相同步。我们使用NTP(Network Time Protocol,网络时间协议)来同步时间。在服务器与代理节点上分别运行以下命令来同步时间:
+ # ntpdate pool.ntp.org
+
+ 17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec (译者注:显示类似的输出结果表示运行正常)
+
+使用下面的命令更新你的软件仓库,安装并运行ntp服务:
+
+ # apt-get update && sudo apt-get -y install ntp ; service ntp restart
+
+### 3. 安装服务器软件 ###
+
+有很多方法可以用来安装开源版本的puppet。在本教程中,我们将下载并安装一个由puppet实验室打包、名为**puppetlabs-release**的debian软件包,安装后它会为我们在软件源里面添加**puppetmaster-passenger**软件包。puppetmaster-passenger包含了基于apache web服务器的puppet master。现在我们开始下载这个软件包:
+ # cd /tmp/
+ # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
+
+ --2015-06-17 00:19:26-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
+ Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d
+ Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected.
+ HTTP request sent, awaiting response...
200 OK
+ Length: 7384 (7.2K) [application/x-debian-package]
+ Saving to: ‘puppetlabs-release-trusty.deb’
+
+ puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.06s
+
+ 2015-06-17 00:19:26 (130 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384]
+
+下载完成后我们来安装这个软件包:
+
+ # dpkg -i puppetlabs-release-trusty.deb
+
+ Selecting previously unselected package puppetlabs-release.
+ (Reading database ... 85899 files and directories currently installed.)
+ Preparing to unpack puppetlabs-release-trusty.deb ...
+ Unpacking puppetlabs-release (1.0-11) ...
+ Setting up puppetlabs-release (1.0-11) ...
+
+使用apt包管理命令更新一下本地的软件源:
+
+ # apt-get update
+
+现在我们就可以使用下面的命令来安装puppetmaster-passenger了:
+
+ # apt-get install puppetmaster-passenger
+
+**提示**: 在安装的时候可能会报错:**Warning: Setting templatedir is deprecated.请查看 http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')** 不过不用担心,忽略掉它就好,报错的意思只是templatedir这一配置项已经过时,我们只需要在后面设置配置文件的时候把这一项禁用就行了。
+
+如何查看puppet master是否已经安装成功了呢?非常简单,只需要使用下面的命令查看它的版本就可以了:
+ # puppet --version
+
+ 3.8.1
+
+
+现在我们已经安装好了puppet master。要想使用puppet master,apache服务就必须运行起来,因为puppet master进程是基于apache运行的。
+在开始之前,我们先将apache服务停止,这样puppet master也会停止运行。
+
+ # systemctl stop apache2
+
+### 4. 使用Apt工具锁定Master(服务端)版本 ###
+
+现在已经安装了3.8.1版的puppet,我们要锁定这个版本不让它随意升级,因为升级可能会造成配置文件混乱。使用apt工具来锁定它,这里我们需要使用文本编辑器来创建一个新的文件**/etc/apt/preferences.d/00-puppet.pref**:
+
+ # nano /etc/apt/preferences.d/00-puppet.pref
+
+在新创建的文件里面添加以下内容:
+ # /etc/apt/preferences.d/00-puppet.pref
+ Package: puppet puppet-common puppetmaster-passenger
+ Pin: version 3.8*
+ Pin-Priority: 501
+
+这样在以后的系统软件升级中,puppet将被锁定,不会跟随系统软件一起升级。
+
+### 5.
配置 Puppet### +Puppet master作为一个证书发行机构,所有代理证书的请求都将由它来处理。首先我们要删除所有在软件包安装过程中创建出来已经存在的ssl证书。本地默认的puppet证书在/var/lib/puppet/ssl。因此我们只需要使用rm命令来移除这些证书就可以了。 + # rm -rf /var/lib/puppet/ssl + + +现在来配置这些证书,在创建puppet master'证书的时候,需要用到能与服务器通信的代理节点的DNS名称。使用文本编辑器来编辑服务器的配置文件puppet.conf + # nano /etc/puppet/puppet.conf + +输出的结果像下面这样 + + [main] + logdir=/var/log/puppet + vardir=/var/lib/puppet + ssldir=/var/lib/puppet/ssl + rundir=/var/run/puppet + factpath=$vardir/lib/facter + templatedir=$confdir/templates + + [master] + # These are needed when the puppetmaster is run by passenger + # and can safely be removed if webrick is used. + ssl_client_header = SSL_CLIENT_S_DN + ssl_client_verify_header = SSL_CLIENT_VERIFY + + +在这我们需要注释掉templatedir 这行使它失效。在文件的结尾添加下面的信息。 + server = puppetmaster + environment = production + runinterval = 1h + strict_variables = true + certname = puppetmaster + dns_alt_names = puppetmaster, puppetmaster.example.com + +这里有很多有用的建立适合你的配置项。 如果你有需要,在Puppet实验室有一份详细的描述文件供你阅读。 [Main Config File (puppet.conf)][1]. + +编辑完成后保存退出。 + +使用下面的命令来生成一个新的证书。 + # puppet master --verbose --no-daemonize + + Info: Creating a new SSL key for ca + Info: Creating a new SSL certificate request for ca + Info: Certificate Request fingerprint (SHA256): F6:2F:69:89:BA:A5:5E:FF:7F:94:15:6B:A7:C4:20:CE:23:C7:E3:C9:63:53:E0:F2:76:D7:2E:E0:BF:BD:A6:78 + ... + Notice: puppetmaster has a waiting certificate request + Notice: Signed certificate request for puppetmaster + Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/ca/requests/puppetmaster.pem' + Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/certificate_requests/puppetmaster.pem' + Notice: Starting Puppet master version 3.8.1 + ^CNotice: Caught INT; storing stop + Notice: Processing stop + +至此,证书已经生成. 一旦我们看到 **Notice: Starting Puppet master version 3.8.1**, 表明证书就已经制作好了.我们按下 CTRL-C 回到shell命令行. 
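证书生成后,默认都保存在前面提到的 /var/lib/puppet/ssl 目录下。下面用一个可以直接运行的 shell 片段示意该目录生成后的大致布局。注意:为了在没有安装 puppet 的机器上也能演示,这里在 /tmp 下模拟目录结构;具体子目录名基于 Puppet 3 的默认约定,仅供参考。

```shell
# 示意脚本:在 /tmp 下模拟证书生成后 ssldir 的大致布局
# (真实环境中该目录是 /var/lib/puppet/ssl,由 puppet 自动创建)。
ssldir=/tmp/puppet-ssl-demo
mkdir -p "$ssldir/ca/signed" "$ssldir/certs" "$ssldir/private_keys"
touch "$ssldir/ca/signed/puppetmaster.pem" \
      "$ssldir/certs/puppetmaster.pem" \
      "$ssldir/private_keys/puppetmaster.pem"
# 列出所有 pem 文件;真实环境中可用同样的 find 命令检查证书是否已生成
find "$ssldir" -name '*.pem' | sort
```

在真实的 puppet master 上,把 find 的目标目录换成 /var/lib/puppet/ssl,即可确认证书文件是否已经生成。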
+ +如果你想看刚生成证书的信息,可以使用下面的命令来进行查看。 + + # puppet cert list -all + + + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com") + +### 6. 创建一个Puppet清单 ### + +默认的主要清单在/etc/puppet/manifests/site.pp. 这个主要清单文件定义着控制哪些代理节点。我们现在使用下面的命令来创建一个清单文件 + + # nano /etc/puppet/manifests/site.pp + +在刚打开的文件里面添加下面这几行 + + # execute 'apt-get update' + exec { 'apt-update': # exec resource named 'apt-update' + command => '/usr/bin/apt-get update' # command this resource will run + } + + # install apache2 package + package { 'apache2': + require => Exec['apt-update'], # require 'apt-update' before installing + ensure => installed, + } + + # ensure apache2 service is running + service { 'apache2': + ensure => running, + } + +以上这几行的意思是通过apache web 服务来部署代理节点 +### 7. 运行Master服务### + +已经准备好运行puppet master了,那么开启apache服务来让它运行 + + # systemctl start apache2 + +我们puppet master已经跑起来了, 但是现在他还不能管理任何代理节点。现在我们给master添加代理节点. + +**Note**: If you get an error **Job for apache2.service failed. See "systemctl status apache2.service" and "journalctl -xe" for details.** then it must be that there is some problem with the apache server. So, we can see the log what exactly has happened by running **apachectl start** under root or sudo mode. Here, while performing this tutorial, we got a misconfiguration of the certificates under **/etc/apache2/sites-enabled/puppetmaster.conf** file. We replaced **SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem with SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem** and commented **SSLCertificateKeyFile** line. Then we'll need to rerun the above command to run apache server. +**提示**: 如果报错 **Job for apache2.service failed. 查看"systemctl status apache2.service" and "journalctl -xe" 所给出的信息.** 肯定是apache server有一些问题. 我们可以使用root或是sudo访问权限来运行**apachectl start**查看它输出的日志。 在本教程执行过程中, 我们找到一个关于证书有问题的配置文件**/etc/apache2/sites-enabled/puppetmaster.conf**. 
修改其中的**SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem 为 SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem** 然后注释掉后面这行**SSLCertificateKeyFile** . 然后在命令行启动apache +### 8. Puppet客户端安装 ### + +我们已经准备好了puppet的服务端,现在来安装代理节点安装客户端。这里我们要给每一个需要管理的节点安装客户端,并且确保这些节点能够通过DNS查找的服务器主机。下面将 puppetnode.example.com作为代理节点安装客户端 +在代理节点上使用下面的命令下载puppet实验室提供的软件包。 + # cd /tmp/ + # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb\ + + --2015-06-17 00:54:42-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb + Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d + Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected. + HTTP request sent, awaiting response... 200 OK + Length: 7384 (7.2K) [application/x-debian-package] + Saving to: ‘puppetlabs-release-trusty.deb’ + + puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.04s + + 2015-06-17 00:54:42 (162 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384] + +在ubuntu 15.04上我们使用debian包管理系统来安装它,命令如下: + # dpkg -i puppetlabs-release-trusty.deb + +使用apt包管理命令更新一下本地的软件源 + # apt-get update + +通过远程仓库安装 + # apt-get install puppet + +Puppet客户端默认是不启动的。这里我们需要使用文本编辑器修改/etc/default/puppet文件,使它正常工作。 + # nano /etc/default/puppet + +更改 **START** 的值改成 "yes" 。 + + START=yes + +最后保存并退出。 + +### 9. 使用APT工具锁定Agent(客户端)版本### + +和上面的步骤一样为防止随意升级造成的配置文件混乱,我们要将该版本使用apt工具锁定。具体做法是使用文本编辑器来创建一个新的文件 **/etc/apt/preferences.d/00-puppet.pref** + # nano /etc/apt/preferences.d/00-puppet.pref + +在新建的文件里面加入如下内容 + + # /etc/apt/preferences.d/00-puppet.pref + Package: puppet puppet-common + Pin: version 3.8* + Pin-Priority: 501 + +这样puppet就不会随着系统软件升级而随意升级了。 + +### 10. 配置puppet代理节点 ### + +我们需要编辑一下代理节点的puppet.conf文件,来使它运行。 + + # nano /etc/puppet/puppet.conf + +它看起来和服务端的配置文件完全一样。 +同样注释掉**templatedir**这行. 
在这里我们需要删除掉所有关于[master]的部分。 + + +假定服务端可用我们的客户端应该是可以和它相互连接通信的。如果不行我们需要使用完整的域名puppetmaster.example.com + [agent] + server = puppetmaster.example.com + certname = puppetnode.example.com + +在文件的结尾增加上面3行,增加之后文件内容像下面这样。 + + [main] + logdir=/var/log/puppet + vardir=/var/lib/puppet + ssldir=/var/lib/puppet/ssl + rundir=/var/run/puppet + factpath=$vardir/lib/facter + #templatedir=$confdir/templates + + [agent] + server = puppetmaster.example.com + certname = puppetnode.example.com + +最后保存并退出。 + +使用下面的命令来启动客户端软件 + + # systemctl start puppet + +如果一切顺利的话,我们不会看到命令行有任何输出。 第一次运行的时候,代理节点会生成一个ssl证书并且发送一个请求给服务端,通过确认后,两台机器就可以互相通信了。 + +**Note**: If you are adding your first node, it is recommended that you attempt to sign the certificate on the puppet master before adding your other agents. Once you have verified that everything works properly, then you can go back and add the remaining agent nodes further. + +**提示**: 如果这是你添加的第一个代理节点,建议你在添加其他节点前先给这个证书签名。一旦能够通过并正常运行,回过头来再添加其他代理节点。 + +### 11. 主服务器上的签名证书请求 ### + +第一次运行的时候,代理节点会生成一个ssl证书并且发送一个请求给服务端.在主服务器给代理节点服务器的证书签名之后,主服务器才能和代理服务器通信并且控制代理服务器。 + +在主服务器上使用下面的命令来列出现有的证书请求 + # puppet cert list + + "puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2 + + 因为只设置了一台代理节点服务器,所以我们将看到只有一个请求。它看起来像是代理节点服务器的域名和主机名。 +注意有没有“+”号在前面,代表这个证书有没有被签名。 + +使用**puppet cert sign**到**hostname**这个命令来签署这个签名请求。 + # puppet cert sign puppetnode.example.com + + Notice: Signed certificate request for puppetnode.example.com + Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem' + +服务端只能和它所签名的代理节点通信并控制代理节点。 + +如想我们想签署所有的请求, 我们需要使用-all选项,如下所示。 + + # puppet cert sign --all + +### 删除一个Puppet证书 ### + +如果我们想移除一个主机,或者想重建一个主机然后再添加它. 下面的例子我们将展示如何删除puppet master上面的一个证书. 
使用的命令如下: + + # puppet cert clean hostname + + Notice: Revoked certificate with serial 5 + Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem' + Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/certs/puppetnode.example.com.pem' + +如果我们想查看目前所有的签署和未签署的请求,使用下面这条命令 + # puppet cert list --all + + + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com") + +### 12. 部署一个 Puppet清单 ### + +After we configure and complete the puppet manifest, we'll wanna deploy the manifest to the agent nodes server. To apply and load the main manifest we can simply run the following command in the agent node. +当我们配置并完成主puppet清单,我们现在署代理节点服务器清单。要应用并加载主清单,我们可以在代理节点服务器上面使用下面的命令 + + # puppet agent --test + + Info: Retrieving pluginfacts + Info: Retrieving plugin + Info: Caching catalog for puppetnode.example.com + Info: Applying configuration version '1434563858' + Notice: /Stage[main]/Main/Exec[apt-update]/returns: executed successfully + Notice: Finished catalog run in 10.53 seconds + +这里像我们展示了主清单如何去管理一个单一的服务器。 +If we wanna run a puppet manifest that is not related to the main manifest, we can simply use puppet apply followed by the manifest file path. It only applies the manifest to the node that we run the apply from. +如果我们打算运行的puppet清单与主清单没有什么关联,那么需要使用puppet apply 到相应的路径。它仅适用于该代理节点。 + # puppet apply /etc/puppet/manifest/test.pp + +### 13. 
配置一个特殊节点清单 ### + +如果我们想部署一个清单到某个特定的节点,我们需要配置清单如下。 + +在主服务器上面使用文本编辑器编辑/etc/puppet/manifest/site.pp + # nano /etc/puppet/manifest/site.pp + +添加下面的内容进去 + + node 'puppetnode', 'puppetnode1' { + # execute 'apt-get update' + exec { 'apt-update': # exec resource named 'apt-update' + command => '/usr/bin/apt-get update' # command this resource will run + } + + # install apache2 package + package { 'apache2': + require => Exec['apt-update'], # require 'apt-update' before installing + ensure => installed, + } + + # ensure apache2 service is running + service { 'apache2': + ensure => running, + } + } + +这里的配置显示我们将在名为puppetnode and puppetnode1的2个特殊节点上面安装apache服务. 这里可以添加更多我们需要安装部署的具体节点进去。 +### 14. 配置清单模块 ### + +模块化组件组是非常实用的,在Puppet社区有很多人提交自己的模块。 + +在主puppet服务器上, 我们将使用uppet module命令来安装**puppetlabs-apache** 模块。 + + # puppet module install puppetlabs-apache + +**警告**: 千万不要在一个已经部署apache环境的机器上面使用这个模块,否则它将清空你的apache配置。 + +现在用文本编辑器来修改 **site.pp** + + # nano /etc/puppet/manifest/site.pp + +添加下面的内容进去,意思是在 puppetnode上面安装apache服务。 + + node 'puppet-node' { + class { 'apache': } # use apache module + apache::vhost { 'example.com': # define vhost resource + port => '80', + docroot => '/var/www/html' + } + } + +保存退出。这样为我们的代理服务器重新配置部署基础环境。 + +### 总结 ### + +现在我们已经成功的在ubuntu 15.04上面部署并运行puppet来管理代理节点服务器的基础运行环境.我们学习了,puppet是如何工作的,配置清单文件,节点与主机间的ssl证书认证。使用puppet控制,管理并且配置众多的代理节点服务器是非常容易的。如果你有任何的问题,建议,反馈,请务必与我们取得联系我们将及时的改善更新,谢谢。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/ + +作者:[Arun Pyasi][a] +译者:[译者ID](https://github.com/ivo-wang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html From ab096a3c4cb4e3140fd9944688e1dbf34023f938 Mon Sep 17 00:00:00 2001 From: ivo wang Date: Fri, 4 Dec 2015 
11:43:48 +0800 Subject: [PATCH 101/160] Update and rename Installation Guide for Puppet on Ubuntu 15.04 to 20150806 Installation Guide for Puppet on Ubuntu 15.04 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完成 --- ...allation Guide for Puppet on Ubuntu 15.04} | 119 ++++++++---------- 1 file changed, 54 insertions(+), 65 deletions(-) rename translated/tech/{Installation Guide for Puppet on Ubuntu 15.04 => 20150806 Installation Guide for Puppet on Ubuntu 15.04} (56%) diff --git a/translated/tech/Installation Guide for Puppet on Ubuntu 15.04 b/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04 similarity index 56% rename from translated/tech/Installation Guide for Puppet on Ubuntu 15.04 rename to translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04 index 70bdb46ad9..290698a62f 100644 --- a/translated/tech/Installation Guide for Puppet on Ubuntu 15.04 +++ b/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04 @@ -1,18 +1,16 @@ 如何在Ubuntu 15.04中安装puppet ================================================================================ -大家好,本教程将教大家如何在ubuntu 15.04上面安装puppet,用它来管理你的服务器基础环境。puppet是由puppet实验室开发并维护的一款开源软件,它帮我们自动的管理配置服务器的基础环境。不管我们管理的是几个服务器还是数以千计的机器设备组成的业务流程及报表,puppet都能够使管理员从繁琐的手动调整中解放出来,腾出时间和精力去提升整体效率。它能够确保自动化流程作业的一致性,可靠性以及稳定性。它让管理员和开发者走得更近,叫付出更加简洁清晰设计良好的代码。puppet提供了管理配置和自动化数据中心的2个解决方案。分别是**puppet开源项目 and puppet商业版**.puppet开源项目在apache2.0上是灵活可定制的的解决方案,设置初衷是帮助他们完成那些经常操作的重复性工作。pupprt商业版是一个全平台复杂IT环境下的成熟解决方案,它除了拥有开源版本所有优势以外还有移动端apps,只有商业版才有的加强支持,以及模块化和集成管理等。Puppet使用SSL证书来认证主机与代理节点之间的通信。 +大家好,本教程将教各位如何在ubuntu 15.04上面安装puppet,用它来管理你的服务器基础环境。puppet是由puppet实验室开发并维护的一款开源软件,它能够帮我们自动的管理配置服务器的基础环境。不管我们管理的是几个服务器还是数以千计的计算机组成的业务报表体系,puppet都能够使管理员从繁琐的手动配置调整中解放出来,腾出时间和精力去提升系统的整体效率。它能够确保所有自动化流程作业的一致性,可靠性以及稳定性。它让管理员和开发者更紧密的联系在一起,使开发者更容易产出设计良好,简洁清晰的代码。puppet提供了管理配置和自动化数据中心的2个解决方案。这两个解决方案分别是**puppet开源项目 和 
puppet商业版**。puppet开源项目基于Apache 2.0许可证发布,它是一个非常灵活、可随意定制的解决方案,设计初衷是帮助管理员去完成那些重复性操作工作。puppet商业版是一个全平台复杂IT环境下的成熟解决方案,它除了拥有开源版本所有优势以外还有移动端apps,只有商业版才有的加强支持,以及模块化和集成管理等。Puppet使用SSL证书来认证主机与代理节点之间的通信。 -本教程将要介绍如何在ubuntu15.04的主机和代理节点上面安装开源版的puppet。在这我们用一台服务器做主机管理和控制剩余的当作puppet的代理节点的服务器,这些代理节点将依据服务器来进行配置。在ubuntu 15.04只需要简单的几步就能安装配置好puppet,用它来管理我们的服务器基础环境非常的方便。(译者注:puppet采用的C/S架构所以必须有至少有一台作为服务端,其他作为客户端处理) +本教程将要介绍如何在ubuntu 15.04的主机和代理节点上面安装开源版的puppet。在这我们用一台服务器做主机,管理和控制剩余当作puppet的代理节点的服务器,这些代理节点将依据服务器来进行配置。在ubuntu 15.04只需要简单的几步就能安装配置好puppet,用它来管理我们的服务器基础环境非常的方便。(译者注:puppet采用的C/S架构,所以必须至少有一台作为服务端,其他作为客户端处理) ### 1.设置主机### - Here is the infrastructure of the server that we're gonna use for this tutorial. -在本教程里,我们将使用用2台运行ubuntu 15.04 "Vivid Vervet"的主机,一台作为服务端,另一台作为puppet的代理节点。这就是我们将用到的服务器基础环境。 +在本教程里,我们将使用2台运行ubuntu 15.04 "Vivid Vervet"的主机,一台作为服务端,另一台作为puppet的代理节点。下面是我们将用到的服务器的基础信息。 puupet服务器IP:44.55.88.6,主机名: puppetmaster -puppet 代理节点 IP 45.55.86.39 ,主机名: puppetnode +puppet代理节点IP: 45.55.86.39 ,主机名: puppetnode -Now we'll add the entry of the machines to /etc/hosts on both machines node agent and master server. -我们要在代理节点和服务器这两天机器的hosts里面都添加上相应的条目,使用root或是sudo访问权限来编辑/etc/hosts文件命令如下: +我们要在代理节点和服务器这两台机器的hosts里面都添加上相应的条目,使用root或是sudo访问权限来编辑/etc/hosts文件,命令如下: # nano /etc/hosts 45.55.88.6 puppetmaster.example.com puppetmaster @@ -22,20 +20,18 @@ Now we'll add the entry of the machines to /etc/hosts on both machines node agen ### 2. 用NTP更新时间 ### -To do so, here's the command below that we need to run on both master and node agent. 
-puppet的代理节点所使用系统时间必须要准确,这样可以避免代理证书出现问题。如果有时间差异,那么证书将过期失效,所以服务器与代理节点的系统时间必须双方互相同步。我们使用NTP(Network Time Protocol,网络时间协议)来更新同步时间。在服务器与代理节点上面分别运行以下命令来同步时间: +puppet代理节点所使用系统时间必须要准确,这样可以避免代理证书出现问题。如果有时间差异,那么证书将过期失效,所以服务器与代理节点的系统时间必须互相同步。我们使用NTP(Network Time Protocol,网络时间协议)来同步时间。在服务器与代理节点上面分别运行以下命令来同步时间。 # ntpdate pool.ntp.org - 17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec (译者注:显示类似的输出结果表示运行正常) + 17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec (译者注:显示类似的输出结果表示运行正常) -使用下面的命令更新你的软件仓库,安装并运行ntp服务 +如果没有ntp,请使用下面的命令更新你的软件仓库,安装并运行ntp服务 # apt-get update && sudo apt-get -y install ntp ; service ntp restart ### 3. 安装服务器软件 ### -There are many ways to install open source puppet. In this tutorial, we'll download and install a debian binary package named as **puppetlabs-release** packaged by the Puppet Labs which will add the source of the **puppetmaster-passenger** package. The puppetmaster-passenger includes the puppet master with apache web server. So, we'll now download the Puppet Labs package. -有很多的方法可以用来安装开源版本的puppet。在本教程中我们在puppet实验室官网下载一个名为puppetlabs-release的软件包,安装后它为我们在软件源里面添加puppetmaster-passenger。puppetmaster-passenger是基于apache的puppet服务端。我们开始下载这个软件包 +安装开源版本的puppet有很多的方法。在本教程中我们在puppet实验室官网下载一个名为puppetlabs-release的软件包,安装后它将为我们在软件源里面添加puppetmaster-passenger。puppetmaster-passenger依赖于apache的puppet服务端。我们开始下载这个软件包 # cd /tmp/ # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb @@ -50,7 +46,7 @@ There are many ways to install open source puppet. In this tutorial, we'll downl 2015-06-17 00:19:26 (130 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384] -下载完成后我们要安装这个软件包 +下载完成,我们来安装它 # dpkg -i puppetlabs-release-trusty.deb @@ -64,26 +60,25 @@ There are many ways to install open source puppet. 
In this tutorial, we'll downl # apt-get update -现在我们就可以使用下面的命令来安装puppetmaster-passenger了 +现在我们就可以安装puppetmaster-passenger了 # apt-get install puppetmaster-passenger -**提示**: 在安装的时候可能会报错**Warning: Setting templatedir is deprecated.请查看 http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')** 不过不用担心,忽略掉它就好,报错的意思是templatedir过时了,我们只需要在设置配置文件的时候把这一项disable就行了。 +**提示**: 在安装的时候可能会报错**Warning: Setting templatedir is deprecated.请查看 http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')** 不过不用担心,忽略掉它就好,我们只需要在设置配置文件的时候把这一项disable就行了。 -如何来查看puppet master是否已经安装成功了呢?非常简单只需要使用下面的命令查看它的版本就可以了。 +如何来查看puppet master是否已经安装成功了呢?非常简单,只需要使用下面的命令查看它的版本就可以了。 # puppet --version 3.8.1 - -现在我们已经安装好了puppet master。要想使用puppet master apache服务就必须运行起来,因为puppet master进程的运行是基于apache的。 +现在我们已经安装好了puppet master。要想使用puppet master apache服务就必须运行起来,因为puppet master进程的运行是依赖于apache的。 在开始之前,我们将apache服务停止,这样puppet muster也会停止运行。 # systemctl stop apache2 ### 4. 使用Apt工具锁定Master(服务端)版本 ### -现在已经安装了 3.8.1版的puppet,我们锁定这个版本不让它随意升级,因为升级会造成配置文件混乱。 使用apT工具来锁定它,这里我们需要使用文本编辑器来创建一个新的文件 **/etc/apt/preferences.d/00-puppet.pref** +现在已经安装了 3.8.1版的puppet,我们锁定这个版本不让它随意升级,因为升级会造成配置文件混乱。 使用apt工具来锁定它,这里我们需要使用文本编辑器来创建一个新的文件 **/etc/apt/preferences.d/00-puppet.pref** # nano /etc/apt/preferences.d/00-puppet.pref @@ -93,14 +88,13 @@ There are many ways to install open source puppet. In this tutorial, we'll downl Pin: version 3.8* Pin-Priority: 501 -这样在以后的系统软件升级中puppet将被锁住不会跟随系统软件一起升级。 +这样在以后的系统软件升级中puppet master将被锁住不会跟随系统软件一起升级。 -### 5. 配置 Puppet### -Puppet master作为一个证书发行机构,所有代理证书的请求都将由它来处理。首先我们要删除所有在软件包安装过程中创建出来已经存在的ssl证书。本地默认的puppet证书在/var/lib/puppet/ssl。因此我们只需要使用rm命令来移除这些证书就可以了。 +### 5. 
配置 Puppet Master### +Puppet master作为一个证书发行机构,所有代理证书的请求都将由它来处理。首先我们要删除所有在软件包安装过程中创建出来的ssl证书。本地默认的puppet证书在/var/lib/puppet/ssl。因此我们只需要使用rm命令来移除这些证书就可以了。 # rm -rf /var/lib/puppet/ssl - -现在来配置这些证书,在创建puppet master'证书的时候,需要用到能与服务器通信的代理节点的DNS名称。使用文本编辑器来编辑服务器的配置文件puppet.conf +现在来配置这些证书,在创建puppet master证书的时候,需要用使用DNS能查找到的代理节点名称。使用文本编辑器来修改服务器的配置文件puppet.conf # nano /etc/puppet/puppet.conf 输出的结果像下面这样 @@ -120,7 +114,7 @@ Puppet master作为一个证书发行机构,所有代理证书的请求都将 ssl_client_verify_header = SSL_CLIENT_VERIFY -在这我们需要注释掉templatedir 这行使它失效。在文件的结尾添加下面的信息。 +在这我们需要注释掉templatedir 这行使它失效。然后在文件的结尾添加下面的信息。 server = puppetmaster environment = production runinterval = 1h @@ -128,7 +122,7 @@ Puppet master作为一个证书发行机构,所有代理证书的请求都将 certname = puppetmaster dns_alt_names = puppetmaster, puppetmaster.example.com -这里有很多有用的建立适合你的配置项。 如果你有需要,在Puppet实验室有一份详细的描述文件供你阅读。 [Main Config File (puppet.conf)][1]. +还有很多你可能用的到的配置选项。 如果你有需要,在Puppet实验室有一份详细的描述文件供你阅读。 [Main Config File (puppet.conf)][1]. 编辑完成后保存退出。 @@ -147,9 +141,9 @@ Puppet master作为一个证书发行机构,所有代理证书的请求都将 ^CNotice: Caught INT; storing stop Notice: Processing stop -至此,证书已经生成. 一旦我们看到 **Notice: Starting Puppet master version 3.8.1**, 表明证书就已经制作好了.我们按下 CTRL-C 回到shell命令行. +至此,证书已经生成。一旦我们看到 **Notice: Starting Puppet master version 3.8.1**, 表明证书就已经制作好了.我们按下 CTRL-C 回到shell命令行。 -如果你想看刚生成证书的信息,可以使用下面的命令来进行查看。 +查看新生成证书的信息,可以使用下面的命令。 # puppet cert list -all @@ -157,7 +151,7 @@ Puppet master作为一个证书发行机构,所有代理证书的请求都将 ### 6. 创建一个Puppet清单 ### -默认的主要清单在/etc/puppet/manifests/site.pp. 这个主要清单文件定义着控制哪些代理节点。我们现在使用下面的命令来创建一个清单文件 +默认的主要清单是/etc/puppet/manifests/site.pp。 这个主要清单文件定义着控制哪些代理节点。现在我们来创建一个清单文件 # nano /etc/puppet/manifests/site.pp @@ -179,21 +173,20 @@ Puppet master作为一个证书发行机构,所有代理证书的请求都将 ensure => running, } -以上这几行的意思是通过apache web 服务来部署代理节点 -### 7. 运行Master服务### +以上这几行的意思是给代理节点部署apache web 服务 +### 7. 
运行puppet Master服务 ### -已经准备好运行puppet master了,那么开启apache服务来让它运行 +已经准备好运行puppet master了,那么开启apache服务来让它启动 # systemctl start apache2 -我们puppet master已经跑起来了, 但是现在他还不能管理任何代理节点。现在我们给master添加代理节点. +我们puppet master已经运行,不过它还不能管理任何代理节点。现在我们给puppet master添加代理节点. -**Note**: If you get an error **Job for apache2.service failed. See "systemctl status apache2.service" and "journalctl -xe" for details.** then it must be that there is some problem with the apache server. So, we can see the log what exactly has happened by running **apachectl start** under root or sudo mode. Here, while performing this tutorial, we got a misconfiguration of the certificates under **/etc/apache2/sites-enabled/puppetmaster.conf** file. We replaced **SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem with SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem** and commented **SSLCertificateKeyFile** line. Then we'll need to rerun the above command to run apache server. -**提示**: 如果报错 **Job for apache2.service failed. 查看"systemctl status apache2.service" and "journalctl -xe" 所给出的信息.** 肯定是apache server有一些问题. 我们可以使用root或是sudo访问权限来运行**apachectl start**查看它输出的日志。 在本教程执行过程中, 我们找到一个关于证书有问题的配置文件**/etc/apache2/sites-enabled/puppetmaster.conf**. 修改其中的**SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem 为 SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem** 然后注释掉后面这行**SSLCertificateKeyFile** . 然后在命令行启动apache +**提示**: 如果报错 **Job for apache2.service failed. 查看"systemctl status apache2.service" and "journalctl -xe" 所给出的信息.** 肯定是apache server有一些问题. 我们可以使用root或是sudo访问权限来运行**apachectl start**查看它输出的日志。 在本教程执行过程中, 我们发现一个证书配置的问题,解决方法如下**/etc/apache2/sites-enabled/puppetmaster.conf**. 修改其中的**SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem 为 SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem** 然后注释掉后面这行**SSLCertificateKeyFile** . 然后在命令行启动apache ### 8. 
Puppet客户端安装 ### -我们已经准备好了puppet的服务端,现在来安装代理节点安装客户端。这里我们要给每一个需要管理的节点安装客户端,并且确保这些节点能够通过DNS查找的服务器主机。下面将 puppetnode.example.com作为代理节点安装客户端 -在代理节点上使用下面的命令下载puppet实验室提供的软件包。 +我们已经准备好了puppet的服务端,现在来为代理节点安装客户端。这里我们要给每一个需要管理的节点安装客户端,并且确保这些节点能够通过DNS查询到服务器主机。下面将 puppetnode.example.com作为代理节点安装客户端 +在代理节点服务器上,使用下面的命令下载puppet实验室提供的软件包。 # cd /tmp/ # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb\ @@ -226,9 +219,9 @@ Puppet客户端默认是不启动的。这里我们需要使用文本编辑器 最后保存并退出。 -### 9. 使用APT工具锁定Agent(客户端)版本### +### 9. 使用APT工具锁定Agent(客户端)版本 ### -和上面的步骤一样为防止随意升级造成的配置文件混乱,我们要将该版本使用apt工具锁定。具体做法是使用文本编辑器来创建一个新的文件 **/etc/apt/preferences.d/00-puppet.pref** +和上面的步骤一样为防止随意升级造成的配置文件混乱,我们要使用apt工具来把它锁定。具体做法是使用文本编辑器创建一个文件 **/etc/apt/preferences.d/00-puppet.pref** # nano /etc/apt/preferences.d/00-puppet.pref 在新建的文件里面加入如下内容 @@ -246,11 +239,10 @@ Puppet客户端默认是不启动的。这里我们需要使用文本编辑器 # nano /etc/puppet/puppet.conf -它看起来和服务端的配置文件完全一样。 -同样注释掉**templatedir**这行. 在这里我们需要删除掉所有关于[master]的部分。 +它看起来和服务端的配置文件完全一样。同样注释掉**templatedir**这行。不同的是在这里我们需要删除掉所有关于[master]的部分。 -假定服务端可用我们的客户端应该是可以和它相互连接通信的。如果不行我们需要使用完整的域名puppetmaster.example.com +假定服务端可用,我们的客户端应该是可以和它相互连接通信的。如果不行我们需要使用完整的主机域名puppetmaster.example.com [agent] server = puppetmaster.example.com certname = puppetnode.example.com @@ -275,22 +267,20 @@ Puppet客户端默认是不启动的。这里我们需要使用文本编辑器 # systemctl start puppet -如果一切顺利的话,我们不会看到命令行有任何输出。 第一次运行的时候,代理节点会生成一个ssl证书并且发送一个请求给服务端,通过确认后,两台机器就可以互相通信了。 - -**Note**: If you are adding your first node, it is recommended that you attempt to sign the certificate on the puppet master before adding your other agents. Once you have verified that everything works properly, then you can go back and add the remaining agent nodes further. +如果一切顺利的话,我们不会看到命令行有任何输出。 第一次运行的时候,代理节点会生成一个ssl证书并且给服务端发送一个请求,经过签名确认后,两台机器就可以互相通信了。 **提示**: 如果这是你添加的第一个代理节点,建议你在添加其他节点前先给这个证书签名。一旦能够通过并正常运行,回过头来再添加其他代理节点。 -### 11. 主服务器上的签名证书请求 ### +### 11. 
服务器上的签名证书请求 ### -第一次运行的时候,代理节点会生成一个ssl证书并且发送一个请求给服务端.在主服务器给代理节点服务器的证书签名之后,主服务器才能和代理服务器通信并且控制代理服务器。 +第一次运行的时候,代理节点会生成一个ssl证书并且给服务端发送一个请求。在主服务器给代理节点服务器证书签名之后,主服务器才能和代理服务器通信并且控制代理服务器。 -在主服务器上使用下面的命令来列出现有的证书请求 +在主服务器上使用下面的命令来列出当前的证书请求 # puppet cert list "puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2 - 因为只设置了一台代理节点服务器,所以我们将看到只有一个请求。它看起来像是代理节点服务器的域名和主机名。 + 因为只设置了一台代理节点服务器,所以我们将只看到一个请求。 注意有没有“+”号在前面,代表这个证书有没有被签名。 使用**puppet cert sign**到**hostname**这个命令来签署这个签名请求。 @@ -299,15 +289,15 @@ Puppet客户端默认是不启动的。这里我们需要使用文本编辑器 Notice: Signed certificate request for puppetnode.example.com Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem' -服务端只能和它所签名的代理节点通信并控制代理节点。 +服务端只能控制它签名过的代理节点。 -如想我们想签署所有的请求, 我们需要使用-all选项,如下所示。 +如果我们想签署所有的请求, 需要使用-all选项,如下所示。 # puppet cert sign --all ### 删除一个Puppet证书 ### -如果我们想移除一个主机,或者想重建一个主机然后再添加它. 下面的例子我们将展示如何删除puppet master上面的一个证书. 使用的命令如下: +如果我们想移除一个主机,或者想重建一个主机然后再添加它。下面的例子里我们将展示如何删除puppet master上面的一个证书。使用的命令如下: # puppet cert clean hostname - Notice: Revoked certificate with serial 5 Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem' Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/certs/puppetnode.example.com.pem' -如果我们想查看目前所有的签署和未签署的请求,使用下面这条命令 +如果我们想查看所有的签署和未签署的请求,使用下面这条命令 # puppet cert list --all - + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com") -### 12. 部署一个 Puppet清单 ### +### 12. 部署代理节点Puppet清单 ### -After we configure and complete the puppet manifest, we'll wanna deploy the manifest to the agent nodes server. To apply and load the main manifest we can simply run the following command in the agent node. 
-当我们配置并完成主puppet清单,我们现在署代理节点服务器清单。要应用并加载主清单,我们可以在代理节点服务器上面使用下面的命令 +当配置并完成主puppet清单后,现在我们需要部署代理节点服务器清单。要应用并加载主puppet清单,我们可以在代理节点服务器上面使用下面的命令 # puppet agent --test @@ -335,13 +324,13 @@ After we configure and complete the puppet manifest, we'll wanna deploy the mani Notice: Finished catalog run in 10.53 seconds 这里像我们展示了主清单如何去管理一个单一的服务器。 -If we wanna run a puppet manifest that is not related to the main manifest, we can simply use puppet apply followed by the manifest file path. It only applies the manifest to the node that we run the apply from. -如果我们打算运行的puppet清单与主清单没有什么关联,那么需要使用puppet apply 到相应的路径。它仅适用于该代理节点。 + +如果我们打算运行的puppet清单与主puppet清单没有什么关联,那么需要使用puppet apply 加上该清单文件的路径来运行。它仅适用于该代理节点。 # puppet apply /etc/puppet/manifest/test.pp -### 13. 配置一个特殊节点清单 ### +### 13. 配置特殊节点清单 ### -如果我们想部署一个清单到某个特定的节点,我们需要配置清单如下。 +如果我们想部署一个清单到某个特定的节点,我们需要进行以下操作。 在主服务器上面使用文本编辑器编辑/etc/puppet/manifest/site.pp # nano /etc/puppet/manifest/site.pp @@ -366,12 +355,12 @@ If we wanna run a puppet manifest that is not related to the main manifest, we c } } -这里的配置显示我们将在名为puppetnode and puppetnode1的2个特殊节点上面安装apache服务. 这里可以添加更多我们需要安装部署的具体节点进去。 +这里的配置显示我们将在名为puppetnode 和 puppetnode1的2个特殊节点上面安装apache服务。这里可以添加其他我们需要安装部署的具体节点进去。 ### 14. 
配置清单模块 ### -模块化组件组是非常实用的,在Puppet社区有很多人提交自己的模块。 +模块化组件组是非常实用的,在Puppet社区有很多人贡献自己的模块组件。 -在主puppet服务器上, 我们将使用uppet module命令来安装**puppetlabs-apache** 模块。 +在主puppet服务器上, 我们将使用puppet module命令来安装**puppetlabs-apache** 模块。 # puppet module install puppetlabs-apache @@ -395,7 +384,7 @@ If we wanna run a puppet manifest that is not related to the main manifest, we c ### 总结 ### -现在我们已经成功的在ubuntu 15.04上面部署并运行puppet来管理代理节点服务器的基础运行环境.我们学习了,puppet是如何工作的,配置清单文件,节点与主机间的ssl证书认证。使用puppet控制,管理并且配置众多的代理节点服务器是非常容易的。如果你有任何的问题,建议,反馈,请务必与我们取得联系我们将及时的改善更新,谢谢。 +现在我们已经成功的在ubuntu 15.04上面部署并运行puppet来管理代理节点服务器的基础运行环境。我们学习了puppet是如何工作的,编写清单文件,节点与主机间使用ssl证书认证的认证过程。使用puppet管理配置众多的代理节点服务器是非常容易的。如果你有任何的问题,建议,反馈,与我们取得联系,我们将第一时间完善更新,谢谢。 -------------------------------------------------------------------------------- From 7e6e201929e878809237fb28db4f8e7df529fd34 Mon Sep 17 00:00:00 2001 From: ivo wang Date: Fri, 4 Dec 2015 11:45:02 +0800 Subject: [PATCH 102/160] =?UTF-8?q?Rename=2020150806=20Installation=20Guid?= =?UTF-8?q?e=20for=20Puppet=20on=20Ubuntu=2015.04.md=20to=2020150806=20Ins?= =?UTF-8?q?tallation=20Guide=20for=20Puppet=20on=20Ubuntu=2015.04.md?= =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=AF=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0806 Installation Guide for Puppet on Ubuntu 15.04.md翻译完毕} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/tech/{20150806 Installation Guide for Puppet on Ubuntu 15.04.md => 20150806 Installation Guide for Puppet on Ubuntu 15.04.md翻译完毕} (100%) diff --git a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md翻译完毕 similarity index 100% rename from sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md rename to sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md翻译完毕 From 98864bbc306ea66c39e5877022b6f4f0371d959b Mon Sep 17 00:00:00 2001 From: ivo wang Date: Fri, 4 Dec 2015 
11:45:43 +0800 Subject: [PATCH 103/160] Rename 20150806 Installation Guide for Puppet on Ubuntu 15.04 to 20150806 Installation Guide for Puppet on Ubuntu 15.04.md --- ... => 20150806 Installation Guide for Puppet on Ubuntu 15.04.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename translated/tech/{20150806 Installation Guide for Puppet on Ubuntu 15.04 => 20150806 Installation Guide for Puppet on Ubuntu 15.04.md} (100%) diff --git a/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04 b/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md similarity index 100% rename from translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04 rename to translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md From 9dddb652df763de9f174f0c15b79188399841adb Mon Sep 17 00:00:00 2001 From: ivo wang Date: Fri, 4 Dec 2015 11:48:57 +0800 Subject: [PATCH 104/160] Update 20150806 Installation Guide for Puppet on Ubuntu 15.04.md --- .../20150806 Installation Guide for Puppet on Ubuntu 15.04.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md index 290698a62f..4b40960c1c 100644 --- a/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md +++ b/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md @@ -391,7 +391,7 @@ Puppet客户端默认是不启动的。这里我们需要使用文本编辑器 via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/ 作者:[Arun Pyasi][a] -译者:[译者ID](https://github.com/ivo-wang) +译者:[ivo-wang](https://github.com/ivo-wang) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 2df47a9d28ed1f130287c489b9fc2e3b4a0db688 Mon Sep 17 00:00:00 2001 From: ivo wang Date: Fri, 4 Dec 2015 11:50:16 +0800 Subject: [PATCH 105/160] Update 20150806 Installation 
Guide for Puppet on Ubuntu 15.04.md --- .../20150806 Installation Guide for Puppet on Ubuntu 15.04.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md index 4b40960c1c..8c96091bf0 100644 --- a/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md +++ b/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md @@ -32,6 +32,8 @@ puppet代理节点所使用系统时间必须要准确,这样可以避免代 ### 3. 安装服务器软件 ### 安装开源版本的puppet有很多的方法。在本教程中我们在puppet实验室官网下载一个名为puppetlabs-release的软件包,安装后它将为我们在软件源里面添加puppetmaster-passenger。puppetmaster-passenger依赖于apache的puppet服务端。我们开始下载这个软件包 + + # cd /tmp/ # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb From f8fbc01efa7115e507606303e35524081016f088 Mon Sep 17 00:00:00 2001 From: ivo wang Date: Fri, 4 Dec 2015 11:53:30 +0800 Subject: [PATCH 106/160] Update 20150806 Installation Guide for Puppet on Ubuntu 15.04.md --- ...lation Guide for Puppet on Ubuntu 15.04.md | 31 ++++++++++++++++--- 1 file changed, 27 insertions(+), 4 deletions(-) diff --git a/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md index 8c96091bf0..93a9d05f80 100644 --- a/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md +++ b/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md @@ -11,6 +11,8 @@ puupet服务器IP:44.55.88.6,主机名: puppetmaster puppet代理节点IP: 45.55.86.39 ,主机名: puppetnode 我们要在代理节点和服务器这两天机器的hosts里面都添加上相应的条目,使用root或是sudo访问权限来编辑/etc/hosts文件,命令如下: + + # nano /etc/hosts 45.55.88.6 puppetmaster.example.com puppetmaster @@ -21,6 +23,8 @@ puppet代理节点IP: 45.55.86.39 ,主机名: puppetnode ### 2. 
用NTP更新时间 ### puppet代理节点所使用系统时间必须要准确,这样可以避免代理证书出现问题。如果有时间差异,那么证书将过期失效,所以服务器与代理节点的系统时间必须互相同步。我们使用NTP(Network Time Protocol,网络时间协议)来同步时间。在服务器与代理节点上面分别运行以下命令来同步时间。 + + # ntpdate pool.ntp.org 17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec (译者注:显示类似的输出结果表示运行正常) @@ -69,6 +73,8 @@ puppet代理节点所使用系统时间必须要准确,这样可以避免代 **提示**: 在安装的时候可能会报错**Warning: Setting templatedir is deprecated.请查看 http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')** 不过不用担心,忽略掉它就好,我们只需要在设置配置文件的时候把这一项disable就行了。 如何来查看puppet master是否已经安装成功了呢?非常简单,只需要使用下面的命令查看它的版本就可以了。 + + # puppet --version 3.8.1 @@ -85,6 +91,8 @@ puppet代理节点所使用系统时间必须要准确,这样可以避免代 # nano /etc/apt/preferences.d/00-puppet.pref 在新创建的文件里面添加以下内容 + + # /etc/apt/preferences.d/00-puppet.pref Package: puppet puppet-common puppetmaster-passenger Pin: version 3.8* @@ -94,6 +102,8 @@ puppet代理节点所使用系统时间必须要准确,这样可以避免代 ### 5. 配置 Puppet Master ### Puppet master作为一个证书发行机构,所有代理证书的请求都将由它来处理。首先我们要删除所有在软件包安装过程中创建出来的ssl证书。本地默认的puppet证书在/var/lib/puppet/ssl。因此我们只需要使用rm命令来移除这些证书就可以了。 + + # rm -rf /var/lib/puppet/ssl 现在来配置这些证书,在创建puppet master证书的时候,需要使用DNS能查找到的代理节点名称。使用文本编辑器来修改服务器的配置文件puppet.conf @@ -117,6 +127,8 @@ Puppet master作为一个证书发行机构,所有代理证书的请求都将 在这里我们需要注释掉templatedir 这行使它失效。然后在文件的结尾添加下面的信息。 + + server = puppetmaster environment = production runinterval = 1h @@ -129,6 +141,8 @@ Puppet master作为一个证书发行机构,所有代理证书的请求都将 编辑完成后保存退出。 使用下面的命令来生成一个新的证书。 + + # puppet master --verbose --no-daemonize Info: Creating a new SSL key for ca @@ -189,6 +203,8 @@ Puppet master作为一个证书发行机构,所有代理证书的请求都将 我们已经准备好了puppet的服务端,现在来为代理节点安装客户端。这里我们要给每一个需要管理的节点安装客户端,并且确保这些节点能够通过DNS查询到服务器主机。下面将 puppetnode.example.com作为代理节点安装客户端 在代理节点服务器上,使用下面的命令下载puppet实验室提供的软件包。 + + # cd /tmp/ # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb --2015-06-17 00:54:42-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 7384 (7.2K) [application/x-debian-package] Saving to: ‘puppetlabs-release-trusty.deb’ puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.04s 2015-06-17 00:54:42 (162 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384] 
在ubuntu 15.04上我们使用debian包管理系统来安装它,命令如下: + + # dpkg -i puppetlabs-release-trusty.deb 使用apt包管理命令更新一下本地的软件源 + # apt-get update 通过远程仓库安装 @@ -224,6 +243,7 @@ Puppet客户端默认是不启动的。这里我们需要使用文本编辑器 ### 9. 使用APT工具锁定Agent(客户端)版本 ### 和上面的步骤一样为防止随意升级造成的配置文件混乱,我们要使用apt工具来把它锁定。具体做法是使用文本编辑器创建一个文件 **/etc/apt/preferences.d/00-puppet.pref** + # nano /etc/apt/preferences.d/00-puppet.pref 在新建的文件里面加入如下内容 @@ -245,6 +265,7 @@ Puppet客户端默认是不启动的。这里我们需要使用文本编辑器 假定服务端可用,我们的客户端应该是可以和它相互连接通信的。如果不行我们需要使用完整的主机域名puppetmaster.example.com + [agent] server = puppetmaster.example.com certname = puppetnode.example.com @@ -278,16 +299,16 @@ Puppet客户端默认是不启动的。这里我们需要使用文本编辑器 第一次运行的时候,代理节点会生成一个ssl证书并且给服务端发送一个请求。在主服务器给代理节点服务器证书签名之后,主服务器才能和代理服务器通信并且控制代理服务器。 在主服务器上使用下面的命令来列出当前的证书请求 - # puppet cert list + # puppet cert list "puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2 因为只设置了一台代理节点服务器,所以我们将只看到一个请求。 注意有没有“+”号在前面,代表这个证书有没有被签名。 使用**puppet cert sign**加上**hostname**这个命令来签署这个签名请求。 - # puppet cert sign puppetnode.example.com + # puppet cert sign puppetnode.example.com Notice: Signed certificate request for puppetnode.example.com Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem' @@ -302,16 +323,16 @@ Puppet客户端默认是不启动的。这里我们需要使用文本编辑器 如果我们想移除一个主机,或者想重建一个主机然后再添加它,下面的例子里我们将展示如何删除puppet master上面的一个证书。使用的命令如下: # puppet cert clean hostname - Notice: Revoked certificate with serial 5 Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem' Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/certs/puppetnode.example.com.pem' 如果我们想查看所有的签署和未签署的请求,使用下面这条命令 + # puppet cert list --all - + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", 
"DNS:puppetmaster.example.com") + ### 12. 部署代理节点Puppet清单 ### 当配置并完成主puppet清单后,现在我们需要部署代理节点服务器清单。要应用并加载主puppet清单,我们可以在代理节点服务器上面使用下面的命令 @@ -328,6 +349,7 @@ Puppet客户端默认是不启动的。这里我们需要使用文本编辑器 这里向我们展示了主清单如何去管理一个单一的服务器。 如果我们打算运行的puppet清单与主puppet清单没有什么关联,那么需要使用puppet apply 到相应的路径。它仅适用于该代理节点。 + # puppet apply /etc/puppet/manifest/test.pp ### 13. 配置特殊节点清单 ### 如果我们想部署一个清单到某个特定的节点,我们需要进行以下操作。 在主服务器上面使用文本编辑器编辑/etc/puppet/manifest/site.pp + # nano /etc/puppet/manifest/site.pp 添加下面的内容进去 From 0cecba377b72802d1f04ac0251224d5a176f512b Mon Sep 17 00:00:00 2001 From: ivo wang Date: Fri, 4 Dec 2015 11:55:16 +0800 Subject: [PATCH 107/160] Update 20150806 Installation Guide for Puppet on Ubuntu 15.04.md --- ...50806 Installation Guide for Puppet on Ubuntu 15.04.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md index 93a9d05f80..7195dbe268 100644 --- a/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md +++ b/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md @@ -318,7 +318,7 @@ Puppet客户端默认是不启动的。这里我们需要使用文本编辑器 # puppet cert sign --all -### 删除一个Puppet证书 ### +### 12. 删除一个Puppet证书 ### 如果我们想移除一个主机,或者想重建一个主机然后再添加它,下面的例子里我们将展示如何删除puppet master上面的一个证书。使用的命令如下: @@ -333,7 +333,7 @@ Puppet客户端默认是不启动的。这里我们需要使用文本编辑器 + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com") -### 12. 部署代理节点Puppet清单 ### +### 13. 部署代理节点Puppet清单 ### 当配置并完成主puppet清单后,现在我们需要部署代理节点服务器清单 @@ -352,7 +352,7 @@ Puppet客户端默认是不启动的。这里我们需要使用文本编辑器 # puppet apply /etc/puppet/manifest/test.pp -### 13. 配置特殊节点清单 ### +### 14. 
配置特殊节点清单 ### 如果我们想部署一个清单到某个特定的节点,我们需要进行以下操作。 @@ -381,7 +381,7 @@ Puppet客户端默认是不启动的。这里我们需要使用文本编辑器 } 这里的配置显示我们将在名为puppetnode 和 puppetnode1的2个特殊节点上面安装apache服务。这里可以添加其他我们需要安装部署的具体节点进去。 -### 14. 配置清单模块 ### +### 15. 配置清单模块 ### 模块化组件组是非常实用的,在Puppet社区有很多人贡献自己的模块组件。 From 031303b37b641ecc5a3223415cb89cf735c4187b Mon Sep 17 00:00:00 2001 From: ivo wang Date: Fri, 4 Dec 2015 12:24:37 +0800 Subject: [PATCH 108/160] Update 20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md --- ...stall OsTicket Ticketing System in Fedora 22 or Centos 7.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md b/sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md index 515b15844a..7a56750804 100644 --- a/sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md +++ b/sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md @@ -1,3 +1,4 @@ +translated by iov-wang How to Install OsTicket Ticketing System in Fedora 22 / Centos 7 ================================================================================ In this article, we'll learn how to set up a help desk ticketing system with osTicket in our machine or server running Fedora 22 or CentOS 7 as operating system. osTicket is a popular free and open source customer support ticketing system developed and maintained by [Enhancesoft][1] and its contributors. osTicket is the best solution for a help and support ticketing system, enabling better communication and support assistance with clients and customers. It can easily integrate inquiries created via email, phone and web-based forms into a beautiful multi-user web interface. osTicket makes it easy for us to manage, organize and log all our support requests and responses in one single place. 
It is a simple, lightweight, reliable, open source, web-based and easy to setup and use help desk ticketing system. @@ -176,4 +177,4 @@ via: http://linoxide.com/linux-how-to/install-osticket-fedora-22-centos-7/ [a]:http://linoxide.com/author/arunp/ [1]:http://www.enhancesoft.com/ [2]:http://osticket.com/download -[3]:https://github.com/osTicket/osTicket-1.8/releases \ No newline at end of file +[3]:https://github.com/osTicket/osTicket-1.8/releases From 35f8a39b1337d7c9b3f65d896b1fd69633c287fe Mon Sep 17 00:00:00 2001 From: VicYu Date: Fri, 4 Dec 2015 15:02:19 +0800 Subject: [PATCH 109/160] =?UTF-8?q?=E5=88=A0=E9=99=A4IvoWang=E7=9A=84?= =?UTF-8?q?=E5=8E=9F=E6=96=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...uide for Puppet on Ubuntu 15.04.md翻译完毕 | 430 ------------------ 1 file changed, 430 deletions(-) delete mode 100644 sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md翻译完毕 diff --git a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md翻译完毕 b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md翻译完毕 deleted file mode 100644 index 59a243f0e5..0000000000 --- a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md翻译完毕 +++ /dev/null @@ -1,430 +0,0 @@ -Translating by ivowang -Installation Guide for Puppet on Ubuntu 15.04 -================================================================================ -Hi everyone, today in this article we'll learn how to install puppet to manage your server infrastructure running ubuntu 15.04. Puppet is an open source software configuration management tool which is developed and maintained by Puppet Labs that allows us to automate the provisioning, configuration and management of a server infrastructure. 
Whether we're managing just a few servers or thousands of physical and virtual machines for orchestration and reporting, puppet automates tasks that system administrators often do manually, which frees up time and mental space so sysadmins can work on improving other aspects of your overall setup. It ensures consistency, reliability and stability of the automated jobs processed. It facilitates closer collaboration between sysadmins and developers, enabling more efficient delivery of cleaner, better-designed code. Puppet is available in two solutions for configuration management and data center automation. They are **puppet open source and puppet enterprise**. Puppet open source is a flexible, customizable solution available under the Apache 2.0 license, designed to help system administrators automate the many repetitive tasks they regularly perform. Whereas the puppet enterprise edition is a proven commercial solution for diverse enterprise IT environments which lets us get all the benefits of open source puppet, plus puppet apps, commercial-only enhancements, supported modules and integrations, and the assurance of a fully supported platform. Puppet uses SSL certificates to authenticate communication between master and agent nodes. - -In this tutorial, we will cover how to install open source puppet in an agent and master setup running the ubuntu 15.04 linux distribution. Here, the Puppet master is a server from where all the configurations will be controlled and managed, and all our remaining servers will be puppet agent nodes, which are configured according to the configuration of the puppet master server. Here are some easy steps to install and configure puppet to manage our server infrastructure running Ubuntu 15.04. - -### 1. Setting up Hosts ### - -In this tutorial, we'll use two machines, one as puppet master server and another as puppet node agent, both running ubuntu 15.04 "Vivid Vervet". 
Here is the infrastructure of the server that we're gonna use for this tutorial. - -puppet master server with IP 44.55.88.6 and hostname : puppetmaster -puppet node agent with IP 45.55.86.39 and hostname : puppetnode - -Now we'll add the entry of the machines to /etc/hosts on both machines, node agent and master server. - - # nano /etc/hosts - - 45.55.88.6 puppetmaster.example.com puppetmaster - 45.55.86.39 puppetnode.example.com puppetnode - -Please note that the Puppet Master server must be reachable on port 8140. So, we'll need to open port 8140 in it. - -### 2. Updating Time with NTP ### - -Puppet nodes need to maintain accurate system time to avoid problems when the master issues agent certificates. Certificates can appear to be expired if there is a time difference, so the time of both the master and the node agent must be synced with each other. To sync the time, we'll update the time with NTP. To do so, here's the command below that we need to run on both master and node agent. - - # ntpdate pool.ntp.org - - 17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec - -Now, we'll update our local repository index and install ntp as follows. - - # apt-get update && sudo apt-get -y install ntp ; service ntp restart - -### 3. Puppet Master Package Installation ### - -There are many ways to install open source puppet. In this tutorial, we'll download and install a debian binary package named **puppetlabs-release**, packaged by Puppet Labs, which will add the source of the **puppetmaster-passenger** package. The puppetmaster-passenger includes the puppet master with apache web server. So, we'll now download the Puppet Labs package. - - # cd /tmp/ - # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb - - --2015-06-17 00:19:26-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb - Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 
192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d - Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected. - HTTP request sent, awaiting response... 200 OK - Length: 7384 (7.2K) [application/x-debian-package] - Saving to: ‘puppetlabs-release-trusty.deb’ - - puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.06s - - 2015-06-17 00:19:26 (130 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384] - -After the download has been completed, we'll wanna install the package. - - # dpkg -i puppetlabs-release-trusty.deb - - Selecting previously unselected package puppetlabs-release. - (Reading database ... 85899 files and directories currently installed.) - Preparing to unpack puppetlabs-release-trusty.deb ... - Unpacking puppetlabs-release (1.0-11) ... - Setting up puppetlabs-release (1.0-11) ... - -Then, we'll update the local repository index with the server using apt package manager. - - # apt-get update - -Then, we'll install the puppetmaster-passenger package by running the below command. - - # apt-get install puppetmaster-passenger - -**Note**: While installing we may get an error **Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')** but we need not worry, we'll just simply ignore this as it says that the templatedir is deprecated, so we'll simply disable that setting in the configuration. :) - -To check whether puppetmaster has been installed successfully in our Master server or not, we'll gonna try to check its version. - - # puppet --version - - 3.8.1 - -We have successfully installed the puppet master package in our puppet master box. As we are using passenger with apache, the puppet master process is controlled by the apache server, which means it runs when apache is running. - -Before continuing, we'll need to stop the Puppet master by stopping the apache2 service. 
- - # systemctl stop apache2 - -### 4. Master version lock with Apt ### - -As we have puppet version 3.8.1, we need to lock the puppet version against updates, as updating puppet would mess up the configurations. So, we'll use apt's locking feature for that. To do so, we'll need to create a new file **/etc/apt/preferences.d/00-puppet.pref** using our favorite text editor. - - # nano /etc/apt/preferences.d/00-puppet.pref - -Then, we'll gonna add the entries in the newly created file as: - - # /etc/apt/preferences.d/00-puppet.pref - Package: puppet puppet-common puppetmaster-passenger - Pin: version 3.8* - Pin-Priority: 501 - -Now, it will not update the puppet while running updates in the system. - -### 5. Configuring Puppet Config ### - -Puppet master acts as a certificate authority and must generate its own certificates, which are used to sign agent certificate requests. First of all, we'll need to remove any existing SSL certificates that were created during the installation of the package. The default location of puppet's SSL certificates is /var/lib/puppet/ssl. So, we'll remove the entire ssl directory using the rm command. - - # rm -rf /var/lib/puppet/ssl - -Then, we'll configure the certificate. While creating the puppet master's certificate, we need to include every DNS name at which agent nodes can contact the master. So, we'll edit the master's puppet.conf using our favorite text editor. - - # nano /etc/puppet/puppet.conf - -The output seems as shown below. - - [main] - logdir=/var/log/puppet - vardir=/var/lib/puppet - ssldir=/var/lib/puppet/ssl - rundir=/var/run/puppet - factpath=$vardir/lib/facter - templatedir=$confdir/templates - - [master] - # These are needed when the puppetmaster is run by passenger - # and can safely be removed if webrick is used. - ssl_client_header = SSL_CLIENT_S_DN - ssl_client_verify_header = SSL_CLIENT_VERIFY - -Here, we'll need to comment the templatedir line to disable the setting as it has already been deprecated. 
After that, we'll add the following lines at the end of the file under [main]. - - server = puppetmaster - environment = production - runinterval = 1h - strict_variables = true - certname = puppetmaster - dns_alt_names = puppetmaster, puppetmaster.example.com - -This configuration file has many options which might be useful in order to set up our own configuration. A full description of the file is available at Puppet Labs [Main Config File (puppet.conf)][1]. - -After editing the file, we'll wanna save that and exit. - -Now, we'll gonna generate a new CA certificate by running the following command. - - # puppet master --verbose --no-daemonize - - Info: Creating a new SSL key for ca - Info: Creating a new SSL certificate request for ca - Info: Certificate Request fingerprint (SHA256): F6:2F:69:89:BA:A5:5E:FF:7F:94:15:6B:A7:C4:20:CE:23:C7:E3:C9:63:53:E0:F2:76:D7:2E:E0:BF:BD:A6:78 - ... - Notice: puppetmaster has a waiting certificate request - Notice: Signed certificate request for puppetmaster - Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/ca/requests/puppetmaster.pem' - Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/certificate_requests/puppetmaster.pem' - Notice: Starting Puppet master version 3.8.1 - ^CNotice: Caught INT; storing stop - Notice: Processing stop - -Now, the certificate is being generated. Once we see **Notice: Starting Puppet master version 3.8.1**, the certificate setup is complete. Then we'll press CTRL-C to return to the shell. - -If we wanna look at the cert information of the certificate that was just created, we can get the list by running the following command. - - # puppet cert list --all - - + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com") - -### 6. 
Creating a Puppet Manifest ### - -The default location of the main manifest is /etc/puppet/manifests/site.pp. The main manifest file contains the definition of configuration that is used to execute in the puppet node agent. Now, we'll create the manifest file by running the following command. - - # nano /etc/puppet/manifests/site.pp - -Then, we'll add the following lines of configuration in the file that we just opened. - - # execute 'apt-get update' - exec { 'apt-update': # exec resource named 'apt-update' - command => '/usr/bin/apt-get update' # command this resource will run - } - - # install apache2 package - package { 'apache2': - require => Exec['apt-update'], # require 'apt-update' before installing - ensure => installed, - } - - # ensure apache2 service is running - service { 'apache2': - ensure => running, - } - -The above lines of configuration are responsible for the deployment of the installation of apache web server across the node agent. - -### 7. Starting Master Service ### - -We are now ready to start the puppet master. We can start it by running the apache2 service. - - # systemctl start apache2 - -Here, our puppet master is running, but it isn't managing any agent nodes yet. Now, we'll gonna add the puppet node agents to the master. - -**Note**: If you get an error **Job for apache2.service failed. See "systemctl status apache2.service" and "journalctl -xe" for details.** then it must be that there is some problem with the apache server. So, we can see the log what exactly has happened by running **apachectl start** under root or sudo mode. Here, while performing this tutorial, we got a misconfiguration of the certificates under **/etc/apache2/sites-enabled/puppetmaster.conf** file. We replaced **SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem with SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem** and commented **SSLCertificateKeyFile** line. Then we'll need to rerun the above command to run apache server. - -### 8. 
Puppet Agent Package Installation ### - -Now, as we have our puppet master ready and it needs an agent to manage, we'll need to install the puppet agent into the nodes. We'll need to install the puppet agent on every node in our infrastructure that we want the puppet master to manage. We'll need to make sure that we have added our node agents in the DNS. Now, we'll gonna install the latest puppet agent in our agent node ie. puppetnode.example.com . - -We'll run the following command to download the Puppet Labs package in our puppet agent nodes. - - # cd /tmp/ - # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb - - --2015-06-17 00:54:42-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb - Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d - Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected. - HTTP request sent, awaiting response... 200 OK - Length: 7384 (7.2K) [application/x-debian-package] - Saving to: ‘puppetlabs-release-trusty.deb’ - - puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.04s - - 2015-06-17 00:54:42 (162 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384] - -Then, as we're running ubuntu 15.04, we'll use the debian package manager to install it. - - # dpkg -i puppetlabs-release-trusty.deb - -Now, we'll gonna update the repository index using apt-get. - - # apt-get update - -Finally, we'll gonna install the puppet agent directly from the remote repository. - - # apt-get install puppet - -The puppet agent is disabled by default, so we'll need to enable it. To do so we'll need to edit the /etc/default/puppet file using a text editor. - - # nano /etc/default/puppet - -Then, we'll need to change the value of **START** to "yes" as shown below. - - START=yes - -Then, we'll need to save and exit the file. - -### 9. 
Agent Version Lock with Apt ### - -As we have puppet version 3.8.1, we need to lock the puppet version against updates, as updating puppet would mess up the configurations. So, we'll use apt's locking feature for that. To do so, we'll need to create a file /etc/apt/preferences.d/00-puppet.pref using our favorite text editor. - - # nano /etc/apt/preferences.d/00-puppet.pref - -Then, we'll gonna add the entries in the newly created file as: - - # /etc/apt/preferences.d/00-puppet.pref - Package: puppet puppet-common - Pin: version 3.8* - Pin-Priority: 501 - -Now, it will not update the Puppet while running updates in the system. - -### 10. Configuring Puppet Node Agent ### - -Next, we must make a few configuration changes before running the agent. To do so, we'll need to edit the agent's puppet.conf - - # nano /etc/puppet/puppet.conf - -It will look exactly like the Puppet master's initial configuration file. - -This time also we'll comment the **templatedir** line. Then we'll gonna delete the [master] section, and all of the lines below it. - -Assuming that the puppet master is reachable at "puppetmaster", the agent should be able to connect to the master. If not, we'll need to use its fully qualified domain name ie. puppetmaster.example.com . - - [agent] - server = puppetmaster.example.com - certname = puppetnode.example.com - -After adding this, it will look like this. - - [main] - logdir=/var/log/puppet - vardir=/var/lib/puppet - ssldir=/var/lib/puppet/ssl - rundir=/var/run/puppet - factpath=$vardir/lib/facter - #templatedir=$confdir/templates - - [agent] - server = puppetmaster.example.com - certname = puppetnode.example.com - -After we're done with that, we'll gonna save and exit it. - -Next, we'll wanna start our latest puppet agent in our Ubuntu 15.04 nodes. To start our puppet agent, we'll need to run the following command. 
- - # systemctl start puppet - -If everything went as expected and configured properly, we should not see any output displayed by running the above command. When we run an agent for the first time, it generates an SSL certificate and sends a request to the puppet master then if the master signs the agent's certificate, it will be able to communicate with the agent node. - -**Note**: If you are adding your first node, it is recommended that you attempt to sign the certificate on the puppet master before adding your other agents. Once you have verified that everything works properly, then you can go back and add the remaining agent nodes further. - -### 11. Signing certificate Requests on Master ### - -While puppet agent runs for the first time, it generates an SSL certificate and sends a request for signing to the master server. Before the master will be able to communicate and control the agent node, it must sign that specific agent node's certificate. - -To get the list of the certificate requests, we'll run the following command in the puppet master server. - - # puppet cert list - - "puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2 - -As we just setup our first agent node, we will see one request. It will look something like the following, with the agent node's Domain name as the hostname. - -Note that there is no + in front of it which indicates that it has not been signed yet. - -Now, we'll go for signing a certification request. In order to sign a certification request, we should simply run **puppet cert sign** with the **hostname** as shown below. 
- - # puppet cert sign puppetnode.example.com - - Notice: Signed certificate request for puppetnode.example.com - Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem' - -The Puppet master can now communicate and control the node that the signed certificate belongs to. - -If we want to sign all of the current requests, we can use the -all option as shown below. - - # puppet cert sign --all - -### Removing a Puppet Certificate ### - -If we wanna remove a host from it or wanna rebuild a host then add it back to it. In this case, we will want to revoke the host's certificate from the puppet master. To do this, we will want to use the clean action as follows. - - # puppet cert clean hostname - - Notice: Revoked certificate with serial 5 - Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem' - Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/certs/puppetnode.example.com.pem' - -If we want to view all of the requests signed and unsigned, run the following command: - - # puppet cert list --all - - + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com") - -### 12. Deploying a Puppet Manifest ### - -After we configure and complete the puppet manifest, we'll wanna deploy the manifest to the agent nodes server. To apply and load the main manifest we can simply run the following command in the agent node. 
- - # puppet agent --test - - Info: Retrieving pluginfacts - Info: Retrieving plugin - Info: Caching catalog for puppetnode.example.com - Info: Applying configuration version '1434563858' - Notice: /Stage[main]/Main/Exec[apt-update]/returns: executed successfully - Notice: Finished catalog run in 10.53 seconds - -This will show us how the main manifest immediately affects a single server. - -If we wanna run a puppet manifest that is not related to the main manifest, we can simply use puppet apply followed by the manifest file path. It only applies the manifest to the node that we run the apply from. - - # puppet apply /etc/puppet/manifest/test.pp - -### 13. Configuring Manifest for a Specific Node ### - -If we wanna deploy a manifest only to a specific node then we'll need to configure the manifest as follows. - -We'll need to edit the manifest on the master server using a text editor. - - # nano /etc/puppet/manifest/site.pp - -Now, we'll gonna add the following lines there. - - node 'puppetnode', 'puppetnode1' { - # execute 'apt-get update' - exec { 'apt-update': # exec resource named 'apt-update' - command => '/usr/bin/apt-get update' # command this resource will run - } - - # install apache2 package - package { 'apache2': - require => Exec['apt-update'], # require 'apt-update' before installing - ensure => installed, - } - - # ensure apache2 service is running - service { 'apache2': - ensure => running, - } - } - -Here, the above configuration will install and deploy the apache web server only to the two specified nodes having the shortnames puppetnode and puppetnode1. We can add more nodes that we need to get deployed with the manifest specifically. - -### 14. Configuring Manifest with a Module ### - -Modules are useful for grouping tasks together; there are many available in the Puppet community to which anyone can contribute further. - -On the puppet master, we'll gonna install the **puppetlabs-apache** module using the puppet module command. 
- - # puppet module install puppetlabs-apache - -**Warning**: Please do not use this module on an existing apache setup else it will purge your apache configurations that are not managed by puppet. - -Now we'll gonna edit the main manifest ie **site.pp** using a text editor. - - # nano /etc/puppet/manifest/site.pp - -Now add the following lines to install apache under puppetnode. - - node 'puppet-node' { - class { 'apache': } # use apache module - apache::vhost { 'example.com': # define vhost resource - port => '80', - docroot => '/var/www/html' - } - } - -Then we'll wanna save and exit it. Then, we'll wanna rerun the manifest to deploy the configuration to the agents for our infrastructure. - -### Conclusion ### - -Finally we have successfully installed puppet to manage our Server Infrastructure running Ubuntu 15.04 "Vivid Vervet" linux operating system. We learned how puppet works, configure a manifest configuration, communicate with nodes and deploy the manifest on the agent nodes with secure SSL certification. Controlling, managing and configuring repeated task in several N number of nodes is very easy with puppet open source software configuration management tool. If you have any questions, suggestions, feedback please write them in the comment box below so that we can improve or update our contents. Thank you ! 
Enjoy :-) - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/ - -作者:[Arun Pyasi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ -[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html From 8e3192ca90ad0fe255fb774fd416e34bda9f5ec8 Mon Sep 17 00:00:00 2001 From: VicYu Date: Fri, 4 Dec 2015 15:17:58 +0800 Subject: [PATCH 110/160] =?UTF-8?q?=E6=9D=A5=EF=BC=8C=E6=95=B4=E8=B5=B7?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...inux ftp command to up- and download files on the shell.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md b/sources/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md index 54b69555c4..e2ddf0cb2a 100644 --- a/sources/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md +++ b/sources/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md @@ -1,3 +1,5 @@ +Vic020 + How to use the Linux ftp command to up- and download files on the shell ================================================================================ In this tutorial, I will explain how to use the Linux ftp command on the shell. I will show you how to connect to an FTP server, up- and download files and create directories. While there are many nice desktops FTP clients available, the FTP command is still useful when you work remotely on a server over an SSH session and e.g. want to fetch a backup file from your FTP storage. 
@@ -143,4 +145,4 @@ via: https://www.howtoforge.com/tutorial/how-to-use-ftp-on-the-linux-shell/ 译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 77faf28c55ccf5bb581bfbde2a7d2ac8c87618b3 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 4 Dec 2015 15:52:16 +0800 Subject: [PATCH 111/160] Create 20151204 Review EXT4 vs. Btrfs vs. XFS.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 20151204-1 选题 --- .../20151204 Review EXT4 vs. Btrfs vs. XFS.md | 65 +++++++++++++++++++ 1 file changed, 65 insertions(+) create mode 100644 sources/share/20151204 Review EXT4 vs. Btrfs vs. XFS.md diff --git a/sources/share/20151204 Review EXT4 vs. Btrfs vs. XFS.md b/sources/share/20151204 Review EXT4 vs. Btrfs vs. XFS.md new file mode 100644 index 0000000000..d370b87822 --- /dev/null +++ b/sources/share/20151204 Review EXT4 vs. Btrfs vs. XFS.md @@ -0,0 +1,65 @@ +Review EXT4 vs. Btrfs vs. XFS +================================================================================ +![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/09/1385698302_funny_linux_wallpapers-593x445.jpg) + +To be honest, one of the things that comes last in people’s thinking is to look at which file system on their PC is being used. Windows users as well as Mac OS X users even have less reason for looking as they have really only 1 choice for their operating system which are NTFS and HFS+. Linux operating system, on the other side, has plenty of various file system options, with the current default is being widely used ext4. However, there is another push for changing the file system to something other which is called btrfs. 
But what makes btrfs better, what other file systems are there, and when can we expect distributions to make the change? +

Let’s first have a general look at file systems and what they really do, then we will make a small comparison between the famous file systems.

### So, What Do File Systems Do? ###

Just in case you are unfamiliar with what file systems really do, it is actually simple when summarized. File systems are mainly used to control how data is stored after a program is no longer using it, how access to the data is controlled, what other information (metadata) is attached to the data itself, and so on. That does not sound like an easy thing to program, and it is definitely not. File systems are continually being revised to include more functionality while becoming more efficient at what they need to do. So, although a file system is a basic need for all computers, it is not quite as basic as it sounds.

### Why Partitioning? ###

Many people have only a vague knowledge of what partitions are, since every operating system has the ability to create or remove them. It can seem strange that Linux uses more than one partition on the same disk, even with the standard installation procedure, so a few words of explanation are called for. One of the main goals of having different partitions is achieving higher data security in case of disaster.

By dividing your hard disk into partitions, data can be grouped and separated. When an accident occurs, only the data stored in the partition that got hit will be damaged, while the data on the other partitions will most likely survive. These principles date from the days when the Linux operating system didn’t have a journaled file system and any power failure might have led to a disaster.
+

The use of partitions will remain for security and robustness reasons: a breach in one part of the operating system does not automatically mean that the whole computer is at risk. This is currently the most important factor in the partitioning process. For example, users create scripts, programs or web applications which start filling up the disk. If that disk contains only one big partition, the entire system may stop functioning if the disk gets full. If users store data on separate partitions, then only that data partition can be affected, while the system partitions and any other data partitions keep functioning.

Mind that having a journaled file system only provides data security in case of a power failure or a sudden disconnection of the storage devices. It does not protect the data against bad blocks or logical errors in the file system. In such cases, the user should use a Redundant Array of Inexpensive Disks (RAID) solution.

### Why Switch File Systems? ###

The ext4 file system is an improvement on the ext3 file system, which was itself an improvement over the ext2 file system. While ext4 is a very solid file system that has been the default choice for almost all distributions for the past few years, it is made from an aging code base. Additionally, Linux users are seeking many new features in file systems which ext4 does not handle on its own. There is software that takes care of some of these needs, but performance-wise, being able to do such things at the file system level could be faster.

### Ext4 File System ###

Ext4 has some limits that are still quite impressive. The maximum file size is 16 tebibytes (roughly 17.6 terabytes), which is much bigger than any hard drive a regular consumer can currently buy.
Meanwhile, the largest volume/partition you can make with ext4 is 1 exbibyte (roughly 1,152,921.5 terabytes). Ext4 is known to bring speed improvements over ext3 by using multiple techniques. Like most modern file systems, it is a journaling file system, which means that it keeps a journal of where the files are located on the disk and of any other changes that happen to the disk. Regardless of all its features, it doesn’t support transparent compression, data deduplication, or transparent encryption. Snapshots are technically supported, but the feature is experimental at best. +

### Btrfs File System ###

Btrfs is pronounced many different ways, for example, Better FS, Butter FS, or B-Tree FS. It is a file system made completely from scratch. Btrfs exists because its developers first wanted to expand the file system functionality to include snapshots, pooling, and checksums, among other things. While it is independent of ext4, it also wants to build on the ideas present in ext4 that are great for consumers and businesses alike, as well as incorporate the additional features that will benefit everybody, but especially the enterprises. For enterprises running very large programs with very large databases, having a seemingly continuous file system across multiple hard drives could be very beneficial, as it makes consolidation of the data much easier. Data deduplication could reduce the amount of actual space the data occupies, and data mirroring also becomes easier with btrfs when there is a single, broad file system that needs to be mirrored. +

The user can certainly still choose to create multiple partitions so that he does not need to mirror everything.
Considering that btrfs will be able to span multiple hard drives, it is a very good thing that it can support 16 times more drive space than ext4: the maximum partition size of a btrfs file system is 16 exbibytes, and the maximum file size is 16 exbibytes as well. +

### XFS File System ###

The XFS file system is an extension of the extent file system. XFS is a high-performance 64-bit journaling file system. XFS support was merged into the Linux kernel around 2002, and in 2009 Red Hat Enterprise Linux version 5.4 gained support for the XFS file system. XFS supports a maximum file system size of 8 exbibytes for the 64-bit file system. Two common criticisms of XFS are that an XFS file system cannot be shrunk and that it performs poorly when deleting large numbers of files. The RHEL 7.0 release now uses XFS as the default filesystem.

### Final Thoughts ###

Unfortunately, the arrival date for btrfs is not quite known. Officially, the next-generation file system is still classified as “unstable”, though if the user downloads the latest version of Ubuntu, he will be able to choose to install on a btrfs partition. When btrfs will actually be classified as “stable” is still a mystery, but users shouldn’t expect Ubuntu to use btrfs by default until it is indeed considered “stable”. It has been reported that Fedora 18 will use btrfs as its default file system, as by the time of its release a file system checker for btrfs should exist. There is a good amount of work still left for btrfs, as not all the features are implemented yet and the performance is a little sluggish compared to ext4. +

So, which is better to use? For now, ext4 is the winner despite the near-identical performance. But why? The answer is convenience as well as ubiquity. Ext4 is still an excellent file system for desktop or workstation use.
It is provided by default, so the user can install the operating system on it. Also, ext4 supports volumes up to 1 exbibyte and files up to 16 tebibytes in size, so there’s still plenty of room for growth where space is concerned. +

Btrfs might offer greater volumes of up to 16 exbibytes and improved fault tolerance, but, for now, it feels more like an add-on file system than one integrated into the Linux operating system. For example, the btrfs-tools have to be present before a drive can be formatted with btrfs, which means that btrfs is not an option during the Linux operating system installation, though that could vary with the distribution. +

Even though transfer rates are important, there’s more to a file system than just the speed of file transfers. Btrfs has many useful features such as Copy-on-Write (CoW), extensive checksums, snapshots, scrubbing, self-healing data, and deduplication, as well as many more good improvements that ensure data integrity. Btrfs lacks the RAID-Z features of ZFS, so RAID is still in an experimental state with btrfs. For pure data storage, however, btrfs is the winner over ext4, but only time will tell. +

For the moment, ext4 seems to be the better choice on desktop systems, since it is presented as the default file system and it is faster than btrfs when transferring files. Btrfs is definitely worth looking into, but a complete switch from ext4 on desktop Linux might still be a few years away. Data farms and large storage pools could reveal different stories and show the real differences between ext4, XFS, and btrfs. +

If you have a different or additional opinion, kindly let us know by commenting on this article.
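As a quick sanity check on the sizes quoted in this article, the tebibyte and exbibyte conversions can be reproduced with a short awk script (awk is assumed to be available; the figures are the ones given for ext4 earlier):

```shell
# Reproduce the unit conversions used in this article.
awk 'BEGIN {
    tib = 2^40                                    # bytes per tebibyte
    eib = 2^60                                    # bytes per exbibyte
    printf "ext4 max file size:  16 TiB = %.1f TB\n", 16 * tib / 1e12
    printf "ext4 max volume size: 1 EiB = %.1f TB\n", eib / 1e12
}'
# prints 17.6 TB and 1152921.5 TB, matching the figures above
```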
+ +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/review-ext4-vs-btrfs-vs-xfs/ + +作者:[M.el Khamlichi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/pirat9/ From b73766ccc0b51aca8f6e62ee24313f2099fc1190 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 4 Dec 2015 15:55:19 +0800 Subject: [PATCH 112/160] =?UTF-8?q?20151204-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 20151204-2 选题 --- ...P from Fail2ban on CentOS 6 or CentOS 7.md | 60 +++++++++++++++++++ 1 file changed, 60 insertions(+) create mode 100644 sources/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md diff --git a/sources/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md b/sources/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md new file mode 100644 index 0000000000..38d3d89735 --- /dev/null +++ b/sources/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md @@ -0,0 +1,60 @@ +How to Remove Banned IP from Fail2ban on CentOS 6 / CentOS 7 +================================================================================ +![](http://www.ehowstuff.com/wp-content/uploads/2015/12/security-265130_1280.jpg) + +[Fail2ban][1] is an intrusion prevention software framework that able to protect your server from brute-force attacks. Fail2ban written in the Python programming language and is widely used by most of the VPS servers. Fail2ban will scan log files and IP blacklists that shows signs of malicious, too many password failures, web server exploitation, WordPress plugin attacks and other vulnerabilities. 
If you already installed and used fail2ban to protect your web server, you may be wondering how to find the IP banned or blocked by Fail2ban, or you may want to remove banned ip from fail2ban jail on CentOS 6, CentOS 7, RHEL 6, RHEL 7 and Oracle Linux 6/7. + +### How to List of Banned IP address ### + +To see all the blocked ip addresses, run the following command : + + # iptables -L + Chain INPUT (policy ACCEPT) + target prot opt source destination + f2b-AccessForbidden tcp -- anywhere anywhere tcp dpt:http + f2b-WPLogin tcp -- anywhere anywhere tcp dpt:http + f2b-ConnLimit tcp -- anywhere anywhere tcp dpt:http + f2b-ReqLimit tcp -- anywhere anywhere tcp dpt:http + f2b-NoAuthFailures tcp -- anywhere anywhere tcp dpt:http + f2b-SSH tcp -- anywhere anywhere tcp dpt:ssh + f2b-php-url-open tcp -- anywhere anywhere tcp dpt:http + f2b-nginx-http-auth tcp -- anywhere anywhere multiport dports http,https + ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED + ACCEPT icmp -- anywhere anywhere + ACCEPT all -- anywhere anywhere + ACCEPT tcp -- anywhere anywhere tcp dpt:EtherNet/IP-1 + ACCEPT tcp -- anywhere anywhere tcp dpt:http + REJECT all -- anywhere anywhere reject-with icmp-host-prohibited + + Chain FORWARD (policy ACCEPT) + target prot opt source destination + REJECT all -- anywhere anywhere reject-with icmp-host-prohibited + + Chain OUTPUT (policy ACCEPT) + target prot opt source destination + + + Chain f2b-NoAuthFailures (1 references) + target prot opt source destination + REJECT all -- 64.68.50.128 anywhere reject-with icmp-port-unreachable + REJECT all -- 104.194.26.205 anywhere reject-with icmp-port-unreachable + RETURN all -- anywhere anywhere + +### How to Remove Banned IP from Fail2ban jail ### + + # iptables -D f2b-NoAuthFailures -s banned_ip -j REJECT + +I hope this article gives you some ideas and quick guide on remove banned IP from Fail2ban jail on on CentOS 6, CentOS 7, RHEL 6, RHEL 7 and Oracle Linux 6/7. 
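The two steps above (list, then delete) can also be scripted. The sketch below parses saved `iptables -L` output and prints just the banned source addresses; the chain name and sample entries follow the f2b-NoAuthFailures example shown earlier:

```shell
# Save a copy of the chain listing (sample data mirrors the output above).
cat > /tmp/f2b-chain.txt <<'EOF'
Chain f2b-NoAuthFailures (1 references)
target     prot opt source            destination
REJECT     all  --  64.68.50.128      anywhere      reject-with icmp-port-unreachable
REJECT     all  --  104.194.26.205    anywhere      reject-with icmp-port-unreachable
RETURN     all  --  anywhere          anywhere
EOF

# In `iptables -L` output, column 4 of a REJECT rule is the banned source.
awk '$1 == "REJECT" && $4 != "anywhere" { print $4 }' /tmp/f2b-chain.txt
```

Each printed address can then be fed to the `iptables -D` command shown above. On newer Fail2ban versions (0.8.8 and later) the same result can be achieved without touching iptables directly, via `fail2ban-client set <jail> unbanip <ip>`.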
+ +-------------------------------------------------------------------------------- + +via: http://www.ehowstuff.com/how-to-remove-banned-ip-from-fail2ban-on-centos/ + +作者:[skytech][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.ehowstuff.com/author/skytech/ +[1]:http://www.fail2ban.org/wiki/index.php/Main_Page From 09a694c6492ffa6502a085f19e8eec948d4d1e72 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 4 Dec 2015 15:56:20 +0800 Subject: [PATCH 113/160] =?UTF-8?q?20151204-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 20151204-2 选题 --- ...04 Linux or Unix--jobs Command Examples.md | 195 ++++++++++++++++++ 1 file changed, 195 insertions(+) create mode 100644 sources/tech/20151204 Linux or Unix--jobs Command Examples.md diff --git a/sources/tech/20151204 Linux or Unix--jobs Command Examples.md b/sources/tech/20151204 Linux or Unix--jobs Command Examples.md new file mode 100644 index 0000000000..560a1ea1e8 --- /dev/null +++ b/sources/tech/20151204 Linux or Unix--jobs Command Examples.md @@ -0,0 +1,195 @@ +Linux / Unix: jobs Command Examples +================================================================================ +I am new Linux and Unix user. How do I show the active jobs on Linux or Unix-like systems using BASH/KSH/TCSH or POSIX based shell? How can I display status of jobs in the current session on Unix/Linux? + +Job control is nothing but the ability to stop/suspend the execution of processes (command) and continue/resume their execution as per your requirements. This is done using your operating system and shell such as bash/ksh or POSIX shell. + +You shell keeps a table of currently executing jobs and can be displayed with jobs command. + +### Purpose ### + +> Displays status of jobs in the current shell session. 
+ +### Syntax ### + +The basic syntax is as follows: + + jobs + +OR + + jobs jobID + +OR + + jobs [options] jobID + +### Starting few jobs for demonstration purpose ### + +Before you start using jobs command, you need to start couple of jobs on your system. Type the following commands to start jobs: + + ## Start xeyes, calculator, and gedit text editor ### + xeyes & + gnome-calculator & + gedit fetch-stock-prices.py & + +Finally, run ping command in foreground: + + ping www.cyberciti.biz + +To suspend ping command job hit the **Ctrl-Z** key sequence. + +### jobs command examples ### + +To display the status of jobs in the current shell, enter: + + $ jobs + +Sample outputs: + + [1] 7895 Running gpass & + [2] 7906 Running gnome-calculator & + [3]- 7910 Running gedit fetch-stock-prices.py & + [4]+ 7946 Stopped ping cyberciti.biz + +To display the process ID or jobs for the job whose name begins with "p," enter: + + $ jobs -p %p + +OR + + $ jobs %p + +Sample outputs: + + [4]- Stopped ping cyberciti.biz + +The character % introduces a job specification. In this example, you are using the string whose name begins with suspended command such as %ping. + +### How do I show process IDs in addition to the normal information? ### + +Pass the -l(lowercase L) option to jobs command for more information about each job listed, run: + + $ jobs -l + +Sample outputs: + +![Fig.01: Displaying the status of jobs in the shell](http://s0.cyberciti.org/uploads/faq/2013/02/jobs-command-output.jpg) +Fig.01: Displaying the status of jobs in the shell + +### How do I list only processes that have changed status since the last notification? 
### + +First, start a new job as follows: + + $ sleep 100 & + +Now, only show jobs that have stopped or exited since last notified, type: + + $ jobs -n + +Sample outputs: + + [5]- Running sleep 100 & + +### Display lists process IDs (PIDs) only ### + +Pass the -p option to jobs command to display PIDs only: + + $ jobs -p + +Sample outputs: + + 7895 + 7906 + 7910 + 7946 + 7949 + +### How do I display only running jobs? ### + +Pass the -r option to jobs command to display only running jobs only, type: + + $ jobs -r + +Sample outputs: + + [1] Running gpass & + [2] Running gnome-calculator & + [3]- Running gedit fetch-stock-prices.py & + +### How do I display only jobs that have stopped? ### + +Pass the -s option to jobs command to display only stopped jobs only, type: + + $ jobs -s + +Sample outputs: + + [4]+ Stopped ping cyberciti.biz + +To resume the ping cyberciti.biz job by entering the following bg command: + + $ bg %4 + +### jobs command options ### + +From the [bash(1)][1] command man page: + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Option | Description |
| ------ | ----------- |
| -l | Show process IDs in addition to the normal information. |
| -p | Show process IDs only. |
| -n | Show only processes that have changed status since the last notification. |
| -r | Restrict output to running jobs only. |
| -s | Restrict output to stopped jobs only. |
| -x | COMMAND is run after all job specifications that appear in ARGS have been replaced with the process ID of that job's process group leader. |
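A short script ties a few of these options together. This is a sketch run explicitly under bash, since the jobs builtin only tracks jobs started by that same shell:

```shell
bash -c '
    sleep 2 &                   # becomes job %1
    echo "$!" > /tmp/job.pid    # PID the shell recorded for the job
    jobs -l                     # full listing, PID included
    jobs -p > /tmp/job.pids     # PIDs only, one per line
    wait                        # block until the background job exits
'
```

Comparing /tmp/job.pid with /tmp/job.pids shows that `jobs -p` reports exactly the PID bash stored in `$!` for the background job.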
+ +### A note about /usr/bin/jobs and shell builtin ### + +Type the following type command to find out whether jobs is part of shell, external command or both: + + $ type -a jobs + +Sample outputs: + + jobs is a shell builtin + jobs is /usr/bin/jobs + +In almost all cases you need to use the jobs command that is implemented as a BASH/KSH/POSIX shell built-in. The /usr/bin/jobs command can not be used in the current shell. The /usr/bin/jobs command operates in a different environment and does not share the parent bash/ksh's shells understanding of jobs. + +-------------------------------------------------------------------------------- + +via: + +作者:Vivek Gite +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.manpager.com/linux/man1/bash.1.html From 1c7e0c73fdfb276f7945a151b7480be703216f4c Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 4 Dec 2015 15:56:57 +0800 Subject: [PATCH 114/160] =?UTF-8?q?20151204-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 20151204-2 选题 --- ...figure Munin monitoring server in Linux.md | 145 ++++++++++++++++++ 1 file changed, 145 insertions(+) create mode 100644 sources/tech/20151204 Install and Configure Munin monitoring server in Linux.md diff --git a/sources/tech/20151204 Install and Configure Munin monitoring server in Linux.md b/sources/tech/20151204 Install and Configure Munin monitoring server in Linux.md new file mode 100644 index 0000000000..314d721e38 --- /dev/null +++ b/sources/tech/20151204 Install and Configure Munin monitoring server in Linux.md @@ -0,0 +1,145 @@ +Install and Configure Munin monitoring server in Linux +================================================================================ +![](http://www.linuxnix.com/wp-content/uploads/2015/12/munin_page.jpg) + +Munin is an excellent system monitoring tool 
similar to [RRD tool][1], which will give you ample information about system performance on multiple fronts like **disk, network, process, system and users**. These are some of the default properties Munin monitors.

### How Munin works? ###

Munin works in a client-server model. The Munin server process on the main server tries to collect data from a client daemon running locally (Munin can monitor its own resources) or from a remote client (Munin can monitor hundreds of machines), and displays the results in graphs on its web interface.

### Configuring Munin in a nutshell ###

This consists of two steps, as we have to configure both the server and the client.
1) Install the Munin server package and configure it so that it gets data from the clients.
2) Configure the Munin client so that the server can connect to the client daemon for data collection.

### Install munin server in Linux ###

Munin server installation on Ubuntu/Debian based machines:

    apt-get install munin apache2

Munin server installation on Redhat/Centos based machines. Make sure that you [enable EPEL repo][2] before installing Munin on Redhat based machines, as by default Redhat based machines do not have Munin in their repos.

    yum install munin httpd

### Configuring Munin server in Linux ###

Below are the steps we have to do in order to bring the server up.

1. Add the host details which need monitoring in /etc/munin/munin.conf
1. Configure the apache web server to include the munin details.
1. Create a user name and password for the web interface
1. Restart the apache server

**Step 1**: Add a hosts entry in the **/etc/munin/munin.conf** file. Go to the end of the file and add a client to monitor. Here in this example, I added my DB server and its IP address to monitor.

Example:

    [db.linuxnix.com]
    address 192.168.1.25
    use_node_name yes

Save the file and exit.

**Step 2**: Edit/create the munin.conf file in the /etc/apache2/conf.d folder to include the Munin Apache related configs.
On another note, by default the other Munin web related configs are kept in the /var/www/munin folder. +

    vi /etc/apache2/conf.d/munin.conf

Content:

    Alias /munin /var/www/munin
    <Directory /var/www/munin>
        Order allow,deny
        Allow from localhost 127.0.0.0/8 ::1
        AllowOverride None
        Options ExecCGI FollowSymlinks
        AddHandler cgi-script .cgi
        DirectoryIndex index.cgi
        AuthUserFile /etc/munin/munin.passwd
        AuthType basic
        AuthName "Munin stats"
        require valid-user
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresDefault M310
        </IfModule>
    </Directory>

Save the file and exit.

**Step 3**: Now create a username and password for viewing the Munin graphs (the file name must match the AuthUserFile directive above):

    htpasswd -c /etc/munin/munin.passwd munin

**Note**: For Redhat/Centos machines replace “**apache2**” with “**httpd**” in each path to access your config files.

**Step 4**: Restart the Apache server so that the Munin configurations are picked up by Apache.

#### Ubuntu/Debian based: ####

    service apache2 restart

#### Centos/Redhat based: ####

    service httpd restart

### Install and configure Munin client in Linux ###

**Step 1**: Install the Munin client in Linux

    apt-get install munin-node

**Note**: If you want to monitor your Munin server, then you have to install munin-node on that as well.

**Step 2**: Configure the client by editing the munin-node.conf file.

    vi /etc/munin/munin-node.conf

Example:

    allow ^127\.0\.0\.1$
    allow ^10\.10\.20\.20$

----------

    # Which address to bind to;
    host *

----------

    # And which port
    port 4949

**Note**: 10.10.20.20 is my Munin server, and it connects to port 4949 on the client to get its data.

**Step 3**: Restart munin-node on the client server

    service munin-node restart

### Testing connection ###

Check whether you are able to connect to the client from the server on port 4949; otherwise you will have to open that port on the client machine.

    telnet db.linuxnix.com 4949

Accessing the Munin web interface:

    http://munin.linuxnix.com/munin/index.html

Hope this helps to configure a basic Munin server.
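If telnet is not installed on the server, the same reachability test can be scripted with bash's built-in /dev/tcp pseudo-device. This is a sketch; the host db.linuxnix.com and port 4949 follow the example above, so substitute your own client host:

```shell
# Print a message depending on whether a TCP connection to $1:$2 succeeds.
check_port() {
    if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "munin-node reachable on $1:$2"
    else
        echo "cannot reach $1:$2 - check the firewall and munin-node"
    fi
}

check_port db.linuxnix.com 4949
```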
+ +-------------------------------------------------------------------------------- + +via: http://www.linuxnix.com/install-and-configure-munin-monitoring-server-in-linux/ + +作者:[Surendra Anne][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxnix.com/author/surendra/ +[1]:http://www.linuxnix.com/network-monitoringinfo-gathering-tools-in-linux/ +[2]:http://www.linuxnix.com/how-to-install-and-enable-epel-repo-in-rhel-centos-oracle-scentific-linux/ From 9d19e7cd069bf5e69f1b6a42c96b2363bc536acf Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 4 Dec 2015 16:03:29 +0800 Subject: [PATCH 115/160] =?UTF-8?q?=E4=BF=AE=E6=94=B9=E6=96=87=E4=BB=B6?= =?UTF-8?q?=E5=90=8D=EF=BC=8C=E5=B0=86=E6=96=87=E4=BB=B6=E5=90=8D=E7=9A=84?= =?UTF-8?q?=E5=86=92=E5=8F=B7=E6=9B=BF=E6=8D=A2?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ring Public Beta.md => Let's Encrypt--Entering Public Beta.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/news/{Let's Encrypt:Entering Public Beta.md => Let's Encrypt--Entering Public Beta.md} (100%) diff --git a/sources/news/Let's Encrypt:Entering Public Beta.md b/sources/news/Let's Encrypt--Entering Public Beta.md similarity index 100% rename from sources/news/Let's Encrypt:Entering Public Beta.md rename to sources/news/Let's Encrypt--Entering Public Beta.md From c4b09094393e123758640192485a14ef73febca1 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 4 Dec 2015 16:22:06 +0800 Subject: [PATCH 116/160] =?UTF-8?q?20151204-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...P Framework on CentOS 7 or Ubuntu 15.04.md | 175 ++++++++++++++++++ 1 file changed, 175 insertions(+) create mode 100644 sources/tech/20151204 How to Install Laravel PHP Framework on CentOS 7 or Ubuntu 
15.04.md diff --git a/sources/tech/20151204 How to Install Laravel PHP Framework on CentOS 7 or Ubuntu 15.04.md b/sources/tech/20151204 How to Install Laravel PHP Framework on CentOS 7 or Ubuntu 15.04.md new file mode 100644 index 0000000000..10b3b02303 --- /dev/null +++ b/sources/tech/20151204 How to Install Laravel PHP Framework on CentOS 7 or Ubuntu 15.04.md @@ -0,0 +1,175 @@ +How to Install Laravel PHP Framework on CentOS 7 / Ubuntu 15.04 +================================================================================ +Hi All, In this article we are going to setup Laravel on CentOS 7 and Ubuntu 15.04. If you are a PHP web developer then you don't need to worry about of all modern PHP frameworks, Laravel is the easiest to get up and running that saves your time and effort and makes web development a joy. Laravel embraces a general development philosophy that sets a high priority on creating maintainable code by following some simple guidelines, you should be able to keep a rapid pace of development and be free to change your code with little fear of breaking existing functionality. + +Laravel's PHP framework installation is not a big deal. You can simply follow the step by step guide in this article for your CentOS 7 or Ubuntu 15 server. + +### 1) Server Requirements ### + +Laravel depends upon a number of prerequisites that must be setup before installing it. Those prerequisites includes some basic tuning parameter of server like your system update, sudo rights and installation of required packages. + +Once you are connected to your server make sure to configure the fully qualified domain name then run the commands below to enable EPEL Repo and update your server. 
+

#### CentOS-7 ####

    # yum install epel-release

----------

    # rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    # rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm

----------

    # yum update

#### Ubuntu ####

    # apt-get install python-software-properties
    # add-apt-repository ppa:ondrej/php5

----------

    # apt-get update

----------

    # apt-get install -y php5 mcrypt php5-mcrypt php5-gd

### 2) Firewall Setup ###

System firewall and SELinux setup is an important part of the security of your applications in production. You can turn the firewall off if you are working on a test server, and set SELinux to permissive mode using the below command, so that your installation setup won't be affected by it.

    # setenforce 0

### 3) Apache, MariaDB, PHP Setup ###

Laravel installation requires a complete LAMP stack with the OpenSSL, PDO, Mbstring and Tokenizer PHP extensions. If you are already running a LAMP server then you can skip this step and just make sure that the required PHP extensions are installed.

To install the LAMP stack you can use the below commands on your respective server.

#### CentOS ####

    # yum install httpd mariadb-server php56w php56w-mysql php56w-mcrypt php56w-dom php56w-mbstring

To start and enable the Apache web and MySQL/MariaDB services at bootup on CentOS 7, we will use the below commands.

    # systemctl start httpd
    # systemctl enable httpd

----------

    # systemctl start mariadb
    # systemctl enable mariadb

After starting the MariaDB service, we will configure a secure password for it with the below command.

    # mysql_secure_installation

#### Ubuntu ####

    # apt-get install mysql-server apache2 libapache2-mod-php5 php5-mysql
+

#### CentOS/Ubuntu ####

Run the below commands to set up 'composer' in CentOS/Ubuntu.

    # curl -sS https://getcomposer.org/installer | php
    # mv composer.phar /usr/local/bin/composer
    # chmod +x /usr/local/bin/composer

![composer installation](http://blog.linoxide.com/wp-content/uploads/2015/11/14.png)

### 5) Installing Laravel ###

Laravel's installation package can be downloaded from github using the command below.

    # wget https://github.com/laravel/laravel/archive/develop.zip

To extract the archived package and move it into the document root directory, use the below commands.

    # unzip develop.zip

----------

    # mv laravel-develop /var/www/

Now use the following composer command, which will install all required dependencies for Laravel within its directory.

    # cd /var/www/laravel-develop/
    # composer install

![compose laravel](http://blog.linoxide.com/wp-content/uploads/2015/11/25.png)

### 6) Key Encryption ###

For the encrypter service, we will generate a 32-character encryption key using the command below.

    # php artisan key:generate

    Application key [Lf54qK56s3qDh0ywgf9JdRxO2N0oV9qI] set successfully

Now put this key into the 'app.php' file as shown below.

    # vim /var/www/laravel-develop/config/app.php

![Key encryption](http://blog.linoxide.com/wp-content/uploads/2015/11/45.png)

### 7) Virtual Host and Ownership ###

After the composer installation, assign the permissions and the apache user ownership to the document root directory as shown.

    # chmod 775 /var/www/laravel-develop/app/storage

----------

    # chown -R apache:apache /var/www/laravel-develop

Open the default configuration file of the apache web server using any editor and add the following lines at the end of the file as a new virtual host entry.
+
+    # vim /etc/httpd/conf/httpd.conf
+
+----------
+
+    ServerName laravel-develop
+    DocumentRoot /var/www/laravel-develop/public
+
+    <Directory /var/www/laravel-develop>
+        AllowOverride All
+    </Directory>
+
+Now it is time to restart the Apache web server as shown below, then open your web browser to check your localhost page.
+
+#### CentOS ####
+
+    # systemctl restart httpd
+
+#### Ubuntu ####
+
+    # service apache2 restart
+
+### 8) Laravel 5 Web Access ###
+
+Open your web browser, enter your server IP or fully qualified domain name, and you will see the default web page of the Laravel 5 framework.
+
+![Laravel Default](http://blog.linoxide.com/wp-content/uploads/2015/11/35.png)
+
+### Conclusion ###
+
+The Laravel framework is a great tool for developing your web applications. At the end of this article you have learned how to set it up on Ubuntu 15 and CentOS 7. Now start using this awesome PHP framework, which provides many features and much comfort in your development work. Feel free to leave comments with your valuable suggestions and feedback.
+ +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/install-laravel-php-centos-7-ubuntu-15-04/ + +作者:[Kashif][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/kashifs/ \ No newline at end of file From 81bf330a9933ac5cbd87b42a3ac960e75b2af1c2 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 4 Dec 2015 19:13:35 +0800 Subject: [PATCH 117/160] Update 20151204 Linux or Unix--jobs Command Examples.md --- sources/tech/20151204 Linux or Unix--jobs Command Examples.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20151204 Linux or Unix--jobs Command Examples.md b/sources/tech/20151204 Linux or Unix--jobs Command Examples.md index 560a1ea1e8..40333bed6c 100644 --- a/sources/tech/20151204 Linux or Unix--jobs Command Examples.md +++ b/sources/tech/20151204 Linux or Unix--jobs Command Examples.md @@ -1,3 +1,4 @@ +translation by strugglingyouth Linux / Unix: jobs Command Examples ================================================================================ I am new Linux and Unix user. How do I show the active jobs on Linux or Unix-like systems using BASH/KSH/TCSH or POSIX based shell? How can I display status of jobs in the current session on Unix/Linux? 
From 7fc6a7ae20d391339808ec9f7ed42c3bf018905c Mon Sep 17 00:00:00 2001 From: ivo wang Date: Sat, 5 Dec 2015 01:28:37 +0800 Subject: [PATCH 118/160] Update 20150906 How to Configure OpenNMS on CentOS 7.x.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 认领这篇 --- .../tech/20150906 How to Configure OpenNMS on CentOS 7.x.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md b/sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md index c7810d06ef..aca2b04bba 100644 --- a/sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md +++ b/sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md @@ -1,3 +1,4 @@ +translated by ivo-wang How to Configure OpenNMS on CentOS 7.x ================================================================================ Systems management and monitoring services are very important that provides information to view important systems management information that allow us to to make decisions based on this information. To make sure the network is running at its best and to minimize the network downtime we need to improve application performance. So, in this article we will make you understand the step by step procedure to setup OpenNMS in your IT infrastructure. OpenNMS is a free open source enterprise level network monitoring and management platform that provides information to allow us to make decisions in regards to future network and capacity planning. 
@@ -216,4 +217,4 @@ via: http://linoxide.com/monitoring-2/install-configure-opennms-centos-7-x/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://linoxide.com/author/kashifs/ \ No newline at end of file +[a]:http://linoxide.com/author/kashifs/ From d7ccef0ffa34e6febdf79daa77b25779a5dcff0c Mon Sep 17 00:00:00 2001 From: ivo wang Date: Sat, 5 Dec 2015 01:30:03 +0800 Subject: [PATCH 119/160] Update 20150906 How to install Suricata intrusion detection system on Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 认领这篇 --- ... to install Suricata intrusion detection system on Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150906 How to install Suricata intrusion detection system on Linux.md b/sources/tech/20150906 How to install Suricata intrusion detection system on Linux.md index fe4a784d5a..736a6577de 100644 --- a/sources/tech/20150906 How to install Suricata intrusion detection system on Linux.md +++ b/sources/tech/20150906 How to install Suricata intrusion detection system on Linux.md @@ -1,3 +1,4 @@ +translated by ivo-wang How to install Suricata intrusion detection system on Linux ================================================================================ With incessant security threats, intrusion detection system (IDS) has become one of the most critical requirements in today's data center environments. However, as more and more servers upgrade their NICs to 10GB/40GB Ethernet, it is increasingly difficult to implement compute-intensive intrusion detection on commodity hardware at line rates. One approach to scaling IDS performance is **multi-threaded IDS**, where CPU-intensive deep packet inspection workload is parallelized into multiple concurrent tasks. Such parallelized inspection can exploit multi-core hardware to scale up IDS throughput easily. 
Two well-known open-source efforts in this area are [Suricata][1] and [Bro][2]. @@ -194,4 +195,4 @@ via: http://xmodulo.com/install-suricata-intrusion-detection-system-linux.html [6]:https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Runmodes [7]:http://ask.xmodulo.com/view-threads-process-linux.html [8]:http://xmodulo.com/how-to-compile-and-install-snort-from-source-code-on-ubuntu.html -[9]:https://redmine.openinfosecfoundation.org/projects/suricata/wiki \ No newline at end of file +[9]:https://redmine.openinfosecfoundation.org/projects/suricata/wiki From bd65852eafbffeba8ccd40a6ceb195c90cea2a6b Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 5 Dec 2015 09:34:37 +0800 Subject: [PATCH 120/160] PUB:20150806 Installation Guide for Puppet on Ubuntu 15.04 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @ivo-wang ,好长的文章,辛苦啦,基本上翻译的很好,有个别地方理解不对,已经修正。 可以做 diff 比较看看。发布后 URL: https://linux.cn/article-6692-1.html --- ...lation Guide for Puppet on Ubuntu 15.04.md | 435 ++++++++++++++++++ ...lation Guide for Puppet on Ubuntu 15.04.md | 425 ----------------- 2 files changed, 435 insertions(+), 425 deletions(-) create mode 100644 published/20150806 Installation Guide for Puppet on Ubuntu 15.04.md delete mode 100644 translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md diff --git a/published/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/published/20150806 Installation Guide for Puppet on Ubuntu 15.04.md new file mode 100644 index 0000000000..67ae06db9b --- /dev/null +++ b/published/20150806 Installation Guide for Puppet on Ubuntu 15.04.md @@ -0,0 +1,435 @@ +如何在 Ubuntu 15.04 中安装 puppet +================================================================================ + +大家好,本教程将学习如何在 ubuntu 15.04 上面安装 puppet,它可以用来管理你的服务器基础环境。puppet 是由puppet 实验室(Puppet Labs)开发并维护的一款开源的配置管理软件,它能够帮我们自动化供给、配置和管理服务器的基础环境。不管我们管理的是几个服务器还是数以千计的计算机组成的业务报表体系,puppet 
都能够使管理员从繁琐的手动配置调整中解放出来,腾出时间和精力去提系统的升整体效率。它能够确保所有自动化流程作业的一致性、可靠性以及稳定性。它让管理员和开发者更紧密的联系在一起,使开发者更容易产出付出设计良好、简洁清晰的代码。puppet 提供了配置管理和数据中心自动化的两个解决方案。这两个解决方案分别是 **puppet 开源版** 和 **puppet 企业版**。puppet 开源版以 Apache 2.0 许可证发布,它是一个非常灵活、可定制的解决方案,设置初衷是帮助管理员去完成那些重复性操作工作。pupprt 企业版是一个全平台复杂 IT 环境下的成熟解决方案,它除了拥有开源版本所有优势以外还有移动端 apps、只有商业版才有的加强支持,以及模块化和集成管理等。Puppet 使用 SSL 证书来认证主控服务器与代理节点之间的通信。 + +本教程将要介绍如何在运行 ubuntu 15.04 的主控服务器和代理节点上面安装开源版的 puppet。在这里,我们用一台服务器做主控服务器(master),管理和控制剩余的当作 puppet 代理节点(agent node)的服务器,这些代理节点将依据主控服务器来进行配置。在 ubuntu 15.04 只需要简单的几步就能安装配置好 puppet,用它来管理我们的服务器基础环境非常的方便。(LCTT 译注:puppet 采用 C/S 架构,所以必须有至少有一台作为服务器,其他作为客户端处理) + +### 1.设置主机文件 ### + +在本教程里,我们将使用2台运行 ubuntu 15.04 “Vivid Vervet" 的主机,一台作为主控服务器,另一台作为 puppet 的代理节点。下面是我们将用到的服务器的基础信息。 + +- puupet 主控服务器 IP:44.55.88.6 ,主机名: puppetmaster +- puppet 代理节点 IP: 45.55.86.39 ,主机名: puppetnode + +我们要在代理节点和服务器这两台机器的 hosts 文件里面都添加上相应的条目,使用 root 或是 sudo 访问权限来编辑 /etc/hosts 文件,命令如下: + + # nano /etc/hosts + + 45.55.88.6 puppetmaster.example.com puppetmaster + 45.55.86.39 puppetnode.example.com puppetnode + +注意,puppet 主控服务器必使用 8140 端口来运行,所以请务必保证开启8140端口。 + +### 2. 用 NTP 更新时间 ### + +puppet 代理节点所使用系统时间必须要准确,这样可以避免代理证书出现问题。如果有时间差异,那么证书将过期失效,所以服务器与代理节点的系统时间必须互相同步。我们使用 NTP(Network Time Protocol,网络时间协议)来同步时间。**在服务器与代理节点上面分别**运行以下命令来同步时间。 + + # ntpdate pool.ntp.org + + 17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec + +(LCTT 译注:显示类似的输出结果表示运行正常) + +如果没有安装 ntp,请使用下面的命令更新你的软件仓库,安装并运行ntp服务 + + # apt-get update && sudo apt-get -y install ntp ; service ntp restart + +### 3. 安装主控服务器软件 ### + +安装开源版本的 puppet 有很多的方法。在本教程中我们在 puppet 实验室官网下载一个名为 puppetlabs-release 的软件包的软件源,安装后它将为我们在软件源里面添加 puppetmaster-passenger。puppetmaster-passenger 包括带有 apache 的 puppet 主控服务器。我们开始下载这个软件包: + + # cd /tmp/ + # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb + + --2015-06-17 00:19:26-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb + Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 
192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d + Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected. + HTTP request sent, awaiting response... 200 OK + Length: 7384 (7.2K) [application/x-debian-package] + Saving to: ‘puppetlabs-release-trusty.deb’ + + puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.06s + + 2015-06-17 00:19:26 (130 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384] + +下载完成,我们来安装它: + + # dpkg -i puppetlabs-release-trusty.deb + + Selecting previously unselected package puppetlabs-release. + (Reading database ... 85899 files and directories currently installed.) + Preparing to unpack puppetlabs-release-trusty.deb ... + Unpacking puppetlabs-release (1.0-11) ... + Setting up puppetlabs-release (1.0-11) ... + +使用 apt 包管理命令更新一下本地的软件源: + + # apt-get update + +现在我们就可以安装 puppetmaster-passenger 了 + + # apt-get install puppetmaster-passenger + +**提示**: 在安装的时候可能会报错: + + Warning: Setting templatedir is deprecated.see http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning') + +不过不用担心,忽略掉它就好,我们只需要在设置配置文件的时候把这一项禁用就行了。 + +如何来查看puppet 主控服务器是否已经安装成功了呢?非常简单,只需要使用下面的命令查看它的版本就可以了。 + + # puppet --version + + 3.8.1 + +现在我们已经安装好了 puppet 主控服务器。因为我们使用的是配合 apache 的 passenger,由 apache 来控制 puppet 主控服务器,当 apache 运行时 puppet 主控才运行。 + +在开始之前,我们需要通过停止 apache 服务来让 puppet 主控服务器停止运行。 + + # systemctl stop apache2 + +### 4. 使用 Apt 工具锁定主控服务器的版本 ### + +现在已经安装了 3.8.1 版的 puppet,我们锁定这个版本不让它随意升级,因为升级会造成配置文件混乱。 使用 apt 工具来锁定它,这里我们需要使用文本编辑器来创建一个新的文件 **/etc/apt/preferences.d/00-puppet.pref** + + # nano /etc/apt/preferences.d/00-puppet.pref + +在新创建的文件里面添加以下内容: + + # /etc/apt/preferences.d/00-puppet.pref + Package: puppet puppet-common puppetmaster-passenger + Pin: version 3.8* + Pin-Priority: 501 + +这样在以后的系统软件升级中, puppet 主控服务器将不会跟随系统软件一起升级。 + +### 5. 
配置 Puppet 主控服务器### + +Puppet 主控服务器作为一个证书发行机构,需要生成它自己的证书,用于签署所有代理的证书的请求。首先我们要删除所有在该软件包安装过程中创建出来的 ssl 证书。本地默认的 puppet 证书放在 /var/lib/puppet/ssl。因此我们只需要使用 rm 命令来整个移除这些证书就可以了。 + + # rm -rf /var/lib/puppet/ssl + +现在来配置该证书,在创建 puppet 主控服务器证书时,我们需要包括代理节点与主控服务器沟通所用的每个 DNS 名称。使用文本编辑器来修改服务器的配置文件 puppet.conf + + # nano /etc/puppet/puppet.conf + +输出的结果像下面这样 + + [main] + logdir=/var/log/puppet + vardir=/var/lib/puppet + ssldir=/var/lib/puppet/ssl + rundir=/var/run/puppet + factpath=$vardir/lib/facter + templatedir=$confdir/templates + + [master] + # These are needed when the puppetmaster is run by passenger + # and can safely be removed if webrick is used. + ssl_client_header = SSL_CLIENT_S_DN + ssl_client_verify_header = SSL_CLIENT_VERIFY + +在这我们需要注释掉 templatedir 这行使它失效。然后在文件的 `[main]` 小节的结尾添加下面的信息。 + + server = puppetmaster + environment = production + runinterval = 1h + strict_variables = true + certname = puppetmaster + dns_alt_names = puppetmaster, puppetmaster.example.com + +还有很多你可能用的到的配置选项。 如果你有需要,在 Puppet 实验室有一份详细的描述文件供你阅读: [Main Config File (puppet.conf)][1]。 + +编辑完成后保存退出。 + +使用下面的命令来生成一个新的证书。 + + # puppet master --verbose --no-daemonize + + Info: Creating a new SSL key for ca + Info: Creating a new SSL certificate request for ca + Info: Certificate Request fingerprint (SHA256): F6:2F:69:89:BA:A5:5E:FF:7F:94:15:6B:A7:C4:20:CE:23:C7:E3:C9:63:53:E0:F2:76:D7:2E:E0:BF:BD:A6:78 + ... 
+ Notice: puppetmaster has a waiting certificate request + Notice: Signed certificate request for puppetmaster + Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/ca/requests/puppetmaster.pem' + Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/certificate_requests/puppetmaster.pem' + Notice: Starting Puppet master version 3.8.1 + ^CNotice: Caught INT; storing stop + Notice: Processing stop + +至此,证书已经生成。一旦我们看到 **Notice: Starting Puppet master version 3.8.1**,就表明证书就已经制作好了。我们按下 CTRL-C 回到 shell 命令行。 + +查看新生成证书的信息,可以使用下面的命令。 + + # puppet cert list -all + + + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com") + +### 6. 创建一个 Puppet 清单 ### + +默认的主要清单(Manifest)是 /etc/puppet/manifests/site.pp。 这个主要清单文件包括了用于在代理节点执行的配置定义。现在我们来创建一个清单文件: + + # nano /etc/puppet/manifests/site.pp + +在刚打开的文件里面添加下面这几行: + + # execute 'apt-get update' + exec { 'apt-update': # exec resource named 'apt-update' + command => '/usr/bin/apt-get update' # command this resource will run + } + + # install apache2 package + package { 'apache2': + require => Exec['apt-update'], # require 'apt-update' before installing + ensure => installed, + } + + # ensure apache2 service is running + service { 'apache2': + ensure => running, + } + +以上这几行的意思是给代理节点部署 apache web 服务。 + +### 7. 运行 puppet 主控服务 ### + +已经准备好运行 puppet 主控服务器 了,那么开启 apache 服务来让它启动 + + # systemctl start apache2 + +我们 puppet 主控服务器已经运行,不过它还不能管理任何代理节点。现在我们给 puppet 主控服务器添加代理节点. + +**提示**: 如果报错 + + Job for apache2.service failed. see "systemctl status apache2.service" and "journalctl -xe" for details. 
+ +肯定是 apache 服务器有一些问题,我们可以使用 root 或是 sudo 访问权限来运行**apachectl start**查看它输出的日志。在本教程执行过程中, 我们发现一个 **/etc/apache2/sites-enabled/puppetmaster.conf** 的证书配置问题。修改其中的**SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem **为 **SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem**,然后注释掉后面这行**SSLCertificateKeyFile** 。然后在命令行重新启动 apache。 + +### 8. 安装 Puppet 代理节点的软件包 ### + +我们已经准备好了 puppet 的服务器,现在需要一个可以管理的代理节点,我们将安装 puppet 代理软件到节点上去。这里我们要给每一个需要管理的节点安装代理软件,并且确保这些节点能够通过 DNS 查询到服务器主机。下面将 安装最新的代理软件到 节点 puppetnode.example.com 上。 + +在代理节点上使用下面的命令下载 puppet 实验室提供的软件包: + + # cd /tmp/ + # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb\ + + --2015-06-17 00:54:42-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb + Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d + Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected. + HTTP request sent, awaiting response... 200 OK + Length: 7384 (7.2K) [application/x-debian-package] + Saving to: ‘puppetlabs-release-trusty.deb’ + + puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.04s + + 2015-06-17 00:54:42 (162 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384] + +在 ubuntu 15.04 上我们使用debian包管理系统来安装它,命令如下: + + # dpkg -i puppetlabs-release-trusty.deb + +使用 apt 包管理命令更新一下本地的软件源: + + # apt-get update + +通过远程仓库安装: + + # apt-get install puppet + +Puppet 代理默认是不启动的。这里我们需要使用文本编辑器修改 /etc/default/puppet 文件,使它正常工作: + + # nano /etc/default/puppet + +更改 **START** 的值改成 "yes" 。 + + START=yes + +最后保存并退出。 + +### 9. 使用 Apt 工具锁定代理软件的版本 ### + +和上面的步骤一样为防止随意升级造成的配置文件混乱,我们要使用 apt 工具来把它锁定。具体做法是使用文本编辑器创建一个文件 **/etc/apt/preferences.d/00-puppet.pref** + + # nano /etc/apt/preferences.d/00-puppet.pref + +在新建的文件里面加入如下内容 + + # /etc/apt/preferences.d/00-puppet.pref + Package: puppet puppet-common + Pin: version 3.8* + Pin-Priority: 501 + +这样 puppet 就不会随着系统软件升级而随意升级了。 + +### 10. 
配置 puppet 代理节点 ### + +我们需要编辑一下代理节点的 puppet.conf 文件,来使它运行。 + + # nano /etc/puppet/puppet.conf + +它看起来和服务器的配置文件完全一样。同样注释掉**templatedir**这行。不同的是在这里我们需要删除掉所有关于`[master]` 的部分。 + +假定主控服务器可以通过名字“puppet-master”访问,我们的客户端应该可以和它相互连接通信。如果不行的话,我们需要使用完整的主机域名 puppetmaster.example.com + + [agent] + server = puppetmaster.example.com + certname = puppetnode.example.com + +在文件的结尾增加上面3行,增加之后文件内容像下面这样: + + [main] + logdir=/var/log/puppet + vardir=/var/lib/puppet + ssldir=/var/lib/puppet/ssl + rundir=/var/run/puppet + factpath=$vardir/lib/facter + #templatedir=$confdir/templates + + [agent] + server = puppetmaster.example.com + certname = puppetnode.example.com + +最后保存并退出。 + +使用下面的命令来启动客户端软件: + + # systemctl start puppet + +如果一切顺利的话,我们不会看到命令行有任何输出。 第一次运行的时候,代理节点会生成一个 ssl 证书并且给服务器发送一个请求,经过签名确认后,两台机器就可以互相通信了。 + +**提示**: 如果这是你添加的第一个代理节点,建议你在添加其他节点前先给这个证书签名。一旦能够通过并正常运行,回过头来再添加其他代理节点。 + +### 11. 在主控服务器上对证书请求进行签名 ### + +第一次运行的时候,代理节点会生成一个 ssl 证书并且给服务器发送一个签名请求。在主控服务器给代理节点服务器证书签名之后,主服务器才能和代理服务器通信并且控制代理服务器。 + +在主控服务器上使用下面的命令来列出当前的证书请求: + + # puppet cert list + "puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2 + +因为只设置了一台代理节点服务器,所以我们将只看到一个请求。看起来类似如上,代理节点的完整域名即其主机名。 + +注意有没有“+”号在前面,代表这个证书有没有被签名。 + +使用带有主机名的**puppet cert sign**这个命令来签署这个签名请求,如下: + + # puppet cert sign puppetnode.example.com + Notice: Signed certificate request for puppetnode.example.com + Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem' + +主控服务器现在可以通讯和控制它签名过的代理节点了。 + +如果想签署所有的当前请求,可以使用 -all 选项,如下所示: + + # puppet cert sign --all + +### 12. 
删除一个 Puppet 证书 ### + +如果我们想移除一个主机,或者想重建一个主机然后再添加它。下面的例子里我们将展示如何删除 puppet 主控服务器上面的一个证书。使用的命令如下: + + # puppet cert clean hostname + Notice: Revoked certificate with serial 5 + Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem' + Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/certs/puppetnode.example.com.pem' + +如果我们想查看所有的签署和未签署的请求,使用下面这条命令: + + # puppet cert list --all + + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com") + + +### 13. 部署 Puppet 清单 ### + +当配置并完成 puppet 清单后,现在我们需要部署清单到代理节点服务器上。要应用并加载主 puppet 清单,我们可以在代理节点服务器上面使用下面的命令: + + # puppet agent --test + + Info: Retrieving pluginfacts + Info: Retrieving plugin + Info: Caching catalog for puppetnode.example.com + Info: Applying configuration version '1434563858' + Notice: /Stage[main]/Main/Exec[apt-update]/returns: executed successfully + Notice: Finished catalog run in 10.53 seconds + +这里向我们展示了主清单如何立即影响到了一个单一的服务器。 + +如果我们打算运行的 puppet 清单与主清单没有什么关联,我们可以简单使用 puppet apply 带上相应的清单文件的路径即可。它仅将清单应用到我们运行该清单的代理节点上。 + + # puppet apply /etc/puppet/manifest/test.pp + +### 14. 为特定节点配置清单 ### + +如果我们想部署一个清单到某个特定的节点,我们需要如下配置清单。 + +在主控服务器上面使用文本编辑器编辑 /etc/puppet/manifest/site.pp: + + # nano /etc/puppet/manifest/site.pp + +添加下面的内容进去 + + node 'puppetnode', 'puppetnode1' { + # execute 'apt-get update' + exec { 'apt-update': # exec resource named 'apt-update' + command => '/usr/bin/apt-get update' # command this resource will run + } + + # install apache2 package + package { 'apache2': + require => Exec['apt-update'], # require 'apt-update' before installing + ensure => installed, + } + + # ensure apache2 service is running + service { 'apache2': + ensure => running, + } + } + +这里的配置显示我们将在名为 puppetnode 和 puppetnode1 的2个指定的节点上面安装 apache 服务。这里可以添加其他我们需要安装部署的具体节点进去。 + +### 15. 
配置清单模块 ### + +模块对于组合任务是非常有用的,在 Puppet 社区有很多人贡献了自己的模块组件。 + +在主控服务器上, 我们将使用 puppet module 命令来安装 **puppetlabs-apache** 模块。 + + # puppet module install puppetlabs-apache + +**警告**: 千万不要在一个已经部署 apache 环境的机器上面使用这个模块,否则它将清空你没有被 puppet 管理的 apache 配置。 + +现在用文本编辑器来修改 **site.pp** : + + # nano /etc/puppet/manifest/site.pp + +添加下面的内容进去,在 puppetnode 上面安装 apache 服务。 + + node 'puppet-node' { + class { 'apache': } # use apache module + apache::vhost { 'example.com': # define vhost resource + port => '80', + docroot => '/var/www/html' + } + } + +保存退出。然后重新运行该清单来为我们的代理节点部署 apache 配置。 + +### 总结 ### + +现在我们已经成功的在 ubuntu 15.04 上面部署并运行 puppet 来管理代理节点服务器的基础运行环境。我们学习了puppet 是如何工作的,编写清单文件,节点与主机间使用 ssl 证书认证的认证过程。使用 puppet 开源软件配置管理工具在众多的代理节点上来控制、管理和配置重复性任务是非常容易的。如果你有任何的问题,建议,反馈,与我们取得联系,我们将第一时间完善更新,谢谢。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/ + +作者:[Arun Pyasi][a] +译者:[ivo-wang](https://github.com/ivo-wang) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html diff --git a/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md deleted file mode 100644 index 7195dbe268..0000000000 --- a/translated/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md +++ /dev/null @@ -1,425 +0,0 @@ -如何在Ubuntu 15.04中安装puppet -================================================================================ - -大家好,本教程将教各位如何在ubuntu 15.04上面安装puppet,用它来管理你的服务器基础环境。puppet是由puppet实验室开发并维护的一款开源软件,它能够帮我们自动的管理配置服务器的基础环境。不管我们管理的是几个服务器还是数以千计的计算机组成的业务报表体系,puppet都能够使管理员从繁琐的手动配置调整中解放出来,腾出时间和精力去提系统的升整体效率。它能够确保所有自动化流程作业的一致性,可靠性以及稳定性。它让管理员和开发者更紧密的联系在一起,使开发者更容易产出付出设计良好,简洁清晰的代码。puppet提供了管理配置和自动化数据中心的2个解决方案。这两个解决方案分别是**puppet开源项目 
和 puppet商业版**。puppet开源项目依赖于apache2.0,它是一个非常灵活可随意定制的解决方案,设置初衷是帮助管理员去完成那些重复性操作工作。pupprt商业版是一个全平台复杂IT环境下的成熟解决方案,它除了拥有开源版本所有优势以外还有移动端apps,只有商业版才有的加强支持,以及模块化和集成管理等。Puppet使用SSL证书来认证主机与代理节点之间的通信。 - -本教程将要介绍如何在ubuntu 15.04的主机和代理节点上面安装开源版的puppet。在这我们用一台服务器做主机,管理和控制剩余当作puppet的代理节点的服务器,这些代理节点将依据服务器来进行配置。在ubuntu 15.04只需要简单的几步就能安装配置好puppet,用它来管理我们的服务器基础环境非常的方便。(译者注:puppet采用的C/S架构所以必须有至少有一台作为服务端,其他作为客户端处理) -### 1.设置主机### -在本教程里,我们将使用用2台运行ubuntu 15.04 "Vivid Vervet"的主机,一台作为服务端,另一台作为puppet的代理节点。下面是我们将用到的服务器的基础信息。 - -puupet服务器IP:44.55.88.6,主机名: puppetmaster -puppet代理节点IP: 45.55.86.39 ,主机名: puppetnode - -我们要在代理节点和服务器这两天机器的hosts里面都添加上相应的条目,使用root或是sudo访问权限来编辑/etc/hosts文件,命令如下: - - - # nano /etc/hosts - - 45.55.88.6 puppetmaster.example.com puppetmaster - 45.55.86.39 puppetnode.example.com puppetnode - -注意,puppet服务端必使用8140端口来运行,所以请务必保证开启8140端口。 - -### 2. 用NTP更新时间 ### - -puppet代理节点所使用系统时间必须要准确,这样可以避免代理证书出现问题。如果有时间差异,那么证书将过期失效,所以服务器与代理节点的系统时间必须互相同步。我们使用NTP(Network Time Protocol,网络时间协议)来同步时间。在服务器与代理节点上面分别运行以下命令来同步时间。 - - - # ntpdate pool.ntp.org - - 17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec (译者注:显示类似的输出结果表示运行正常) - -如果没有ntp,请使用下面的命令更新你的软件仓库,安装并运行ntp服务 - - # apt-get update && sudo apt-get -y install ntp ; service ntp restart - -### 3. 安装服务器软件 ### - -安装开源版本的puppet有很多的方法。在本教程中我们在puppet实验室官网下载一个名为puppetlabs-release的软件包,安装后它将为我们在软件源里面添加puppetmaster-passenger。puppetmaster-passenger依赖于apache的puppet服务端。我们开始下载这个软件包 - - - # cd /tmp/ - # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb - - --2015-06-17 00:19:26-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb - Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d - Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected. - HTTP request sent, awaiting response... 
200 OK - Length: 7384 (7.2K) [application/x-debian-package] - Saving to: ‘puppetlabs-release-trusty.deb’ - - puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.06s - - 2015-06-17 00:19:26 (130 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384] - -下载完成,我们来安装它 - - # dpkg -i puppetlabs-release-trusty.deb - - Selecting previously unselected package puppetlabs-release. - (Reading database ... 85899 files and directories currently installed.) - Preparing to unpack puppetlabs-release-trusty.deb ... - Unpacking puppetlabs-release (1.0-11) ... - Setting up puppetlabs-release (1.0-11) ... - -使用apt包管理命令更新一下本地的软件源 - - # apt-get update - -现在我们就可以安装puppetmaster-passenger了 - - # apt-get install puppetmaster-passenger - -**提示**: 在安装的时候可能会报错**Warning: Setting templatedir is deprecated.请查看 http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')** 不过不用担心,忽略掉它就好,我们只需要在设置配置文件的时候把这一项disable就行了。 - -如何来查看puppet master是否已经安装成功了呢?非常简单,只需要使用下面的命令查看它的版本就可以了。 - - - # puppet --version - - 3.8.1 - -现在我们已经安装好了puppet master。要想使用puppet master apache服务就必须运行起来,因为puppet master进程的运行是依赖于apache的。 -在开始之前,我们将apache服务停止,这样puppet muster也会停止运行。 - - # systemctl stop apache2 - -### 4. 使用Apt工具锁定Master(服务端)版本 ### - -现在已经安装了 3.8.1版的puppet,我们锁定这个版本不让它随意升级,因为升级会造成配置文件混乱。 使用apt工具来锁定它,这里我们需要使用文本编辑器来创建一个新的文件 **/etc/apt/preferences.d/00-puppet.pref** - - # nano /etc/apt/preferences.d/00-puppet.pref - -在新创建的文件里面添加以下内容 - - - # /etc/apt/preferences.d/00-puppet.pref - Package: puppet puppet-common puppetmaster-passenger - Pin: version 3.8* - Pin-Priority: 501 - -这样在以后的系统软件升级中puppet master将被锁住不会跟随系统软件一起升级。 - -### 5. 
配置 Puppet Master### -Puppet master作为一个证书发行机构,所有代理证书的请求都将由它来处理。首先我们要删除所有在软件包安装过程中创建出来的ssl证书。本地默认的puppet证书在/var/lib/puppet/ssl。因此我们只需要使用rm命令来移除这些证书就可以了。 - - - # rm -rf /var/lib/puppet/ssl - -现在来配置这些证书,在创建puppet master证书的时候,需要用使用DNS能查找到的代理节点名称。使用文本编辑器来修改服务器的配置文件puppet.conf - # nano /etc/puppet/puppet.conf - -输出的结果像下面这样 - - [main] - logdir=/var/log/puppet - vardir=/var/lib/puppet - ssldir=/var/lib/puppet/ssl - rundir=/var/run/puppet - factpath=$vardir/lib/facter - templatedir=$confdir/templates - - [master] - # These are needed when the puppetmaster is run by passenger - # and can safely be removed if webrick is used. - ssl_client_header = SSL_CLIENT_S_DN - ssl_client_verify_header = SSL_CLIENT_VERIFY - - -在这我们需要注释掉templatedir 这行使它失效。然后在文件的结尾添加下面的信息。 - - - server = puppetmaster - environment = production - runinterval = 1h - strict_variables = true - certname = puppetmaster - dns_alt_names = puppetmaster, puppetmaster.example.com - -还有很多你可能用的到的配置选项。 如果你有需要,在Puppet实验室有一份详细的描述文件供你阅读。 [Main Config File (puppet.conf)][1]. - -编辑完成后保存退出。 - -使用下面的命令来生成一个新的证书。 - - - # puppet master --verbose --no-daemonize - - Info: Creating a new SSL key for ca - Info: Creating a new SSL certificate request for ca - Info: Certificate Request fingerprint (SHA256): F6:2F:69:89:BA:A5:5E:FF:7F:94:15:6B:A7:C4:20:CE:23:C7:E3:C9:63:53:E0:F2:76:D7:2E:E0:BF:BD:A6:78 - ... 
- Notice: puppetmaster has a waiting certificate request - Notice: Signed certificate request for puppetmaster - Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/ca/requests/puppetmaster.pem' - Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/certificate_requests/puppetmaster.pem' - Notice: Starting Puppet master version 3.8.1 - ^CNotice: Caught INT; storing stop - Notice: Processing stop - -至此,证书已经生成。一旦我们看到 **Notice: Starting Puppet master version 3.8.1**, 表明证书就已经制作好了.我们按下 CTRL-C 回到shell命令行。 - -查看新生成证书的信息,可以使用下面的命令。 - - # puppet cert list -all - - + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com") - -### 6. 创建一个Puppet清单 ### - -默认的主要清单是/etc/puppet/manifests/site.pp。 这个主要清单文件定义着控制哪些代理节点。现在我们来创建一个清单文件 - - # nano /etc/puppet/manifests/site.pp - -在刚打开的文件里面添加下面这几行 - - # execute 'apt-get update' - exec { 'apt-update': # exec resource named 'apt-update' - command => '/usr/bin/apt-get update' # command this resource will run - } - - # install apache2 package - package { 'apache2': - require => Exec['apt-update'], # require 'apt-update' before installing - ensure => installed, - } - - # ensure apache2 service is running - service { 'apache2': - ensure => running, - } - -以上这几行的意思是给代理节点部署apache web 服务 -### 7. 运行puppet Master服务 ### - -已经准备好运行puppet master了,那么开启apache服务来让它启动 - - # systemctl start apache2 - -我们puppet master已经运行,不过它还不能管理任何代理节点。现在我们给puppet master添加代理节点. - -**提示**: 如果报错 **Job for apache2.service failed. 查看"systemctl status apache2.service" and "journalctl -xe" 所给出的信息.** 肯定是apache server有一些问题. 我们可以使用root或是sudo访问权限来运行**apachectl start**查看它输出的日志。 在本教程执行过程中, 我们发现一个证书配置的问题,解决方法如下**/etc/apache2/sites-enabled/puppetmaster.conf**. 
修改其中的**SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem 为 SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem** 然后注释掉后面这行**SSLCertificateKeyFile** . 然后在命令行启动apache -### 8. Puppet客户端安装 ### - -我们已经准备好了puppet的服务端,现在来为代理节点安装客户端。这里我们要给每一个需要管理的节点安装客户端,并且确保这些节点能够通过DNS查询到服务器主机。下面将 puppetnode.example.com作为代理节点安装客户端 -在代理节点服务器上,使用下面的命令下载puppet实验室提供的软件包。 - - - # cd /tmp/ - # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb\ - - --2015-06-17 00:54:42-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb - Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d - Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected. - HTTP request sent, awaiting response... 200 OK - Length: 7384 (7.2K) [application/x-debian-package] - Saving to: ‘puppetlabs-release-trusty.deb’ - - puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.04s - - 2015-06-17 00:54:42 (162 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384] - -在ubuntu 15.04上我们使用debian包管理系统来安装它,命令如下: - - - # dpkg -i puppetlabs-release-trusty.deb - -使用apt包管理命令更新一下本地的软件源 - - # apt-get update - -通过远程仓库安装 - # apt-get install puppet - -Puppet客户端默认是不启动的。这里我们需要使用文本编辑器修改/etc/default/puppet文件,使它正常工作。 - # nano /etc/default/puppet - -更改 **START** 的值改成 "yes" 。 - - START=yes - -最后保存并退出。 - -### 9. 使用APT工具锁定Agent(客户端)版本 ### - -和上面的步骤一样为防止随意升级造成的配置文件混乱,我们要使用apt工具来把它锁定。具体做法是使用文本编辑器创建一个文件 **/etc/apt/preferences.d/00-puppet.pref** - - # nano /etc/apt/preferences.d/00-puppet.pref - -在新建的文件里面加入如下内容 - - # /etc/apt/preferences.d/00-puppet.pref - Package: puppet puppet-common - Pin: version 3.8* - Pin-Priority: 501 - -这样puppet就不会随着系统软件升级而随意升级了。 - -### 10. 
配置puppet代理节点 ### - -我们需要编辑一下代理节点的puppet.conf文件,来使它运行。 - - # nano /etc/puppet/puppet.conf - -它看起来和服务端的配置文件完全一样。同样注释掉**templatedir**这行。不同的是在这里我们需要删除掉所有关于[master]的部分。 - - -假定服务端可用,我们的客户端应该是可以和它相互连接通信的。如果不行我们需要使用完整的主机域名puppetmaster.example.com - - [agent] - server = puppetmaster.example.com - certname = puppetnode.example.com - -在文件的结尾增加上面3行,增加之后文件内容像下面这样。 - - [main] - logdir=/var/log/puppet - vardir=/var/lib/puppet - ssldir=/var/lib/puppet/ssl - rundir=/var/run/puppet - factpath=$vardir/lib/facter - #templatedir=$confdir/templates - - [agent] - server = puppetmaster.example.com - certname = puppetnode.example.com - -最后保存并退出。 - -使用下面的命令来启动客户端软件 - - # systemctl start puppet - -如果一切顺利的话,我们不会看到命令行有任何输出。 第一次运行的时候,代理节点会生成一个ssl证书并且给服务端发送一个请求,经过签名确认后,两台机器就可以互相通信了。 - -**提示**: 如果这是你添加的第一个代理节点,建议你在添加其他节点前先给这个证书签名。一旦能够通过并正常运行,回过头来再添加其他代理节点。 - -### 11. 服务器上的签名证书请求 ### - -第一次运行的时候,代理节点会生成一个ssl证并且给服务端发送一个请求。在主服务器给代理节点服务器证书签名之后,主服务器才能和代理服务器通信并且控制代理服务器。 - -在主服务器上使用下面的命令来列出当前的证书请求 - - # puppet cert list - "puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2 - - 因为只设置了一台代理节点服务器,所以我们将只看到一个请求。 -注意有没有“+”号在前面,代表这个证书有没有被签名。 - -使用**puppet cert sign**到**hostname**这个命令来签署这个签名请求。 - - # puppet cert sign puppetnode.example.com - Notice: Signed certificate request for puppetnode.example.com - Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem' - -服务端只能控制它签名过的代理节点。 - -如想我们想签署所有的请求, 需要使用-all选项,如下所示。 - - # puppet cert sign --all - -### 12.删除一个Puppet证书 ### - -如果我们想移除一个主机,或者想重建一个主机然后再添加它。下面的例子里我们将展示如何删除puppet master上面的一个证书。使用的命令如下: - - # puppet cert clean hostname - Notice: Revoked certificate with serial 5 - Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem' - Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at 
'/var/lib/puppet/ssl/certs/puppetnode.example.com.pem' - -如果我们想查看所有的签署和未签署的请求,使用下面这条命令 - - # puppet cert list --all - + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com") - - -### 13. 部署代理节点Puppet清单 ### - -当配置并完成主puppet清单后,现在我们需要署代理节点服务器清单。要应用并加载主puppet清单,我们可以在代理节点服务器上面使用下面的命令 - - # puppet agent --test - - Info: Retrieving pluginfacts - Info: Retrieving plugin - Info: Caching catalog for puppetnode.example.com - Info: Applying configuration version '1434563858' - Notice: /Stage[main]/Main/Exec[apt-update]/returns: executed successfully - Notice: Finished catalog run in 10.53 seconds - -这里像我们展示了主清单如何去管理一个单一的服务器。 - -如果我们打算运行的puppet清单与主puppet清单没有什么关联,那么需要使用puppet apply 到相应的路径。它仅适用于该代理节点。 - - # puppet apply /etc/puppet/manifest/test.pp - -### 14. 配置特殊节点清单 ### - -如果我们想部署一个清单到某个特定的节点,我们需要进行以下操作。 - -在主服务器上面使用文本编辑器编辑/etc/puppet/manifest/site.pp - - # nano /etc/puppet/manifest/site.pp - -添加下面的内容进去 - - node 'puppetnode', 'puppetnode1' { - # execute 'apt-get update' - exec { 'apt-update': # exec resource named 'apt-update' - command => '/usr/bin/apt-get update' # command this resource will run - } - - # install apache2 package - package { 'apache2': - require => Exec['apt-update'], # require 'apt-update' before installing - ensure => installed, - } - - # ensure apache2 service is running - service { 'apache2': - ensure => running, - } - } - -这里的配置显示我们将在名为puppetnode and puppetnode1的2个特殊节点上面安装apache服务. 这里可以添加其他我们需要安装部署的具体节点进去。 -### 15. 
配置清单模块 ### - -模块化组件组是非常实用的,在Puppet社区有很多人贡献自己的模块组件。 - -在主puppet服务器上, 我们将使用puppet module命令来安装**puppetlabs-apache** 模块。 - - # puppet module install puppetlabs-apache - -**警告**: 千万不要在一个已经部署apache环境的机器上面使用这个模块,否则它将清空你的apache配置。 - -现在用文本编辑器来修改 **site.pp** - - # nano /etc/puppet/manifest/site.pp - -添加下面的内容进去,意思是在 puppetnode上面安装apache服务。 - - node 'puppet-node' { - class { 'apache': } # use apache module - apache::vhost { 'example.com': # define vhost resource - port => '80', - docroot => '/var/www/html' - } - } - -保存退出。这样为我们的代理服务器重新配置部署基础环境。 - -### 总结 ### - -现在我们已经成功的在ubuntu 15.04上面部署并运行puppet来管理代理节点服务器的基础运行环境。我们学习了puppet是如何工作的,编写清单文件,节点与主机间使用ssl证书认证的认证过程。使用puppet管理配置众多的代理节点服务器是非常容易的。如果你有任何的问题,建议,反馈,与我们取得联系,我们将第一时间完善更新,谢谢。 - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/ - -作者:[Arun Pyasi][a] -译者:[ivo-wang](https://github.com/ivo-wang) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ -[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html From 401264a523af0c84402d35c09c0303af2972101f Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 5 Dec 2015 11:26:04 +0800 Subject: [PATCH 121/160] translating --- ...o Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sources/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md b/sources/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md index 38d3d89735..8b7d60f37c 100644 --- a/sources/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md +++ b/sources/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md @@ -1,3 +1,6 @@ +Translating + + How to Remove Banned IP from Fail2ban on CentOS 6 / CentOS 7 
================================================================================ ![](http://www.ehowstuff.com/wp-content/uploads/2015/12/security-265130_1280.jpg) From 81fccd3482b9e078d2c95d1c40ca5d8b448fc80f Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 5 Dec 2015 12:40:08 +0800 Subject: [PATCH 122/160] translated --- ...ed IP from Fail2ban on CentOS 6 or CentOS 7.md | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-) rename {sources => translated}/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md (71%) diff --git a/sources/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md b/translated/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md similarity index 71% rename from sources/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md rename to translated/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md index 8b7d60f37c..a918eff18f 100644 --- a/sources/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md +++ b/translated/tech/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md @@ -1,15 +1,12 @@ -Translating - - -How to Remove Banned IP from Fail2ban on CentOS 6 / CentOS 7 +如何在CentOS 6/7 上移除被Fail2ban禁止的IP ================================================================================ ![](http://www.ehowstuff.com/wp-content/uploads/2015/12/security-265130_1280.jpg) -[Fail2ban][1] is an intrusion prevention software framework that able to protect your server from brute-force attacks. Fail2ban written in the Python programming language and is widely used by most of the VPS servers. Fail2ban will scan log files and IP blacklists that shows signs of malicious, too many password failures, web server exploitation, WordPress plugin attacks and other vulnerabilities. 
If you already installed and used fail2ban to protect your web server, you may be wondering how to find the IP banned or blocked by Fail2ban, or you may want to remove banned ip from fail2ban jail on CentOS 6, CentOS 7, RHEL 6, RHEL 7 and Oracle Linux 6/7. +[Fail2ban][1]是一款用于保护你的服务器免于暴力攻击的入侵保护软件。Fail2ban用python写成,并被广泛用户大多数服务器上。Fail2ban将扫描日志文件和IP黑名单来显示恶意软件、过多的密码失败、web服务器利用、Wordpress插件攻击和其他漏洞。如果你已经安装并使用了fail2ban来保护你的web服务器,你也许会想知道如何在CentOS 6、CentOS 7、RHEL 6、RHEL 7 和 Oracle Linux 6/7中找到被Fail2ban阻止的IP,或者你想将ip从fail2ban监狱中移除。 -### How to List of Banned IP address ### +### 如何列出被禁止的IP ### -To see all the blocked ip addresses, run the following command : +要查看所有被禁止的ip地址,运行下面的命令: # iptables -L Chain INPUT (policy ACCEPT) @@ -43,11 +40,11 @@ To see all the blocked ip addresses, run the following command : REJECT all -- 104.194.26.205 anywhere reject-with icmp-port-unreachable RETURN all -- anywhere anywhere -### How to Remove Banned IP from Fail2ban jail ### +### 如何从Fail2ban中移除IP ### # iptables -D f2b-NoAuthFailures -s banned_ip -j REJECT -I hope this article gives you some ideas and quick guide on remove banned IP from Fail2ban jail on on CentOS 6, CentOS 7, RHEL 6, RHEL 7 and Oracle Linux 6/7. 
+我希望这篇教程可以给你在CentOS 6、CentOS 7、RHEL 6、RHEL 7 和 Oracle Linux 6/7中移除被禁止的ip一些指导。 -------------------------------------------------------------------------------- From 5e575541042729bde0efd89ba97145816b7f6680 Mon Sep 17 00:00:00 2001 From: Flowsnow Date: Sat, 5 Dec 2015 16:07:43 +0800 Subject: [PATCH 123/160] Signed-off-by: Flowsnow MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [已翻译]Part 9 - LFCS: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper --- ...th Yum RPM Apt Dpkg Aptitude and Zypper.md | 230 ------------------ ...th Yum RPM Apt Dpkg Aptitude and Zypper.md | 230 ++++++++++++++++++ 2 files changed, 230 insertions(+), 230 deletions(-) delete mode 100644 sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md create mode 100644 translated/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md diff --git a/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md b/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md deleted file mode 100644 index af967e18d4..0000000000 --- a/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md +++ /dev/null @@ -1,230 +0,0 @@ -Flowsnow translating... -Part 9 - LFCS: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper -================================================================================ -Last August, the Linux Foundation announced the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of succeeding at overall operational support for Linux systems. 
A Linux Foundation Certified Sysadmin has the expertise to ensure effective system support, first-level troubleshooting and monitoring, including finally issue escalation, when needed, to engineering support teams. - -![Linux Package Management](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-9.png) - -Linux Foundation Certified Sysadmin – Part 9 - -Watch the following video that explains about the Linux Foundation Certification Program. - -注:youtube 视频 - - -This article is a Part 9 of 10-tutorial long series, today in this article we will guide you about Linux Package Management, that are required for the LFCS certification exam. - -### Package Management ### - -In few words, package management is a method of installing and maintaining (which includes updating and probably removing as well) software on the system. - -In the early days of Linux, programs were only distributed as source code, along with the required man pages, the necessary configuration files, and more. Nowadays, most Linux distributors use by default pre-built programs or sets of programs called packages, which are presented to users ready for installation on that distribution. However, one of the wonders of Linux is still the possibility to obtain source code of a program to be studied, improved, and compiled. - -**How package management systems work** - -If a certain package requires a certain resource such as a shared library, or another package, it is said to have a dependency. All modern package management systems provide some method of dependency resolution to ensure that when a package is installed, all of its dependencies are installed as well. - -**Packaging Systems** - -Almost all the software that is installed on a modern Linux system will be found on the Internet. 
It can either be provided by the distribution vendor through central repositories (which can contain several thousands of packages, each of which has been specifically built, tested, and maintained for the distribution) or be available in source code that can be downloaded and installed manually. - -Because different distribution families use different packaging systems (Debian: *.deb / CentOS: *.rpm / openSUSE: *.rpm built specially for openSUSE), a package intended for one distribution will not be compatible with another distribution. However, most distributions are likely to fall into one of the three distribution families covered by the LFCS certification. - -**High and low-level package tools** - -In order to perform the task of package management effectively, you need to be aware that you will have two types of available utilities: low-level tools (which handle in the backend the actual installation, upgrade, and removal of package files), and high-level tools (which are in charge of ensuring that the tasks of dependency resolution and metadata searching -”data about the data”- are performed). - -注:表格 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| DISTRIBUTION | LOW-LEVEL TOOL | HIGH-LEVEL TOOL |
| ---------------------- | -------------- | ------------------ |
| Debian and derivatives | dpkg | apt-get / aptitude |
| CentOS | rpm | yum |
| openSUSE | rpm | zypper |
- -Let us see the descrption of the low-level and high-level tools. - -dpkg is a low-level package manager for Debian-based systems. It can install, remove, provide information about and build *.deb packages but it can’t automatically download and install their corresponding dependencies. - -- Read More: [15 dpkg Command Examples][1] - -apt-get is a high-level package manager for Debian and derivatives, and provides a simple way to retrieve and install packages, including dependency resolution, from multiple sources using the command line. Unlike dpkg, apt-get does not work directly with *.deb files, but with the package proper name. - -- Read More: [25 apt-get Command Examples][2] - -aptitude is another high-level package manager for Debian-based systems, and can be used to perform management tasks (installing, upgrading, and removing packages, also handling dependency resolution automatically) in a fast and easy way. It provides the same functionality as apt-get and additional ones, such as offering access to several versions of a package. - -rpm is the package management system used by Linux Standard Base (LSB)-compliant distributions for low-level handling of packages. Just like dpkg, it can query, install, verify, upgrade, and remove packages, and is more frequently used by Fedora-based distributions, such as RHEL and CentOS. - -- Read More: [20 rpm Command Examples][3] - -yum adds the functionality of automatic updates and package management with dependency management to RPM-based systems. As a high-level tool, like apt-get or aptitude, yum works with repositories. - -- Read More: [20 yum Command Examples][4] -- -### Common Usage of Low-Level Tools ### - -The most frequent tasks that you will do with low level tools are as follows: - -**1. Installing a package from a compiled (*.deb or *.rpm) file** - -The downside of this installation method is that no dependency resolution is provided. 
You will most likely choose to install a package from a compiled file when such package is not available in the distribution’s repositories and therefore cannot be downloaded and installed through a high-level tool. Since low-level tools do not perform dependency resolution, they will exit with an error if we try to install a package with unmet dependencies. - - # dpkg -i file.deb [Debian and derivative] - # rpm -i file.rpm [CentOS / openSUSE] - -**Note**: Do not attempt to install on CentOS a *.rpm file that was built for openSUSE, or vice-versa! - -**2. Upgrading a package from a compiled file** - -Again, you will only upgrade an installed package manually when it is not available in the central repositories. - - # dpkg -i file.deb [Debian and derivative] - # rpm -U file.rpm [CentOS / openSUSE] - -**3. Listing installed packages** - -When you first get your hands on an already working system, chances are you’ll want to know what packages are installed. - - # dpkg -l [Debian and derivative] - # rpm -qa [CentOS / openSUSE] - -If you want to know whether a specific package is installed, you can pipe the output of the above commands to grep, as explained in [manipulate files in Linux – Part 1][6] of this series. Suppose we need to verify if package mysql-common is installed on an Ubuntu system. - - # dpkg -l | grep mysql-common - -![Check Installed Packages in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Installed-Package.png) - -Check Installed Packages - -Another way to determine if a package is installed. - - # dpkg --status package_name [Debian and derivative] - # rpm -q package_name [CentOS / openSUSE] - -For example, let’s find out whether package sysdig is installed on our system. - - # rpm -qa | grep sysdig - -![Check sysdig Package](http://www.tecmint.com/wp-content/uploads/2014/11/Check-sysdig-Package.png) - -Check sysdig Package - -**4. 
Finding out which package installed a file** - - # dpkg --search file_name - # rpm -qf file_name - -For example, which package installed pw_dict.hwm? - - # rpm -qf /usr/share/cracklib/pw_dict.hwm - -![Query File in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Query-File-in-Linux.png) - -Query File in Linux - -### Common Usage of High-Level Tools ### - -The most frequent tasks that you will do with high level tools are as follows. - -**1. Searching for a package** - -aptitude update will update the list of available packages, and aptitude search will perform the actual search for package_name. - - # aptitude update && aptitude search package_name - -In the search all option, yum will search for package_name not only in package names, but also in package descriptions. - - # yum search package_name - # yum search all package_name - # yum whatprovides “*/package_name” - -Let’s supposed we need a file whose name is sysdig. To know that package we will have to install, let’s run. - - # yum whatprovides “*/sysdig” - -![Check Package Description in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Description.png) - -Check Package Description - -whatprovides tells yum to search the package the will provide a file that matches the above regular expression. - - # zypper refresh && zypper search package_name [On openSUSE] - -**2. Installing a package from a repository** - -While installing a package, you may be prompted to confirm the installation after the package manager has resolved all dependencies. Note that running update or refresh (according to the package manager being used) is not strictly necessary, but keeping installed packages up to date is a good sysadmin practice for security and dependency reasons. - - # aptitude update && aptitude install package_name [Debian and derivatives] - # yum update && yum install package_name [CentOS] - # zypper refresh && zypper install package_name [openSUSE] - -**3. 
Removing a package** - -The option remove will uninstall the package but leaving configuration files intact, whereas purge will erase every trace of the program from your system. -# aptitude remove / purge package_name -# yum erase package_name - - ---Notice the minus sign in front of the package that will be uninstalled, openSUSE --- - - # zypper remove -package_name - -Most (if not all) package managers will prompt you, by default, if you’re sure about proceeding with the uninstallation before actually performing it. So read the onscreen messages carefully to avoid running into unnecessary trouble! - -**4. Displaying information about a package** - -The following command will display information about the birthday package. - - # aptitude show birthday - # yum info birthday - # zypper info birthday - -![Check Package Information in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Information.png) - -Check Package Information - -### Summary ### - -Package management is something you just can’t sweep under the rug as a system administrator. You should be prepared to use the tools described in this article at a moment’s notice. Hope you find it useful in your preparation for the LFCS exam and for your daily tasks. Feel free to leave your comments or questions below. We will be more than glad to get back to you as soon as possible. 
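Since the high-level commands above differ across the three families only in the tool name, a small shell helper can pick the right one at run time. The sketch below is illustrative and not part of the original tutorial — the `detect_pkg_tool` name is our own, and it assumes at most one of the three tools matters on a given host:

```shell
#!/bin/sh
# Print the first high-level package tool found on this system,
# in the order covered in this article:
#   apt-get (Debian and derivatives), yum (CentOS), zypper (openSUSE)
detect_pkg_tool() {
    for tool in apt-get yum zypper; do
        if command -v "$tool" >/dev/null 2>&1; then
            printf '%s\n' "$tool"
            return 0
        fi
    done
    printf 'none\n'
}

detect_pkg_tool
```

On a Debian-based host this prints `apt-get`; a wrapper script could then run, for example, `"$(detect_pkg_tool)" install package_name`.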
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-package-management/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/dpkg-command-examples/ -[2]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/ -[3]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/ -[4]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ -[5]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ diff --git a/translated/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md b/translated/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md new file mode 100644 index 0000000000..2781dde63d --- /dev/null +++ b/translated/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md @@ -0,0 +1,230 @@ +Flowsnow translating... 
+LFCS系列第九讲: 使用Yum, RPM, Apt, Dpkg, Aptitude, Zypper进行Linux包管理 +================================================================================ +去年八月, Linux基金会宣布了一个全新的LFCS(Linux Foundation Certified Sysadmin,Linux基金会认证系统管理员)认证计划,这对广大系统管理员来说是一个很好的机会,管理员们可以通过绩效考试来表明自己可以成功支持Linux系统的整体运营。 当需要的时候一个Linux基金会认证的系统管理员有足够的专业知识来确保系统高效运行,提供第一手的故障诊断和监视,并且为工程师团队在问题升级时提供智能决策。 + +![Linux Package Management](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-9.png) + +Linux基金会认证系统管理员 – 第九讲 + +请观看下面关于Linux基金会认证计划的演示。 + +注:youtube 视频 + + +本文是本系列十套教程中的第九讲,今天在这篇文章中我们会引导你学习Linux包管理,这也是LFCS认证考试所需要的。 + +### 包管理 ### + +简单的说,包管理是系统中安装和维护软件的一种方法,其中维护也包含更新和卸载。 + +在Linux早期,程序只以源代码的方式发行,还带有所需的用户使用手册和必备的配置文件,甚至更多。现如今,大多数发行商使用默认的预装程序或者被称为包的程序集合。用户使用这些预装程序或者包来安装该发行版本。然而,Linux最伟大的一点是我们仍然能够获得程序的源代码用来学习、改进和编译。 + +**包管理系统是如何工作的** + +如果某一个包需要一定的资源,如共享库,或者需要另一个包,据说就会存在依赖性问题。所有现在的包管理系统提供了一些解决依赖性的方法,以确保当安装一个包时,相关的依赖包也安装好了 + +**打包系统** + +几乎所有安装在现代Linux系统上的软件都会在互联网上找到。它要么能够通过中央库(中央库能包含几千个包,每个包都已经构建、测试并且维护好了)发行商得到,要么能够直接得到可以下载和手动安装的源代码。 + +由于不同的发行版使用不同的打包系统(Debian的*.deb文件/ CentOS的*.rpm文件/ openSUSE的专门为openSUSE构建的*.rpm文件),因此为一个发行版本开发的包会与其他发行版本不兼容。然而,大多数发行版本都可能是LFCS认证的三个发行版本之一。 + +**高级和低级打包工具** + +为了有效地进行包管理的任务,你需要知道,你将有两种类型的实用工具:低级工具(能在后端实际安装,升级,卸载包文件),以及高级工具(负责确保能很好的执行依赖性解决和元数据检索的任务,元数据也称为关于数据的数据)。 + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| 发行版 | 低级工具 | 高级工具 |
| ------------------ | ---- | ------------------ |
| Debian版及其衍生版 | dpkg | apt-get / aptitude |
| CentOS版 | rpm | yum |
| openSUSE版 | rpm | zypper |
+ +让我们来看下低级工具和高级工具的描述。 + +dpkg的是基于Debian系统中的一个低级包管理器。它可以安装,删除,提供有关资料,并建立*.deb包,但它不能自动下载并安装它们相应的依赖包。 + +- 阅读更多: [15个dpkg命令实例][1] + +apt-get是Debian和衍生版的高级包管理器,并提供命令行方式从多个来源检索和安装软件包,其中包括解决依赖性。和dpkg不同的是,apt-get不是直接基于.deb文件工作,而是基于包的正确名称。 + +- 阅读更多: [25个apt-get命令实力][2] + +Aptitude是基于Debian的系统的另一个高级包管理器,它可用于快速简便的执行管理任务(安装,升级和删除软件包,还可以自动处理解决依赖性)。与atp-get和额外的包管理器相比,它提供了相同的功能,例如提供对包的几个版本的访问。 + +rpm是Linux标准基础(LSB)兼容发布版使用的一种包管理器,用来对包进行低级处理。就像dpkg,rpm可以查询,安装,检验,升级和卸载软件包,并能被基于Fedora的系统频繁地使用,比如RHEL和CentOS。 + +- 阅读更多: [20个rpm命令实例][3] + +相对于基于RPM的系统,yum增加了系统自动更新的功能和带依赖性管理的包管理功能。作为一个高级工具,和apt-get或者aptitude相似,yum基于库工作。 + +- 阅读更多: [20个yum命令实例][4] +- +### 低级工具的常见用法 ### + +你用低级工具处理最常见的任务如下。 + +**1. 从已编译(*.deb或*.rpm)的文件安装一个包** + +这种安装方法的缺点是没有提供解决依赖性的方案。当你在发行版本库中无法获得某个包并且又不能通过高级工具下载安装时,你很可能会从一个已编译文件安装该包。因为低级工具不需要解决依赖性问题,所以当安装一个没有解决依赖性的包时会出现出错并且退出。 + + # dpkg -i file.deb [Debian版和衍生版] + # rpm -i file.rpm [CentOS版 / openSUSE版] + +**注意**: 不要试图在CentOS中安装一个为openSUSE构建的.rpm文件,反之亦然! + +**2. 从已编译文件中更新一个包** + +同样,当中央库中没有某安装包时,你只能手动升级该包。 + + # dpkg -i file.deb [Debian版和衍生版] + # rpm -U file.rpm [CentOS版 / openSUSE版] + +**3. 列举安装的包** + +当你第一次接触一个已经在工作中的系统时,很可能你会想知道安装了哪些包。 + + # dpkg -l [Debian版和衍生版] + # rpm -qa [CentOS版 / openSUSE版] + +如果你想知道一个特定的包安装在哪儿, 你可以使用管道命令从以上命令的输出中去搜索,这在这个系列的[操作Linux文件 – 第一讲][5] 中有介绍。假定我们需要验证mysql-common这个包是否安装在Ubuntu系统中。 + + # dpkg -l | grep mysql-common + +![Check Installed Packages in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Installed-Package.png) + +检查安装的包 + +另外一种方式来判断一个包是否已安装。 + + # dpkg --status package_name [Debian版和衍生版] + # rpm -q package_name [CentOS版 / openSUSE版] + +例如,让我们找出sysdig包是否安装在我们的系统。 + + # rpm -qa | grep sysdig + +![Check sysdig Package](http://www.tecmint.com/wp-content/uploads/2014/11/Check-sysdig-Package.png) + +检查sysdig包 + +**4. 查询一个文件是由那个包安装的** + + # dpkg --search file_name + # rpm -qf file_name + +例如,pw_dict.hwm文件是由那个包安装的? 
+ + # rpm -qf /usr/share/cracklib/pw_dict.hwm + +![Query File in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Query-File-in-Linux.png) + +Linux中查询文件 + +### 高级工具的常见用法 ### + +你用高级工具处理最常见的任务如下。 + +**1. 搜索包** + +aptitude更新将会更新可用的软件包列表,并且aptitude搜索会根据包名进行实际性的搜索。 + + # aptitude update && aptitude search package_name + +在搜索所有选项中,yum不仅可以通过包名还可以通过包的描述搜索程序包。 + + # yum search package_name + # yum search all package_name + # yum whatprovides “*/package_name” + +假定我们需要一个名为sysdig的包,要知道的是我们需要先安装然后才能运行。 + + # yum whatprovides “*/sysdig” + +![Check Package Description in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Description.png) + +检查包描述 + +whatprovides告诉yum搜索一个含有能够匹配上述正则表达式的文件的包。 + + # zypper refresh && zypper search package_name [在openSUSE上] + +**2. 从仓库安装一个包** + +当安装一个包时,在包管理器解决了所有依赖性问题后,可能会提醒你确认安装。需要注意的是运行更新或刷新(根据所使用的软件包管理器)不是绝对必要,但是考虑到安全性和依赖性的原因,保持安装的软件包是最新的是一个好的系统管理员的做法。 + + # aptitude update && aptitude install package_name [Debian版和衍生版] + # yum update && yum install package_name [CentOS版] + # zypper refresh && zypper install package_name [openSUSE版] + +**3. 卸载包** + +按选项卸载将会卸载软件包,但把配置文件保留完好,然而清除包从系统中完全删去该程序。 +# aptitude remove / purge package_name +# yum erase package_name + + ---注意要卸载的openSUSE包前面的减号 --- + + # zypper remove -package_name + +在默认情况下,大部分(如果不是全部)的包管理器会提示你,在你实际卸载之前你是否确定要继续卸载。所以,请仔细阅读屏幕上的信息,以避免陷入不必要的麻烦! + +**4. 
显示包的信息** + +下面的命令将会显示birthday这个包的信息。 + + # aptitude show birthday + # yum info birthday + # zypper info birthday + +![Check Package Information in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Information.png) + +检查包信息 + +### 总结 ### + +作为一个系统管理员,包管理器是你不能回避的东西。你应该立即准备使用本文中介绍的这些工具。希望你在准备LFCS考试和日常工作中会觉得这些工具好用。欢迎在下面留下您的意见或问题,我们将尽可能快的回复你。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-package-management/ + +作者:[Gabriel Cánepa][a] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/dpkg-command-examples/ +[2]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/ +[3]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/ +[4]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ +[5]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ \ No newline at end of file From 1404b5581120b793b53f75a0fb6ff8aace971632 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 5 Dec 2015 19:34:37 +0800 Subject: [PATCH 124/160] Translating sources/tech/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md --- ... 
How to install Android Studio on Ubuntu 15.04 or CentOS 7.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md b/sources/tech/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md index 29569329c9..e0a8272395 100644 --- a/sources/tech/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md +++ b/sources/tech/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md @@ -1,3 +1,4 @@ +ictlyh Translating How to install Android Studio on Ubuntu 15.04 / CentOS 7 ================================================================================ With the advancement of smart phones in the recent years, Android has become one of the biggest phone platforms and all the tools required to build Android applications are also freely available. Android Studio is an Integrated Development Environment (IDE) for developing Android applications based on [IntelliJ IDEA][1]. It is a free and open source software by Google released in 2014 and succeeds Eclipse as the main IDE. 
From 8d68281c0599f22eb4e01456c41e72e777824024 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 5 Dec 2015 23:13:14 +0800 Subject: [PATCH 125/160] PUB:Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache @ictlyh --- ...Network Security Service NSS for Apache.md | 78 ++++++++++--------- 1 file changed, 40 insertions(+), 38 deletions(-) rename {translated/tech => published}/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md (62%) diff --git a/translated/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md b/published/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md similarity index 62% rename from translated/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md rename to published/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md index 5ff1f9fe65..574e9dc594 100644 --- a/translated/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md +++ b/published/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md @@ -1,11 +1,13 @@ -RHCE 系列: 使用网络安全服务(NSS)为 Apache 通过 TLS 实现 HTTPS +RHCE 系列(八):在 Apache 上使用网络安全服务(NSS)实现 HTTPS ================================================================================ -如果你是一个负责维护和确保 web 服务器安全的系统管理员,你不能不花费最大的精力确保服务器中处理和通过的数据任何时候都受到保护。 + +如果你是一个负责维护和确保 web 服务器安全的系统管理员,你需要花费最大的精力确保服务器中处理和通过的数据任何时候都受到保护。 + ![使用 SSL/TLS 设置 Apache HTTPS](http://www.tecmint.com/wp-content/uploads/2015/09/Setup-Apache-SSL-TLS-Server.png) -RHCE 系列:第八部分 - 使用网络安全服务(NSS)为 Apache 通过 TLS 实现 HTTPS +*RHCE 系列:第八部分 - 使用网络安全服务(NSS)为 Apache 通过 TLS 实现 HTTPS* -为了在客户端和服务器之间提供更安全的连接,作为 HTTP 和 SSL(安全套接层)或者最近称为 TLS(传输层安全)的组合,产生了 HTTPS 协议。 
+为了在客户端和服务器之间提供更安全的连接,作为 HTTP 和 SSL(Secure Sockets Layer,安全套接层)或者最近称为 TLS(Transport Layer Security,传输层安全)的组合,产生了 HTTPS 协议。 由于一些严重的安全漏洞,SSL 已经被更健壮的 TLS 替代。由于这个原因,在这篇文章中我们会解析如何通过 TLS 实现你 web 服务器和客户端之间的安全连接。 @@ -22,11 +24,11 @@ RHCE 系列:第八部分 - 使用网络安全服务(NSS)为 Apache 通过 # firewall-cmd --permanent –-add-service=http # firewall-cmd --permanent –-add-service=https -然后安装一些必须软件包: +然后安装一些必需的软件包: # yum update && yum install openssl mod_nss crypto-utils -**重要**:请注意如果你想使用 OpenSSL 库而不是 NSS(网络安全服务)实现 TLS,你可以在上面的命令中用 mod\_ssl 替换 mod\_nss(使用哪一个取决于你,但在这篇文章中由于更加健壮我们会使用 NSS;例如,它支持最新的加密标准,比如 PKCS #11)。 +**重要**:请注意如果你想使用 OpenSSL 库而不是 NSS(Network Security Service,网络安全服务)实现 TLS,你可以在上面的命令中用 mod\_ssl 替换 mod\_nss(使用哪一个取决于你,但在这篇文章中我们会使用 NSS,因为它更加安全,比如说,它支持最新的加密标准,比如 PKCS #11)。 如果你使用 mod\_nss,首先要卸载 mod\_ssl,反之如此。 @@ -54,15 +56,15 @@ nss.conf – 配置文件 下一步,在 `/etc/httpd/conf.d/nss.conf` 配置文件中做以下更改: -1. 指定 NSS 数据库目录。你可以使用默认的目录或者新建一个。本文中我们使用默认的: +1、 指定 NSS 数据库目录。你可以使用默认的目录或者新建一个。本文中我们使用默认的: NSSCertificateDatabase /etc/httpd/alias -2. 通过保存密码到数据库目录中的 /etc/httpd/nss-db-password.conf 文件避免每次系统启动时要手动输入密码: +2、 通过保存密码到数据库目录中的 `/etc/httpd/nss-db-password.conf` 文件来避免每次系统启动时要手动输入密码: NSSPassPhraseDialog file:/etc/httpd/nss-db-password.conf -其中 /etc/httpd/nss-db-password.conf 只包含以下一行,其中 mypassword 是后面你为 NSS 数据库设置的密码: +其中 `/etc/httpd/nss-db-password.conf` 只包含以下一行,其中 mypassword 是后面你为 NSS 数据库设置的密码: internal:mypassword @@ -71,27 +73,27 @@ nss.conf – 配置文件 # chmod 640 /etc/httpd/nss-db-password.conf # chgrp apache /etc/httpd/nss-db-password.conf -3. 由于 POODLE SSLv3 漏洞,红帽建议停用 SSL 和 TLSv1.0 之前所有版本的 TLS(更多信息可以查看[这里][2])。 +3、 由于 POODLE SSLv3 漏洞,红帽建议停用 SSL 和 TLSv1.0 之前所有版本的 TLS(更多信息可以查看[这里][2])。 确保 NSSProtocol 指令的每个实例都类似下面一样(如果你没有托管其它虚拟主机,很可能只有一条): NSSProtocol TLSv1.0,TLSv1.1 -4. 由于这是一个自签名证书,Apache 会拒绝重启,并不会识别为有效发行人。由于这个原因,对于这种特殊情况我们还需要添加: +4、 由于这是一个自签名证书,Apache 会拒绝重启,并不会识别为有效发行人。由于这个原因,对于这种特殊情况我们还需要添加: NSSEnforceValidCerts off -5. 
虽然并不是严格要求,为 NSS 数据库设置一个密码同样很重要: +5、 虽然并不是严格要求,为 NSS 数据库设置一个密码同样很重要: # certutil -W -d /etc/httpd/alias ![为 NSS 数据库设置密码](http://www.tecmint.com/wp-content/uploads/2015/09/Set-Password-for-NSS-Database.png) -为 NSS 数据库设置密码 +*为 NSS 数据库设置密码* ### 创建一个 Apache SSL 自签名证书 ### -下一步,我们会创建一个自签名证书为我们的客户机识别服务器(请注意这个方法对于生产环境并不是最好的选择;对于生产环境你应该考虑购买第三方可信证书机构验证的证书,例如 DigiCert)。 +下一步,我们会创建一个自签名证书来让我们的客户机可以识别服务器(请注意这个方法对于生产环境并不是最好的选择;对于生产环境你应该考虑购买第三方可信证书机构验证的证书,例如 DigiCert)。 我们用 genkey 命令为 box1 创建有效期为 365 天的 NSS 兼容证书。完成这一步后: @@ -101,19 +103,19 @@ nss.conf – 配置文件 ![创建 Apache SSL 密钥](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Apache-SSL-Key.png) -创建 Apache SSL 密钥 +*创建 Apache SSL 密钥* 你可以使用默认的密钥大小(2048),然后再次选择 Next: ![选择 Apache SSL 密钥大小](http://www.tecmint.com/wp-content/uploads/2015/09/Select-Apache-SSL-Key-Size.png) -选择 Apache SSL 密钥大小 +*选择 Apache SSL 密钥大小* 等待系统生成随机比特: ![生成随机密钥比特](http://www.tecmint.com/wp-content/uploads/2015/09/Generating-Random-Bits.png) -生成随机密钥比特 +*生成随机密钥比特* 为了加快速度,会提示你在控制台输入随机字符,正如下面的截图所示。请注意当没有从键盘接收到输入时进度条是如何停止的。然后,会让你选择: @@ -124,35 +126,35 @@ nss.conf – 配置文件 注:youtube 视频 -最后,会提示你输入之前设置的密码到 NSS 证书: +最后,会提示你输入之前给 NSS 证书设置的密码: # genkey --nss --days 365 box1 ![Apache NSS 证书密码](http://www.tecmint.com/wp-content/uploads/2015/09/Apache-NSS-Password.png) -Apache NSS 证书密码 +*Apache NSS 证书密码* -在任何时候你都可以用以下命令列出现有的证书: +需要的话,你可以用以下命令列出现有的证书: # certutil –L –d /etc/httpd/alias ![列出 Apache NSS 证书](http://www.tecmint.com/wp-content/uploads/2015/09/List-Apache-Certificates.png) -列出 Apache NSS 证书 +*列出 Apache NSS 证书* -然后通过名字删除(除非严格要求,用你自己的证书名称替换 box1): +然后通过名字删除(如果你真的需要删除的,用你自己的证书名称替换 box1): # certutil -d /etc/httpd/alias -D -n "box1" -如果你需要继续的话: +如果你需要继续进行的话,请继续阅读。 ### 测试 Apache SSL HTTPS 连接 ### -最后,是时候测试到我们服务器的安全连接了。当你用浏览器打开 https://,你会看到著名的信息 “This connection is untrusted”: +最后,是时候测试到我们服务器的安全连接了。当你用浏览器打开 https://\,你会看到著名的信息 “This connection is untrusted”: ![检查 Apache SSL 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Apache-SSL-Connection.png) -检查 Apache 
SSL 连接 +*检查 Apache SSL 连接* 在上面的情况中,你可以点击添加例外(Add Exception) 然后确认安全例外(Confirm Security Exception) - 但先不要这么做。让我们首先来看看证书看它的信息是否和我们之前输入的相符(如截图所示)。 @@ -160,37 +162,37 @@ Apache NSS 证书密码 ![确认 Apache SSL 证书详情](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Apache-SSL-Certificate-Details.png) -确认 Apache SSL 证书详情 +*确认 Apache SSL 证书详情* -现在你继续,确认例外(限于此次或永久),然后会通过 https 把你带到你 web 服务器的 DocumentRoot 目录,在这里你可以使用你浏览器自带的开发者工具检查连接详情: +现在你可以继续,确认例外(限于此次或永久),然后会通过 https 把你带到你 web 服务器的 DocumentRoot 目录,在这里你可以使用你浏览器自带的开发者工具检查连接详情: -在火狐浏览器中,你可以通过在屏幕中右击然后从上下文菜单中选择检查元素(Inspect Element)启动,尤其是通过网络选项卡: +在火狐浏览器中,你可以通过在屏幕中右击,然后从上下文菜单中选择检查元素(Inspect Element)启动开发者工具,尤其要看“网络”选项卡: ![检查 Apache HTTPS 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Inspect-Apache-HTTPS-Connection.png) -检查 Apache HTTPS 连接 +*检查 Apache HTTPS 连接* 请注意这和之前显示的在验证过程中输入的信息一致。还有一种方式通过使用命令行工具测试连接: -左边(测试 SSLv3): +左图(测试 SSLv3): # openssl s_client -connect localhost:443 -ssl3 -右边(测试 TLS): +右图(测试 TLS): # openssl s_client -connect localhost:443 -tls1 ![测试 Apache SSL 和 TLS 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Testing-Apache-SSL-and-TLS.png) -测试 Apache SSL 和 TLS 连接 +*测试 Apache SSL 和 TLS 连接* -参考上面的截图了解更相信信息。 +参考上面的截图了解更详细信息。 ### 总结 ### -我确信你已经知道,使用 HTTPS 会增加会在你站点中输入个人信息的访客的信任(从用户名和密码到任何商业/银行账户信息)。 +我想你已经知道,使用 HTTPS 会增加会在你站点中输入个人信息的访客的信任(从用户名和密码到任何商业/银行账户信息)。 -在那种情况下,你会希望获得由可信验证机构签名的证书,正如我们之前解释的(启用的步骤和发送 CSR 到 CA 然后获得签名证书的例子相同);另外的情况,就是像我们的例子中一样使用自签名证书。 +在那种情况下,你会希望获得由可信验证机构签名的证书,正如我们之前解释的(步骤和设置需要启用例外的证书的步骤相同,发送 CSR 到 CA 然后获得返回的签名证书);否则,就像我们的例子中一样使用自签名证书即可。 要获取更多关于使用 NSS 的详情,可以参考关于 [mod-nss][3] 的在线帮助。如果你有任何疑问或评论,请告诉我们。 @@ -200,11 +202,11 @@ via: http://www.tecmint.com/create-apache-https-self-signed-certificate-using-ns 作者:[Gabriel Cánepa][a] 译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://www.tecmint.com/install-lamp-in-centos-7/ 
-[1]:http://www.tecmint.com/author/gacanepa/ +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:https://linux.cn/article-5789-1.html [2]:https://access.redhat.com/articles/1232123 [3]:https://git.fedorahosted.org/cgit/mod_nss.git/plain/docs/mod_nss.html \ No newline at end of file From 9248a086a3a68df962b8ee7073a905c2606a88bf Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 6 Dec 2015 00:35:36 +0800 Subject: [PATCH 126/160] Delete 20151201 Linux and Unix Port Scanning With netcat [nc] Command.md --- ... Port Scanning With netcat [nc] Command.md | 97 ------------------- 1 file changed, 97 deletions(-) delete mode 100644 sources/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md diff --git a/sources/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md b/sources/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md deleted file mode 100644 index f4019db6eb..0000000000 --- a/sources/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md +++ /dev/null @@ -1,97 +0,0 @@ -translation by strugglingyouth -Linux and Unix Port Scanning With netcat [nc] Command -================================================================================ -How do I find out which ports are opened on my own server? How do I run port scanning using the nc command instead of [the nmap command on a Linux or Unix-like][1] systems? - -The nmap (“Network Mapper”) is an open source tool for network exploration and security auditing. If nmap is not installed and you do not wish to use all of nmap options you can use netcat/nc command for scanning ports. This may useful to know which ports are open and running services on a target machine. You can use [nmap command for port scanning][2] too. - -### How do I use nc to scan Linux, UNIX and Windows server port scanning? ### - -If nmap is not installed try nc / netcat command as follow. 
The -z flag can be used to tell nc to report open ports, rather than initiate a connection. Run nc command with -z flag. You need to specify host name / ip along with the port range to limit and speedup operation: - - ## syntax ## - nc -z -v {host-name-here} {port-range-here} - nc -z -v host-name-here ssh - nc -z -v host-name-here 22 - nc -w 1 -z -v server-name-here port-Number-her - - ## scan 1 to 1023 ports ## - nc -zv vip-1.vsnl.nixcraft.in 1-1023 - -Sample outputs: - - Connection to localhost 25 port [tcp/smtp] succeeded! - Connection to vip-1.vsnl.nixcraft.in 25 port [tcp/smtp] succeeded! - Connection to vip-1.vsnl.nixcraft.in 80 port [tcp/http] succeeded! - Connection to vip-1.vsnl.nixcraft.in 143 port [tcp/imap] succeeded! - Connection to vip-1.vsnl.nixcraft.in 199 port [tcp/smux] succeeded! - Connection to vip-1.vsnl.nixcraft.in 783 port [tcp/*] succeeded! - Connection to vip-1.vsnl.nixcraft.in 904 port [tcp/vmware-authd] succeeded! - Connection to vip-1.vsnl.nixcraft.in 993 port [tcp/imaps] succeeded! - -You can scan individual port too: - - nc -zv v.txvip1 443 - nc -zv v.txvip1 80 - nc -zv v.txvip1 22 - nc -zv v.txvip1 21 - nc -zv v.txvip1 smtp - nc -zvn v.txvip1 ftp - - ## really fast scanner with 1 timeout value ## - netcat -v -z -n -w 1 v.txvip1 1-1023 - -Sample outputs: - -![Fig.01: Linux/Unix: Use Netcat to Establish and Test TCP and UDP Connections on a Server](http://s0.cyberciti.org/uploads/faq/2007/07/scan-with-nc.jpg) - -Fig.01: Linux/Unix: Use Netcat to Establish and Test TCP and UDP Connections on a Server - -Where, - -1. -z : Port scanning mode i.e. zero I/O mode. -1. -v : Be verbose [use twice -vv to be more verbose]. -1. -n : Use numeric-only IP addresses i.e. do not use DNS to resolve ip addresses. -1. -w 1 : Set time out value to 1. 
- -More examples: - - $ netcat -z -vv www.cyberciti.biz http - www.cyberciti.biz [75.126.153.206] 80 (http) open - sent 0, rcvd 0 - $ netcat -z -vv google.com https - DNS fwd/rev mismatch: google.com != maa03s16-in-f2.1e100.net - DNS fwd/rev mismatch: google.com != maa03s16-in-f6.1e100.net - DNS fwd/rev mismatch: google.com != maa03s16-in-f5.1e100.net - DNS fwd/rev mismatch: google.com != maa03s16-in-f3.1e100.net - DNS fwd/rev mismatch: google.com != maa03s16-in-f8.1e100.net - DNS fwd/rev mismatch: google.com != maa03s16-in-f0.1e100.net - DNS fwd/rev mismatch: google.com != maa03s16-in-f7.1e100.net - DNS fwd/rev mismatch: google.com != maa03s16-in-f4.1e100.net - google.com [74.125.236.162] 443 (https) open - sent 0, rcvd 0 - $ netcat -v -z -n -w 1 192.168.1.254 1-1023 - (UNKNOWN) [192.168.1.254] 989 (ftps-data) open - (UNKNOWN) [192.168.1.254] 443 (https) open - (UNKNOWN) [192.168.1.254] 53 (domain) open - -See also - -- [Scanning network for open ports with the nmap command][3] for more info. -- Man pages - [nc(1)][4], [nmap(1)][5] - --------------------------------------------------------------------------------- - -via: http://www.cyberciti.biz/faq/linux-port-scanning/ - -作者:Vivek Gite -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:http://www.cyberciti.biz/networking/nmap-command-examples-tutorials/ -[2]:http://www.cyberciti.biz/tips/linux-scanning-network-for-open-ports.html -[3]:http://www.cyberciti.biz/networking/nmap-command-examples-tutorials/ -[4]:http://www.manpager.com/linux/man1/nc.1.html -[5]:http://www.manpager.com/linux/man1/nmap.1.html From 292729924948886b8fbf967d50335462c3a7665c Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 6 Dec 2015 00:36:08 +0800 Subject: [PATCH 127/160] Create 20151201 Linux and Unix Port Scanning With netcat [nc] Command.md --- ... 
Port Scanning With netcat [nc] Command.md | 99 +++++++++++++++++++
 1 file changed, 99 insertions(+)
 create mode 100644 translated/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md

diff --git a/translated/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md b/translated/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md
new file mode 100644
index 0000000000..9672c222af
--- /dev/null
+++ b/translated/tech/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md
@@ -0,0 +1,99 @@
+
+使用 netcat [nc] 命令对 Linux 和 Unix 进行端口扫描
+================================================================================
+
+我如何在自己的服务器上找出哪些端口是开放的?如何使用 nc 命令进行端口扫描来替换 [Linux 或类 Unix 中的 nmap 命令][1]?
+
+nmap(“Network Mapper”)是一个用于网络探测和安全审核的开源工具。如果没有安装 nmap,或者你不希望使用 nmap,那么可以用 netcat/nc 命令进行端口扫描。它对于查看目标计算机上哪些端口是开放的或者运行着服务是非常有用的。你也可以使用 [nmap 命令进行端口扫描][2]。
+
+### 如何使用 nc 来扫描 Linux、UNIX 和 Windows 服务器的端口呢? ###
+
+如果未安装 nmap,如下所示,试试 nc/netcat 命令。-z 参数用来告诉 nc 报告开放的端口,而不是发起连接。在 nc 命令中使用 -z 参数时,你需要在主机名/ip 后面指定端口的范围来限制和加速其运行:
+
+    ## 语法 ##
+    nc -z -v {host-name-here} {port-range-here}
+    nc -z -v host-name-here ssh
+    nc -z -v host-name-here 22
+    nc -w 1 -z -v server-name-here port-Number-here
+
+    ## 扫描 1 到 1023 端口 ##
+    nc -zv vip-1.vsnl.nixcraft.in 1-1023
+
+输出示例:
+
+    Connection to localhost 25 port [tcp/smtp] succeeded!
+    Connection to vip-1.vsnl.nixcraft.in 25 port [tcp/smtp] succeeded!
+    Connection to vip-1.vsnl.nixcraft.in 80 port [tcp/http] succeeded!
+    Connection to vip-1.vsnl.nixcraft.in 143 port [tcp/imap] succeeded!
+    Connection to vip-1.vsnl.nixcraft.in 199 port [tcp/smux] succeeded!
+    Connection to vip-1.vsnl.nixcraft.in 783 port [tcp/*] succeeded!
+    Connection to vip-1.vsnl.nixcraft.in 904 port [tcp/vmware-authd] succeeded!
+    Connection to vip-1.vsnl.nixcraft.in 993 port [tcp/imaps] succeeded!
+
+你也可以扫描单个端口:
+
+    nc -zv v.txvip1 443
+    nc -zv v.txvip1 80
+    nc -zv v.txvip1 22
+    nc -zv v.txvip1 21
+    nc -zv v.txvip1 smtp
+    nc -zvn v.txvip1 ftp
+
+    ## 设置超时值为 1 的快速扫描 ##
+    netcat -v -z -n -w 1 v.txvip1 1-1023
+
+输出示例:
+
+![Fig.01: Linux/Unix: Use Netcat to Establish and Test TCP and UDP Connections on a Server](http://s0.cyberciti.org/uploads/faq/2007/07/scan-with-nc.jpg)
+
+图01:Linux/Unix:使用 Netcat 在服务器上建立并测试 TCP 和 UDP 连接。
+
+其中:
+
+1. -z : 端口扫描模式,即零 I/O 模式。
+1. -v : 显示详细信息 [使用 -vv 可以输出更详细的信息]。
+1. -n : 使用纯数字 IP 地址,即不用 DNS 来解析 IP 地址。
+1. -w 1 : 将超时值设置为 1。
+
+更多例子:
+
+    $ netcat -z -vv www.cyberciti.biz http
+    www.cyberciti.biz [75.126.153.206] 80 (http) open
+    sent 0, rcvd 0
+    $ netcat -z -vv google.com https
+    DNS fwd/rev mismatch: google.com != maa03s16-in-f2.1e100.net
+    DNS fwd/rev mismatch: google.com != maa03s16-in-f6.1e100.net
+    DNS fwd/rev mismatch: google.com != maa03s16-in-f5.1e100.net
+    DNS fwd/rev mismatch: google.com != maa03s16-in-f3.1e100.net
+    DNS fwd/rev mismatch: google.com != maa03s16-in-f8.1e100.net
+    DNS fwd/rev mismatch: google.com != maa03s16-in-f0.1e100.net
+    DNS fwd/rev mismatch: google.com != maa03s16-in-f7.1e100.net
+    DNS fwd/rev mismatch: google.com != maa03s16-in-f4.1e100.net
+    google.com [74.125.236.162] 443 (https) open
+    sent 0, rcvd 0
+    $ netcat -v -z -n -w 1 192.168.1.254 1-1023
+    (UNKNOWN) [192.168.1.254] 989 (ftps-data) open
+    (UNKNOWN) [192.168.1.254] 443 (https) open
+    (UNKNOWN) [192.168.1.254] 53 (domain) open
+
+也可以看看:
+
+- [使用 nmap 命令扫描网络中开放的端口][3]。
+- 手册页 - [nc(1)][4], [nmap(1)][5]
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/linux-port-scanning/
+
+作者:Vivek Gite
+译者:[strugglingyouth](https://github.com/strugglingyouth)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.cyberciti.biz/networking/nmap-command-examples-tutorials/ +[2]:http://www.cyberciti.biz/tips/linux-scanning-network-for-open-ports.html +[3]:http://www.cyberciti.biz/networking/nmap-command-examples-tutorials/ +[4]:http://www.manpager.com/linux/man1/nc.1.html +[5]:http://www.manpager.com/linux/man1/nmap.1.html From 9ad6d5c6593ff82f228174c80c3a5bf9448ade47 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sun, 6 Dec 2015 09:52:04 +0800 Subject: [PATCH 128/160] [Translated] sources/tech/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md --- ...roid Studio on Ubuntu 15.04 or CentOS 7.md | 140 ------------------ ...roid Studio on Ubuntu 15.04 or CentOS 7.md | 139 +++++++++++++++++ 2 files changed, 139 insertions(+), 140 deletions(-) delete mode 100644 sources/tech/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md create mode 100644 translated/tech/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md diff --git a/sources/tech/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md b/sources/tech/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md deleted file mode 100644 index e0a8272395..0000000000 --- a/sources/tech/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md +++ /dev/null @@ -1,140 +0,0 @@ -ictlyh Translating -How to install Android Studio on Ubuntu 15.04 / CentOS 7 -================================================================================ -With the advancement of smart phones in the recent years, Android has become one of the biggest phone platforms and all the tools required to build Android applications are also freely available. Android Studio is an Integrated Development Environment (IDE) for developing Android applications based on [IntelliJ IDEA][1]. 
It is free and open source software released by Google in 2014, and it succeeds Eclipse as the main IDE.
-
-In this article, we will learn how to install Android Studio on Ubuntu 15.04 and CentOS 7.
-
-### Installation on Ubuntu 15.04 ###
-
-We can install Android Studio in two ways. One is to set up the required repository and install it; the other is to download it from the official Android site and install it locally. In the following example, we will be setting up the repo using the command line and install it. Before proceeding, we need to make sure that we have JDK version 1.6 or greater installed.
-
-Here, I'm installing JDK 1.8.
-
-    $ sudo add-apt-repository ppa:webupd8team/java
-
-    $ sudo apt-get update
-
-    $ sudo apt-get install oracle-java8-installer oracle-java8-set-default
-
-Verify if the java installation was successful:
-
-    poornima@poornima-Lenovo:~$ java -version
-
-Now, set up the repo for installing Android Studio:
-
-    $ sudo apt-add-repository ppa:paolorotolo/android-studio
-
-![Android-Studio-repo](http://blog.linoxide.com/wp-content/uploads/2015/11/Android-studio-repo.png)
-
-    $ sudo apt-get update
-
-    $ sudo apt-get install android-studio
-
-The above install command will install android-studio in the directory /opt.
-
-Now, run the following command to start the setup wizard:
-
-    $ /opt/android-studio/bin/studio.sh
-
-This will invoke the setup screen. Following are the screen shots that follow to set up Android Studio:
-
-![Android Studio setup](http://blog.linoxide.com/wp-content/uploads/2015/11/Studio-setup.png)
-
-![Install-type](http://blog.linoxide.com/wp-content/uploads/2015/11/Install-type.png)
-
-![Emulator Settings](http://blog.linoxide.com/wp-content/uploads/2015/11/Emulator-settings.png)
-
-Once you press the Finish button, the licence agreement will be displayed. After you accept the licence, it starts downloading the required components.
-
-![Download components](http://blog.linoxide.com/wp-content/uploads/2015/11/Download.png)
-
-Android Studio installation will be complete after this step. 
When you relaunch Android Studio, you will be shown the following welcome screen, from where you will be able to start working with your Android Studio.
-
-![Welcome screen](http://blog.linoxide.com/wp-content/uploads/2015/11/Welcome-screen.png)
-
-### Installation on CentOS 7 ###
-
-Let us now learn how to install Android Studio on CentOS 7. Here also, you need to install JDK 1.6 or later. Remember to use 'sudo' before the commands if you are not a root user. You can download the [latest version][2] of JDK. In case you already have an older version installed, remove it before installing the new one. In the below example, I will be installing JDK version 1.8.0_65 by downloading the required rpm.
-
-    [root@li1260-39 ~]# rpm -ivh jdk-8u65-linux-x64.rpm
-    Preparing... ################################# [100%]
-    Updating / installing...
-    1:jdk1.8.0_65-2000:1.8.0_65-fcs ################################# [100%]
-    Unpacking JAR files...
-    tools.jar...
-    plugin.jar...
-    javaws.jar...
-    deploy.jar...
-    rt.jar...
-    jsse.jar...
-    charsets.jar...
-    localedata.jar...
-    jfxrt.jar...
-
-If the Java path is not set properly, you will get error messages. Hence, set the correct path:
-
-    export JAVA_HOME=/usr/java/jdk1.8.0_65/
-    export PATH=$PATH:$JAVA_HOME/bin
-
-Check if the correct version has been installed:
-
-    [root@li1260-39 ~]# java -version
-    java version "1.8.0_65"
-    Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
-    Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
-
-If you notice any error message of the sort "unable-to-run-mksdcard-sdk-tool:" while trying to install Android Studio, you might also have to install the following packages on CentOS 7 64-bit:
-
-    glibc.i686
-
-    glibc-devel.i686
-
-    libstdc++.i686
-
-    zlib-devel.i686
-
-    ncurses-devel.i686
-
-    libX11-devel.i686
-
-    libXrender.i686
-
-    libXrandr.i686
-
-Let us now install the studio by downloading the IDE file from the [Android site][3] and unzipping it.
-
-    [root@li1260-39 tmp]# unzip android-studio-ide-141.2343393-linux.zip
-
-Move the android-studio directory to the /opt directory:
-
-    [root@li1260-39 tmp]# mv /tmp/android-studio/ /opt/
-
-You can create a symlink to the studio executable to quickly start it whenever you need it.
-
-    [root@li1260-39 tmp]# ln -s /opt/android-studio/bin/studio.sh /usr/local/bin/android-studio
-
-Now launch the studio from a terminal:
-
-    [root@localhost ~]# studio
-
-The screens that follow for completing the installation are the same as the ones shown above for Ubuntu. When the installation completes, you can start creating your own Android applications.
-
-### Conclusion ###
-
-Within a year of its release, Android Studio has taken over as the primary IDE for Android development by eclipsing Eclipse. It is the only official IDE tool that will support future Android SDKs and other Android features provided by Google. So, what are you waiting for? Go install Android Studio and have fun developing Android apps.
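The manual CentOS steps above can also be collected into a single function. The sketch below is an illustrative addition, not part of the original article; the archive name is the one used in this article, and with DRY_RUN=1 (the default) each command is only printed so the sequence can be reviewed before running it for real as root:

```shell
#!/usr/bin/env bash
# Sketch of the manual CentOS install steps from this article.
# DRY_RUN=1 (the default) prints each command instead of executing it.
install_android_studio() {
    local archive=${1:-android-studio-ide-141.2343393-linux.zip}
    # Assumes the archive has already been downloaded to /tmp.
    local -a steps=(
        "unzip /tmp/$archive -d /tmp"
        "mv /tmp/android-studio /opt/"
        "ln -s /opt/android-studio/bin/studio.sh /usr/local/bin/android-studio"
    )
    local step
    for step in "${steps[@]}"; do
        if [ "${DRY_RUN:-1}" = 1 ]; then
            echo "would run: $step"
        else
            eval "$step"
        fi
    done
}

install_android_studio
```

Run it once with the default dry run to inspect the commands, then rerun with DRY_RUN=0 as root to apply them.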
- --------------------------------------------------------------------------------- - -via: http://linoxide.com/tools/install-android-studio-ubuntu-15-04-centos-7/ - -作者:[B N Poornima][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/bnpoornima/ -[1]:https://www.jetbrains.com/idea/ -[2]:http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html -[3]:http://developer.android.com/sdk/index.html \ No newline at end of file diff --git a/translated/tech/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md b/translated/tech/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md new file mode 100644 index 0000000000..11d2e9b5b4 --- /dev/null +++ b/translated/tech/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md @@ -0,0 +1,139 @@ +如何在 Ubuntu 15.04 / CentOS 7 上安装 Android Studio +================================================================================ +随着最近几年智能手机的进步,安卓成为了最大的手机平台之一,也有很多免费的用于开发安卓应用的工具。Android Studio 是基于 [IntelliJ IDEA][1] 用于开发安卓应用的集成开发环境。它是 Google 2014 年发布的免费开源软件,继 Eclipse 之后成为主要的 IDE。 + +在这篇文章,我们一起来学习如何在 Ubuntu 15.04 和 CentOS 7 上安装 Android Studio。 + +### 在 Ubuntu 15.04 上安装 ### + +我们可以用两种方式安装 Android Studio。第一种是配置必须的库然后再安装它;另一种是从 Android 官方网站下载然后再本地编译安装。在下面的例子中,我们会使用命令行设置库并安装它。在继续下一步之前,我们需要确保我们已经安装了 JDK 1.6 或者更新版本。 + +这里,我打算安装 JDK 1.8。 + + $ sudo add-apt-repository ppa:webupd8team/java + + $ sudo apt-get update + + $ sudo apt-get install oracle-java8-installer oracle-java8-set-default + +验证 java 是否安装成功: + + poornima@poornima-Lenovo:~$ java -version + +现在,设置安装 Android Studio 需要的库 + + $ sudo apt-add-repository ppa:paolorotolo/android-studio + +![Android-Studio-repo](http://blog.linoxide.com/wp-content/uploads/2015/11/Android-studio-repo.png) + + $ sudo apt-get update + + $ sudo apt-get install android-studio + +上面的安装命令会在 
/opt 目录下面安装 Android Studio。 + +现在,运行下面的命令启动安装窗口: + + $ /opt/android-studio/bin/studio.sh + +这会激活安装窗口。下面的截图展示了安装 Android Studio 的过程。 + +![安装 Android Studio](http://blog.linoxide.com/wp-content/uploads/2015/11/Studio-setup.png) + +![安装类型](http://blog.linoxide.com/wp-content/uploads/2015/11/Install-type.png) + +![设置模拟器](http://blog.linoxide.com/wp-content/uploads/2015/11/Emulator-settings.png) + +你点击了 Finish 按钮之后,就会显示同意协议页面。当你接受协议之后,它就开始下载需要的组件。 + +![下载组件](http://blog.linoxide.com/wp-content/uploads/2015/11/Download.png) + +这一步之后就完成了 Android Studio 的安装。当你重启 Android Studio 时,你会看到下面的欢迎界面,从这里你可以开始用 Android Studio 工作了。 + +![欢迎界面](http://blog.linoxide.com/wp-content/uploads/2015/11/Welcome-screen.png) + +### 在 CentOS 7 上安装 ### + +现在再让我们来看看如何在 CentOS 7 上安装 Android Studio。这里你同样需要安装 JDK 1.6 或者更新版本。如果你不是 root 用户,记得在命令前面使用 ‘sudo’。你可以下载[最新版本][2]的 JDK。如果你已经安装了一个比较旧的版本,在安装新的版本之前你需要先卸载旧版本。在下面的例子中,我会通过下载需要的 rpm 包安装 JDK 1.8.0_65。 + + [root@li1260-39 ~]# rpm -ivh jdk-8u65-linux-x64.rpm + Preparing... ################################# [100%] + Updating / installing... + 1:jdk1.8.0_65-2000:1.8.0_65-fcs ################################# [100%] + Unpacking JAR files... + tools.jar... + plugin.jar... + javaws.jar... + deploy.jar... + rt.jar... + jsse.jar... + charsets.jar... + localedata.jar... + jfxrt.jar... 
+ +如果没有正确设置 Java 路径,你会看到错误信息。因此,设置正确的路径: + + export JAVA_HOME=/usr/java/jdk1.8.0_25/ + export PATH=$PATH:$JAVA_HOME + +检查是否安装了正确的版本: + + [root@li1260-39 ~]# java -version + java version "1.8.0_65" + Java(TM) SE Runtime Environment (build 1.8.0_65-b17) + Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode) + +如果你安装 Android Studio 的时候看到任何类似 “unable-to-run-mksdcard-sdk-tool:” 的错误信息,你可能要在 CentOS 7 64 位系统中安装以下软件包: + + glibc.i686 + + glibc-devel.i686 + + libstdc++.i686 + + zlib-devel.i686 + + ncurses-devel.i686 + + libX11-devel.i686 + + libXrender.i686 + + libXrandr.i686 + +通过从 [Android 网站][3] 下载 IDE 文件然后解压安装 studio 也是一样的。 + + [root@li1260-39 tmp]# unzip android-studio-ide-141.2343393-linux.zip + +移动 android-studio 目录到 /opt 目录 + + [root@li1260-39 tmp]# mv /tmp/android-studio/ /opt/ + +需要的话你可以创建一个到 studio 可执行文件的符号链接用于快速启动。 + + [root@li1260-39 tmp]# ln -s /opt/android-studio/bin/studio.sh /usr/local/bin/android-studio + +现在在终端中启动 studio: + + [root@localhost ~]#studio + +之后用于完成安装的截图和前面 Ubuntu 安装过程中的是一样的。安装完成后,你就可以开始开发你自己的 Android 应用了。 + +### 总结 ### + +虽然发布不到一年,但是 Android Studio 已经替代 Eclipse 成为了安装开发最主要的 IDE。它是唯一一个能支持之后 Google 提供的 Android SDKs 和其它 Android 特性的官方 IDE 工具。那么,你还在等什么呢?赶快安装 Android Studio 然后体验开发安装应用的乐趣吧。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/tools/install-android-studio-ubuntu-15-04-centos-7/ + +作者:[B N Poornima][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/bnpoornima/ +[1]:https://www.jetbrains.com/idea/ +[2]:http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html +[3]:http://developer.android.com/sdk/index.html \ No newline at end of file From aaba87b2f2cd603f6824089c5e5b42d538ac8cfe Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 6 Dec 2015 20:33:09 +0800 Subject: [PATCH 129/160] 
=?UTF-8?q?20151206-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...20151206 Supporting secure DNS in glibc.md | 46 +++++++++++++++++++ 1 file changed, 46 insertions(+) create mode 100644 sources/tech/20151206 Supporting secure DNS in glibc.md diff --git a/sources/tech/20151206 Supporting secure DNS in glibc.md b/sources/tech/20151206 Supporting secure DNS in glibc.md new file mode 100644 index 0000000000..8933c3c891 --- /dev/null +++ b/sources/tech/20151206 Supporting secure DNS in glibc.md @@ -0,0 +1,46 @@ +Supporting secure DNS in glibc +======================== + +Credit: Jonathan Corbet + +One of the many weak links in Internet security is the domain name system (DNS); it is subject to attacks that, among other things, can mislead applications regarding the IP address of a system they wish to connect to. That, in turn, can cause connections to go to the wrong place, facilitating man-in-the-middle attacks and more. The DNSSEC protocol extensions are meant to address this threat by setting up a cryptographically secure chain of trust for DNS information. When DNSSEC is set up properly, applications should be able to trust the results of domain lookups. As the discussion over an attempt to better integrate DNSSEC into the GNU C Library shows, though, ensuring that DNS lookups are safe is still not a straightforward problem. + +In a sense, the problem was solved years ago; one can configure a local nameserver to perform full DNSSEC verification and use that server via glibc calls in applications. DNSSEC can even be used to increase security in other areas; it can, for example, carry SSH or TLS key fingerprints, allowing applications to verify that they are talking to the right server. 
Things get tricky, though, when one wants to be sure that DNS results claiming to have DNSSEC verification are actually what they claim to be — when one wants the security that DNSSEC is meant to provide, in other words. + +The /etc/resolv.conf problem + +Part of the problem, from the glibc perspective, is that glibc itself does not do DNSSEC verification. Instead, it consults /etc/resolv.conf and asks the servers found therein to do the lookup and verification; the results are then returned to the application. If the application is using the low-level res_query() interface, those results may include the "authenticated data" (AD) flag (if the nameserver has set it) indicating that DNSSEC verification has been successfully performed. But glibc knows nothing about the trustworthiness of the nameserver that has provided those results, so it cannot tell the application anything about whether they should really be trusted. + +One of the first steps suggested by glibc maintainer Carlos O'Donell is to add an option (dns-strip-dnssec-ad-bit) to the resolv.conf file telling glibc to unconditionally remove the AD bit. This option could be set by distributions to indicate that the DNS lookup results cannot be trusted at a DNSSEC level. Once things have been set up so that the results can be trusted, that option can be removed. In the meantime, though, applications would have a way to judge the DNS lookup results they get from glibc, something that does not exist now. + +What would a trustworthy setup look like? The standard picture looks something like this: there is a local nameserver, accessed via the loopback interface, as the only entry in /etc/resolv.conf. That nameserver would be configured to do verification and, in the case that verification fails, simply return no results at all. There would, in almost all cases, be no need to worry about whether applications see the AD bit or not; if the results are not trustworthy, applications will simply not see them at all. 
A number of distributions are moving toward this model, but the situation is still not as simple as some might think. + +One problem is that this scheme makes /etc/resolv.conf into a central point of trust for the system. But, in a typical Linux system, there are no end of DHCP clients, networking scripts, and more that will make changes to that file. As Paul Wouters pointed out, locking down this file in the short term is not really an option. Sometimes those changes are necessary: when a diskless system is booting, it may need name-resolution service before it is at a point where it can start up its own nameserver. A system's entire DNS environment may change depending on which network it is attached to. Systems in containers may be best configured to talk to a nameserver on the host. And so on. + +So there seems to be a general belief that /etc/resolv.conf cannot really be trusted on current systems. Ideas to add secondary configuration files (/etc/secure-resolv.conf or whatever) have been floated, but they don't much change the basic nature of the situation. Beyond that, some participants felt that even a local nameserver running on the loopback interface is not really trustworthy; Zack Weinberg suggested that administrators might intentionally short out DNSSEC validation, for example. + +Since the configuration cannot be trusted on current systems, the reasoning goes, glibc needs to have a way to indicate to applications when the situation has improved and things can be trusted. That could include the AD-stripping option described above (or, conversely, an explicit "this nameserver is trusted" option); that, of course, would require that the system be locked down to a level where surprising changes to /etc/resolv.conf no longer happen. A variant, as suggested by Petr Spacek, is to have a way for an application to ask glibc whether it is talking to a local nameserver or not. + +Do it in glibc? 

An alternative would be to dispense with the nameserver and have glibc do DNSSEC validation itself. There is, however, resistance to putting a big pile of cryptographic code into glibc itself. That would increase the size of the library and, it is felt, increase the attack surface of any application using it. A variant of this idea, suggested by Zack, would be to put the validation code into the name-service caching daemon (nscd) instead. Since nscd is part of glibc, it is under the control of the glibc developers and there could be a certain amount of confidence that DNSSEC validation is being performed properly. The location of the nscd socket is well known, so the /etc/resolv.conf issues don't come into play. Carlos worried, though, that this approach might deter adoption by users who do not want the caching features of nscd; in his mind, that seems to rule out the nscd option.

So, in the short term, at least, it seems unlikely that glibc will take on the full task of performing validated DNSSEC lookups. That means that, if security-conscious applications are going to use glibc for their name lookups, the library will have to provide an indication of how trustworthy the results received from a separate nameserver are. And that will almost certainly require explicit action on the part of the distributor and/or system administrator. As Simo Sorce put it:

A situation in which glibc does not use an explicit configuration option to signal applications that it is using a trusted resolver is not useful ... no scratch that, it is actively harmful, because applications developers will quickly realize they cannot trust any information coming from glibc and will simply not use it for DNSSEC related information.

Configuring a system to properly use DNSSEC involves change to many of the components of that system — it is a distribution-wide problem that will take time to solve fully. 
The role that glibc plays in this transition is likely to be relatively small, but it is an important one: glibc is probably the only place where applications can receive some assurance that their DNS results are trustworthy without implementing their own resolver code. Running multiple DNSSEC implementations on a system seems like an unlikely path to greater security, so it would be good to get this right. + +The glibc project has not yet chosen a path by which it intends to get things right, though some sort of annotation in /etc/resolv.conf looks like a likely outcome. Any such change would then have to get into a release; given the conservative nature of glibc development, it may already be late for the 2.23 release, which is likely to happen in February. So higher DNSSEC awareness in glibc may not happen right away, but there is at least some movement in that direction. + +--------------------------- + +via: https://lwn.net/Articles/663474/ + +作者:Jonathan Corbet + +译者:[译者ID](https://github.com/译者ID) + +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 15edcca1ec9037d32e2105e523839207b2b8675a Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 6 Dec 2015 20:46:13 +0800 Subject: [PATCH 130/160] =?UTF-8?q?20151206-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit net, ipv6, security --- ...anager and privacy in the IPv6 internet.md | 54 +++++++++++++++++++ 1 file changed, 54 insertions(+) create mode 100644 sources/tech/20151206 NetworkManager and privacy in the IPv6 internet.md diff --git a/sources/tech/20151206 NetworkManager and privacy in the IPv6 internet.md b/sources/tech/20151206 NetworkManager and privacy in the IPv6 internet.md new file mode 100644 index 0000000000..b485db2927 --- /dev/null +++ b/sources/tech/20151206 NetworkManager and privacy in the IPv6 internet.md @@ -0,0 +1,54 @@ +NetworkManager and 
privacy in the IPv6 internet
+======================
+
+IPv6 is gaining momentum. With growing use of the protocol, concerns about privacy that were not initially anticipated arise. The Internet community actively publishes solutions to them. What's the current state and how does NetworkManager catch up? Let's figure out!
+
+![](https://blogs.gnome.org/lkundrak/files/2015/12/cameras1.jpg)
+
+## The identity of an IPv6-connected host
+
+IPv6-enabled nodes don't need a central authority similar to IPv4 [DHCP](https://tools.ietf.org/html/rfc2132) servers to configure their addresses. They discover the networks they are in and [complete the addresses themselves](https://tools.ietf.org/html/rfc4862) by generating the host part. This makes the network configuration simpler and scales better to larger networks. However, there are some drawbacks to this approach. Firstly, the node needs to ensure that its address doesn't collide with an address of any other node on the network. Secondly, if the node uses the same host part of the address in every network it enters then its movement can be tracked and its privacy is at risk.
+
+The Internet Engineering Task Force (IETF), the organization behind the Internet standards, [acknowledged this problem](https://tools.ietf.org/html/draft-iesg-serno-privacy-00) and recommends against the use of hardware serial numbers to identify the node in the network.
+
+But what does the actual implementation look like?
+
+The problem of address uniqueness is addressed with the [Duplicate Address Detection](https://tools.ietf.org/html/rfc4862#section-5.4) (DAD) mechanism. When a node creates an address for itself it first checks whether another node uses the same address using the [Neighbor Discovery Protocol](https://tools.ietf.org/html/rfc4861) (a mechanism not unlike the IPv4 [ARP](https://tools.ietf.org/html/rfc826) protocol). When it discovers the address is already used, it must discard it.
+
+The other problem (privacy) is a bit harder to solve.
An IP address (be it IPv4 or IPv6) consists of a network part and a host part. The host discovers the relevant network parts and is supposed to generate the host part itself. Traditionally it just uses an Interface Identifier derived from the network hardware's (MAC) address. The MAC address is set at manufacturing time and can uniquely identify the machine. This guarantees the address is stable and unique. That's a good thing for address collision avoidance but a bad thing for privacy. The host part remaining constant in different networks means that the machine can be uniquely identified as it enters different networks. This seemed like a non-issue at the time the protocol was designed, but privacy concerns arose as IPv6 gained popularity. Fortunately, there's a solution to this problem.
+
+## Enter privacy extensions
+
+It's no secret that the biggest problem with IPv4 is that the addresses are scarce. This is no longer true with IPv6, and in fact an IPv6-enabled host can use addresses quite liberally. There's absolutely nothing wrong with having multiple IPv6 addresses attached to the same interface. On the contrary, it's a pretty standard situation. At the very minimum, each node has a link-local address, which is used for contacting nodes on the same hardware link. When the network contains a router that connects it to other networks in the internet, a node has an address for every network it's directly connected to. If a host has more addresses in the same network the node accepts incoming traffic for all of them. For the outgoing connections, which of course reveal the address to the remote host, the kernel picks the fittest one. But which one is it?
+
+With privacy extensions enabled, as defined by [RFC4941](https://tools.ietf.org/html/rfc4941), a new address with a random host part is generated every now and then. The newest one is used for new outgoing connections while the older ones are deprecated when they're unused.
This is a nifty trick — the host does not reveal the stable address as it's not used for outgoing connections, but still accepts connections to it from the hosts that are aware of it.
+
+There's a downside to this. Certain applications tie the address to the user's identity. Consider a web application that issues an HTTP cookie for the user during authentication but only accepts it for connections that come from the address that conducted the authentication. As the kernel generates a new temporary address, the server would reject the requests that use it, effectively logging the user out. It could be argued that the address is not an appropriate mechanism for establishing the user's identity, but that's what some real-world applications do.
+
+## Privacy stable addressing to the rescue
+
+Another approach is needed to cope with this. There's a need for an address that is unique (of course) and stable for a particular network, but that still changes when the user enters another network so that tracking is not possible. RFC7217 introduces a mechanism that provides exactly this.
+
+Creation of a privacy stable address relies on a pseudo-random key that's known only to the host itself and never revealed to other hosts in the network. This key is then hashed using a cryptographically secure algorithm along with values specific to a particular network connection. These include an identifier of the network interface, the network prefix and possibly other values specific to the network, such as the wireless SSID. The use of the secret key makes it impossible for other hosts to predict the resulting address, while the network-specific data causes it to be different when entering a different network.
+
+This also solves the duplicate address problem nicely. The random key makes collisions unlikely. If, in spite of this, a collision occurs, then the hash can be salted with a DAD failure counter and a different address can be generated instead of failing the network connectivity.
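The derivation just described can be sketched in a few lines of Python. This is a simplified illustration of the idea only, not the real kernel or NetworkManager code; the function name, the field separator and the choice of HMAC-SHA256 are assumptions made for this sketch (RFC 7217 itself leaves the exact hash function up to the implementation):

```python
import hashlib
import hmac
import ipaddress

def stable_privacy_address(secret_key: bytes, prefix: str,
                           if_name: str, net_id: str = "",
                           dad_counter: int = 0) -> str:
    """Derive a stable, privacy-preserving address for one /64 network.

    The host-secret key is mixed with the network prefix, the interface
    name, an optional network identifier (e.g. the wireless SSID) and a
    DAD retry counter; 64 bits of the digest become the host part.
    """
    net = ipaddress.IPv6Network(prefix)
    data = b"|".join([
        net.network_address.packed,     # network prefix bytes
        if_name.encode(),               # interface identifier
        net_id.encode(),                # e.g. SSID, may be empty
        dad_counter.to_bytes(4, "big"), # bumped on address collision
    ])
    digest = hmac.new(secret_key, data, hashlib.sha256).digest()
    host_part = int.from_bytes(digest[:8], "big")  # 64-bit Interface Identifier
    return str(net.network_address + host_part)

# Same key and same network: the address is stable across reconnects.
# A different prefix, or a bumped DAD counter after a collision,
# yields a completely different, unpredictable address.
```

Without knowing `secret_key`, another host cannot predict the address, yet the same host regenerates the identical address every time it joins the same network.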
Now that's clever.
+
+Using a privacy stable address doesn't interfere with the privacy extensions at all. You can use the [RFC7217](https://tools.ietf.org/html/rfc7217) stable address while still employing the RFC4941 temporary addresses at the same time.
+
+## Where does NetworkManager stand?
+
+We've already enabled the privacy extensions with the NetworkManager 1.0.4 release. They're turned on by default; you can control them with the ipv6.ip6-privacy property.
+
+With the release of NetworkManager 1.2, we're adding the stable privacy addressing. It's supposed to address the situations where the privacy extensions don't make the cut. The use of the feature is controlled with the ipv6.addr-gen-mode property. If it's set to stable-privacy then stable privacy addressing is used. Setting it to "eui64" or not setting it at all preserves the traditional default behavior.
+
+Stay tuned for the NetworkManager 1.2 release in early 2016! If you want to try the bleeding-edge snapshot, give Fedora Rawhide a try. It will eventually become Fedora 24.
+
+*I'd like to thank Hannes Frederic Sowa for his valuable feedback. The article would make less sense without his corrections. Hannes also created the in-kernel implementation of the RFC7217 mechanism which can be used when the networking is not managed by NetworkManager.*
+
+--------------------------------------------------------------------------------
+
+via: https://blogs.gnome.org/lkundrak/2015/12/03/networkmanager-and-privacy-in-the-ipv6-internet/
+作者:[Lubomir Rintel]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

From b22800f39ca0498574ef1293a8b3734ca0d059d3 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 6 Dec 2015 21:33:52 +0800
Subject: [PATCH 131/160] PUB:Part 9 - How to Setup Postfix Mail Server (SMTP)
 using null-client Configuration

@ictlyh
---
 ...
(SMTP) using null-client Configuration.md | 75 ++++++++----------- 1 file changed, 33 insertions(+), 42 deletions(-) rename {translated/tech => published}/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md (55%) diff --git a/translated/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md b/published/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md similarity index 55% rename from translated/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md rename to published/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md index ccc67dbb30..59d6d9de0c 100644 --- a/translated/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md +++ b/published/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md @@ -1,25 +1,25 @@ -第九部分 - 如果使用零客户端配置 Postfix 邮件服务器(SMTP) +RHCE 系列(九):如何使用无客户端配置 Postfix 邮件服务器(SMTP) ================================================================================ -尽管现在有很多在线联系方式,邮件仍然是一个人传递信息给远在世界尽头或办公室里坐在我们旁边的另一个人的有效方式。 +尽管现在有很多在线联系方式,电子邮件仍然是一个人传递信息给远在世界尽头或办公室里坐在我们旁边的另一个人的有效方式。 -下面的图描述了邮件从发送者发出直到信息到达接收者收件箱的传递过程。 +下面的图描述了电子邮件从发送者发出直到信息到达接收者收件箱的传递过程。 -![邮件如何工作](http://www.tecmint.com/wp-content/uploads/2015/09/How-Mail-Setup-Works.png) +![电子邮件如何工作](http://www.tecmint.com/wp-content/uploads/2015/09/How-Mail-Setup-Works.png) -邮件如何工作 +*电子邮件如何工作* -要使这成为可能,背后发生了好多事情。为了使邮件信息从一个客户端应用程序(例如 [Thunderbird][1]、Outlook,或者网络邮件服务,例如 Gmail 或 Yahoo 邮件)到一个邮件服务器,并从其到目标服务器并最终到目标接收人,每个服务器上都必须有 SMTP(简单邮件传输协议)服务。 +要实现这一切,背后发生了好多事情。为了使电子邮件信息从一个客户端应用程序(例如 [Thunderbird][1]、Outlook,或者 web 邮件服务,例如 Gmail 或 Yahoo 邮件)投递到一个邮件服务器,并从其投递到目标服务器并最终到目标接收人,每个服务器上都必须有 SMTP(简单邮件传输协议)服务。 -这就是为什么我们要在这篇博文中介绍如何在 RHEL 7 中设置 SMTP 服务器,从中本地用户发送的邮件(甚至发送到本地用户)被转发到一个中央邮件服务器以便于访问。 +这就是为什么我们要在这篇博文中介绍如何在 RHEL 7 中设置 SMTP 
服务器,从本地用户发送的邮件(甚至发送到另外一个本地用户)被转发(forward)到一个中央邮件服务器以便于访问。 -在实际需求中这称为零客户端安装。 +在这个考试的要求中这称为无客户端(null-client)安装。 -在我们的测试环境中将包括一个原始邮件服务器和一个中央服务器或中继主机。 +在我们的测试环境中将包括一个起源(originating)邮件服务器和一个中央服务器或中继主机(relayhost)。 - 原始邮件服务器: (主机名: box1.mydomain.com / IP: 192.168.0.18) - 中央邮件服务器: (主机名: mail.mydomain.com / IP: 192.168.0.20) +- 起源邮件服务器: (主机名: box1.mydomain.com / IP: 192.168.0.18) +- 中央邮件服务器: (主机名: mail.mydomain.com / IP: 192.168.0.20) -为了域名解析我们在两台机器中都会使用有名的 /etc/hosts 文件: +我们在两台机器中都会使用你熟知的 `/etc/hosts` 文件做名字解析: 192.168.0.18 box1.mydomain.com box1 192.168.0.20 mail.mydomain.com mail @@ -28,34 +28,29 @@ 首先,我们需要(在两台机器上): -**1. 安装 Postfix:** +**1、 安装 Postfix:** # yum update && yum install postfix -**2. 启动服务并启用开机自动启动:** +**2、 启动服务并启用开机自动启动:** # systemctl start postfix # systemctl enable postfix -**3. 允许邮件流量通过防火墙:** +**3、 允许邮件流量通过防火墙:** # firewall-cmd --permanent --add-service=smtp # firewall-cmd --add-service=smtp - ![在防火墙中开通邮件服务器端口](http://www.tecmint.com/wp-content/uploads/2015/09/Allow-Traffic-through-Firewall.png) -在防火墙中开通邮件服务器端口 +*在防火墙中开通邮件服务器端口* -**4. 在 box1.mydomain.com 配置 Postfix** +**4、 在 box1.mydomain.com 配置 Postfix** -Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一个很大的文本,因为其中包含的注释解析了程序设置的目的。 +Postfix 的主要配置文件是 `/etc/postfix/main.cf`。这个文件本身是一个很大的文本文件,因为其中包含了解释程序设置的用途的注释。 -为了简洁,我们只显示了需要编辑的行(是的,在原始服务器中你需要保留 mydestination 为空;否则邮件会被保存到本地而不是我们实际想要的中央邮件服务器): - -**在 box1.mydomain.com 配置 Postfix** - ----------- +为了简洁,我们只显示了需要编辑的行(没错,在起源服务器中你需要保留 `mydestination` 为空;否则邮件会被存储到本地,而不是我们实际想要发往的中央邮件服务器): myhostname = box1.mydomain.com mydomain = mydomain.com @@ -64,11 +59,7 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一 mydestination = relayhost = 192.168.0.20 -**5. 
在 mail.mydomain.com 配置 Postfix** - -** 在 mail.mydomain.com 配置 Postfix ** - ----------- +**5、 在 mail.mydomain.com 配置 Postfix** myhostname = mail.mydomain.com mydomain = mydomain.com @@ -83,23 +74,23 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一 ![设置 Postfix SELinux 权限](http://www.tecmint.com/wp-content/uploads/2015/09/Set-Postfix-SELinux-Permission.png) -设置 Postfix SELinux 权限 +*设置 Postfix SELinux 权限* -上面的 SELinux 布尔值会允许 Postfix 在中央服务器写入邮件池。 +上面的 SELinux 布尔值会允许中央服务器上的 Postfix 可以写入邮件池(mail spool)。 -**6. 在两台机子上重启服务以使更改生效:** +**6、 在两台机子上重启服务以使更改生效:** # systemctl restart postfix 如果 Postfix 没有正确启动,你可以使用下面的命令进行错误处理。 - # systemctl –l status postfix - # journalctl –xn - # postconf –n + # systemctl -l status postfix + # journalctl -xn + # postconf -n ### 测试 Postfix 邮件服务 ### -为了测试邮件服务器,你可以使用任何邮件用户代理(最常见的简称为 MUA)例如 [mail 或 mutt][2]。 +要测试邮件服务器,你可以使用任何邮件用户代理(Mail User Agent,常简称为 MUA),例如 [mail 或 mutt][2]。 由于我个人喜欢 mutt,我会在 box1 中使用它发送邮件给用户 tecmint,并把现有文件(mailbody.txt)作为信息内容: @@ -107,7 +98,7 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一 ![测试 Postfix 邮件服务器](http://www.tecmint.com/wp-content/uploads/2015/09/Test-Postfix-Mail-Server.png) -测试 Postfix 邮件服务器 +*测试 Postfix 邮件服务器* 现在到中央邮件服务器(mail.mydomain.com)以 tecmint 用户登录,并检查是否收到了邮件: @@ -116,15 +107,15 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一 ![检查 Postfix 邮件服务器发送](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Postfix-Mail-Server-Delivery.png) -检查 Postfix 邮件服务器发送 +*检查 Postfix 邮件服务器发送* -如果没有收到邮件,检查 root 用户的邮件池查看警告或者错误提示。你也需要使用 [nmap 命令][3]确保两台服务器运行了 SMTP 服务,并在中央邮件服务器中 打开了 25 号端口: +如果没有收到邮件,检查 root 用户的邮件池看看是否有警告或者错误提示。你也许需要使用 [nmap 命令][3]确保两台服务器运行了 SMTP 服务,并在中央邮件服务器中打开了 25 号端口: # nmap -PN 192.168.0.20 ![Postfix 邮件服务器错误处理](http://www.tecmint.com/wp-content/uploads/2015/09/Troubleshoot-Postfix-Mail-Server.png) -Postfix 邮件服务器错误处理 +*Postfix 邮件服务器错误处理* ### 总结 ### @@ -134,7 +125,7 @@ Postfix 邮件服务器错误处理 - [在 CentOS/RHEL 07 上配置仅缓存的 DNS 服务器][4] -最后,我强烈建议你熟悉 Postfix 的配置文件(main.cf)和这个程序的帮助手册。如果有任何疑问,别犹豫,使用下面的评论框或者我们的论坛 
Linuxsay.com 告诉我们吧,你会从世界各地的 Linux 高手中获得几乎及时的帮助。 +最后,我强烈建议你熟悉 Postfix 的配置文件(main.cf)和这个程序的帮助手册。如果有任何疑问,别犹豫,使用下面的评论框或者我们的论坛 Linuxsay.com 告诉我们吧,你会从世界各地的 Linux 高手中获得几乎是及时的帮助。 -------------------------------------------------------------------------------- @@ -142,7 +133,7 @@ via: http://www.tecmint.com/setup-postfix-mail-server-smtp-using-null-client-on- 作者:[Gabriel Cánepa][a] 译者:[ictlyh](https//www.mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c3e723319f751627568fe9382a78b9e6412e30e5 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 6 Dec 2015 22:24:03 +0800 Subject: [PATCH 132/160] PUB:Part 10 - Setting Up 'NTP (Network Time Protocol) Server' in RHEL or CentOS 7 @ictlyh --- ...e Protocol) Server' in RHEL or CentOS 7.md | 55 ++++++++++--------- 1 file changed, 28 insertions(+), 27 deletions(-) rename {translated/tech => published}/RHCE/Part 10 - Setting Up 'NTP (Network Time Protocol) Server' in RHEL or CentOS 7.md (50%) diff --git a/translated/tech/RHCE/Part 10 - Setting Up 'NTP (Network Time Protocol) Server' in RHEL or CentOS 7.md b/published/RHCE/Part 10 - Setting Up 'NTP (Network Time Protocol) Server' in RHEL or CentOS 7.md similarity index 50% rename from translated/tech/RHCE/Part 10 - Setting Up 'NTP (Network Time Protocol) Server' in RHEL or CentOS 7.md rename to published/RHCE/Part 10 - Setting Up 'NTP (Network Time Protocol) Server' in RHEL or CentOS 7.md index 54c4330ae2..404ad003fd 100644 --- a/translated/tech/RHCE/Part 10 - Setting Up 'NTP (Network Time Protocol) Server' in RHEL or CentOS 7.md +++ b/published/RHCE/Part 10 - Setting Up 'NTP (Network Time Protocol) Server' in RHEL or CentOS 7.md @@ -1,12 +1,13 @@ -第 10 部分:在 RHEL/CentOS 7 中设置 “NTP(网络时间协议) 服务器” +RHCE 系列(十):在 RHEL/CentOS 7 中设置 NTP(网络时间协议)服务器 ================================================================================ -网络时间协议 - NTP - 是运行在传输层 123 
号端口允许计算机通过网络同步准确时间的协议。随着时间的流逝,计算机内部时间会出现漂移,这会导致时间不一致问题,尤其是对于服务器和客户端日志文件,或者你想要备份服务器资源或数据库。 + +网络时间协议 - NTP - 是运行在传输层 123 号端口的 UDP 协议,它允许计算机通过网络同步准确时间。随着时间的流逝,计算机内部时间会出现漂移,这会导致时间不一致问题,尤其是对于服务器和客户端日志文件,或者你想要复制服务器的资源或数据库。 ![在 CentOS 上安装 NTP 服务器](http://www.tecmint.com/wp-content/uploads/2014/09/NTP-Server-Install-in-CentOS.png) -在 CentOS 和 RHEL 7 上安装 NTP 服务器 +*在 CentOS 和 RHEL 7 上安装 NTP 服务器* -#### 要求: #### +#### 前置要求: #### - [CentOS 7 安装过程][1] - [RHEL 安装过程][2] @@ -17,62 +18,62 @@ - [在 CentOS/RHCE 7 上配置静态 IP][4] - [在 CentOS/RHEL 7 上停用并移除不需要的服务][5] -这篇指南会告诉你如何在 CentOS/RHCE 7 上安装和配置 NTP 服务器,并使用 NTP 公共时间服务器池列表中和你服务器地理位置最近的可用节点中同步时间。 +这篇指南会告诉你如何在 CentOS/RHCE 7 上安装和配置 NTP 服务器,并使用 NTP 公共时间服务器池(NTP Public Pool Time Servers)列表中和你服务器地理位置最近的可用节点中同步时间。 #### 步骤一:安装和配置 NTP 守护进程 #### -1. 官方 CentOS /RHEL 7 库默认提供 NTP 服务器安装包,可以通过使用下面的命令安装。 +1、 官方 CentOS /RHEL 7 库默认提供 NTP 服务器安装包,可以通过使用下面的命令安装。 # yum install ntp ![在 CentOS 上安装 NTP 服务器](http://www.tecmint.com/wp-content/uploads/2014/09/Install-NTP-in-CentOS.png) -安装 NTP 服务器 +*安装 NTP 服务器* -2. 安装完服务器之后,首先到官方 [NTP 公共时间服务器池][6],选择你服务器物理位置所在的洲,然后搜索你的国家位置,然后会出现 NTP 服务器列表。 +2、 安装完服务器之后,首先到官方 [NTP 公共时间服务器池(NTP Public Pool Time Servers)][6],选择你服务器物理位置所在的洲,然后搜索你的国家位置,然后会出现 NTP 服务器列表。 ![NTP 服务器池](http://www.tecmint.com/wp-content/uploads/2014/09/NTP-Pool-Server.png) -NTP 服务器池 +*NTP 服务器池* -3. 然后打开编辑 NTP 守护进程主要配置文件,从 pool.ntp.org 中注释掉默认的公共服务器列表并用类似下面截图提供给你国家的列表替换。 +3、 然后打开编辑 NTP 守护进程的主配置文件,注释掉来自 pool.ntp.org 项目的公共服务器默认列表,并用类似下面截图中提供给你所在国家的列表替换。(LCTT 译注:中国使用 0.cn.pool.ntp.org 等) ![在 CentOS 中配置 NTP 服务器](http://www.tecmint.com/wp-content/uploads/2014/09/Configure-NTP-Server.png) -配置 NTP 服务器 +*配置 NTP 服务器* -4. 下一步,你需要允许客户端从你的网络中和这台服务器同步时间。为了做到这点,添加下面一行到 NTP 配置文件,其中限制语句控制允许哪些网络查询和同步时间 - 根据需要替换网络 IP。 +4、 下一步,你需要允许来自你的网络的客户端和这台服务器同步时间。为了做到这点,添加下面一行到 NTP 配置文件,其中 **restrict** 语句控制允许哪些网络查询和同步时间 - 请根据需要替换网络 IP。 restrict 192.168.1.0 netmask 255.255.255.0 nomodify notrap nomodify notrap 语句意味着不允许你的客户端配置服务器或者作为同步时间的节点。 -5. 
如果你需要额外的信息用于错误处理,以防你的 NTP 守护进程出现问题,添加一个 logfile 语句,用于记录所有 NTP 服务器问题到一个指定的日志文件。 +5、 如果你需要用于错误处理的额外信息,以防你的 NTP 守护进程出现问题,添加一个 logfile 语句,用于记录所有 NTP 服务器问题到一个指定的日志文件。 logfile /var/log/ntp.log ![在 CentOS 中启用 NTP 日志](http://www.tecmint.com/wp-content/uploads/2014/09/Enable-NTP-Log.png) -启用 NTP 日志 +*启用 NTP 日志* -6. 你编辑完所有上面解释的配置并保存关闭 ntp.conf 文件后,你最终的配置看起来像下面的截图。 +6、 在你编辑完所有上面解释的配置并保存关闭 ntp.conf 文件后,你最终的配置看起来像下面的截图。 ![CentOS 中 NTP 服务器的配置](http://www.tecmint.com/wp-content/uploads/2014/09/NTP-Server-Configuration.png) -NTP 服务器配置 +*NTP 服务器配置* ### 步骤二:添加防火墙规则并启动 NTP 守护进程 ### -7. NTP 服务在传输层(第四层)使用 123 号 UDP 端口。它是针对限制可变延迟的影响特别设计的。要在 RHEL/CentOS 7 中开放这个端口,可以对 Firewalld 服务使用下面的命令。 +7、 NTP 服务使用 OSI 传输层(第四层)的 123 号 UDP 端口。它是为了避免可变延迟的影响所特别设计的。要在 RHEL/CentOS 7 中开放这个端口,可以对 Firewalld 服务使用下面的命令。 # firewall-cmd --add-service=ntp --permanent # firewall-cmd --reload ![在 Firewall 中开放 NTP 端口](http://www.tecmint.com/wp-content/uploads/2014/09/Open-NTP-Port.png) -在 Firewall 中开放 NTP 端口 +*在 Firewall 中开放 NTP 端口* -8. 你在防火墙中开放了 123 号端口之后,启动 NTP 服务器并确保系统范围内可用。用下面的命令管理服务。 +8、 你在防火墙中开放了 123 号端口之后,启动 NTP 服务器并确保系统范围内可用。用下面的命令管理服务。 # systemctl start ntpd # systemctl enable ntpd @@ -80,34 +81,34 @@ NTP 服务器配置 ![启动 NTP 服务](http://www.tecmint.com/wp-content/uploads/2014/09/Start-NTP-Service.png) -启动 NTP 服务 +*启动 NTP 服务* ### 步骤三:验证服务器时间同步 ### -9. 启动了 NTP 守护进程后,用几分钟等服务器和它的服务器池列表同步时间,然后运行下面的命令验证 NTP 节点同步状态和你的系统时间。 +9、 启动了 NTP 守护进程后,用几分钟等服务器和它的服务器池列表同步时间,然后运行下面的命令验证 NTP 节点同步状态和你的系统时间。 # ntpq -p # date -R ![验证 NTP 服务器时间](http://www.tecmint.com/wp-content/uploads/2014/09/Verify-NTP-Time-Sync.png) -验证 NTP 时间同步 +*验证 NTP 时间同步* -10. 如果你想查询或者和你选择的服务器池同步,你可以使用 ntpdate 命令,后面跟服务器名或服务器地址,类似下面建议的命令行事例。 +10、 如果你想查询或者和你选择的服务器池同步,你可以使用 ntpdate 命令,后面跟服务器名或服务器地址,类似下面建议的命令行示例。 # ntpdate -q 0.ro.pool.ntp.org 1.ro.pool.ntp.org ![同步 NTP 同步](http://www.tecmint.com/wp-content/uploads/2014/09/Synchronize-NTP-Time.png) -同步 NTP 时间 +*同步 NTP 时间* ### 步骤四:设置 Windows NTP 客户端 ### -11. 
如果你的 windows 机器不是域名控制器的一部分,你可以配置 Windows 和你的 NTP服务器同步时间。在任务栏右边 -> 时间 -> 更改日期和时间设置 -> 网络时间标签 -> 更改设置 -> 和一个网络时间服务器检查同步 -> 在 Server 空格输入服务器 IP 或 FQDN -> 马上更新 -> OK。 +11、 如果你的 windows 机器不是域名控制器的一部分,你可以配置 Windows 和你的 NTP服务器同步时间。在任务栏右边 -> 时间 -> 更改日期和时间设置 -> 网络时间标签 -> 更改设置 -> 和一个网络时间服务器检查同步 -> 在 Server 空格输入服务器 IP 或 FQDN -> 马上更新 -> OK。 ![和 NTP 同步 Windows 时间](http://www.tecmint.com/wp-content/uploads/2014/09/Synchronize-Windows-Time-with-NTP.png) -和 NTP 同步 Windows 时间 +*和 NTP 同步 Windows 时间* 就是这些。在你的网络中配置一个本地 NTP 服务器能确保你所有的服务器和客户端有相同的时间设置,以防出现网络连接失败,并且它们彼此都相互同步。 @@ -117,7 +118,7 @@ via: http://www.tecmint.com/install-ntp-server-in-centos/ 作者:[Matei Cezar][a] 译者:[ictlyh](http://motouxiaogui.cn/blog) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8a9793e1a71b9557a1c67ed802e3a32929f77f4d Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 6 Dec 2015 22:30:20 +0800 Subject: [PATCH 133/160] Update 20151119 Going Beyond Hello World Containers is Hard Stuff.md --- ...nd Hello World Containers is Hard Stuff.md | 25 ++++++++++++++++--- 1 file changed, 22 insertions(+), 3 deletions(-) diff --git a/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md b/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md index 3a2fd08d6f..877e2e7d21 100644 --- a/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md +++ b/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md @@ -1,48 +1,67 @@ translating by ezio Going Beyond Hello World Containers is Hard Stuff + ================================================================================ In [my previous post][1], I provided the basic concepts behind Linux container technology. I wrote as much for you as I did for me. Containers are new to me. 
And I figured having the opportunity to blog about the subject would provide the motivation to really learn the stuff.
+在[我的上一篇文章里][1],我介绍了 Linux 容器背后的技术概念。这篇文章既是写给你们的,也是写给我自己的。容器对我来说也是个比较新的概念,我觉得借着写博客的机会,能督促自己真正去学习这些东西。

I intend to learn by doing. First get the concepts down, then get hands-on and write about it as I go. I assumed there must be a lot of Hello World type stuff out there to get me up to speed with the basics. Then, I could take things a bit further and build a microservice container or something.
+我打算在实践中学习:先掌握概念,再动手实践,并随手记录下整个过程。我估计网上肯定有很多 "Hello World" 这类的入门教程能帮我快速掌握基础,然后我就能更进一步,构建一个微服务容器或者其它东西。

I mean, it can't be that hard, right?
+我的意思是,这能有多难呢,对吧?

Wrong.
+错了。

Maybe it's easy for someone who spends a significant amount of their life immersed in operations work. But for me, getting started with this stuff turned out to be hard to the point of posting my frustrations to Facebook...
+对于那些长期沉浸在运维工作中的人来说,这可能很简单。但对我来说,入门这些东西实在是太难了,难到我把自己的挫败感都发到了 Facebook 上……

But, there is good news: I got it to work! And it's always nice being able to make lemonade from lemons. So I am going to share the story of how I made my first microservice container with you. Maybe my pain will save you some time.
+但是还有一个好消息:我最终让它跑起来了,而且它工作得还不错。苦尽甘来总是让人高兴的,所以我准备向你分享我是如何制作第一个微服务容器的。我的痛苦经历也许能帮你节省不少时间。

If you've ever found yourself in a situation like this, fear not: folks like me are here to deal with the problems so you don't have to! Let's begin.
+如果你曾发现自己也处于类似的境地,不要害怕:像我这样的人会把这些问题趟过一遍,你就不必再为它们吃苦头了!

让我们开始吧。

### 一个缩略图微服务 ###

The microservice I designed was simple in concept. Post a digital image in JPG or PNG format to an HTTP endpoint and get back a 100px wide thumbnail.
+我设计的微服务在概念上很简单:向一个 HTTP 端点提交一张 JPG 或 PNG 格式的数字图片,然后取回一张 100 像素宽的缩略图。

Here's what that looks like:
+下面是它实际的效果:

![container-diagram-0](https://deis.com/images/blog-images/containers-hard-0.png)

I decide to use NodeJS for my code and a version of [ImageMagick][2] to do the thumbnail transformation.
+我决定使用NodeJS 作为我的开发语言,并使用[ImageMagick][2] 来做缩略图转换。

I did my first version of the service, using the logic shown here:
+我的服务第一版使用了如下所示的逻辑:

![container-diagram-1](https://deis.com/images/blog-images/containers-hard-1.png)

I download the [Docker Toolbox][3] which installs the Docker Quickstart Terminal. Docker Quickstart Terminal makes creating containers easier. The terminal fires up a Linux virtual machine that has Docker installed, allowing you to run Docker commands from within a terminal.
+我下载了[Docker Toolbox][3],用它安装了Docker 快速启动终端(Docker Quickstart Terminal)。Docker 快速启动终端让创建容器变得更简单:它会启动一个装好了Docker 的Linux 虚拟机,让你可以在一个终端里直接运行Docker 命令。

In my case, I am running on OS X. But there's a Windows version too.
+在我的例子里,我运行的是OS X,不过它也有Windows 版本。

I am going to use Docker Quickstart Terminal to build a container image for my microservice and run a container from that image.
+我准备使用Docker 快速启动终端为我的微服务构建一个容器镜像,然后从这个镜像运行一个容器。

The Docker Quickstart Terminal runs in your regular terminal, like so:
+Docker 快速启动终端就运行在你平常使用的终端里,就像这样:

![container-diagram-2](https://deis.com/images/blog-images/containers-hard-2.png)

-### The First Little Problem and the First Big Problem ###
+### 第一个小问题和第一个大问题 ###

So I fiddled around with NodeJS and ImageMagick and I got the service to work on my local machine.

@@ -315,7 +334,7 @@
And don't miss part two...
via: https://deis.com/blog/2015/beyond-hello-world-containers-hard-stuff

作者:[Bob Reselman][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[Ezio](https://github.com/oska874)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From 399cc987f299bef006a19a946662ce3446cecd4e Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sun, 6 Dec 2015 22:47:27 +0800
Subject: [PATCH 134/160] Update 20151119 Going Beyond Hello World Containers
 is Hard Stuff.md

---
 ...nd Hello World Containers is Hard Stuff.md | 21 ++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md b/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md
index 877e2e7d21..35465b1dec 100644
--- a/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md
+++ b/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md
@@ -64,56 +64,75 @@ Docker 快速启动终端就运行在你使用的普通终端里,就像这样:

### 第一个小问题和第一个大问题###

So I fiddled around with NodeJS and ImageMagick and I got the service to work on my local machine.
+所以我用NodeJS 和ImageMagick 瞎鼓捣了一番,终于让服务在我的本地机器上跑起来了。

Then, I created the Dockerfile, which is the configuration script Docker uses to build your container. (I'll go more into builds and Dockerfile later on.)
+然后我创建了Dockerfile,这是Docker 用来构建容器的配置脚本。(我会在后面深入介绍构建过程和Dockerfile。)

Here's the build command I ran on the Docker Quickstart Terminal:
+这是我在Docker 快速启动终端里运行的构建命令:

    $ docker build -t thumbnailer:0.1

I got this response:
+我得到了如下回应:

    docker: "build" requires 1 argument.

Huh.
+呃。

After 15 minutes I realized: I forgot to put a period . as the last argument!
-
+过了15分钟我才意识到:我忘记了把一个点`.` 作为最后一个参数!

It needs to be:
+正确的命令应该是这样的:

    $ docker build -t thumbnailer:0.1 .

But this wasn't the end of my problems.
+但是这并不是我遇到的最后一个问题。

I got the image to build and then I typed [the `run` command][4] on the Docker Quickstart Terminal to fire up a container based on the image, called `thumbnailer:0.1`:
+我构建好了镜像,然后在Docker 快速启动终端里输入[`run` 命令][4],基于名为`thumbnailer:0.1` 的镜像启动了一个容器:

    $ docker run -d -p 3001:3000 thumbnailer:0.1

The `-p 3001:3000` argument makes it so the NodeJS microservice running on port 3000 within the container binds to port 3001 on the host virtual machine.
+参数`-p 3001:3000` 会把容器内运行在3000 端口上的NodeJS 微服务绑定到宿主虚拟机的3001 端口上。

Looks so good so far, right?
+到目前为止看起来都挺顺利的,对吧?

Wrong. Things are about to get pretty bad.
+错了。事情马上就要变糟了。

I determined the IP address of the virtual machine created by Docker Quickstart Terminal by running the `docker-machine` command:
+我在Docker 快速启动终端里运行`docker-machine` 命令,获取了它所创建的虚拟机的IP 地址:

    $ docker-machine ip default

This returns the IP address of the default virtual machine, the one that is run under the Docker Quickstart Terminal. For me, this IP address was 192.168.99.100.
+这条命令返回了默认虚拟机(即运行在Docker 快速启动终端下的那台虚拟机)的IP 地址。对我来说,这个IP 地址是192.168.99.100。

I browsed to http://192.168.99.100:3001/ and got the file upload page I built:
+我用浏览器打开http://192.168.99.100:3001/ ,看到了我编写的文件上传页面:

![container-diagram-3](https://deis.com/images/blog-images/containers-hard-3.png)

I selected a file and clicked the Upload Image button.
+我选择了一个文件,然后点击上传图片的按钮。

But it didn't work.
+但是它并没有工作。

The terminal is telling me it can't find the `/upload` directory my microservice requires.
+终端告诉我,它无法找到我的微服务所需要的`/upload` 目录。

Now, keep in mind, I had been at this for about a day—between the fiddling and research. I'm feeling a little frustrated by this point.
+请记住,算上摸索和查资料,我在这上面已经折腾了将近一天。此时我感到了些许挫败。

Then, a brain spark flew. Somewhere along the line I remembered reading that a microservice should not do any data persistence on its own! Saving data should be the job of another service.
+然后灵光一闪。我想起曾在哪里读到过:微服务不应该自己做任何数据持久化的工作!保存数据应该是另一个服务的职责。

So what if the container can't find the `/upload` directory?
The real issue is: my microservice has a fundamentally flawed design. From 77d4a03c74c582597145f0a2c4444630af7c33c1 Mon Sep 17 00:00:00 2001 From: bazz2 Date: Mon, 7 Dec 2015 08:25:18 +0800 Subject: [PATCH 135/160] [translated]Why did you start using Linux --- .../20150820 Why did you start using Linux.md | 148 ------------------ .../20150820 Why did you start using Linux.md | 144 +++++++++++++++++ 2 files changed, 144 insertions(+), 148 deletions(-) delete mode 100644 sources/talk/20150820 Why did you start using Linux.md create mode 100644 translated/talk/20150820 Why did you start using Linux.md diff --git a/sources/talk/20150820 Why did you start using Linux.md b/sources/talk/20150820 Why did you start using Linux.md deleted file mode 100644 index 58b89e1f74..0000000000 --- a/sources/talk/20150820 Why did you start using Linux.md +++ /dev/null @@ -1,148 +0,0 @@ -[bazz2222] -Why did you start using Linux? -================================================================================ -> In today's open source roundup: What got you started with Linux? Plus: IBM's Linux only Mainframe. And why you should skip Windows 10 and go with Linux - -### Why did you start using Linux? ### - -Linux has become quite popular over the years, with many users defecting to it from OS X or Windows. But have you ever wondered what got people started with Linux? A redditor asked that question and got some very interesting answers. - -SilverKnight asked his question on the Linux subreddit: - -> I know this has been asked before, but I wanted to hear more from the younger generation why it is that they started using linux and what keeps them here. -> -> I dont want to discourage others from giving their linux origin stories, because those are usually pretty good, but I was mostly curious about our younger population since there isn't much out there from them yet. -> -> I myself am 27 and am a linux dabbler. 
I have installed quite a few different distros over the years but I haven't made the plunge to full time linux. I guess I am looking for some more reasons/inspiration to jump on the bandwagon. -> -> [More at Reddit][1] - -Fellow redditors in the Linux subreddit responded with their thoughts: - -> **DoublePlusGood**: "I started using Backtrack Linux (now Kali) at 12 because I wanted to be a "1337 haxor". I've stayed with Linux (Archlinux currently) because it lets me have the endless freedom to make my computer do what I want." -> -> **Zack**: "I'm a Linux user since, I think, the age of 12 or 13, I'm 15 now. -> -> It started when I got tired with Windows XP at 11 and the waiting, dammit am I impatient sometimes, but waiting for a basic task such as shutting down just made me tired of Windows all together. -> -> A few months previously I had started participating in discussions in a channel on the freenode IRC network which was about a game, and as freenode usually goes, it was open source and most of the users used Linux. -> -> I kept on hearing about this Linux but wasn't that interested in it at the time. However, because the channel (and most of freenode) involved quite a bit of programming I started learning Python. -> -> A year passed and I was attempting to install GNU/Linux (specifically Ubuntu) on my new (technically old, but I had just got it for my birthday) PC, unfortunately it continually froze, for reasons unknown (probably a bad hard drive, or a lot of dust or something else...). -> -> Back then I was the type to give up on things, so I just continually nagged my dad to try and install Ubuntu, he couldn't do it for the same reasons. -> -> After wanting Linux for a while I became determined to get Linux and ditch windows for good. So instead of Ubuntu I tried Linux Mint, being a derivative of Ubuntu(?) I didn't have high hopes, but it worked! -> -> I continued using it for another 6 months. 
-> -> During that time a friend on IRC gave me a virtual machine (which ran Ubuntu) on their server, I kept it for a year a bit until my dad got me my own server. -> -> After the 6 months I got a new PC (which I still use!) I wanted to try something different. -> -> I decided to install openSUSE. -> -> I liked it a lot, and on the same Christmas I obtained a Raspberry Pi, and stuck with Debian on it for a while due to the lack of support other distros had for it." -> -> **Cqz**: "Was about 9 when the Windows 98 machine handed down to me stopped working for reasons unknown. We had no Windows install disk, but Dad had one of those magazines that comes with demo programs and stuff on CDs. This one happened to have install media for Mandrake Linux, and so suddenly I was a Linux user. Had no idea what I was doing but had a lot of fun doing it, and although in following years I often dual booted with various Windows versions, the FLOSS world always felt like home. Currently only have one Windows installation, which is a virtual machine for games." -> -> **Tosmarcel**: "I was 15 and was really curious about this new concept called 'programming' and then I stumbled upon this Harvard course, CS50. They told users to install a Linux vm to use the command line. But then I asked myself: "Why doesn't windows have this command line?!". I googled 'linux' and Ubuntu was the top result -Ended up installing Ubuntu and deleted the windows partition accidentally... It was really hard to adapt because I knew nothing about linux. Now I'm 16 and running arch linux, never looked back and I love it!" -> -> **Micioonthet**: "First heard about Linux in the 5th grade when I went over to a friend's house and his laptop was running MEPIS (an old fork of Debian) instead of Windows XP. -> -> Turns out his dad was a socialist (in America) and their family didn't trust Microsoft. 
This was completely foreign to me, and I was confused as to why he would bother using an operating system that didn't support the majority of software that I knew. -> -> Fast forward to when I was 13 and without a laptop. Another friend of mine was complaining about how slow his laptop was, so I offered to buy it off of him so I could fix it up and use it for myself. I paid $20 and got a virus filled, unusable HP Pavilion with Windows Vista. Instead of trying to clean up the disgusting Windows install, I remembered that Linux was a thing and that it was free. I burned an Ubuntu 12.04 disc and installed it right away, and was absolutely astonished by the performance. -> -> Minecraft (one of the few early Linux games because it ran on Java), which could barely run at 5 FPS on Vista, ran at an entirely playable 25 FPS on a clean install of Ubuntu. -> -> I actually still have that old laptop and use it occasionally, because why not? Linux doesn't care how old your hardware is. -> -> I since converted my dad to Linux and we buy old computers at lawn sales and thrift stores for pennies and throw Linux Mint or some other lightweight distros on them." -> -> **Webtm**: "My dad had every computer in the house with some distribution on it, I think a couple with OpenSUSE and Debian, and his personal computer had Slackware on it. So I remember being little and playing around with Debian and not really getting into it much. So I had a Windows laptop for a few years and my dad asked me if I wanted to try out Debian. It was a fun experience and ever since then I've been using Debian and trying out distributions. I currently moved away from Linux and have been using FreeBSD for around 5 months now, and I am absolutely happy with it. -> -> The control over your system is fantastic. There are a lot of cool open source projects. I guess a lot of the fun was figuring out how to do the things I want by myself and tweaking those things in ways to make them do something else. 
Stability and performance is also a HUGE plus. Not to mention the level of privacy when switching." -> -> **Wyronaut**: "I'm currently 18, but I first started using Linux when I was 13. Back then my first distro was Ubuntu. The reason why I wanted to check out Linux, was because I was hosting little Minecraft game servers for myself and a couple of friends, back then Minecraft was pretty new-ish. I read that the defacto operating system for hosting servers was Linux. -> -> I was a big newbie when it came to command line work, so Linux scared me a little, because I had to take care of a lot of things myself. But thanks to google and a few wiki pages I managed to get up a couple of simple servers running on a few older PC's I had lying around. Great use for all that older hardware no one in the house ever uses. -> -> After running a few game servers I started running a few web servers as well. Experimenting with HTML, CSS and PHP. I worked with those for a year or two. Afterwards, took a look at Java. I made the terrible mistake of watching TheNewBoston video's. -> -> So after like a week I gave up on Java and went to pick up a book on Python instead. That book was Learn Python The Hard Way by Zed A. Shaw. After I finished that at the fast pace of two weeks, I picked up the book C++ Primer, because at the time I wanted to become a game developer. Went trough about half of the book (~500 pages) and burned out on learning. At that point I was spending a sickening amount of time behind my computer. -> -> After taking a bit of a break, I decided to pick up JavaScript. Read like 2 books, made like 4 different platformers and called it a day. -> -> Now we're arriving at the present. I had to go through the horrendous process of finding a school and deciding what job I wanted to strive for when I graduated. I ruled out anything in the gaming sector as I didn't want anything to do with graphics programming anymore, I also got completely sick of drawing and modelling. 
And I found this bachelor that had something to do with netsec and I instantly fell in love. I picked up a couple books on C to shred this vacation period and brushed up on some maths and I'm now waiting for the new school year to commence. -> -> Right now, I am having loads of fun with Arch Linux, made couple of different arrangements on different PC's and it's going great! -> -> In a sense Linux is what also got me into programming and ultimately into what I'm going to study in college starting this september. I probably have my future life to thank for it." -> -> **Linuxllc**: "You also can learn from old farts like me. -> -> The crutch, The crutch, The crutch. Getting rid of the crutch will inspired you and have good reason to stick with Linux. -> -> I got rid of my crutch(Windows XP) back in 2003. Took me only 5 days to get all my computer task back and running at a 100% workflow. Including all my peripheral devices. Minus any Windows games. I just play native Linux games." -> -> **Highclass**: "Hey I'm 28 not sure if this is the age group you are looking for. -> -> To be honest, I was always interested in computers and the thought of a free operating system was intriguing even though at the time I didn't fully grasp the free software philosophy, to me it was free as in no cost. I also did not find the CLI too intimidating as from an early age I had exposure to DOS. -> -> I believe my first distro was Mandrake, I was 11 or 12, I messed up the family computer on several occasions.... I ended up sticking with it always trying to push myself to the next level. Now I work in the industry with Linux everyday. -> -> /shrug" -> -> Matto: "My computer couldn't run fast enough for XP (got it at a garage sale), so I started looking for alternatives. Ubuntu came up in Google. I was maybe 15 or 16 at the time. Now I'm 23 and have a job working on a product that uses Linux internally." 
-> -> [More at Reddit][2] - -### IBM's Linux only Mainframe ### - -IBM has a long history with Linux, and now the company has created a Mainframe that features Ubuntu Linux. The new machine is named LinuxOne. - -Ron Miller reports for TechCrunch: - -> The new mainframes come in two flavors, named for penguins (Linux — penguins — get it?). The first is called Emperor and runs on the IBM z13, which we wrote about in January. The other is a smaller mainframe called the Rockhopper designed for a more “entry level” mainframe buyer. -> -> You may have thought that mainframes went the way of the dinosaur, but they are still alive and well and running in large institutions throughout the world. IBM as part of its broader strategy to promote the cloud, analytics and security is hoping to expand the potential market for mainframes by running Ubuntu Linux and supporting a range of popular open source enterprise software such as Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL and Chef. -> -> The metered mainframe will still sit inside the customer’s on-premises data center, but billing will be based on how much the customer uses the system, much like a cloud model, Mauri explained. -> -> ...IBM is looking for ways to increase those sales. Partnering with Canonical and encouraging use of open source tools on a mainframe gives the company a new way to attract customers to a small, but lucrative market. -> -> [More at TechCrunch][3] - -### Why you should skip Windows 10 and opt for Linux ### - -Since Windows 10 has been released there has been quite a bit of media coverage about its potential to spy on users. ZDNet has listed some reasons why you should skip Windows 10 and opt for Linux instead on your computer. - -SJVN reports for ZDNet: - -> You can try to turn Windows 10's data-sharing ways off, but, bad news: Windows 10 will keep sharing some of your data with Microsoft anyway. There is an alternative: Desktop Linux. 
-> -> You can do a lot to keep Windows 10 from blabbing, but you can't always stop it from talking. Cortana, Windows 10's voice activated assistant, for example, will share some data with Microsoft, even when it's disabled. That data includes a persistent computer ID to identify your PC to Microsoft. -> -> So, if that gives you a privacy panic attack, you can either stick with your old operating system, which is likely Windows 7, or move to Linux. Eventually, when Windows 7 is no longer supported, if you want privacy you'll have no other viable choice but Linux. -> -> There are other, more obscure desktop operating systems that are also desktop-based and private. These include the BSD Unix family such as FreeBSD, PCBSD, and NetBSD and eComStation, OS/2 for the 21st century. Your best choice, though, is a desktop-based Linux with a low learning curve. -> -> [More at ZDNet][4] - --------------------------------------------------------------------------------- - -via: http://www.itworld.com/article/2972587/linux/why-did-you-start-using-linux.html - -作者:[Jim Lynch][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.itworld.com/author/Jim-Lynch/ -[1]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/ -[2]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/ -[3]:http://techcrunch.com/2015/08/16/ibm-teams-with-canonical-on-linux-mainframe/ -[4]:http://www.zdnet.com/article/sick-of-windows-spying-on-you-go-linux/ diff --git a/translated/talk/20150820 Why did you start using Linux.md b/translated/talk/20150820 Why did you start using Linux.md new file mode 100644 index 0000000000..aa48db697c --- /dev/null +++ b/translated/talk/20150820 Why did you start using Linux.md @@ -0,0 +1,144 @@ +年轻人,你为啥使用 linux 
+================================================================================ +> 今天的开源综述:是什么带你进入 linux 的世界?号外:IBM 基于 Linux 的大型机。以及,你应该抛弃 win10 选择 Linux 的原因。 + +### 当初你为何使用 Linux? ### + +Linux 越来越流行,很多 OS X 或 Windows 用户都转移到 Linux 阵营了。但是你知道是什么让他们开始使用 Linux 的吗?一个 Reddit 用户在网站上问了这个问题,并且得到了很多有趣的回答。 + +一个名为 SilverKnight 的用户在 Reddit 的 Linux 板块上问了如下问题: + +> 我知道这个问题肯定被问过了,但我还是想听听年轻一代使用 Linux 的原因,以及是什么让他们坚定地成为 Linux 用户。 +> +> 我无意阻止大家讲出你们那些精彩的 Linux 故事,但是我还是对那些没有经历过什么精彩故事的新人的想法比较感兴趣。 +> +> 我27岁,半吊子 Linux 用户,这些年装过不少发行版,但没有投入全部精力去玩 Linux。我正在找更多的、能让我全身心投入到 Linux 潮流的理由,或者说激励。 +> +> [详见 Reddit][1] + +以下是网站上的回复: + +> **DoublePlusGood**:我12岁开始使用 Backtrack(现在改名为 Kali),因为我想成为一名黑客(LCTT 译注:原文1337 haxor,1337 是 leet 的火星文写法,意为'火星文',haxor 为 hackor 的火星文写法,意为'黑客',另一种写法是 1377 h4x0r,满满的火星文文化)。我现在一直使用 ArchLinux,因为它给我无限自由,让我对我的电脑可以为所欲为。 +> +> **Zack**:我记得是12、3岁的时候使用 Linux,现在15岁了。 +> +> 我11岁的时候就对 Windows XP 感到不耐烦,一个简单的功能,比如关机,TMD 都要让我耐心等着它慢慢完成。 +> +> 在那之前几个月,我在 freenode IRC 聊天室参与讨论了一个游戏,它是一个开源项目,大多数用户使用 Linux。 +> +> 我不断听到 Linux 但当时对它还没有兴趣。然而由于这些聊天频道(大部分在 freenode 上)谈论了很多编程话题,我就开始学习 python 了。 +> +> 一年后我尝试着安装 GNU/Linux (主要是 ubuntu)到我的新电脑(其实不新,但它是作为我的生日礼物被我得到的)。不幸的是它总是不能正常工作,原因未知,也许硬盘坏了,也许灰尘太多了。 +> +> 那时我放弃自己解决这个问题,然后缠着老爸给我的电脑装上 Ubuntu,他也无能为力,原因同上。 +> +> 在追求 Linux 一段时间后,我打算抛弃 Windows,使用 Linux Mint 代替 Ubuntu,本来没抱什么希望,但 Linux Mint 竟然能跑起来! 
+> 
+> 于是这个系统我用了6个月。
+> 
+> 那段时间我的一个朋友给了我一台虚拟机,跑 Ubuntu 的,我用了一年,直到我爸给了我一台服务器。
+> 
+> 6个月后我得到一台新 PC(现在还在用)。于是想折腾点不一样的东西。
+> 
+> 我打算装 openSUSE。
+> 
+> 我很喜欢这个系统。然后在圣诞节的时候我得到树莓派,上面只能跑 Debian,还不能支持其它发行版。
+> 
+> **Cqz**:我9岁的时候有一次玩 Windows 98,结果这货当机了,原因未知。我没有 Windows 安装盘,但我爸的一本介绍编程的杂志上有一张随书附赠的光盘,这张光盘上刚好有 Mandrake Linux 的安装软件,于是我瞬间就成为了 Linux 用户。我当时还不知道自己在玩什么,但是玩得很嗨皮。这些年我虽然在电脑上装了多种 Windows 版本,但是 FLOSS 世界才是我的家。现在我只把 Windows 装在虚拟机上,用来玩游戏。
+> 
+> **Tosmarcel**:15岁那年对'编程'这个概念很好奇,然后我开始了哈佛课程'CS50',这个课程要我们安装 Linux 虚拟机用来执行一些命令。当时我问自己为什么 Windows 没有这些命令?于是我 Google 了 Linux,搜索结果出现了 Ubuntu,在安装 Ubuntu 的时候不小心把 Windows 分区给删了……当时对 Linux 毫无所知,适应这个系统非常困难。我现在16岁,用 ArchLinux,不想用回 Windows,我爱 ArchLinux。
+> 
+> **Micioonthet**:第一次听说 Linux 是在我5年级的时候,当时去我一朋友家,他的笔记本装的就是 MEPIS(Debian的一个比较老的衍生版),而不是 XP。
+> 
+> 原来是他爸爸是个美国的社会主义者,而他全家都不信任微软。我对这些东西完全陌生,这系统完全没有我熟悉的软件,我很疑惑他怎么能使用。
+> 
+> 我13岁那年还没有自己的笔记本电脑,而我另一位朋友总是抱怨他的电脑有多慢,所以我打算把它买下来并修好它。我花了20美元买下了这台装着 Windows Vista 系统、跑满病毒、完全无法使用的惠普笔记本。我不想重装讨厌的 Windows 系统,记得 Linux 是免费的,所以我刻了一张 Ubuntu 14.04 光盘,马上把它装起来,然后我被它的高性能给震惊了。
+> 
+> 我的世界(由于它运行在 JAVA 上,所以当时它是 Linux 下为数不多的几个游戏之一)在 Vista 上只能跑5帧每秒,而在 Ubuntu 上能跑到25帧。
+> 
+> 我到现在还会偶尔使用一下那台笔记本,Linux 可不会在乎你的硬件设备有多老。
+> 
+> 之后我把我爸也拉入 Linux 行列,我们会以很低的价格买老电脑,装上 Linux Mint 或其他轻量级发行版,这省了好多钱。
+> 
+> **Webtm**:我爹每台电脑都会装多个发行版,有几台是 opensuse 和 Debian,他的个人电脑装的是 Slackware。所以我记得很小的时候一直在玩 debian,但没有投入很多精力,我用了几年的 Windows,然后我爹问我有没有兴趣试试 debian。这是个有趣的经历,在那之后我一直使用 debian。而现在我不用 Linux,转投 freeBSD,5个月了,用得很开心。
+> 
+> 完全控制自己的系统是个很奇妙的体验。开源界有好多酷酷的软件,我认为在自己解决一些问题并且利用这些工具解决其他事情的过程是最有趣的。当然稳定和高效也是吸引我的地方。更不用说切换过来之后的隐私保护了。
+> 
+> **Wyronaut**:我今年18,第一次玩 Linux 是13岁,当时玩的 Ubuntu,为啥要碰 Linux?因为我想搭一个'我的世界'的服务器来和小伙伴玩游戏,当时'我的世界'可是个新鲜玩意儿。而搭个私服需要用 Linux 系统。
+> 
+> 当时我还是个新手,对着 Linux 的命令行有些傻眼,因为很多东西都要我自己处理。还是多亏了 Google 和维基,我成功地在多台老 PC 上部署了一些简单的服务器,那些早已无人问津的老古董机器又能发挥余热了。
+> 
+> 跑过游戏服务器后,我又开始跑 web 服务器,先是跑了几年 HTML,CSS 和 PHP,之后受 TheNewBoston 视频的误导转到了 JAVA。
+> 
+> 一周后放弃 JAVA 改用 Python,当时学习 Python 用的书名叫《Learn Python The Hard Way》,作者是 Zed A. 
Shaw。我花了两周学完 Python,然后开始看《C++ Primer》,因为我想做游戏开发。看到一半(大概500页)的时候我放弃了。那个时候我有点讨厌玩电脑了。
+> 
+> 这样中断了一段时间之后,我决定学习 JavaScript,读了2本书,做了4个平台游戏,然后又不玩了。
+> 
+> 现在到了不得不找一所学校并决定毕业后找什么样工作的糟糕时刻。我不想玩图形界面编程,所以我不会进游戏行业。我也不喜欢画画和建模。然后我发现了一个涉及网络安全的专业,于是我立刻爱上它了。我挑了很多 C 语言的书来度过这个假期,并且复习了一下数学来迎接新的校园生活。
+> 
+> 目前我玩 archlinux,不同 PC 上跑着不同任务,它们运行很稳定。
+> 
+> 可以说 Linux 带我进入编程的世界,而反过来,我最终在学校要学的就是 Linux。我估计会终生感谢 Linux。
+> 
+> **Linuxllc**:你们可以学学像我这样的老头。
+> 
+> 扔掉 Windows!扔掉 Windows!扔掉 Windows!给自己一个坚持使用 Linux 的理由,那就是完全,彻底,远离,Windows。
+> 
+> 我在 2003 年放弃 Windows,只用了5天就把所有电脑跑成 Linux,包括所有的外围设备(LCTT 译注:比如打印机?)。我不玩 Windows 里的游戏,只玩 Linux 里的。
+> 
+> **Highclass**:我28岁,不知道还是不是你要找的年轻人类型。
+> 
+> 老实说我对电脑挺感兴趣的,当我还没接触'自由软件哲学'的时候,我认为 free 是免费的意思。我也不认为命令行界面很让人难以接受,因为我小时候就接触过 DOS 系统。
+> 
+> 我第一个发行版是 Mandrake,在我11岁还是12岁那年我把家里的电脑弄得乱七八糟,然后我一直折腾那台电脑,试着让我的技能提升一个台阶。现在我在一家公司全职使用 Linux。(请允许我耸个肩)。
+> 
+> **Matto**:我的电脑是旧货市场淘回来的,装 XP,跑得慢,于是我想换个系统。Google 了一下,发现 Ubuntu。当年我15、6岁,现在23了,就职的公司内部使用 Linux。
+> 
+> [更多评论移步 Reddit][2]
+
+### IBM 的 Linux 大型机 ###
+
+IBM 很久前就用 Linux 了。现在这家公司推出了一款专门运行 Ubuntu 的大型机,名叫 LinuxOne。
+
+Ron Miller 在 TechCrunch 博客上说:
+
+> 新的大型机包括两款机型,都是以企鹅名称命名的(Linux 的吉祥物就是一只企鹅,懂18摸的命名用意了没?)第一款叫帝企鹅,使用 IBM z13 机型,我们早在1月份就介绍过了。另一款稍微小一点,名叫跳岩企鹅,供入门级买家使用。
+> 
+> 也许你会以为大型机就像恐龙一样早就灭绝了,但它们还健在,世界上许多大型机构中都还在使用它们。作为推广云、分析与安全这一整体战略的一部分,IBM 希望通过运行 Ubuntu Linux 并支持一系列流行的开源企业级软件来扩大大型机的潜在市场,这些软件比如 Apache Spark,Node.js,MongoDB,MariaDB,PostgreSQL 和 Chef。
+> 
+> 按量计费的大型机仍然放在客户自己的数据中心里,但计费将基于客户对系统的实际使用量,很像云服务的模式,Mauri 解释道。IBM 正在寻求增加大型机销量的途径,与 Canonical 公司合作、鼓励在大型机上使用开源工具,都能为它打开一个小的、却能赚钱的市场。
+> 
+> 
+> [详情移步 TechCrunch][3]
+
+### 你为什么要放弃 Windows10 而选择 Linux ###
+
+自从 Windows10 出来以后,各种媒体都报道过它的隐藏间谍功能。ZDNet 列出了一些放弃 Windows10 的理由。
+
+SJVN 在 ZDNet 的报告:
+
+> 你可以试着关掉 Windows10 的数据分享功能,但坏消息是:Windows10 还是会继续把你的一些数据分享给微软公司。这时你还有一个选择:桌面 Linux。
+> 
+> 你可以有很多方法不让 Windows10 泄露你的秘密,但你不能完全阻止它交谈。Cortana,win10 小娜,语音助手,就算你把她关了,她也会把数据发给微软公司。这些数据包括你的电脑 ID,微软用它来识别你的 PC 机。
+> 
+> 所以如果这些泄密给你带来了烦恼,你可以使用老版本 Windows7,或者换到 Linux。然而,当 Windows7 不再提供技术支持的那天到来,如果你还想保留隐私,最终你还是只能选择 Linux。
+> 
+> 这里还有些小众的桌面系统能保护你的隐私,比如 BSD 
家族的 FreeBSD,PCBSD,NetBSD,eComStation,OS/2。但是,最好的选择还是 Linux,它提供最低的学习曲线。 +> +> [详情移步 ZDNet][4] + +-------------------------------------------------------------------------------- + +via: http://www.itworld.com/article/2972587/linux/why-did-you-start-using-linux.html + +作者:[Jim Lynch][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.itworld.com/author/Jim-Lynch/ +[1]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/ +[2]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/ +[3]:http://techcrunch.com/2015/08/16/ibm-teams-with-canonical-on-linux-mainframe/ +[4]:http://www.zdnet.com/article/sick-of-windows-spying-on-you-go-linux/ From 059bec5f522ef8bf54b3627cfc8aa2ee36aa87d7 Mon Sep 17 00:00:00 2001 From: bazz2 Date: Mon, 7 Dec 2015 08:39:23 +0800 Subject: [PATCH 136/160] [translating]Linus Torvalds Lambasts Open Source Programmers over Insecure Code --- ...alds Lambasts Open Source Programmers over Insecure Code.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md b/sources/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md index 1e37549646..d4cf419adb 100644 --- a/sources/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md +++ b/sources/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md @@ -1,3 +1,4 @@ +[bazz2222222] Linus Torvalds Lambasts Open Source Programmers over Insecure Code ================================================================================ ![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/linus-torvalds.jpg) @@ -32,4 +33,4 @@ via: 
http://thevarguy.com/open-source-application-software-companies/110415/linu [a]:http://thevarguy.com/author/christopher-tozzi [1]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html -[2]:https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate \ No newline at end of file +[2]:https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate From 4d115bcbfcd153497f8fc8f2786b88d88d3fbf35 Mon Sep 17 00:00:00 2001 From: bazz2 Date: Mon, 7 Dec 2015 09:34:11 +0800 Subject: [PATCH 137/160] [translated]Linus Torvalds Lambasts Open Source Programmers over Insecure Code --- ...n Source Programmers over Insecure Code.md | 36 ------------------- 1 file changed, 36 deletions(-) delete mode 100644 sources/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md diff --git a/sources/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md b/sources/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md deleted file mode 100644 index d4cf419adb..0000000000 --- a/sources/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md +++ /dev/null @@ -1,36 +0,0 @@ -[bazz2222222] -Linus Torvalds Lambasts Open Source Programmers over Insecure Code -================================================================================ -![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/linus-torvalds.jpg) - -Linus Torvalds's latest rant underscores the high expectations the Linux developer places on open source programmers—as well the importance of security for Linux kernel code. - -Torvalds is the unofficial "benevolent dictator" of the Linux kernel project. That means he gets to decide which code contributions go into the kernel, and which ones land in the reject pile. - -On Oct. 28, open source coders whose work did not meet Torvalds's expectations faced an [angry rant][1]. "Christ people," Torvalds wrote about the code. 
"This is just sh*t." - -He went on to call the coders "just incompetent and out to lunch." - -What made Torvalds so angry? He believed the code could have been written more efficiently. It could have been easier for other programmers to understand and would run better through a compiler, the program that translates human-readable code into the binaries that computers understand. - -Torvalds posted his own substitution for the code in question and suggested that the programmers should have written it his way. - -Torvalds has a history of lashing out against people with whom he disagrees. It stretches back to 1991, when he famously [flamed Andrew Tanenbaum][2]—whose Minix operating system he later described as a series of "brain-damages." No doubt this latest criticism of fellow open source coders will go down as another example of Torvalds's confrontational personality. - -But Torvalds may also have been acting strategically during this latest rant. "I want to make it clear to *everybody* that code like this is completely unacceptable," he wrote, suggesting that his goal was to send a message to all Linux programmers, not just vent his anger at particular ones. - -Torvalds also used the incident as an opportunity to highlight the security concerns that arise from poorly written code. Those are issues dear to open source programmers' hearts in an age when enterprises are finally taking software security seriously, and demanding top-notch performance from their code in this regard. Lambasting open source programmers who write insecure code thus helps Linux's image. 
- --------------------------------------------------------------------------------- - -via: http://thevarguy.com/open-source-application-software-companies/110415/linus-torvalds-lambasts-open-source-programmers-over-inse - -作者:[Christopher Tozzi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://thevarguy.com/author/christopher-tozzi -[1]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html -[2]:https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate From ed2c16631bb5e14c63e7df9634680b8217e64bdf Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 7 Dec 2015 10:24:45 +0800 Subject: [PATCH 138/160] Update 20151119 Going Beyond Hello World Containers is Hard Stuff.md --- ...nd Hello World Containers is Hard Stuff.md | 53 ++++++++++++++++--- 1 file changed, 47 insertions(+), 6 deletions(-) diff --git a/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md b/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md index 35465b1dec..0ffda13bf0 100644 --- a/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md +++ b/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md @@ -1,8 +1,7 @@ -translating by ezio - Going Beyond Hello World Containers is Hard Stuff - +要超越Hello World 容器是件困难的事情 ================================================================================ + In [my previous post][1], I provided the basic concepts behind Linux container technology. I wrote as much for you as I did for me. Containers are new to me. And I figured having the opportunity to blog about the subject would provide the motivation to really learn the stuff. 在[我的上一篇文章里][1], 我介绍了Linux 容器背后的技术的概念。我写了我知道的一切。容器对我来说也是比较新的概念。我写这篇文章的目的就是鼓励我真正的来学习这些东西。 @@ -135,18 +134,23 @@ Then, a brain spark flew. 
Somewhere along the line remembered reading a microser 然后灵光一闪。某人记起来微服务不应该自己做任何数据持久化的工作!保存数据应该是另一个服务的工作。 So what if the container can’t find the `/upload` directory? The real issue is: my microservice has a fundamentally flawed design. +所以容器找不到目录`/upload` 的原因到底是什么?这个问题的根本就是我的微服务在基础设计上就有问题。 Let’s take another look: +让我们看看另一幅图: ![container-diagram-4](https://deis.com/images/blog-images/containers-hard-4.png) Why am I saving a file to disk? Microservices are supposed to be fast. Why not do all my work in memory? Using memory buffers will make the "I can’t find no stickin’ directory" error go away and will increase the performance of my app dramatically. +我为什么要把文件保存到磁盘?微服务按理来说是很快的。为什么不能让我的全部工作都在内存里完成?使用内存缓冲可以解决“找不到目录”这个问题,而且可以提高我的应用的性能。 So that’s what I did. And here’s what the plan was: +这就是我现在所做的。下面是我的计划: ![container-diagram-5](https://deis.com/images/blog-images/containers-hard-5.png) Here’s the NodeJS I wrote to do all the in-memory work for creating a thumbnail: +这是我用NodeJS 写的在内存工作、生成缩略图的代码: // Bind to the packages var express = require('express'); @@ -209,20 +213,26 @@ Here’s the NodeJS I wrote to do all the in-memory work for creating a thumbnai module.exports = router; Okay, so we’re back on track and everything is hunky dory on my local machine. I go to sleep. +好了,回到正轨,已经可以在我的本地机器正常工作了。我该去休息了。 But, before I do I test the microservice code running as standard Node app on localhost... +但是,在我测试把这个微服务当作一个普通的Node 应用运行在本地时... ![Containers Hard](https://deis.com/images/blog-images/containers-hard-6.png) It works fine. Now all I needed to do was get it working in a container. +它工作的很好。现在我要做的就是让他在容器里面工作。 The next day I woke up, grabbed some coffee, and built an image—not forgetting to put in the period! +第二天我起床后喝点咖啡,然后创建一个镜像——这次没有忘记那个"."! $ docker build -t thumbnailer:01 . I am building from the root directory of my thumbnailer project. The build command uses the Dockerfile that is in the root directory. 
That’s how it goes: put the Dockerfile in the same place you want to run build and the Dockerfile will be used by default.
+我从缩略图工程的根目录开始构建。构建命令使用了根目录下的Dockerfile。它是这样工作的:把Dockerfile 放到你想运行构建命令的地方,系统就会默认使用这个Dockerfile。

Here is the text of the Dockerfile I was using:
+下面是我使用的Dockerfile 的内容:

    FROM ubuntu:latest
    MAINTAINER bob@CogArtTech.com

    RUN apt-get update
    RUN apt-get install -y nodejs nodejs-legacy npm
    RUN apt-get install imagemagick libmagickcore-dev libmagickwand-dev
    RUN apt-get clean

    COPY ./image/ /opt/app

    WORKDIR /opt/app

    RUN npm install

    EXPOSE 3000

    CMD npm start

What could go wrong?
+这怎么可能出错呢?

-### The Second Big Problem ###
+### 第二个大问题 ###

I ran the `build` command and I got this error:
+我运行了`build` 命令,然后出了这个错:

    Do you want to continue? [Y/n] Abort.

    The command '/bin/sh -c apt-get install imagemagick libmagickcore-dev libmagickwand-dev' returned a non-zero code: 1

I figured something was wrong with the microservice. I went back to my machine, fired up the service on localhost, and uploaded a file.
+我猜测微服务出错了。我回到本地机器,从本机启动微服务,然后试着上传文件。

Then I got this error from NodeJS:
+然后我从NodeJS 获得了这个错误:

    Error: spawn convert ENOENT

What’s going on? This worked the other night!
+怎么回事?之前还是好好的啊!

I searched and searched, for every permutation of the error I could think of. After about four hours of replacing different node modules here and there, I figured: why not restart the machine?
+我把能想到的各种错误原因都搜了个遍。在东换西换地折腾了大概4个小时的Node 模块之后,我想:为什么不重启一下机器呢?

I did. And guess what? The error went away!
+重启了,你猜猜结果?错误消失了!(译注:万能的重启)

Go figure.
+真是搞不懂。

-### Putting the Genie Back in the Bottle ###
+### 将精灵关进瓶子 ###

So, back to the original quest: I needed to get this build working.
+跳回正题:我需要完成构建工作。

I removed all of the containers running on the VM, using [the `rm` command][5]:
+我使用[`rm` 命令][5]删除了虚拟机里所有的容器。

    $ docker rm -f $(docker ps -a -q)

The `-f` flag here force removes running images. 
+`-f` 在这里的用处是强制删除运行中的镜像。

Then I removed all of my Docker images, using [the `rmi` command][6]:
+然后删除了全部Docker 镜像,用的是[命令`rmi`][6]:

    $ docker rmi -f $(docker images | tail -n +2 | awk '{print $3}')

I go through the whole process of rebuilding the image, installing the container and try to get the microservice running. Then after about an hour of self-doubt and accompanying frustration, I thought to myself: maybe this isn’t a problem with the microservice.
+我重新执行了构建镜像、安装容器、运行微服务的整个过程。在度过了充满自我怀疑和沮丧的一个小时之后,我告诉自己:这个错误可能不是微服务的原因。

So, I looked at the error again:
+于是我重新审视了这个错误:

    Do you want to continue? [Y/n] Abort.

    The command '/bin/sh -c apt-get install imagemagick libmagickcore-dev libmagickwand-dev' returned a non-zero code: 1

Then it hit me: the build is looking for a Y input from the keyboard! But, this is a non-interactive Dockerfile script. There is no keyboard.
+我突然明白了:构建过程在等着有人从键盘输入Y! 但是,这是一个非交互的Dockerfile 脚本啊。这里并没有键盘。

I went back to the Dockerfile, and there it was:
+回到Dockerfile,脚本原来是这样的:

    RUN apt-get update
    RUN apt-get install -y nodejs nodejs-legacy npm
    RUN apt-get install imagemagick libmagickcore-dev libmagickwand-dev
    RUN apt-get clean

The second `apt-get` command is missing the `-y` flag which causes "yes" to be given automatically where usually it would be prompted for.
+第二条`apt-get` 命令缺少了`-y` 标志,而这个标志可以在通常需要手动确认的地方自动回答"yes"。这才是错误的根本原因。

I added the missing `-y` to the command:
+我给这条命令补上了缺失的`-y` :

    RUN apt-get update
    RUN apt-get install -y nodejs nodejs-legacy npm
    RUN apt-get install -y imagemagick libmagickcore-dev libmagickwand-dev
    RUN apt-get clean

And guess what: after two days of trial and tribulation, it worked! Two whole days!
+猜一猜结果:经过将近两天的尝试和痛苦,容器终于正常工作了!整整两天啊!

So, I did my build:
+我完成了构建工作:

    $ docker build -t thumbnailer:0.1 .

I fired up the container:
+启动了容器:

    $ docker run -d -p 3001:3000 thumbnailer:0.1

Got the IP address of the Virtual Machine:
+获取了虚拟机的IP 地址:

    $ docker-machine ip default

Went to my browser and entered http://192.168.99.100:3001/ into the address bar. 
+在我的浏览器里面输入 http://192.168.99.100:3001/ :

The upload page loaded.
+上传页面打开了。

I selected an image, and this is what I got:
+我选择了一个图片,然后得到了这个:

![container-diagram-7](https://deis.com/images/blog-images/containers-hard-7.png)

It worked!
+工作了!

Inside a container, for the first time!
+在容器里面工作了,我的第一次啊!

-### So What Does It All Mean? ###
+### 这意味着什么? ###

A long time ago, I accepted the fact that when it comes to tech, sometimes even the easy stuff is hard. Along with that, I abandoned the desire to be the smartest guy in the room. Still, the last few days trying to get basic competency with containers has been, at times, a journey of self doubt.
+很久以前,我接受了这样一个道理:当你刚开始尝试某项技术时,即使是最简单的事情也会变得很困难。因此,我压抑了要成为房间里最聪明的人的欲望。然而最近几天尝试容器的过程就是一个充满自我怀疑的旅程。

But, you wanna know something? It’s 2 AM on an early morning as I write this, and every nerve wracking hour has been worth it. Why? Because you gotta put in the time. This stuff is hard and it does not come easy for anyone. And don’t forget: you’re learning tech and tech runs the world!
+但是你想知道吗?写下这些文字时已经是凌晨2点了,而每一个折磨人的小时都是值得的。为什么?因为你必须投入这些时间。这东西确实难,对任何人来说都不轻松。还有别忘了:你在学习技术,而技术正运行着这个世界!

P.S. Check out this two part video of Hello World containers, check out [Raziel Tabib’s][7] excellent work in this video...
+P.S. 了解一下Hello World 容器的两段视频,这里会有 [Raziel Tabib’s][7] 的精彩工作内容。

注:youtube视频

And don't miss part two...
+千万别忘记第二部分... 
注:youtube视频 From c09ccf9b7e1612ebf3d01d139bbe6f427822c3a4 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 7 Dec 2015 10:28:09 +0800 Subject: [PATCH 139/160] Update 20151119 Going Beyond Hello World Containers is Hard Stuff.md clean --- ...nd Hello World Containers is Hard Stuff.md | 78 +------------------ 1 file changed, 2 insertions(+), 76 deletions(-) diff --git a/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md b/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md index 0ffda13bf0..22ecb24715 100644 --- a/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md +++ b/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md @@ -2,27 +2,18 @@ Going Beyond Hello World Containers is Hard Stuff 要超越Hello World 容器是件困难的事情 ================================================================================ -In [my previous post][1], I provided the basic concepts behind Linux container technology. I wrote as much for you as I did for me. Containers are new to me. And I figured having the opportunity to blog about the subject would provide the motivation to really learn the stuff. 在[我的上一篇文章里][1], 我介绍了Linux 容器背后的技术的概念。我写了我知道的一切。容器对我来说也是比较新的概念。我写这篇文章的目的就是鼓励我真正的来学习这些东西。 -I intend to learn by doing. First get the concepts down, then get hands-on and write about it as I go. I assumed there must be a lot of Hello World type stuff out there to give me up to speed with the basics. Then, I could take things a bit further and build a microservice container or something. 我打算在使用中学习。首先实践,然后上手并记录下我是怎么走过来的。我假设这里肯定有很多想"Hello World" 这种类型的知识帮助我快速的掌握基础。然后我能够更进一步,构建一个微服务容器或者其它东西。 -I mean, it can’t be that hard, right? 我的意思是还会比着更难吗,对吧? -Wrong. 错了。 -Maybe it’s easy for someone who spends significant amount of their life immersed in operations work. But for me, getting started with this stuff turned out to be hard to the point of posting my frustrations to Facebook... 
可能对那些职业生涯大半都沉浸在运维工作里的人来说,这些东西很简单。但对我来说,刚上手这些东西实在是困难,从我发到Facebook 上的沮丧状态就能看出来。

-But, there is good news: I got it to work! And it’s always nice being able to make lemonade from lemons. So I am going to share the story of how I made my first microservice container with you. Maybe my pain will save you some time.
但是还有一个好消息:我最终让它工作了。而且能把柠檬做成柠檬水,总是件让人开心的事。所以我准备向你分享我是如何制作我的第一个微服务容器的。我的痛苦经历也许能帮你节省不少时间。

-If you've ever found yourself in a situation like this, fear not: folks like me are here to deal with the problems so you don't have to!
-
-Let’s begin.
如果你也曾陷入这种境地,不要怕:像我这样的人会先把这些问题踩一遍,你就不必再吃这些苦头了!

让我们开始吧。

@@ -30,126 +21,95 @@ Let’s begin.

### 一个缩略图微服务 ###

-The microservice I designed was simple in concept. Post a digital image in JPG or PNG format to an HTTP endpoint and get back a a 100px wide thumbnail.
我设计的微服务在概念上很简单:把一张JPG 或者PNG 格式的数字图片POST 到一个HTTP 端点,然后取回一张100像素宽的缩略图。

-Here’s what that looks like:
下面是它的示意图:

![container-diagram-0](https://deis.com/images/blog-images/containers-hard-0.png)

-I decide to use a NodeJS for my code and version of [ImageMagick][2] to do the thumbnail transformation.
我决定使用NodeJS 来写代码,用[ImageMagick][2] 来做缩略图转换。

-I did my first version of the service, using the logic shown here:
我的服务的第一版的逻辑如下所示:

![container-diagram-1](https://deis.com/images/blog-images/containers-hard-1.png)

-I download the [Docker Toolbox][3] which installs an the Docker Quickstart Terminal. Docker Quickstart Terminal makes creating containers easier. The terminal fires up a Linux virtual machine that has Docker installed, allowing you to run Docker commands from within a terminal.
我下载了[Docker Toolbox][3],用它安装了Docker 快速启动终端(Docker Quickstart Terminal)。Docker 快速启动终端使得创建容器更简单了。它会启动一个装好了Docker 的Linux 虚拟机,让你可以在一个终端里运行Docker 命令。

-In my case, I am running on OS X. But there’s a Windows version too.
在我的例子里,操作系统是Mac OS X,不过Windows 下也有相同的工具。

-I am going to use Docker Quickstart Terminal to build a container image for my microservice and run a container from that image.
我准备在Docker 快速启动终端里为我的微服务构建一个容器镜像,然后用这个镜像运行容器。

-The Docker Quickstart Terminal runs in your regular terminal, like so:
Docker 快速启动终端就运行在你平常使用的终端里,就像这样:

![container-diagram-2](https://deis.com/images/blog-images/containers-hard-2.png)

### 第一个小问题和第一个大问题 ###

-So I fiddled around with NodeJS and ImageMagick and I got the service to work on my local machine.
所以我用NodeJS 和ImageMagick 瞎鼓捣了一通,终于让服务在本地跑起来了。

-Then, I created the Dockerfile, which is the configuration script Docker uses to build your container. (I’ll go more into builds and Dockerfile more later on.)
然后我创建了Dockerfile,这是Docker 用来构建容器的配置脚本。(后面我会更深入地介绍构建过程和Dockerfile)

-Here’s the build command I ran on the Docker Quickstart Terminal:
这是我在Docker 快速启动终端里运行的构建命令:

    $ docker build -t thumbnailer:0.1

-I got this response:
得到了如下回应:

    docker: "build" requires 1 argument.

-Huh.
呃。

-After 15 minutes I realized: I forgot to put a period . as the last argument!
过了15分钟我才意识到:我忘记在命令末尾加上作为最后一个参数的点`.` 了!

-It needs to be:
+
正确的指令应该是这样的:

    $ docker build -t thumbnailer:0.1 .

-But this wasn’t the end of my problems.
+
但是我的问题还不止这一个。

-I got the image to build and then I typed [the the `run` command][4] on the Docker Quickstart Terminal to fire up a container based on the image, called `thumbnailer:0.1`:
镜像构建好之后,我在Docker 快速启动终端里输入[`run` 命令][4],基于`thumbnailer:0.1` 这个镜像启动了一个容器:

    $ docker run -d -p 3001:3000 thumbnailer:0.1

-The `-p 3001:3000` argument makes it so the NodeJS microservice running on port 3000 within the container binds to port 3001 on the host virtual machine.
参数`-p 3001:3000` 把容器内运行在3000 端口上的NodeJS 微服务绑定到宿主虚拟机的3001 端口上。

-Looks so good so far, right?
到目前为止看起来都很好,对吧?

-Wrong. Things are about to get pretty bad.
错了。事情马上就要变糟了。

-I determined the IP address of the virtual machine created by Docker Quickstart Terminal by running the `docker-machine` command:
我在Docker 快速启动终端里运行`docker-machine` 命令,查到了它所创建的Docker 虚拟机的IP 地址:

    $ docker-machine ip default

-This returns the IP address of the default virtual machine, the one that is run under the Docker Quickstart Terminal. For me, this IP address was 192.168.99.100.
这条命令返回默认虚拟机的IP 地址,也就是运行在Docker 快速启动终端之下的那台虚拟机。对我来说,这个IP 地址是192.168.99.100。

-I browsed to http://192.168.99.100:3001/ and got the file upload page I built:
我在浏览器里打开http://192.168.99.100:3001/ ,看到了我做的文件上传页面:

![container-diagram-3](https://deis.com/images/blog-images/containers-hard-3.png)

-I selected a file and clicked the Upload Image button.
我选择了一个文件,然后点击了上传图片的按钮。

-But it didn’t work.
但是它并没有工作。

-The terminal is telling me it can’t find the `/upload` directory my microservice requires.
终端告诉我它无法找到我的微服务需要的`/upload` 目录。

-Now, keep in mind, I had been at this for about a day—between the fiddling and research. I’m feeling a little frustrated by this point.
要知道,算上瞎折腾和查资料,我已经在这上面耗费了将近一天的时间。此时我感到了一丝挫败。

-Then, a brain spark flew. Somewhere along the line remembered reading a microservice should not do any data persistence on its own! Saving data should be the job of another service.
然后灵光一闪。我想起曾在哪里读到过:微服务不应该自己做任何数据持久化的工作!保存数据应该是另一个服务的事。

-So what if the container can’t find the `/upload` directory? The real issue is: my microservice has a fundamentally flawed design.
所以,容器找不到`/upload` 目录又怎样呢?真正的问题是:我的微服务在基础设计上就有缺陷。

-Let’s take another look:
让我们再看一下这幅图:

![container-diagram-4](https://deis.com/images/blog-images/containers-hard-4.png)

-Why am I saving a file to disk? Microservices are supposed to be fast. Why not do all my work in memory? Using memory buffers will make the "I can’t find no stickin’ directory" error go away and will increase the performance of my app dramatically.
我为什么要把文件保存到磁盘?微服务按理来说是很快的。为什么不把全部工作都放在内存里完成?使用内存缓冲可以让“找不到目录”这个错误消失,还可以大幅提高我的应用的性能。

-So that’s what I did.
And here’s what the plan was:
这就是我现在所做的。下面是我的计划:

![container-diagram-5](https://deis.com/images/blog-images/containers-hard-5.png)

-Here’s the NodeJS I wrote to do all the in-memory work for creating a thumbnail:
这是我用NodeJS 写的、全部在内存里完成缩略图生成的代码:

    // Bind to the packages
@@ -212,26 +172,20 @@ Here’s the NodeJS I wrote to do all the in-memory work for creating a thumbnai
    module.exports = router;

-Okay, so we’re back on track and everything is hunky dory on my local machine. I go to sleep.
好了,一切重回正轨,在我的本地机器上一切正常。我该去睡觉了。

-But, before I do I test the microservice code running as standard Node app on localhost...
不过在睡觉之前,我先把这个微服务当作一个普通的Node 应用在本地又测试了一遍……

![Containers Hard](https://deis.com/images/blog-images/containers-hard-6.png)

-It works fine. Now all I needed to do was get it working in a container.
它工作得很好。现在我要做的就是让它在容器里面跑起来。

-The next day I woke up, grabbed some coffee, and built an image—not forgetting to put in the period!
第二天我起床后喝了点咖啡,然后开始构建镜像,这次没有忘记那个“.”!

    $ docker build -t thumbnailer:01 .

-I am building from the root directory of my thumbnailer project. The build command uses the Dockerfile that is in the root directory. That’s how it goes: put the Dockerfile in the same place you want to run build and the Dockerfile will be used by default.
我是从缩略图工程的根目录开始构建的。构建命令会使用根目录下的Dockerfile。它就是这样约定的:把Dockerfile 放在你要运行构建命令的目录下,它就会被默认使用。

-Here is the text of the Dockerfile I was using:
下面是我使用的Dockerfile 的内容:

    FROM ubuntu:latest
@@ -252,70 +206,54 @@ Here is the text of the Dockerfile I was using:
    CMD npm start

-What could go wrong?
这怎么可能出错呢?

### 第二个大问题 ###

-I ran the `build` command and I got this error:
我运行了`build` 命令,然后得到了这个错误:

    Do you want to continue? [Y/n] Abort.

    The command '/bin/sh -c apt-get install imagemagick libmagickcore-dev libmagickwand-dev' returned a non-zero code: 1

-I figured something was wrong with the microservice. I went back to my machine, fired up the service on localhost, and uploaded a file.
我猜测是微服务出错了。我回到本地机器,在本机启动微服务,然后试着上传文件。

-Then I got this error from NodeJS:
然后我从NodeJS 得到了这个错误:

    Error: spawn convert ENOENT

-What’s going on? This worked the other night!
怎么回事?之前还是好好的啊!

-I searched and searched, for every permutation of the error I could think of. After about four hours of replacing different node modules here and there, I figured: why not restart the machine?
我把这个报错能想到的各种写法都搜了个遍。在东一个西一个地替换了各种node 模块、折腾了大约4个小时之后,我想:为什么不重启一下机器呢?

-I did. And guess what? The error went away!
重启了,你猜猜结果?错误消失了!(译注:万能的重启)

-Go figure.
真是搞不懂。

### 将精灵关进瓶子 ###

-So, back to the original quest: I needed to get this build working.
跳回正题:我需要完成构建工作。

-I removed all of the containers running on the VM, using [the `rm` command][5]:
我使用[`rm` 命令][5]删除了虚拟机里的所有容器:

    $ docker rm -f $(docker ps -a -q)

-The `-f` flag here force removes running images.
参数`-f` 在这里的用处是强制删除运行中的容器。

-Then I removed all of my Docker images, using [the `rmi` command][6]:
然后我用[`rmi` 命令][6]删除了全部的Docker 镜像:

    $ docker rmi -f $(docker images | tail -n +2 | awk '{print $3}')

-I go through the whole process of rebuilding the image, installing the container and try to get the microservice running. Then after about an hour of self-doubt and accompanying frustration, I thought to myself: maybe this isn’t a problem with the microservice.
我重新走了一遍构建镜像、运行容器、启动微服务的全部流程。然后,在又一个充满自我怀疑和挫败的小时过后,我对自己说:也许这个错误根本不是微服务的问题。

-So, I looked that the the error again:
所以我又仔细看了看这个报错:

    Do you want to continue? [Y/n] Abort.

    The command '/bin/sh -c apt-get install imagemagick libmagickcore-dev libmagickwand-dev' returned a non-zero code: 1

-Then it hit me: the build is looking for a Y input from the keyboard!
我突然明白了:构建过程在等着有人从键盘输入Y!
但是,这是一个非交互的Dockerfile 脚本啊。这里并没有键盘。

-I went back to the Dockerfile, and there it was:
回到Dockerfile,脚本原来是这样的:

    RUN apt-get update
@@ -334,15 +272,12 @@ I added the missing `-y` to the command:
    RUN apt-get install -y imagemagick libmagickcore-dev libmagickwand-dev
    RUN apt-get clean

-And guess what: after two days of trial and tribulation, it worked! Two whole days!
猜猜结果怎样:经过将近两天的尝试和折磨,容器终于正常工作了!整整两天啊!

-So, I did my build:
我完成了构建工作:

    $ docker build -t thumbnailer:0.1 .

-I fired up the container:
启动了容器:

    $ docker run -d -p 3001:3000 thumbnailer:0.1

@@ -352,38 +287,29 @@ Got the IP address of the Virtual Machine:

    $ docker-machine ip default

-Went to my browser and entered http://192.168.99.100:3001/ into the address bar.
在我的浏览器地址栏里输入 http://192.168.99.100:3001/ :

-The upload page loaded.
上传页面打开了。

-I selected an image, and this is what I got:
我选择了一张图片,然后得到了这个:

![container-diagram-7](https://deis.com/images/blog-images/containers-hard-7.png)

-It worked!
成功了!

-Inside a container, for the first time!
第一次在容器里面跑通了!

### 这意味着什么? ###

-A long time ago, I accepted the fact when it comes to tech, sometimes even the easy stuff is hard. Along with that, I abandoned the desire to be the smartest guy in the room. Still, the last few days trying get basic competency with containers has been, at times, a journey of self doubt.
很久以前,我接受了这样一个道理:当你刚开始尝试某项技术时,即使是最简单的事情也会变得很困难。因此,我压抑了要成为房间里最聪明的人的欲望。然而最近几天尝试容器的过程,时不时就是一段充满自我怀疑的旅程。

-But, you wanna know something? It’s 2 AM on an early morning as I write this, and every nerve wracking hour has been worth it. Why? Because you gotta put in the time. This stuff is hard and it does not come easy for anyone. And don’t forget: you’re learning tech and tech runs the world!
但是你想知道些别的吗?写下这篇文章时已是凌晨2点,而每一个熬人的小时都是值得的。为什么?因为学这些东西就是得投入时间。这件事很难,对任何人来说都不是轻轻松松就能拿下的。但是不要忘记:你在学习技术,而技术正运行着这个世界!

-P.S. Check out this two part video of Hello World containers, check out [Raziel Tabib’s][7] excellent work in this video...
P.S.
请观看这两段关于Hello World 容器的视频,欣赏一下 [Raziel Tabib’s][7] 的精彩内容。

注:youtube视频

And don't miss part two...
千万别忘记第二部分...

注:youtube视频

From a6347bccf646b5d4e66823d14f95d61b6daebc5d Mon Sep 17 00:00:00 2001
From: ezio
Date: Mon, 7 Dec 2015 10:29:04 +0800
Subject: [PATCH 140/160] Update 20151119 Going Beyond Hello World Containers is Hard Stuff.md

---
...20151119 Going Beyond Hello World Containers is Hard Stuff.md | 1 -
1 file changed, 1 deletion(-)

diff --git a/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md b/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md
index 22ecb24715..c42b278787 100644
--- a/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md
+++ b/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md
@@ -1,4 +1,3 @@
-Going Beyond Hello World Containers is Hard Stuff
要超越Hello World 容器是件困难的事情
================================================================================

From 45cc9ab4726f1e392c7f376db8ef99d125e7ac7e Mon Sep 17 00:00:00 2001
From: ezio
Date: Mon, 7 Dec 2015 10:34:32 +0800
Subject: [PATCH 141/160] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
.../20151119 Going Beyond Hello World Containers is Hard Stuff.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {sources => translated}/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md (100%)

diff --git a/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md b/translated/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md
similarity index 100%
rename from sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md
rename to translated/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md

From 543453a7c57c2aa51c0c54f85002e32facece5e6 Mon Sep 17 00:00:00 2001
From: bazz2
Date: Tue, 8 Dec 2015 08:58:26 +0800
Subject: [PATCH 142/160]
[translating]Learn with Linux--Physics Simulation --- .../Learn with Linux--Physics Simulation.md | 7 ++-- ...n Source Programmers over Insecure Code.md | 35 +++++++++++++++++++ 2 files changed, 39 insertions(+), 3 deletions(-) create mode 100644 translated/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md diff --git a/sources/tech/Learn with Linux/Learn with Linux--Physics Simulation.md b/sources/tech/Learn with Linux/Learn with Linux--Physics Simulation.md index 2a8415dda7..7c210ff0c8 100644 --- a/sources/tech/Learn with Linux/Learn with Linux--Physics Simulation.md +++ b/sources/tech/Learn with Linux/Learn with Linux--Physics Simulation.md @@ -1,8 +1,9 @@ -Learn with Linux: Physics Simulation +[bazz222222] +Linux 学习系列之物理模拟 ================================================================================ ![](https://www.maketecheasier.com/assets/uploads/2015/07/physics-fetured.jpg) -This article is part of the [Learn with Linux][1] series: +[Linux 学习系列][1]的所有文章: - [Learn with Linux: Learning to Type][2] - [Learn with Linux: Physics Simulation][3] @@ -104,4 +105,4 @@ via: https://www.maketecheasier.com/linux-physics-simulation/ [7]:https://edu.kde.org/applications/all/step [8]:https://edu.kde.org/ [9]:http://lightspeed.sourceforge.net/ -[10]:http://www.physion.net/ \ No newline at end of file +[10]:http://www.physion.net/ diff --git a/translated/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md b/translated/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md new file mode 100644 index 0000000000..e927a02b8c --- /dev/null +++ b/translated/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md @@ -0,0 +1,35 @@ +开源开发者提交不安全代码,遭 Linus 炮轰 +================================================================================ +![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/linus-torvalds.jpg) + +Linus 
最近(LCTT 译注:其实是11月份,没有及时翻译出来,看官轻喷Orz)骂了一个 Linux 开发者,原因是他向 kernel 提交了一份不安全的代码。 + +Linus 是个 Linux 内核项目非官方的“仁慈的独裁者(LCTT译注:英国《卫报》曾将乔布斯评价为‘仁慈的独裁者’)”,这意味着他有权决定将哪些代码合入内核,哪些代码直接丢掉。 + +在10月28号,一个开源开发者提交的代码未能符合 Torvalds 的要求,于是遭来了[一顿臭骂][1]。Torvalds 在他提交的代码下评论道:“你提交的是什么东西。” + +接着他说这个开发者是“毫无能力的神经病”。 + +Torvalds 为什么会这么生气?他觉得那段代码可以写得更有效率一点,可读性更强一点,编译器编译后跑得更好一点(编译器的作用就是将让人看的代码翻译成让电脑看的代码)。 + +Torvalds 重新写了一版代码将原来的那份替换掉,并建议所有开发者应该像他那种风格来写代码。 + +Torvalds 一直在嘲讽那些不符合他观点的人。早在1991年他就攻击过[Andrew Tanenbaum][2]——那个 Minix 操作系统的作者,而那个 Minix 操作系统被 Torvalds 描述为“脑残”。 + +但是 Torvalds 在这次嘲讽中表现得更有战略性了:“我想让*每个人*都知道,像他这种代码是完全不能被接收的。”他说他的目的是提醒每个 Linux 开发者,而不是针对那个开发者。 + +Torvalds 也用这个机会强调了烂代码的安全问题。现在的企业对安全问题很重视,所以安全问题需要在开源开发者心中得到足够重视,甚至需要在代码中表现为最高等级(LCTT 译注:操作系统必须权衡许多因素:安全、处理速度、灵活性、易用性等,而这里 Torvalds 将安全提升为最高优先级了)。骂一下那些提交不安全代码的开发者可以帮助提高 Linux 系统的安全性。 + +-------------------------------------------------------------------------------- + +via: http://thevarguy.com/open-source-application-software-companies/110415/linus-torvalds-lambasts-open-source-programmers-over-inse + +作者:[Christopher Tozzi][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://thevarguy.com/author/christopher-tozzi +[1]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html +[2]:https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate From 080925b76ce69573a1bc5b746a7ee18272120b71 Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 8 Dec 2015 09:58:55 +0800 Subject: [PATCH 143/160] translating by ezio kernel,datastructure --- sources/tech/20151122 Doubly linked list in the Linux Kernel.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20151122 Doubly linked list in the Linux Kernel.md b/sources/tech/20151122 Doubly linked list in the Linux Kernel.md index e6b5c97a77..4d6ff02ab8 100644 --- a/sources/tech/20151122 Doubly linked list in the Linux Kernel.md +++ b/sources/tech/20151122 
Doubly linked list in the Linux Kernel.md @@ -1,3 +1,5 @@ +translating by Ezio + Data Structures in the Linux Kernel ================================================================================ From 5102620ae4d2163a2dd09ed163b4fb69d5802eb5 Mon Sep 17 00:00:00 2001 From: ezio Date: Tue, 8 Dec 2015 10:24:21 +0800 Subject: [PATCH 144/160] update --- .../20151122 Doubly linked list in the Linux Kernel.md | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151122 Doubly linked list in the Linux Kernel.md b/sources/tech/20151122 Doubly linked list in the Linux Kernel.md index 4d6ff02ab8..96a515fe93 100644 --- a/sources/tech/20151122 Doubly linked list in the Linux Kernel.md +++ b/sources/tech/20151122 Doubly linked list in the Linux Kernel.md @@ -3,12 +3,14 @@ translating by Ezio Data Structures in the Linux Kernel ================================================================================ -Doubly linked list +双向链表 -------------------------------------------------------------------------------- Linux kernel provides its own implementation of doubly linked list, which you can find in the [include/linux/list.h](https://github.com/torvalds/linux/blob/master/include/linux/list.h). We will start `Data Structures in the Linux kernel` from the doubly linked list data structure. Why? 
Because it is very popular in the kernel, just try to [search](http://lxr.free-electrons.com/ident?i=list_head) +Linux 内核自己实现了双向链表,可以在[include/linux/list.h](https://github.com/torvalds/linux/blob/master/include/linux/list.h)找到定义。我们将会从双向链表数据结构开始`内核的数据结构`。为什么?因为它在内核里使用的很广泛,你只需要在[free-electrons.com](http://lxr.free-electrons.com/ident?i=list_head) 检索一下就知道了。 First of all, let's look on the main structure in the [include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h): +首先让我们看一下在[include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h) 里的主结构体: ```C struct list_head { @@ -17,6 +19,7 @@ struct list_head { ``` You can note that it is different from many implementations of doubly linked list which you have seen. For example, this doubly linked list structure from the [glib](http://www.gnu.org/software/libc/) library looks like : +你可能注意到这和你以前见过的双向链表的实现方法是不同的。举个例子来说,在[glib](http://www.gnu.org/software/libc/) 库里是这样实现的: ```C struct GList { @@ -27,8 +30,10 @@ struct GList { ``` Usually a linked list structure contains a pointer to the item. The implementation of linked list in Linux kernel does not. So the main question is - `where does the list store the data?`. The actual implementation of linked list in the kernel is - `Intrusive list`. An intrusive linked list does not contain data in its nodes - A node just contains pointers to the next and previous node and list nodes part of the data that are added to the list. This makes the data structure generic, so it does not care about entry data type anymore. 
+通常来说,一个链表结构会包含一个指向数据项的指针。但是内核的实现并没有这样做。所以问题来了:`链表在哪里保存数据呢?`。实际上,内核里实现的是`侵入式链表`(intrusive list)。侵入式链表并不在节点内保存数据:节点仅仅包含指向前后节点的指针,数据本身则把链表节点作为自己的一个成员,从而挂接到链表上。这就使得这个数据结构是通用的,使用时就不需要关心节点数据的类型了。

For example:
+比如:

```C
struct nmi_desc {

From 7910711afb7a4a672f75032775dbcebb03c2c9e1 Mon Sep 17 00:00:00 2001
From: KnightJoker <544133483@qq.com>
Date: Tue, 8 Dec 2015 10:55:21 +0800
Subject: [PATCH 145/160] Translated

---
...everse Proxy for Apache on FreeBSD 10.2.md | 327 ++++++++++++++++++
1 file changed, 327 insertions(+)
create mode 100644 translated/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md

diff --git a/translated/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md b/translated/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md
new file mode 100644
index 0000000000..db809e167a
--- /dev/null
+++ b/translated/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md
@@ -0,0 +1,327 @@
+Translated by KnightJoker
+如何在FreeBSD 10.2上安装Nginx作为Apache的反向代理
+================================================================================
+Nginx是一款免费的、开源的HTTP 和反向代理服务器, 也可以用作POP3/IMAP 的邮件代理服务器. Nginx是一款高性能的web服务器, 其特点是功能丰富、结构简单以及内存占用低. 第一个版本由 Igor Sysoev 在2002年发布, 如今已被很多大型科技公司使用, 包括 Netflix, Github, Cloudflare, WordPress.com等等

在这篇教程里我们会 "**在freebsd 10.2系统上,安装和配置Nginx网络服务器作为Apache的反向代理**". Apache 会和PHP 一起运行在8080端口上, 而我们需要把Nginx 配置在80端口上, 用来接收用户/访问者的请求. 如果用户通过浏览器在80端口上请求网页, 那么Nginx 会把这个请求转发给运行在8080端口上的Apache 网络服务器和PHP.

#### 前提条件 ####

- FreeBSD 10.2.
- Root 权限.

### 步骤 1 - 更新系统 ###

使用SSH 凭证登录到你的FreeBSD服务器, 并使用下面的命令来更新你的系统 :

    freebsd-update fetch
    freebsd-update install

### 步骤 2 - 安装 Apache ###

Apache是开源的HTTP服务器, 也是目前使用范围最广的网络服务器. 在FreeBSD里Apache未被默认安装, 但是我们可以通过 "/usr/ports/www/apache24" 下的ports 来编译安装, 也可以用pkg 命令直接从FreeBSD 软件仓库安装。在本教程中,我们将使用pkg命令从FreeBSD的库中安装:

    pkg install apache24

### 步骤 3 - 安装 PHP ###

一旦成功安装Apache, 接着就安装PHP, 它用来处理用户对PHP 文件的请求.
我们将会用到如下的pkg命令来安装PHP :

    pkg install php56 mod_php56 php56-mysql php56-mysqli

### 步骤 4 - 配置 Apache 和 PHP ###

一旦所有都安装好了, 我们将配置Apache 运行在8080端口上, 并让PHP与Apache一同工作. 要配置Apache, 我们可以编辑 "httpd.conf" 这个配置文件; 而对于PHP, 我们只需要把PHP的配置文件 php.ini 复制到 "/usr/local/etc/" 目录下.

进入到 "/usr/local/etc/" 目录, 并且复制 php.ini-production 文件到 php.ini :

    cd /usr/local/etc/
    cp php.ini-production php.ini

下一步, 进入Apache目录, 通过编辑 "httpd.conf" 文件来配置Apache :

    cd /usr/local/etc/apache24
    nano -c httpd.conf

端口配置在第 **52**行 :

    Listen 8080

服务器名称配置在第 **219** 行:

    ServerName 127.0.0.1:8080

在第 **277**行添加 DirectoryIndex 文件, 当请求的是一个目录时, Apache 将用它来响应 :

    DirectoryIndex index.php index.html

在第 **287**行下, 添加如下配置, 让Apache 支持PHP (原文此处的 FilesMatch 标签在网页转存时被吞掉了, 这里按Apache 处理php/phps 文件的惯用写法补全) :

    <FilesMatch "\.php$">
        SetHandler application/x-httpd-php
    </FilesMatch>
    <FilesMatch "\.phps$">
        SetHandler application/x-httpd-php-source
    </FilesMatch>

保存然后退出.

现在用sysrc命令, 来添加Apache作为开机启动项目 :

    sysrc apache24_enable=yes

然后用下面的命令测试Apache的配置 :

    apachectl configtest

如果到这里都没有问题的话, 那么就启动Apache吧 :

    service apache24 start

如果全部完毕, 在"/usr/local/www/apache24/data" 目录下创建一个phpinfo文件, 是验证PHP在Apache下正常运行的好方法 :

    cd /usr/local/www/apache24/data
    echo "<?php phpinfo(); ?>" > info.php

现在就可以访问 freebsd 的服务器 IP : 192.168.1.123:8080/info.php.

![Apache and PHP on Port 8080](http://blog.linoxide.com/wp-content/uploads/2015/11/Apache-and-PHP-on-Port-8080.png)

Apache 正在8080端口上和PHP 一起运行着.

### 步骤 5 - 安装 Nginx ###

Nginx 是一款内存占用低的高性能web服务器, 同时也是反向代理服务器. 在这个步骤里, 我们将会使用Nginx作为Apache的反向代理, 因此让我们用pkg命令来安装它吧 :

    pkg install nginx

### 步骤 6 - 配置 Nginx ###

一旦 Nginx 安装完毕, 我们就需要用新的配置内容替换掉原来的 "**nginx.conf**" 文件.
更改到 "/usr/local/etc/nginx/"目录下 并且默认备份到 nginx.conf 文件: + + cd /usr/local/etc/nginx/ + mv nginx.conf nginx.conf.oroginal + +现在就可以创建一个新的 nginx 配置文件了 : + + nano -c nginx.conf + +然后粘贴下面的配置: + + user www; + worker_processes 1; + error_log /var/log/nginx/error.log; + + events { + worker_connections 1024; + } + + http { + include mime.types; + default_type application/octet-stream; + + log_format main '$remote_addr - $remote_user [$time_local] "$request" ' + '$status $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; + access_log /var/log/nginx/access.log; + + sendfile on; + keepalive_timeout 65; + + # Nginx cache configuration + proxy_cache_path /var/nginx/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m; + proxy_temp_path /var/nginx/cache/tmp; + proxy_cache_key "$scheme$host$request_uri"; + + gzip on; + + server { + #listen 80; + server_name _; + + location /nginx_status { + + stub_status on; + access_log off; + } + + # redirect server error pages to the static page /50x.html + # + error_page 500 502 503 504 /50x.html; + location = /50x.html { + root /usr/local/www/nginx-dist; + } + + # proxy the PHP scripts to Apache listening on 127.0.0.1:8080 + # + location ~ \.php$ { + proxy_pass http://127.0.0.1:8080; + include /usr/local/etc/nginx/proxy.conf; + } + } + + include /usr/local/etc/nginx/vhost/*; + + } + +保存退出. + +下一步, 在nginx目录下面,创建一个 **proxy.conf** 文件,使其作为反向代理 : + + cd /usr/local/etc/nginx/ + nano -c proxy.conf + +粘贴如下配置 : + + proxy_buffering on; + proxy_redirect off; + proxy_set_header Host $host; + proxy_set_header X-Real-IP $remote_addr; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + client_max_body_size 10m; + client_body_buffer_size 128k; + proxy_connect_timeout 90; + proxy_send_timeout 90; + proxy_read_timeout 90; + proxy_buffers 100 8k; + add_header X-Cache $upstream_cache_status; + +保存退出. 
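顺带说明一下上面的缓存配置: nginx 会对 `proxy_cache_key` 求MD5 散列, 并按照 `levels=1:2` 从散列值的末尾往前取字符来建立缓存子目录. 下面用一小段Python 来演示这个换算过程(仅为帮助理解的示意代码, 并非nginx 或原文的一部分, 其中的键值也只是假设的例子):

```python
import hashlib

def nginx_cache_path(cache_dir, key, levels=(1, 2)):
    # nginx names the cache file after the MD5 of proxy_cache_key,
    # and builds the "levels" sub-directories from the END of the hex digest
    digest = hashlib.md5(key.encode()).hexdigest()
    parts, pos = [], len(digest)
    for width in levels:
        parts.append(digest[pos - width:pos])
        pos -= width
    return "/".join([cache_dir] + parts + [digest])

# proxy_cache_key "$scheme$host$request_uri" -> "http" + "saitama.me" + "/info.php"
print(nginx_cache_path("/var/nginx/cache", "httpsaitama.me/info.php"))
```

也就是说, 对于 `levels=1:2`, 一个MD5 以 `...029c` 结尾的键会被存放在 `/var/nginx/cache/c/29/<md5>` 这样的路径下.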
+最后一步, 为 nginx 的高速缓存创建一个 "/var/nginx/cache" 的新目录 :

    mkdir -p /var/nginx/cache

### 步骤 7 - 配置 Nginx 的虚拟主机 ###

在这一步里, 我们将为域名 "saitama.me" 创建一个新的虚拟主机, 它的网站根目录(document root)是 "/usr/local/www/saitama.me", 日志文件放在 "/var/log/nginx" 目录下.

我们要做的第一件事情就是创建一个新的目录来存放虚拟主机配置文件, 这里我们新建一个名为 "**vhost**" 的目录 :

    cd /usr/local/etc/nginx/
    mkdir vhost

创建好vhost 目录后, 我们就进入这个目录并创建一个新的虚拟主机配置文件. 这里我取名为 "**saitama.conf**" :

    cd vhost/
    nano -c saitama.conf

粘贴如下虚拟主机的配置 :

    server {
        # Replace with your freebsd IP
        listen 192.168.1.123:80;

        # Document Root
        root /usr/local/www/saitama.me;
        index index.php index.html index.htm;

        # Domain
        server_name www.saitama.me saitama.me;

        # Error and Access log file
        error_log /var/log/nginx/saitama-error.log;
        access_log /var/log/nginx/saitama-access.log main;

        # Reverse Proxy Configuration
        location ~ \.php$ {
            proxy_pass http://127.0.0.1:8080;
            include /usr/local/etc/nginx/proxy.conf;

            # Cache configuration
            proxy_cache my-cache;
            proxy_cache_valid 10s;
            proxy_no_cache $cookie_PHPSESSID;
            proxy_cache_bypass $cookie_PHPSESSID;
            proxy_cache_key "$scheme$host$request_uri";

        }

        # Disable Cache for the file type html, json
        location ~* .(?:manifest|appcache|html?|xml|json)$ {
            expires -1;
        }

        # Enable Cache the file 30 days
        location ~* .(jpg|png|gif|jpeg|css|mp3|wav|swf|mov|doc|pdf|xls|ppt|docx|pptx|xlsx)$ {
            proxy_cache_valid 200 120m;
            expires 30d;
            proxy_cache my-cache;
            access_log off;
        }

    }

保存退出.

下一步, 为nginx和虚拟主机创建一个新的日志目录 "/var/log/nginx/" :

    mkdir -p /var/log/nginx/

最后, 创建网站的根目录 saitama.me :

    cd /usr/local/www/
    mkdir saitama.me

### 步骤 8 - 测试 ###

在这一步里, 我们来测试一下nginx 和虚拟主机的配置.
+用如下命令测试nginx的配置 :

    nginx -t

如果一切都没有问题, 就用 sysrc 命令把nginx 添加为开机启动项, 然后启动nginx 并重启apache:

    sysrc nginx_enable=yes
    service nginx start
    service apache24 restart

一切完毕后, 在 saitama.me 的根目录下添加一个新的phpinfo文件, 来验证php 能正常运行 :

    cd /usr/local/www/saitama.me
    echo "<?php phpinfo(); ?>" > info.php

然后访问这个文件 : **www.saitama.me/info.php**.

![Virtualhost Configured saitamame](http://blog.linoxide.com/wp-content/uploads/2015/11/Virtualhost-Configured-saitamame.png)

Nginx 作为Apache的反向代理已经在运行了, PHP也在正常工作了.

下面是一些其它的测试结果 :

测试 .html 文件, 确认没有被缓存 :

    curl -I www.saitama.me

![html with no-cache](http://blog.linoxide.com/wp-content/uploads/2015/11/html-with-no-cache.png)

测试 .css 文件, 确认被缓存30天 :

    curl -I www.saitama.me/test.css

![css file 30day cache](http://blog.linoxide.com/wp-content/uploads/2015/11/css-file-30day-cache.png)

测试 .php 文件, 确认被正常缓存 :

    curl -I www.saitama.me/info.php

![PHP file cached](http://blog.linoxide.com/wp-content/uploads/2015/11/PHP-file-cached.png)

全部完成.

### 总结 ###

Nginx 是使用最广泛的HTTP 服务器和反向代理服务器, 性能强而内存/RAM 占用低. Nginx 还能很好地利用缓存: 我们可以缓存静态文件来给网页加速, 并在需要时缓存php 页面. 这样, 配置和使用都很简单的Nginx, 既可以用作HTTP 服务器, 也可以用作Apache 的反向代理.
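上面三条 curl 测试分别对应虚拟主机配置里的三类 location 规则. 为了便于对照理解, 下面用一小段Python 把这三条规则的效果抽象成一个判定函数(同样只是示意: 函数名和返回值是为说明而虚构的, nginx 并不会执行这段代码):

```python
import re

# Mirror of the three location rules in saitama.conf above (illustrative only)
NO_CACHE = re.compile(r"\.(?:manifest|appcache|html?|xml|json)$")
STATIC_30D = re.compile(r"\.(?:jpg|jpeg|png|gif|css|mp3|wav|swf|mov|doc|pdf|xls|ppt|docx|pptx|xlsx)$")

def cache_policy(path, has_php_session=False):
    if NO_CACHE.search(path):
        return "expires -1"          # html/json: never cached
    if STATIC_30D.search(path):
        return "cache 30d"           # static assets: cached for 30 days
    if path.endswith(".php"):
        # a PHPSESSID cookie bypasses the 10s proxy cache
        return "bypass" if has_php_session else "cache 10s"
    return "default"

print(cache_policy("/index.html"))   # expires -1
print(cache_policy("/test.css"))     # cache 30d
print(cache_policy("/info.php"))     # cache 10s
```

这个小函数也顺便说明了为什么带着PHP 会话cookie 的请求总是直接打到后端的Apache 上.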
+ +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/install-nginx-reverse-proxy-apache-freebsd-10-2/ + +作者:[Arul][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arulm/ \ No newline at end of file From 3af952e1421eed5c21cc64985682fcd15b066868 Mon Sep 17 00:00:00 2001 From: KnightJoker <544133483@qq.com> Date: Tue, 8 Dec 2015 11:28:42 +0800 Subject: [PATCH 146/160] Delete --- ...everse Proxy for Apache on FreeBSD 10.2.md | 327 ------------------ 1 file changed, 327 deletions(-) delete mode 100644 sources/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md diff --git a/sources/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md b/sources/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md deleted file mode 100644 index b3638a61ea..0000000000 --- a/sources/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md +++ /dev/null @@ -1,327 +0,0 @@ -Translating by KnightJoker -How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2 -================================================================================ -Nginx is free and open source HTTP server and reverse proxy, as well as an mail proxy server for IMAP/POP3. Nginx is high performance web server with rich of features, simple configuration and low memory usage. Originally written by Igor Sysoev on 2002, and until now has been used by a big technology company including Netflix, Github, Cloudflare, WordPress.com etc. - -In this tutorial we will "**install and configure nginx web server as reverse proxy for apache on freebsd 10.2**". Apache will run with php on port 8080, and then we need to configure nginx run on port 80 to receive a request from user/visitor. 
If user request for web page from the browser on port 80, then nginx will pass the request to apache webserver and PHP that running on port 8080. - -#### Prerequisite #### - -- FreeBSD 10.2. -- Root privileges. - -### Step 1 - Update the System ### - -Log in to your freebsd server with ssh credential and update system with command below : - - freebsd-update fetch - freebsd-update install - -### Step 2 - Install Apache ### - -pache is open source HTTP server and the most widely used web server. Apache is not installed by default on freebsd, but we can install it from the ports or package on "/usr/ports/www/apache24" or install it from freebsd repository with pkg command. In this tutorial we will use pkg command to install from the freebsd repository : - - pkg install apache24 - -### Step 3 - Install PHP ### - -Once apache is installed, followed with installing php for handling a PHP file request by a user. We will install php with pkg command as below : - - pkg install php56 mod_php56 php56-mysql php56-mysqli - -### Step 4 - Configure Apache and PHP ### - -Once all is installed, we will configure apache to run on port 8080, and php working with apache. To configure apache, we can edit the configuration file "httpd.conf", and for PHP we just need to copy the php configuration file php.ini on "/usr/local/etc/" directory. 
- -Go to "/usr/local/etc/" directory and copy php.ini-production file to php.ini : - - cd /usr/local/etc/ - cp php.ini-production php.ini - -Next, configure apache by editing file "httpd.conf" on apache directory : - - cd /usr/local/etc/apache24 - nano -c httpd.conf - -Port configuration on line **52** : - - Listen 8080 - -ServerName configuration on line **219** : - - ServerName 127.0.0.1:8080 - -Add DirectoryIndex file that apache will serve it if a directory requested on line **277** : - - DirectoryIndex index.php index.html - -Configure apache to work with php by adding script below under line **287** : - - - SetHandler application/x-httpd-php - - - SetHandler application/x-httpd-php-source - - -Save and exit. - -Now add apache to start at boot time with sysrc command : - - sysrc apache24_enable=yes - -And test apache configuration with command below : - - apachectl configtest - -If there is no error, start apache : - - service apache24 start - -If all is done, verify that php is running well with apache by creating phpinfo file on "/usr/local/www/apache24/data" directory : - - cd /usr/local/www/apache24/data - echo "" > info.php - -Now visit the freebsd server IP : 192.168.1.123:8080/info.php. - -![Apache and PHP on Port 8080](http://blog.linoxide.com/wp-content/uploads/2015/11/Apache-and-PHP-on-Port-8080.png) - -Apache is working with php on port 8080. - -### Step 5 - Install Nginx ### - -Nginx high performance web server and reverse proxy with low memory consumption. In this step we will use nginx as reverse proxy for apache, so let's install it with pkg command : - - pkg install nginx - -### Step 6 - Configure Nginx ### - -Once nginx is installed, we must configure it by replacing nginx file "**nginx.conf**" with new configuration below. 
Change to the "/usr/local/etc/nginx/" directory and back up the default nginx.conf:

    cd /usr/local/etc/nginx/
    mv nginx.conf nginx.conf.original

Now create a new nginx configuration file:

    nano -c nginx.conf

and paste the configuration below:

    user www;
    worker_processes 1;
    error_log /var/log/nginx/error.log;

    events {
        worker_connections 1024;
    }

    http {
        include mime.types;
        default_type application/octet-stream;

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log /var/log/nginx/access.log;

        sendfile on;
        keepalive_timeout 65;

        # Nginx cache configuration
        proxy_cache_path /var/nginx/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
        proxy_temp_path /var/nginx/cache/tmp;
        proxy_cache_key "$scheme$host$request_uri";

        gzip on;

        server {
            #listen 80;
            server_name _;

            location /nginx_status {
                stub_status on;
                access_log off;
            }

            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /usr/local/www/nginx-dist;
            }

            # proxy the PHP scripts to Apache listening on 127.0.0.1:8080
            #
            location ~ \.php$ {
                proxy_pass http://127.0.0.1:8080;
                include /usr/local/etc/nginx/proxy.conf;
            }
        }

        include /usr/local/etc/nginx/vhost/*;
    }

Save and exit.
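The `proxy_cache_path` and `proxy_cache_key` directives above decide where cached responses land on disk: nginx names each cache file after the MD5 hash of the key, and with `levels=1:2` the trailing hex characters of that hash become two directory levels. A quick bash sketch of the mapping (the key value is an example of what `$scheme$host$request_uri` produces; note that the scheme and host concatenate directly, so there is no `://` in it):

```shell
# Example key produced by proxy_cache_key "$scheme$host$request_uri"
# for a request to http://saitama.me/test.css:
key='httpsaitama.me/test.css'

# nginx names the cache file after the MD5 of the key...
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')

# ...and with levels=1:2 builds two directory levels from its tail:
l1=${hash: -1}      # last hex character  -> first level
l2=${hash: -3:2}    # the two chars before it -> second level

echo "/var/nginx/cache/$l1/$l2/$hash"
```

So a cached copy of test.css lives under a one-character directory, then a two-character one, which keeps any single cache directory from accumulating an enormous number of files.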
Next, create a new file called **proxy.conf** for the reverse proxy configuration in the nginx directory:

    cd /usr/local/etc/nginx/
    nano -c proxy.conf

Paste the configuration below:

    proxy_buffering on;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffers 100 8k;
    add_header X-Cache $upstream_cache_status;

Save and exit.

Finally, create a new directory for the nginx cache at "/var/nginx/cache":

    mkdir -p /var/nginx/cache

### Step 7 - Configure Nginx VirtualHost ###

In this step we will create a new virtualhost for the domain "saitama.me", with its document root at "/usr/local/www/saitama.me" and its log files in the "/var/log/nginx" directory.

The first thing we must do is create a new directory to store the virtualhost files; here we use a new directory called "**vhost**". Let's create it:

    cd /usr/local/etc/nginx/
    mkdir vhost

The vhost directory has been created; now go into it and create the new virtualhost file.
Here we will create a new file "**saitama.conf**":

    cd vhost/
    nano -c saitama.conf

Paste the virtualhost configuration below:

    server {
        # Replace with your freebsd IP
        listen 192.168.1.123:80;

        # Document Root
        root /usr/local/www/saitama.me;
        index index.php index.html index.htm;

        # Domain
        server_name www.saitama.me saitama.me;

        # Error and Access log file
        error_log /var/log/nginx/saitama-error.log;
        access_log /var/log/nginx/saitama-access.log main;

        # Reverse Proxy Configuration
        location ~ \.php$ {
            proxy_pass http://127.0.0.1:8080;
            include /usr/local/etc/nginx/proxy.conf;

            # Cache configuration
            proxy_cache my-cache;
            proxy_cache_valid 10s;
            proxy_no_cache $cookie_PHPSESSID;
            proxy_cache_bypass $cookie_PHPSESSID;
            proxy_cache_key "$scheme$host$request_uri";
        }

        # Disable cache for html and json file types
        location ~* \.(?:manifest|appcache|html?|xml|json)$ {
            expires -1;
        }

        # Cache these file types for 30 days
        location ~* \.(jpg|png|gif|jpeg|css|mp3|wav|swf|mov|doc|pdf|xls|ppt|docx|pptx|xlsx)$ {
            proxy_cache_valid 200 120m;
            expires 30d;
            proxy_cache my-cache;
            access_log off;
        }
    }

Save and exit.

Next, create a new log directory for nginx and the virtualhost under "/var/log/":

    mkdir -p /var/log/nginx/

Once that is done, create the document root directory for saitama.me:

    cd /usr/local/www/
    mkdir saitama.me

### Step 8 - Testing ###

This step just tests our nginx configuration and the nginx virtualhost.

Test the nginx configuration with the command below:

    nginx -t

If there is no problem, add nginx to boot time with the sysrc command, then start it and restart Apache:

    sysrc nginx_enable=yes
    service nginx start
    service apache24 restart

All is done; now verify that PHP is working by adding a new phpinfo file in the saitama.me directory:

    cd /usr/local/www/saitama.me
    echo "<?php phpinfo(); ?>" > info.php

Visit the domain: **www.saitama.me/info.php**.
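One detail the walkthrough glosses over: saitama.me is a demo domain, so the client machine needs a hosts entry pointing it at the FreeBSD box before the URL above will resolve. A sketch using this guide's example IP, shown here against a demo copy of the file so it is safe to try anywhere (in real use you would append the same line to /etc/hosts itself):

```shell
# Map the demo domain to the server's address (example IP from this guide):
cp /etc/hosts hosts.demo
echo '192.168.1.123 saitama.me www.saitama.me' >> hosts.demo
tail -n 1 hosts.demo
```

With the real /etc/hosts on the client updated the same way, www.saitama.me resolves to the server and info.php loads in the browser.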
![Virtualhost Configured saitamame](http://blog.linoxide.com/wp-content/uploads/2015/11/Virtualhost-Configured-saitamame.png)

Nginx as a reverse proxy for Apache is working, and PHP is working too.

And here are some more results:

Test a .html file, which is not cached:

    curl -I www.saitama.me

![html with no-cache](http://blog.linoxide.com/wp-content/uploads/2015/11/html-with-no-cache.png)

Test a .css file with the 30-day cache:

    curl -I www.saitama.me/test.css

![css file 30day cache](http://blog.linoxide.com/wp-content/uploads/2015/11/css-file-30day-cache.png)

Test a .php file with the cache:

    curl -I www.saitama.me/info.php

![PHP file cached](http://blog.linoxide.com/wp-content/uploads/2015/11/PHP-file-cached.png)

All is done.

### Conclusion ###

Nginx is the most popular HTTP server and reverse proxy. It has a rich feature set, high performance, and low memory/RAM usage. Nginx can also be used for caching: we can cache static files to make pages load faster, and cache PHP responses when users request them. Nginx is easy to configure and use, whether as an HTTP server or as a reverse proxy for Apache.
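Because proxy.conf adds `add_header X-Cache $upstream_cache_status;`, cache behaviour is easy to script around. A small helper sketch (the helper only parses response headers on stdin; the curl line in the comment assumes the demo domain from this guide):

```shell
# Print the cache status (HIT/MISS/BYPASS/...) found in HTTP response
# headers read from stdin; prints "none" when no X-Cache header is present.
cache_status() {
    awk 'tolower($1) == "x-cache:" { print $2; f = 1 } END { if (!f) print "none" }' | tr -d '\r'
}

# Example against the demo site (a second request for test.css should print HIT):
#   curl -sI http://www.saitama.me/test.css | cache_status
```

This is handy for quickly checking whether a given URL is being served from the nginx cache or passed through to Apache.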
- --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/install-nginx-reverse-proxy-apache-freebsd-10-2/ - -作者:[Arul][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arulm/ \ No newline at end of file From c41d237423576784f044f902026e9add3ffd3b4d Mon Sep 17 00:00:00 2001 From: KnightJoker <544133483@qq.com> Date: Tue, 8 Dec 2015 12:47:50 +0800 Subject: [PATCH 147/160] Translated --- ...stall Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/translated/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md b/translated/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md index db809e167a..83877c8488 100644 --- a/translated/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md +++ b/translated/tech/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md @@ -1,4 +1,4 @@ -Translated by KnightJoker + 如何在FreeBSD 10.2上安装Nginx作为Apache的反向代理 ================================================================================ Nginx是一款免费的,开源的HTTP和反向代理服务器, 以及一个代理POP3/IMAP的邮件服务器. Nginx是一款高性能的web服务器,其特点是丰富的功能,简单的结构以及低内存的占用. 第一个版本由 Igor Sysoev在2002年发布,然而到现在为止很多大的科技公司都在使用,包括 Netflix, Github, Cloudflare, WordPress.com等等 @@ -319,7 +319,7 @@ Nginx 是最广泛的 HTTP 和反向代理的服务器. 
拥有丰富的高性能 via: http://linoxide.com/linux-how-to/install-nginx-reverse-proxy-apache-freebsd-10-2/ 作者:[Arul][a] -译者:[译者ID](https://github.com/译者ID) +译者:[KnightJoker](https://github.com/KnightJoker) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From a57d25a3319edae86ebffa508d553aa4f5c823bc Mon Sep 17 00:00:00 2001 From: ictlyh Date: Tue, 8 Dec 2015 14:15:34 +0800 Subject: [PATCH 148/160] Translating sources/tech/20151123 Assign Multiple IP Addresses To One Interface On Ubuntu 15.10.md --- ...ign Multiple IP Addresses To One Interface On Ubuntu 15.10.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20151123 Assign Multiple IP Addresses To One Interface On Ubuntu 15.10.md b/sources/tech/20151123 Assign Multiple IP Addresses To One Interface On Ubuntu 15.10.md index a045ab953f..65da864ec4 100644 --- a/sources/tech/20151123 Assign Multiple IP Addresses To One Interface On Ubuntu 15.10.md +++ b/sources/tech/20151123 Assign Multiple IP Addresses To One Interface On Ubuntu 15.10.md @@ -1,3 +1,4 @@ +ictlyh Translating Assign Multiple IP Addresses To One Interface On Ubuntu 15.10 ================================================================================ Some times you might want to use more than one IP address for your network interface card. What will you do in such cases? Buy an extra network card and assign new IP? No, It’s not necessary(at least in the small networks). We can now assign multiple IP addresses to one interface on Ubuntu systems. Curious to know how? Well, Follow me, It is not that difficult. 
From c3b322ab3d517a829c8c7121fa9f0c8b693991a3 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Tue, 8 Dec 2015 14:19:01 +0800
Subject: [PATCH 149/160] =?UTF-8?q?20151208-1=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ift Programming Language Comes To Linux.md | 41 +++++++++++++
 ...208 Install Wetty on Centos or RHEL 6.X.md | 61 +++++++++++++++++++
 2 files changed, 102 insertions(+)
 create mode 100644 sources/news/20151208 Apple Swift Programming Language Comes To Linux.md
 create mode 100644 sources/tech/20151208 Install Wetty on Centos or RHEL 6.X.md

diff --git a/sources/news/20151208 Apple Swift Programming Language Comes To Linux.md b/sources/news/20151208 Apple Swift Programming Language Comes To Linux.md
new file mode 100644
index 0000000000..1c13c09dba
--- /dev/null
+++ b/sources/news/20151208 Apple Swift Programming Language Comes To Linux.md
@@ -0,0 +1,41 @@
Apple Swift Programming Language Comes To Linux
================================================================================
![](http://itsfoss.com/wp-content/uploads/2015/12/Apple-Swift-Open-Source.jpg)

Apple and Open Source together? Yes! Apple's Swift programming language is now open source. This should not come as a surprise, because [Apple had already announced it six months back][1].

The announcement launching the open source Swift community came this week. A [new website][2] dedicated to the open source Swift community has been put in place with the following message:

> We are excited by this new chapter in the story of Swift. After Apple unveiled the Swift programming language, it quickly became one of the fastest growing languages in history. Swift makes it easy to write software that is incredibly fast and safe by design. Now that Swift is open source, you can help make the best general purpose programming language available everywhere.
+ +[swift.org][2] will work as the one stop shop providing downloads for various platforms, community guidelines, news, getting started tutorials, instructions for contribution to open source Swift, documentation and other guidelines. If you are looking forward to learn Swift, this website must be bookmarked. + +In this announcement, a new package manager for easy sharing and building code has been made available as well. + +Most important of all for Linux users, the source code is now available at [Github][3]. You can check it out from the link below: + +- [Apple Swift Source Code][3] + +In addition to that, there are prebuilt binaries for Ubuntu 14.04 and 15.10. + +- [Swift binaries for Ubuntu][4] + +Don’t rush to use them because these are development branches and will not be suitable for production machine. So avoid it for now. Once stable version of Swift for Linux is released, I hope that Ubuntu will include it in [umake][5] on the line of [Visual Studio][6]. + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/swift-open-source-linux/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://itsfoss.com/apple-open-sources-swift-programming-language-linux/ +[2]:https://swift.org/ +[3]:https://github.com/apple +[4]:https://swift.org/download/#latest-development-snapshots +[5]:https://wiki.ubuntu.com/ubuntu-make +[6]:http://itsfoss.com/install-visual-studio-code-ubuntu/ \ No newline at end of file diff --git a/sources/tech/20151208 Install Wetty on Centos or RHEL 6.X.md b/sources/tech/20151208 Install Wetty on Centos or RHEL 6.X.md new file mode 100644 index 0000000000..6856b1d71e --- /dev/null +++ b/sources/tech/20151208 Install Wetty on Centos or RHEL 6.X.md @@ -0,0 +1,61 @@ +Install Wetty on Centos/RHEL 6.X 
+================================================================================ +![](http://www.unixmen.com/wp-content/uploads/2015/11/Terminal.png) + +What is Wetty? + +As a system administrator, you probably connect to remote servers using a program such as GNOME Terminal (or the like) if you’re on a Linux desktop, or a SSH client such as Putty if you have a Windows machine, while you perform other tasks like browsing the web or checking your email. + +### Step 1: Install epel repo ### + + # wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm + # rpm -ivh epel-release-6-8.noarch.rpm + +### Step 2: Install dependencies ### + + # yum install epel-release git nodejs npm -y + +### Step 3: After installing these dependencies, clone the GitHub repository ### + + # git clone https://github.com/krishnasrinivas/wetty + +### Step 4: Run Wetty ### + + # cd wetty + # npm install + +### Step 5: Starting Wetty and Access Linux Terminal from Web Browser ### + + # node app.js -p 8080 + +### Step 6: Wetty through HTTPS ### + + # openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365 -nodes (complete this) + +### Step 7: launch Wetty via HTTPS ### + + # nohup node app.js --sslkey key.pem --sslcert cert.pem -p 8080 & + +### Step 8: Add an user for wetty ### + + # useradd + # Passwd + +### Step 9: Access wetty ### + + http://Your_IP-Address:8080 + give the credential have created before for wetty and access + +Enjoy + +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/install-wetty-centosrhel-6-x/ + +作者:[Debojyoti Das][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/debjyoti/ \ No newline at end of file From 68304a43951075eacc7df867cf5ae4790bdd725e Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 8 Dec 2015 
15:16:50 +0800 Subject: [PATCH 150/160] =?UTF-8?q?20151208-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ze Time and Date Format in Ubuntu Panel.md | 65 +++++ ...lla with Apache and SSL on FreeBSD 10.2.md | 267 ++++++++++++++++++ 2 files changed, 332 insertions(+) create mode 100644 sources/tech/20151208 How to Customize Time and Date Format in Ubuntu Panel.md create mode 100644 sources/tech/20151208 How to Install Bugzilla with Apache and SSL on FreeBSD 10.2.md diff --git a/sources/tech/20151208 How to Customize Time and Date Format in Ubuntu Panel.md b/sources/tech/20151208 How to Customize Time and Date Format in Ubuntu Panel.md new file mode 100644 index 0000000000..a9e72c626b --- /dev/null +++ b/sources/tech/20151208 How to Customize Time and Date Format in Ubuntu Panel.md @@ -0,0 +1,65 @@ +How to Customize Time & Date Format in Ubuntu Panel +================================================================================ +![Time & Date format](http://ubuntuhandbook.org/wp-content/uploads/2015/08/ubuntu_tips1.png) + +This quick tutorial is going to show you how to customize your Time & Date indicator in Ubuntu panel, though there are already a few options available in the settings page. + +![custom-timedate](http://ubuntuhandbook.org/wp-content/uploads/2015/12/custom-timedate.jpg) + +To get started, search for and install **dconf Editor** in Ubuntu Software Center. Then launch the software and follow below steps: + +**1.** When dconf Editor launches, navigate to **com -> canonical -> indicator -> datetime**. Set the value of **time-format** to **custom**. + +![custom time format](http://ubuntuhandbook.org/wp-content/uploads/2015/12/time-format.jpg) + +You can also do this via a command in terminal: + + gsettings set com.canonical.indicator.datetime time-format 'custom' + +**2.** Now you can customize the Time & Date format by editing the value of **custom-time-format**. 
+ +![customize-timeformat](http://ubuntuhandbook.org/wp-content/uploads/2015/12/customize-timeformat.jpg) + +You can also do this via command: + + gsettings set com.canonical.indicator.datetime custom-time-format 'FORMAT_VALUE_HERE' + +Interpreted sequences are: + +- %a = abbreviated weekday name +- %A = full weekday name +- %b = abbreviated month name +- %B = full month name +- %d = day of month +- %l = hour ( 1..12), %I = hour (01..12) +- %k = hour ( 1..23), %H = hour (01..23) +- %M = minute (00..59) +- %p = AM or PM, %P = am or pm. +- %S = second (00..59) +- open terminal and run command `man date` to get more details. + +Some examples: + +custom time format value: **%a %H:%M %m/%d/%Y** + +![exam-1](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-1.jpg) + +**%a %r %b %d or %a %I:%M:%S %p %b %d** + +![exam-2](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-2.jpg) + +**%a %-d %b %l:%M %P %z** + +![exam-3](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-3.jpg) + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/12/time-date-format-ubuntu-panel/ + +作者:[Ji m][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ \ No newline at end of file diff --git a/sources/tech/20151208 How to Install Bugzilla with Apache and SSL on FreeBSD 10.2.md b/sources/tech/20151208 How to Install Bugzilla with Apache and SSL on FreeBSD 10.2.md new file mode 100644 index 0000000000..ab2ea88946 --- /dev/null +++ b/sources/tech/20151208 How to Install Bugzilla with Apache and SSL on FreeBSD 10.2.md @@ -0,0 +1,267 @@ +How to Install Bugzilla with Apache and SSL on FreeBSD 10.2 +================================================================================ +Bugzilla is open source web base application for bug tracker 
and testing tool, developed by the Mozilla project and licensed under the Mozilla Public License. It is used by high-tech companies and projects like Mozilla, Red Hat and GNOME. Bugzilla was originally created by Terry Weissman in 1998. It is written in Perl and uses MySQL as the database back-end. It is server software designed to help you manage software development. Bugzilla has a lot of features: an optimized database, excellent security, an advanced search tool, integrated email capabilities, etc.

In this tutorial we will install Bugzilla 5.0 with Apache as the web server, and enable SSL for it. Then we will install mysql51 as the database system on FreeBSD 10.2.

#### Prerequisite ####

    FreeBSD 10.2 - 64bit.
    Root privileges.

### Step 1 - Update System ###

Log in to the FreeBSD server over SSH, and update the repository database:

    sudo su
    freebsd-update fetch
    freebsd-update install

### Step 2 - Install and Configure Apache ###

In this step we will install Apache from the FreeBSD repositories with the pkg command, then configure it by editing the file "httpd.conf" in the apache24 directory, enabling SSL and CGI support.

Install Apache with the pkg command:

    pkg install apache24

Go to the Apache directory and edit the file "httpd.conf" with the nano editor:

    cd /usr/local/etc/apache24
    nano -c httpd.conf

Uncomment the lines listed below:

    #Line 70
    LoadModule authn_socache_module libexec/apache24/mod_authn_socache.so

    #Line 89
    LoadModule socache_shmcb_module libexec/apache24/mod_socache_shmcb.so

    # Line 117
    LoadModule expires_module libexec/apache24/mod_expires.so

    #Line 141 to enable SSL
    LoadModule ssl_module libexec/apache24/mod_ssl.so

    # Line 162 for cgi support
    LoadModule cgi_module libexec/apache24/mod_cgi.so

    # Line 174 to enable mod_rewrite
    LoadModule rewrite_module libexec/apache24/mod_rewrite.so

    # Line 219 for the servername configuration
    ServerName 127.0.0.1:80

Save and exit.
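Uncommenting a handful of LoadModule lines by hand in nano is easy to get wrong, so here is a scripted alternative, sketched against a demo copy of the file (the real one lives in /usr/local/etc/apache24/httpd.conf):

```shell
# Build a two-line demo copy containing commented-out module lines:
cat > httpd.conf.demo <<'EOF'
#LoadModule ssl_module libexec/apache24/mod_ssl.so
#LoadModule cgi_module libexec/apache24/mod_cgi.so
EOF

# Strip the leading "#" from exactly those LoadModule lines:
sed -i.bak -E 's@^#(LoadModule (ssl|cgi)_module)@\1@' httpd.conf.demo

# Both modules should now be active:
grep '^LoadModule' httpd.conf.demo
```

Run against the real httpd.conf (sed's `-i.bak` keeps a backup for you), this achieves the same result as the nano session above; running `apachectl configtest` afterwards is still worthwhile.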
Next, we need to install mod_perl from the FreeBSD repository, and then enable it:

    pkg install ap24-mod_perl2

To enable mod_perl, edit httpd.conf and add the "LoadModule" line below:

    nano -c httpd.conf

Add the line below:

    # Line 175
    LoadModule perl_module libexec/apache24/mod_perl.so

Save and exit.

Before starting Apache, add it to start at boot time with the sysrc command:

    sysrc apache24_enable=yes
    service apache24 start

### Step 3 - Install and Configure MySQL Database ###

We will use mysql51 for the database back-end, together with the Perl DBD module for MySQL. Install them with the pkg command below:

    pkg install p5-DBD-mysql51 mysql51-server mysql51-client

Now we must add MySQL to boot time, then start it and configure the root password.

Run the commands below to do it all:

    sysrc mysql_enable=yes
    service mysql-server start
    mysqladmin -u root password aqwe123

Note:

mysql password: aqwe123

![Configure MySQL Password](http://blog.linoxide.com/wp-content/uploads/2015/12/Configure-MySQL-Password.png)

Next, we will log in to the MySQL shell with the root user and the password configured above, then create a new database and user for the Bugzilla installation.

Log in to the MySQL shell with the command below:

    mysql -u root -p
    password: aqwe123

Add the database:

    create database bugzilladb;
    create user bugzillauser@localhost identified by 'bugzillauser@';
    grant all privileges on bugzilladb.* to bugzillauser@localhost identified by 'bugzillauser@';
    flush privileges;
    \q

![Creating Database for Bugzilla](http://blog.linoxide.com/wp-content/uploads/2015/12/Creating-Database-for-Bugzilla.png)

The database for Bugzilla has been created: database "bugzilladb" with user "bugzillauser" and password "bugzillauser@".

### Step 4 - Generate New SSL Certificate ###

Generate a new self-signed SSL certificate in the "ssl" directory for the Bugzilla site.
Go to the apache24 directory and create a new directory "ssl" in it:

    cd /usr/local/etc/apache24/
    mkdir ssl; cd ssl

Next, generate the certificate file with the openssl command, then change the permissions of the certificate files:

    sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /usr/local/etc/apache24/ssl/bugzilla.key -out /usr/local/etc/apache24/ssl/bugzilla.crt
    chmod 600 *

### Step 5 - Configure Virtualhost ###

We will install Bugzilla in the directory "/usr/local/www/bugzilla", so we must create a new virtualhost configuration for it.

Go to the Apache directory and create a new directory called "vhost" for the virtualhost file:

    cd /usr/local/etc/apache24/
    mkdir vhost; cd vhost

Now create the new file "bugzilla.conf" for the virtualhost:

    nano -c bugzilla.conf

Paste the configuration below:

    <VirtualHost *:80>
        ServerName mybugzilla.me
        ServerAlias www.mybugzilla.me
        DocumentRoot /usr/local/www/bugzilla
        Redirect permanent / https://mybugzilla.me/
    </VirtualHost>

    Listen 443
    <VirtualHost _default_:443>
        ServerName mybugzilla.me
        DocumentRoot /usr/local/www/bugzilla

        ErrorLog "/var/log/mybugzilla.me-error_log"
        CustomLog "/var/log/mybugzilla.me-access_log" common

        SSLEngine On
        SSLCertificateFile /usr/local/etc/apache24/ssl/bugzilla.crt
        SSLCertificateKeyFile /usr/local/etc/apache24/ssl/bugzilla.key

        <Directory "/usr/local/www/bugzilla">
            AddHandler cgi-script .cgi
            Options +ExecCGI
            DirectoryIndex index.cgi index.html
            AllowOverride Limit FileInfo Indexes Options
            Require all granted
        </Directory>
    </VirtualHost>

Save and exit.

If all is done, create a new directory for the Bugzilla installation and then enable the Bugzilla virtualhost by adding its configuration to the httpd.conf file.

Run the commands below from the "apache24" directory:

    mkdir -p /usr/local/www/bugzilla
    cd /usr/local/etc/apache24/
    nano -c httpd.conf

At the end of the file, add the configuration below:

    Include etc/apache24/vhost/*.conf

Save and exit.
Now test the Apache configuration with the "apachectl" command and restart the service:

    apachectl configtest
    service apache24 restart

### Step 6 - Install Bugzilla ###

We can install Bugzilla manually by downloading the source, or install it from the FreeBSD repository. In this step we will install Bugzilla from the FreeBSD repository with the pkg command:

    pkg install bugzilla50

Once that is done, go to the Bugzilla installation directory and install all the Perl modules that Bugzilla needs:

    cd /usr/local/www/bugzilla
    ./install-module --all

Wait until everything finishes; it takes some time.

Next, generate the configuration file "localconfig" by executing the "checksetup.pl" file in the Bugzilla installation directory:

    ./checksetup.pl

You will see an error message about the database configuration, so edit the file "localconfig" with the nano editor:

    nano -c localconfig

Now add the database that was created in step 3:

    #Line 57
    $db_name = 'bugzilladb';

    #Line 60
    $db_user = 'bugzillauser';

    #Line 67
    $db_pass = 'bugzillauser@';

Save and exit.

Then run "checksetup.pl" again:

    ./checksetup.pl

You will be prompted for the mail and administrator account; fill in your email, user and password.
+ +![Bugzilla Admin Page](http://blog.linoxide.com/wp-content/uploads/2015/12/Bugzilla-Admin-Page.png) + +### Conclusion ### + +Bugzilla is web based application help you to manage the software development. It is written in perl and use MySQL as the database system. Bugzilla used by mozilla, redhat, gnome etc for help their software development. Bugzilla has a lot of features and easy to configure and install. + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/tools/install-bugzilla-apache-ssl-freebsd-10-2/ + +作者:[Arul][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arulm/ \ No newline at end of file From a0699ecf9c3777e496ce9f7ef57f7e28f4b47728 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 8 Dec 2015 16:38:20 +0800 Subject: [PATCH 151/160] =?UTF-8?q?20151208-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...r visual effects in Linux with Kdenlive.md | 155 ++++++++++++++++++ ...Raspberry Pi projects for the classroom.md | 96 +++++++++++ ...0151208 6 creative ways to use ownCloud.md | 94 +++++++++++ ...0151208 6 useful LibreOffice extensions.md | 79 +++++++++ ... 
open source community metrics to track.md | 79 +++++++++ ...o renew the ISPConfig 3 SSL Certificate.md | 59 +++++++ 6 files changed, 562 insertions(+) create mode 100644 sources/talk/yearbook2015/20151208 10 tools for visual effects in Linux with Kdenlive.md create mode 100644 sources/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md create mode 100644 sources/talk/yearbook2015/20151208 6 creative ways to use ownCloud.md create mode 100644 sources/talk/yearbook2015/20151208 6 useful LibreOffice extensions.md create mode 100644 sources/talk/yearbook2015/20151208 Top 5 open source community metrics to track.md create mode 100644 sources/tech/20151208 How to renew the ISPConfig 3 SSL Certificate.md diff --git a/sources/talk/yearbook2015/20151208 10 tools for visual effects in Linux with Kdenlive.md b/sources/talk/yearbook2015/20151208 10 tools for visual effects in Linux with Kdenlive.md new file mode 100644 index 0000000000..bf2ba1ff25 --- /dev/null +++ b/sources/talk/yearbook2015/20151208 10 tools for visual effects in Linux with Kdenlive.md @@ -0,0 +1,155 @@ +10 tools for visual effects in Linux with Kdenlive +================================================================================ +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life-uploads/kdenlivetoolssummary.png) +Image credits : Seth Kenlon. [CC BY-SA 4.0.][1] + +[Kdenlive][2] is one of those applications; you can use it daily for a year and wake up one morning only to realize that you still have only grazed the surface of all of its potential. That's why it's nice every once in a while to sit back and look over some of the lesser-used tricks and tools in Kdenlive. Even though something's not used as often as, say, the Spacer or Razor tools, it still may end up being just the right finishing touch on your latest masterpiece. 
+ +Most of the tools I'll discuss here are not officially part of Kdenlive; they are plugins from the [Frei0r][3] package. These are ubiquitous parts of video processing on Linux and Unix, and they usually get installed along with Kdenlive as distributed by most Linux distributions, so they often seem like part of the application. If your install of Kdenlive does not feature some of the tools mentioned here, make sure that you have Frei0r plugins installed. + +Since many of the tools in this article affect the look of an image, here is the base image, without effects or adjustment: + +![](https://opensource.com/sites/default/files/images/life-uploads/before_0.png) + +Still image grabbed from a video by Footage Firm, Inc. [CC BY-SA 4.0.][1] + +Let's get started. + +### 1. Color effect ### + +![](https://opensource.com/sites/default/files/images/life-uploads/coloreffect.png) + +You can find the **Color Effect** filter in **Add Effect > Misc** context menu. As filters go, it's mostly just a preset; the only controls it has are which filter you want to use. + +![](https://opensource.com/sites/default/files/images/life-uploads/coloreffect_ctl_0.png) + +Normally that's the kind of filter I avoid, but I have to be honest: Sometimes a plug-and-play solution is exactly what you want. This filter has a few different settings, but the two that make it worth while (at least for me) are the Sepia and XPro effects. Admittedly, controls to adjust how sepia tone the sepia effect is would be nice, but no matter what, when you need a quick and familiar color effect, this is the filter to throw onto a clip. It's immediate, it's easy, and if your client asks for that look, this does the trick every time. + +### 2. Colorize ### + +![](https://opensource.com/sites/default/files/images/life-uploads/colorize.png) + +The simplicity of the **Colorize** filter in **Add Effect > Misc** is also its strength. 
In some editing applications, it takes two filters and some compositing to achieve this simple color-wash effect. It's refreshing that in Kdenlive, it's a matter of one filter with three possible controls (only one of which, strictly speaking, is necessary to achieve the look). + +![](https://opensource.com/sites/default/files/images/life-uploads/colorize_ctl.png) + +Its use is intuitive; use the **Hue** slider to set the color. Use the other controls to adjust the luma of the base image as needed. + +This is not a filter I use every day, but for ad spots, bumpers, dreamy sequences, or titles, it's the easiest and quickest path to a commonly needed look. Get a company's color, use it as the colorize effect, slap a logo over the top of the screen, and you've just created a winning corporate intro. + +### 3. Dynamic Text ### + +![](https://opensource.com/sites/default/files/images/life-uploads/dyntext.png) + +For the assistant editor, the Add Effect > Misc > Dynamic **Text** effect is worth the price of Kdenlive. With one mostly pre-set filter, you can add a running timecode burn-in to your project, which is an absolute must-have safety feature when round-tripping your footage through effects and sound. + +The controls look more complex than they actually are. + +![](https://opensource.com/sites/default/files/images/life-uploads/dyntext_ctl.png) + +The font settings are self-explanatory. Placement of the text is controlled by the Horizontal and Vertical Alignment settings; steer clear of the **Size** setting (it controls the size of the "canvas" upon which you are compositing the burn-in, not the size of the burn-in itself). + +The text itself doesn't have to be timecode. From the dropdown menu, you can choose from a list of useful text, including frame count (useful for VFX, since animators work in frames), source frame rate, source dimensions, and more. + +You are not limited to just one choice. 
The text field in the control panel will take whatever arbitrary text you put into it, so if you want to burn in more information than just timecode and frame rate (such as **Sc 3 - #timecode# - #meta.media.0.stream.frame_rate#**), then have at it. + +### 4. Luminance ### + +![](https://opensource.com/sites/default/files/images/life-uploads/luminance.png) + +The **Add Effect > Misc > Luminance** filter is a no-options filter. Luminance does one thing and it does it well: It drops the chroma values of all pixels in an image so that they are displayed by their luma values. In simpler terms, it's a grayscale filter. + +The nice thing about this filter is that it's quick, easy, efficient, and effective. This filter combines particularly well with other related filters (meaning that yes, I'm cheating and including three filters for one). + +![](https://opensource.com/sites/default/files/images/life-uploads/luminance_ctl.png) + +Combining, in this order, the **RGB Noise** for emulated grain, **Luminance** for grayscale, and **LumaLiftGainGamma** for levels can render a textured image that suggests the classic look and feel of [Kodak Tri-X][4] film. + +### 5. Mask0mate ### + +![](https://opensource.com/sites/default/files/images/life-uploads/mask0mate.png) +Image by Footage Firm, Inc. + +Better known as a four-point garbage mask, the **Add Effect > Alpha Manipulation > Mask0mate** tool is a quick, no-frills way to ditch parts of your frame that you don't need. There isn't much to say about it; it is what it is. + +![](https://opensource.com/sites/default/files/images/life-uploads/mask0mate_ctl.png) + +The confusing thing about the effect is that it does not imply compositing. You can pull in the edges all you want, but you won't see it unless you add the **Composite** transition to reveal what's underneath the clip (even if that's nothing).
Also, use the **Invert** function for the filter to act like you think it should act (without it, the controls will probably feel backward to you). + +### 6. Pr0file ### + +![](https://opensource.com/sites/default/files/images/life-uploads/pr0file.png) + +The **Add Effect > Misc > Pr0file** filter is an analytical tool, not something you would actually leave on a clip for final export (unless, of course, you do). Pr0file consists of two components: the Marker, which dictates what area of the image is being analyzed, and the Graph, which displays information about the marked region. + +Set the marker using the **X, Y, Tilt**, and **Length** controls. The graphical readout of all the relevant color channel information is displayed as a graph, superimposed over your image. + +![](https://opensource.com/sites/default/files/images/life-uploads/pr0file_ctl.jpg) + +The readout displays a profile of the colors within the region marked. The result is a sort of hyper-specific vectorscope (or oscilloscope, as the case may be) that can help you zero in on problem areas during color correction, or compare regions while color matching. + +In other editors, the way to get the same information was simply to temporarily scale your image up to the region you want to analyze, look at your readout, and then hit undo to scale back. Both ways work, but the Pr0file filter does feel a little more elegant. + +### 7. Vectorscope ### + +![](https://opensource.com/sites/default/files/images/life-uploads/vectorscope.jpg) + +Kdenlive features an inbuilt vectorscope, available from the **View** menu in the main menu bar. A vectorscope is not a filter, it's just another view of the footage in your Project Monitor, specifically a view of the color saturation in the current frame. If you are color correcting an image and you're not sure what colors you need to boost or counteract, looking at the vectorscope can be a huge help. + +There are several different views available.
You can render the vectorscope in traditional green monochrome (like the hardware vectorscopes you'd find in a broadcast control room), or a chromatic view (my personal preference), or subtracted from a color-wheel background, and more. + +The vectorscope reads the entire frame, so unlike the Pr0file filter, you are not just getting a reading of one area in the frame. The result is a consolidated view of what colors are most prominent within a frame. Technically, the same sort of information can be intuited by several trial-and-error passes with color correction, or you can just leave your vectorscope open and watch the colors float along the color wheel and make adjustments accordingly. + +Aside from how you want the vectorscope to look, there are no controls for this tool. It is a readout only. + +### 8. Vertigo ### + +![](https://opensource.com/sites/default/files/images/life-uploads/vertigo.jpg) + +There's no way around it; **Add Effect > Misc > Vertigo** is a gimmicky special effect filter. So unless you're remaking [Fear and Loathing][5] or the movie adaptation of [Dead Island][6], you probably aren't going to use it that much; however, it's one of those high-quality filters that does the exact trick you want when you happen to be looking for it. + +The controls are simple. You can adjust how distorted the image becomes and the rate at which it distorts. The overall effect is probably more drunk or vision-quest than vertigo, but it's good. + +![](https://opensource.com/sites/default/files/images/life-uploads/vertigo_ctl.png) + +### 9. Vignette ### + +![](https://opensource.com/sites/default/files/images/life-uploads/vignette.jpg) + +Another beautiful effect, the **Add Effect > Misc > Vignette** darkens the outer edges of the frame to provide a sort of portrait, soft-focus nouveau look. Combined with the Color Effect or the Luminance faux Tri-X trick, this can be a powerful and emotional look. 
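Incidentally, the math behind a vignette like this is simple: a brightness gain that falls off with distance from the frame's center. Here is a small pure-Python sketch of the idea — an assumed model for illustration only, not Kdenlive's actual implementation, and the parameter names only loosely mirror the filter's controls:

```python
import math

def vignette_gain(x, y, width, height, clear_radius=0.3, softness=0.6):
    """Return a brightness multiplier (0..1) for the pixel at (x, y).

    Pixels within `clear_radius` (a fraction of the half-diagonal) are
    untouched; beyond it, brightness falls off linearly, with `softness`
    controlling how gradual the falloff is.
    """
    cx, cy = width / 2, height / 2
    half_diag = math.hypot(cx, cy)
    r = math.hypot(x - cx, y - cy) / half_diag  # 0 at center, 1 at corners
    if r <= clear_radius:
        return 1.0
    falloff = (r - clear_radius) / max(softness, 1e-6)
    return max(0.0, 1.0 - falloff)

# The center of a 1080p frame is untouched; the corners go dark:
center = vignette_gain(960, 540, 1920, 1080)
corner = vignette_gain(0, 0, 1920, 1080)
```

Multiplying each pixel's luma by such a gain is what produces the darkened border; growing the clear radius weakens the effect, and raising the softness stretches the transition out.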
+ +The softness of the border and the aspect ratio of the iris can be adjusted. The **Clear Center Size** attribute controls the size of the clear area, which has the effect of adjusting the intensity of the vignette effect. + +![](https://opensource.com/sites/default/files/images/life-uploads/vignette_ctl.png) + +### 10. Volume ### + +![](https://opensource.com/sites/default/files/images/life-uploads/vol.jpg) + +I don't believe in mixing sound within the video editing application, but I do acknowledge that sometimes it's just necessary for a quick fix or, sometimes, even for a tight production schedule. And that's when the **Audio correction > Volume (Keyframable)** effect comes in handy. + +The control panel is clunky, and no one really wants to adjust volume that way, so the effect is best when used directly in the timeline. To create a volume change, double-click the volume line over the audio clip, and then click and drag to adjust. It's that simple. + +Should you use it? Not really. Sound mixing should be done in a sound mixing application. Will you use it? Absolutely. At some point, you'll get audio that is too loud to play as you edit, or you'll be up against a deadline without a sound engineer in sight. Use it judiciously, watch your levels, and get the show finished. + +### Everything else ### + +This has been 10 (OK, 13 or 14) effects and tools that Kdenlive has quietly lying around to help your edits become great. Obviously there's a lot more to Kdenlive than just these little tricks. Some are obvious, some are cliché, some are obtuse, but they're all in your toolkit. Get to know them, explore your options, and you might be surprised what a few cheap tricks will get you. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/15/12/10-kdenlive-tools + +作者:[Seth Kenlon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/seth +[1]:https://creativecommons.org/licenses/by-sa/4.0/ +[2]:https://kdenlive.org/ +[3]:http://frei0r.dyne.org/ +[4]:http://www.kodak.com/global/en/professional/products/films/bw/triX2.jhtml +[5]:https://en.wikipedia.org/wiki/Fear_and_Loathing_in_Las_Vegas_(film) +[6]:https://en.wikipedia.org/wiki/Dead_Island \ No newline at end of file diff --git a/sources/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md b/sources/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md new file mode 100644 index 0000000000..c1aa541416 --- /dev/null +++ b/sources/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md @@ -0,0 +1,96 @@ +5 great Raspberry Pi projects for the classroom +================================================================================ +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-open-source-yearbook-lead3.png) + +Image by : opensource.com + +### 1. Minecraft Pi ### + +Courtesy of the Raspberry Pi Foundation. [CC BY-SA 4.0.][1] + +Minecraft is the favorite game of pretty much every teenager in the world—and it's one of the most creative games ever to capture the attention of young people. The version that comes with every Raspberry Pi is not only a creative thinking building game, but comes with a programming interface allowing for additional interaction with the Minecraft world through Python code. + +Minecraft: Pi Edition is a great way for teachers to engage students with problem solving and writing code to perform tasks. 
You can use the Python API to build a house and have it follow you wherever you go, build a bridge wherever you walk, make it rain lava, show the temperature in the sky, and anything else your imagination can create. + +Read more in "[Getting Started with Minecraft Pi][2]." + +### 2. Reaction game and traffic lights ### + +![](https://opensource.com/sites/default/files/pi_traffic_installed_yellow_led_on.jpg) + +Courtesy of [Low Voltage Labs][3]. [CC BY-SA 4.0][1]. + +It's really easy to get started with physical computing on Raspberry Pi—just connect up LEDs and buttons to the GPIO pins, and with a few lines of code you can turn lights on and control things with button presses. Once you know the code to do the basics, it's down to your imagination as to what you do next! + +If you know how to flash one light, you can flash three. Pick out three LEDs in traffic light colors and you can code the traffic light sequence. If you know how to use a button to trigger an event, then you have a pedestrian crossing! Also look out for great pre-built traffic light add-ons like [PI-TRAFFIC][4], [PI-STOP][5], [Traffic HAT][6], and more. + +It's not always about the code—this can be used as an exercise in understanding how real world systems are devised. Computational thinking is a useful skill in any walk of life. + +![](https://opensource.com/sites/default/files/reaction-game.png) + +Courtesy of the Raspberry Pi Foundation. [CC BY-SA 4.0][1]. + +Next, try wiring up two buttons and an LED and making a two-player reaction game—let the light come on after a random amount of time and see who can press the button first! + +To learn more, check out "[GPIO Zero recipes][7]." Everything you need is in [CamJam EduKit 1][8]. + +### 3. Sense HAT Pixel Pet ### + +The Astro Pi—an augmented Raspberry Pi—is going to space this December, but you haven't missed your chance to get your hands on the hardware.
The Sense HAT is the sensor board add-on used in the Astro Pi mission and it's available for anyone to buy. You can use it for data collection, science experiments, games and more. Watch this Gurl Geek Diaries video from Raspberry Pi's Carrie Anne for a great way to get started—by bringing to life an animated pixel pet of your own design on the Sense HAT display: + +注:youtube 视频 + + +Learn more in "[Exploring the Sense HAT][9]." + +### 4. Infrared bird box ### + +![](https://opensource.com/sites/default/files/ir-bird-box.png) +Courtesy of the Raspberry Pi Foundation. [CC BY-SA 4.0.][1] + +A great exercise for the whole class to get involved with—place a Raspberry Pi and the NoIR camera module inside a bird box along with some infra-red lights so you can see in the dark, then stream video from the Pi over the network or on the internet. Wait for birds to nest and you can observe them without disturbing them in their habitat. + +Learn all about infrared and the light spectrum, and how to adjust the camera focus and control the camera in software. + +Learn more in "[Make an infrared bird box.][10]" + +### 5. Robotics ### + +![](https://opensource.com/sites/default/files/edukit3_1500-alex-eames-sm.jpg) + +Courtesy of Low Voltage Labs. [CC BY-SA 4.0][1]. + +With a Raspberry Pi and as little as a couple of motors and a motor controller board, you can build your own robot. There is a vast range of robots you can make, from basic buggies held together by sellotape and a homemade chassis, all the way to self-aware, sensor-laden metallic stallions with camera attachments driven by games controllers. + +Learn how to control individual motors with something straightforward like the RTK Motor Controller Board (£8/$12), or dive into the new CamJam robotics kit (£17/$25) which comes with motors, wheels and a couple of sensors—great value and plenty of learning potential. 
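Whichever kit you pick, the control logic underneath is the same: differential drive, where each motor's speed is set independently and steering comes from the difference between the two sides. Here is a pure-Python sketch of that idea — the class is illustrative only; on real hardware a library such as GPIO Zero turns these speeds into signals on the motor pins:

```python
class DifferentialDrive:
    """Toy model of a two-motor robot.

    Speeds run from -1 (full reverse) to 1 (full forward); steering
    comes from the difference between the left and right sides.
    """

    def __init__(self):
        self.left = 0.0
        self.right = 0.0

    def _set(self, left, right):
        # A real motor driver would convert these to PWM duty cycles here.
        self.left, self.right = left, right

    def forward(self, speed=1.0):
        self._set(speed, speed)

    def backward(self, speed=1.0):
        self._set(-speed, -speed)

    def spin_left(self, speed=1.0):
        # Opposite directions make the robot turn on the spot.
        self._set(-speed, speed)

    def stop(self):
        self._set(0.0, 0.0)

robot = DifferentialDrive()
robot.forward(0.5)   # both wheels at half speed
robot.spin_left()    # left wheel back, right wheel forward
robot.stop()
```

Once that picture is clear, graduating from a buggy to a games-controller-driven rover is mostly a matter of mapping inputs onto the two speed values.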
+ +Alternatively, if you'd like something more hardcore, try PiBorg's [4Borg][11] (£99/$150) or [DiddyBorg][12] (£180/$273) or go the whole hog and treat yourself to their DoodleBorg Metal edition (£250/$380)—and build a mini version of their infamous [DoodleBorg tank][13] (unfortunately not for sale). + +Check out the [CamJam robotics kit worksheets][14]. + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/education/15/12/5-great-raspberry-pi-projects-classroom + +作者:[Ben Nuttall][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/bennuttall +[1]:https://creativecommons.org/licenses/by-sa/4.0/ +[2]:https://opensource.com/life/15/5/getting-started-minecraft-pi +[3]:http://lowvoltagelabs.com/ +[4]:http://lowvoltagelabs.com/products/pi-traffic/ +[5]:http://4tronix.co.uk/store/index.php?rt=product/product&product_id=390 +[6]:https://ryanteck.uk/hats/1-traffichat-0635648607122.html +[7]:http://pythonhosted.org/gpiozero/recipes/ +[8]:http://camjam.me/?page_id=236 +[9]:https://opensource.com/life/15/10/exploring-raspberry-pi-sense-hat +[10]:https://www.raspberrypi.org/learning/infrared-bird-box/ +[11]:https://www.piborg.org/4borg +[12]:https://www.piborg.org/diddyborg +[13]:https://www.piborg.org/doodleborg +[14]:http://camjam.me/?page_id=1035#worksheets \ No newline at end of file diff --git a/sources/talk/yearbook2015/20151208 6 creative ways to use ownCloud.md b/sources/talk/yearbook2015/20151208 6 creative ways to use ownCloud.md new file mode 100644 index 0000000000..969e0637b5 --- /dev/null +++ b/sources/talk/yearbook2015/20151208 6 creative ways to use ownCloud.md @@ -0,0 +1,94 @@ +6 creative ways to use ownCloud +================================================================================ +![Yearbook cover 
2015](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/osdc-open-source-yearbook-lead1-inc0335020sw-201511-01.png) + +Image by : Opensource.com + +[ownCloud][1] is a self-hosted open source file sync and share server. Like the "big boys" Dropbox, Google Drive, Box, and others, ownCloud lets you access your files, calendar, contacts, and other data. You can synchronize everything (or part of it) between your devices and share files with others. But ownCloud can do much more than its proprietary, [hosted-on-somebody-else's-computer competitors][2]. + +Let's look at six creative things ownCloud can do. Some of these are possible because ownCloud is open source, whereas others are just unique features it offers. + +### 1. A scalable ownCloud Pi cluster ### + +Because ownCloud is open source, you can choose between self-hosting on your own server or renting space from a provider you trust—no need to put your files at a big company that stores them who knows where. [Find some ownCloud providers here][3] or grab packages or a virtual machine for [your own server here][4]. + +![](https://opensource.com/sites/default/files/images/life-uploads/banana-pi-owncloud-cluster.jpg) + +Photo by Jörn Friedrich Dreyer. [CC BY-SA 4.0.][5] + +The most creative things we've seen are a [Banana Pi cluster][6] and a [Raspberry Pi cluster][7]. Although ownCloud's scalability is often used to deploy to hundreds of thousands of users, some folks out there take it in a different direction, bringing multiple tiny systems together to make a super-fast ownCloud. Kudos! + +### 2. Keep your passwords synced ### + +To make ownCloud easier to extend, we have made it extremely modular and have an [ownCloud app store][8]. There you can find things like music and video players, calendars, contacts, productivity apps, games, a sketching app, and much more. + +Picking only one app from the almost 200 available is hard, but managing passwords is certainly a unique feature.
There are no fewer than three apps providing this functionality: [Passwords][9], [Secure Container][10], and [Passman][11]. + +![](https://opensource.com/sites/default/files/images/life-uploads/password.png) + +### 3. Store your files where you want ### + +External storage allows you to hook your existing data storage into ownCloud, letting you access files stored on FTP, WebDAV, Amazon S3, and even Dropbox and Google Drive through one interface. + +注:youtube 视频 + + +The "big boys" like to create their own little walled gardens—Box users can only collaborate with other Box users; and if you want to share your files from Google Drive, your mate needs a Google account or they can't do much. With ownCloud's external storage, you can break these barriers. + +A very creative solution is adding Google Drive and Dropbox as external storage. You can work with files on both seamlessly and share them with others through a simple link—no account needed to work with you! + +### 4. Get files uploaded ### + +Because ownCloud is open source, people contribute interesting features without being limited by corporate requirements. Our contributors have always cared about security and privacy, so ownCloud introduced features such as protecting a public link with a password and setting an expiration date [years before anybody else did][12]. + +Today, ownCloud has the ability to configure a shared link as read-write, which means visitors can seamlessly edit the files you share with them (protected with a password or not) or upload new files to your server without being forced to sign up to another web service that wants their private data. + +注:youtube 视频 + + +This is great for when people want to share a large file with you. Rather than having to upload it to a third-party site, send you a link, and make you go there and download it (often requiring a login), they can just upload it to a shared folder you provide, and you can get to work right away. + +### 5. 
Get free secure storage ### + +We already talked about how many of our contributors care about security and privacy. That's why ownCloud has an app that can encrypt and decrypt stored data. + +Using ownCloud to store your files on Dropbox or Google Drive defeats the whole idea of retaking control of your data and keeping it private. The Encryption app changes that. By encrypting data before sending it to these providers and decrypting it upon retrieval, your data is safe as kittens. + +### 6. Share your files and stay in control ### + +As an open source project, ownCloud has no stake in building walled gardens. Enter Federated Cloud Sharing: a protocol [developed and published by ownCloud][13] that enables different file sync and share servers to talk to one another and exchange files securely. Federated Cloud Sharing has an interesting history. [Twenty-two German universities][14] decided to build a huge cloud for their 500,000 students. But as each university wanted to stay in control of the data of their own students, a creative solution was needed: Federated Cloud Sharing. The solution now connects all these universities so the students can seamlessly work together. At the same time, the system administrators at each university stay in control of the files their students have created and can apply policies, such as storage restrictions, or limitations on what, with whom, and how files can be shared. + +注:youtube 视频 + + +And this awesome technology isn't limited to German universities: Every ownCloud user can find their [Federated Cloud ID][15] in their user settings and share it with others. + +So there you have it. Six ways ownCloud enables people to do special and unique things, all made possible because it is open source and designed to help you liberate your data. 
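One closing aside for the scripting-inclined: because ownCloud's file access is plain WebDAV (its sync clients talk to the `remote.php/webdav` endpoint), the upload workflows described above can also be driven from scripts in any language. Here is a minimal sketch using only Python's standard library — the server URL and credentials are placeholders, and a real script should add TLS verification and error handling:

```python
import base64
import urllib.request

def build_upload_request(server, user, password, remote_path, data):
    """Build (but do not send) a WebDAV PUT request for an ownCloud server."""
    url = f"{server.rstrip('/')}/remote.php/webdav/{remote_path.lstrip('/')}"
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=data,
        method="PUT",
        headers={"Authorization": f"Basic {credentials}"},
    )

# Placeholder server and credentials -- point these at a real instance,
# then send the request with urllib.request.urlopen(req).
req = build_upload_request(
    "https://cloud.example.com", "alice", "secret",
    "Documents/notes.txt", b"hello from a script",
)
```

The same endpoint works for downloads (GET) and folder listings (PROPFIND), which is exactly the openness that makes the federated and external-storage tricks above possible.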
+ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/15/12/6-creative-ways-use-owncloud + +作者:[Jos Poortvliet][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jospoortvliet +[1]:https://owncloud.com/ +[2]:https://blogs.fsfe.org/mk/new-stickers-and-leaflets-no-cloud-and-e-mail-self-defense/ +[3]:https://owncloud.org/providers +[4]:https://owncloud.org/install/#instructions-server +[5]:https://creativecommons.org/licenses/by-sa/4.0/ +[6]:http://www.owncluster.de/ +[7]:https://christopherjcoleman.wordpress.com/2013/01/05/host-your-owncloud-on-a-raspberry-pi-cluster/ +[8]:https://apps.owncloud.com/ +[9]:https://apps.owncloud.com/content/show.php/Passwords?content=170480 +[10]:https://apps.owncloud.com/content/show.php/Secure+Container?content=167268 +[11]:https://apps.owncloud.com/content/show.php/Passman?content=166285 +[12]:https://owncloud.com/owncloud45-community/ +[13]:http://karlitschek.de/2015/08/announcing-the-draft-federated-cloud-sharing-api/ +[14]:https://owncloud.com/customer/sciebo/ +[15]:https://owncloud.org/federation/ \ No newline at end of file diff --git a/sources/talk/yearbook2015/20151208 6 useful LibreOffice extensions.md b/sources/talk/yearbook2015/20151208 6 useful LibreOffice extensions.md new file mode 100644 index 0000000000..a2c1e393ff --- /dev/null +++ b/sources/talk/yearbook2015/20151208 6 useful LibreOffice extensions.md @@ -0,0 +1,79 @@ +6 useful LibreOffice extensions +================================================================================ +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/yearbook2015-osdc-lead-2.png) + +Image by : Opensource.com + +LibreOffice is the best free office suite around, and as such has been adopted by all major Linux distributions. 
Although LibreOffice is already packed with features, it can be extended by using specific add-ons, called extensions. + +The main LibreOffice extensions website is [extensions.libreoffice.org][1]. Extensions are tools that can be added or removed independently from the main installation, and may add new functionality or make existing functionality easier to use. + +### 1. MultiFormatSave ### + +MultiFormatSave lets users save a document in the OpenDocument, Microsoft Office (old and new), and/or PDF formats simultaneously, according to user settings. This extension is extremely useful during the migration from Microsoft Office document formats to the [Open Document Format][2] standard, because it offers the option to save in both flavors: ODF for interoperability, and Microsoft Office for compatibility with all users sticking to legacy formats. This makes the migration process smoother and easier to administer. + +**[Download MultiFormatSave][3]** + +![Multiformatsave extension](https://opensource.com/sites/default/files/images/business-uploads/multiformatsave.png) + +### 2. Alternative dialog Find & Replace for Writer (AltSearch) ### + +This extension adds many new features to Writer's find & replace function: searched or replaced text can contain one or more paragraphs; multiple searches and replacements can be made in one step; bookmarks, notes, text fields, cross-references, and reference marks can be searched by content, name, or mark, and inserted; footnotes and endnotes can be searched for and inserted; tables, pictures, and text frames can be searched by name; manual page and column breaks can be found and set or removed; and text formatted similarly to the text at the cursor can be matched. It is also possible to save and load search and replacement parameters, and execute the batch on several opened documents at the same time.
+ +**[Download Alternative dialog Find & Replace for Writer (AltSearch)][4]** + +![Alternative Find&amp;Replace add-on](https://opensource.com/sites/default/files/images/business-uploads/alternativefindreplace.png) + +### 3. Pepito Cleaner ### + +Pepito Cleaner is an extension of LibreOffice created to quickly resolve the most common formatting mistakes of old scans, PDF imports, and every digital text file. By clicking the Pepito Cleaner icon on the LibreOffice toolbar, users will open a window that will analyze the document and show the results broken down by category. This is extremely useful when converting PDF documents to ODF, as it cleans all the cruft left in place by the automatic process. + +**[Download Pepito Cleaner][5]** + +![Pepito cleaner screenshot](https://opensource.com/sites/default/files/images/business-uploads/pepitocleaner.png) + +### 4. ImpressRunner ### + +Impress Runner is a simple extension that transforms an [Impress][6] presentation into an auto-running file. The extension adds two icons, to set and remove the autostart function, which can also be added manually by editing the File | Properties | Custom Properties menu, and adding the term autostart in one of the first four text fields. This extension is especially useful for booths at conferences and events, where the slides are supposed to run unattended. + +**[Download ImpressRunner][7]** + +### 5. Export as Images ### + +The Export as Images extension adds a File menu entry export as Images... in Impress and [Draw][8], to export all slides or pages as images in JPG, PNG, GIF, BMP, and TIFF format, and allows users to choose a file name for exported images, the image size, and other parameters. + +**[Download Export as Images][9]** + +![Export as images extension](https://opensource.com/sites/default/files/images/business-uploads/exportasimages.png) + +### 6. 
Anaphraseus ### + +Anaphraseus is a CAT (Computer-Aided Translation) tool for creating, managing, and using bilingual Translation Memories. Anaphraseus is a LibreOffice macro set available as an extension or a standalone document. Originally, Anaphraseus was developed to work with the Wordfast format, but it can also export and import files in TMX format. Anaphraseus main features are: text segmentation, fuzzy search in Translation Memory, terminology recognition, and TMX Export/Import (OmegaT translation memory format). + +**[Download Anaphraseus][10]** + +![Anaphraseus screenshot](https://opensource.com/sites/default/files/images/business-uploads/anaphraseus.png) + +Do you have a favorite LibreOffice extension to recommend? Let us know about it in the comments. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/business/15/12/6-useful-libreoffice-extensions + +作者:[Italo Vignoli][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/italovignoli +[1]:http://extensions.libreoffice.org/ +[2]:http://www.opendocumentformat.org/ +[3]:http://extensions.libreoffice.org/extension-center/multisave-1 +[4]:http://extensions.libreoffice.org/extension-center/alternative-dialog-find-replace-for-writer +[5]:http://pepitoweb.altervista.org/pepito_cleaner/index.php +[6]:https://www.libreoffice.org/discover/impress/ +[7]:http://extensions.libreoffice.org/extension-center/impressrunner +[8]:https://www.libreoffice.org/discover/draw/ +[9]:http://extensions.libreoffice.org/extension-center/export-as-images +[10]:http://anaphraseus.sourceforge.net/ \ No newline at end of file diff --git a/sources/talk/yearbook2015/20151208 Top 5 open source community metrics to track.md b/sources/talk/yearbook2015/20151208 Top 5 open source community metrics to track.md new file mode 
100644 index 0000000000..5098151775 --- /dev/null +++ b/sources/talk/yearbook2015/20151208 Top 5 open source community metrics to track.md @@ -0,0 +1,79 @@ +Top 5 open source community metrics to track +================================================================================ +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/yearbook2015-osdc-lead-1.png) + +So you decided to use metrics to track your free, open source software (FOSS) community. Now comes the big question: Which metrics should I be tracking? + +To answer this question, you must have an idea of what information you need. For example, you may want to know about the sustainability of the project community. How quickly does the community react to problems? How is the community attracting, retaining, or losing contributors? Once you decide which information you need, you can figure out which traces of community activity are available to provide it. Fortunately, FOSS projects following an open development model tend to leave loads of public data in their software development repositories, which can be analyzed to gather useful data. + +In this article, I'll introduce metrics that help provide a multi-faceted view of your project community. + +### 1. Activity ### + +The overall activity of the community and how it evolves over time is a useful metric for all open source communities. Activity provides a first view of how much the community is doing, and can be used to track different kinds of activity. For example, the number of commits gives a first idea about the volume of the development effort. The number of tickets opened provides insight into how many bugs are reported or new features are proposed. The number of messages in mailing lists or posts in forums gives an idea of how much discussion is being held in public. 
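These counts are cheap to produce from the repositories themselves. As an illustrative sketch, the dates printed by a command such as `git log --pretty=%ad --date=short` are enough to bucket commit activity by week:

```python
from collections import Counter
from datetime import date

def weekly_activity(commit_dates):
    """Count commits per ISO (year, week) from 'YYYY-MM-DD' date strings."""
    weeks = Counter()
    for day in commit_dates:
        year, month, dom = map(int, day.split("-"))
        iso = date(year, month, dom).isocalendar()
        weeks[(iso[0], iso[1])] += 1  # (ISO year, ISO week number)
    return weeks

# In a real repository these strings would come from:
#   git log --pretty=%ad --date=short
sample = ["2015-11-30", "2015-12-01", "2015-12-03", "2015-12-08"]
activity = weekly_activity(sample)
```

The same bucketing works for ticket openings or mailing list posts once you have their timestamps, which is essentially what the dashboards shown below do at scale.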
+ +![Activity metrics chart](https://opensource.com/sites/default/files/images/business-uploads/activity-metrics.png) + +Number of commits and number of merged changes after code review in the OpenStack project, as found in the [OpenStack Activity Dashboard][1]. Evolution over time (weekly data). + +### 2. Size ### + +The size of the community is the number of people participating in it, but, depending on the kind of participation, size numbers may vary. Usually you're interested in active contributors, which is good news. Active people may leave traces in the repositories of the project, which means you can count contributors who are active in producing code by looking at the **Author** field in git repositories, or count people participating in the resolution of tickets by looking at who is contributing to them. + +This basic idea of "activity" (somebody did something) can be extended in many ways. One common way to track activity is to look at how many people did a sizable chunk of the activity. Generally most of a project's code contributions, for example, are from a small fraction of the people in the project's community. Knowing about that fraction helps provide an idea of the core group (i.e., the people who help lead the community). + +![Size metrics chart](https://opensource.com/sites/default/files/images/business-uploads/size-metrics.png) + +Number of authors and number of posters in mailing lists in the Xen project, as found in the [Xen Project Development Dashboard][2]. Evolution over time (monthly data). + +### 3. Performance ### + +So far, I have focused on measuring quantities of activities and contributors. You also can analyze how processes and people are performing. For example, you can measure how long processes take to finish. Time to resolve or close tickets shows how the project is reacting to new information that requires action, such as fixing a reported bug or implementing a requested new feature.
Time spent in code review—from the moment when a change to the code is proposed to the moment it is accepted—shows how long upgrading a proposed change to the quality standards expected by the community takes. + +Other metrics deal with how well the project is coping with pending work, such as the ratio of new to closed tickets, or the backlog of still non-completed code reviews. Those parameters tell us, for example, whether or not the resources put into solving issues is enough. + +![Efficiency metrics chart](https://opensource.com/sites/default/files/images/business-uploads/efficiency-metrics.png) + +Ratio of tickets closed by tickets opened, and ratio of change proposals accepted or abandoned by new change proposals per quarter. OpenStack project, as shown in the [OpenStack Development Report, 2015-Q3][3] (PDF). + +### 4. Demographics ### + +Communities change as contributors move in and out. Depending on how people enter and leave a community over time, the age (time since members joined the community) of the community varies. The [community aging chart][4] nicely illustrates these exchanges over time. The chart is structured as a set of horizontal bars, two per "generation" of people joining the community. For each generation, the attracted bar shows how many new people joined the community during the corresponding period of time. The retained bar shows how many people are still active in the community. + +The relationship between the two bars for each generation is the retention rate: the fraction of people of that generation who are still in the project. The complete set of attracted bars show how attractive the project was in the past. And the complete set of the retention bars shows the current age structure of the community. + +![Demographics metrics chart](https://opensource.com/sites/default/files/images/business-uploads/demography-metrics.png) + +Community aging chart for the Eclipse community, as shown in the [Eclipse Development Dashboard][5]. 
Generations are defined every six months. + +### 5. Diversity ### + +Diversity is an important factor in the resiliency of communities. In general, the more diverse communities are—in terms of people or organizations participating—the more resilient they are. For example, when a company decides to leave a FOSS community, the potential problems the departure may cause are much smaller if its employees were contributing 5% of the work rather than 85%. + +The [Pony Factor][6], a term defined by [Daniel Gruno][7] for the minimum number of developers performing 50% of the commits. Based on the Pony Factor, the Elephant Factor is the minimum number of companies whose employees perform 50% of the commits. Both numbers provide an indication of how many people or companies the community depends on. + +![Diversity metrics chart](https://opensource.com/sites/default/files/images/business-uploads/diversity-metrics.png) + +Pony and Elephant Factor for several FOSS projects in the area of cloud computing, as presented in [The quantitative state of the open cloud 2015][8] (slides). + +There are many other metrics to help measure a community. When determing which metrics to collect, think about the goals of your community, and which metrics will help you reach them. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/business/15/12/top-5-open-source-community-metrics-track + +作者:[Jesus M. 
Gonzalez-Barahona][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jgbarah +[1]:http://activity.openstack.org/ +[2]:http://projects.bitergia.com/xen-project-dashboard/ +[3]:http://activity.openstack.org/dash/reports/2015-q3/pdf/2015-q3_OpenStack_report.pdf +[4]:http://radar.oreilly.com/2014/10/measure-your-open-source-communitys-age-to-keep-it-healthy.html +[5]:http://dashboard.eclipse.org/demographics.html +[6]:https://ke4qqq.wordpress.com/2015/02/08/pony-factor-math/ +[7]:https://twitter.com/humbedooh +[8]:https://speakerdeck.com/jgbarah/the-quantitative-state-of-the-open-cloud-2015-edition \ No newline at end of file diff --git a/sources/tech/20151208 How to renew the ISPConfig 3 SSL Certificate.md b/sources/tech/20151208 How to renew the ISPConfig 3 SSL Certificate.md new file mode 100644 index 0000000000..600c8941cf --- /dev/null +++ b/sources/tech/20151208 How to renew the ISPConfig 3 SSL Certificate.md @@ -0,0 +1,59 @@ +How to renew the ISPConfig 3 SSL Certificate +================================================================================ +This tutorial describes the steps to renew the SSL Certificate of the ISPConfig 3 control panel. There are two alternative ways to achieve that: + +- Create a new OpenSSL Certificate and CSR on the command line with OpenSSL. +- Renew the SSL Certificate with the ISPConfig updater + +I'll start with the manual way to renew the ssl cert. + +### 1) Create a new ISPConfig 3 SSL Certificate with OpenSSL ### + +Login to your server on the shell as root user. Before we create a new SSL Cert, backup the current ones. SSL Certs are security sensitive so I'll store the backup in the /root/ folder. 
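As a side note on the `-days 3650` option used in the openssl commands of this section: it makes the new self-signed certificate valid for roughly ten years from the moment it is signed. A quick hedged calculation (the issue date below is made up):

```python
from datetime import date, timedelta

# openssl x509 -req -days 3650 ...: validity window of the new cert
days = 3650
issued = date(2015, 12, 8)          # hypothetical signing date
expires = issued + timedelta(days=days)
print(expires)   # 2025-12-05
```

Note that 3650 days is slightly less than ten calendar years because of leap days, so plan the next renewal accordingly.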
+ + tar pcfz /root/ispconfig_ssl_backup.tar.gz /usr/local/ispconfig/interface/ssl + chmod 600 /root/ispconfig_ssl_backup.tar.gz + +> Now create a new SSL Certificate key, Certificate Request (csr) and a self signed Certificate. + + cd /usr/local/ispconfig/interface/ssl + openssl genrsa -des3 -out ispserver.key 4096 + openssl req -new -key ispserver.key -out ispserver.csr + openssl x509 -req -days 3650 -in ispserver.csr \ + -signkey ispserver.key -out ispserver.crt + openssl rsa -in ispserver.key -out ispserver.key.insecure + mv ispserver.key ispserver.key.secure + mv ispserver.key.insecure ispserver.key + +Restart Apache to load the new SSL Certificate. + + service apache2 restart + +### 2) Renew the SSL Certificate with the ISPConfig installer ### + +The alternative way to get a new SSL Certificate is to use the ISPConfig update script. +Download ISPConfig to the /tmp folder, unpack the archive and start the update script. + + cd /tmp + wget http://www.ispconfig.org/downloads/ISPConfig-3-stable.tar.gz + tar xvfz ISPConfig-3-stable.tar.gz + cd ispconfig3_install/install + php -q update.php + +The update script will ask the following question during update: + + Create new ISPConfig SSL certificate (yes,no) [no]: + +Answer "yes" here and the SSL Certificate creation dialog will start. + +-------------------------------------------------------------------------------- + +via: http://www.faqforge.com/linux/how-to-renew-the-ispconfig-3-ssl-certificate/ + +作者:[Till][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.faqforge.com/author/till/ \ No newline at end of file From 9014fc443a0e5ebf85efac1bb3b8939406a16e25 Mon Sep 17 00:00:00 2001 From: Vic020 Date: Tue, 8 Dec 2015 19:00:20 +0800 Subject: [PATCH 152/160] translated --- ... 
 to up- and download files on the shell.md | 86 +++++++++----------
 1 file changed, 43 insertions(+), 43 deletions(-)

diff --git a/sources/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md b/sources/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md
index e2ddf0cb2a..11983e0e80 100644
--- a/sources/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md
+++ b/sources/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md
@@ -1,14 +1,13 @@
-Vic020
-
-How to use the Linux ftp command to up- and download files on the shell
+如何在命令行中使用ftp命令上传和下载文件
================================================================================
-In this tutorial, I will explain how to use the Linux ftp command on the shell. I will show you how to connect to an FTP server, up- and download files and create directories. While there are many nice desktops FTP clients available, the FTP command is still useful when you work remotely on a server over an SSH session and e.g. want to fetch a backup file from your FTP storage.
+本文介绍如何在 Linux shell 中使用 ftp 命令,包括如何连接 FTP 服务器、上传或下载文件以及创建文件夹。尽管现在有许多不错的 FTP 桌面客户端,但当你通过 SSH 会话远程操作服务器时(比如需要从 FTP 存储上取回一个备份文件),命令行的 ftp 命令仍然非常有用。

-### Step 1: Establishing an FTP connection ###
+### 步骤 1: 建立FTP连接 ###

-To connect to the FTP server, we have to type in the terminal window '**ftp**' and then the domain name 'domain.com' or IP address of the FTP server.
-#### Examples: ####
+
+想要连接 FTP 服务器,在命令行中先输入 '**ftp**',然后空格跟上 FTP 服务器的域名 'domain.com' 或者 IP 地址。
+
+#### 例如: ####

 ftp domain.com

 ftp 192.168.0.1

 ftp user@ftpdomain.com

-**Note: for this example we used an anonymous server.**
+**注意:本例使用的是一个匿名 FTP 服务器。**

-Replace the IP and domain in the above examples with the IP address or domain of your FTP server.
+请将上面例子中的 IP 或域名替换为你自己的 FTP 服务器地址。

-![The FTP login.](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/ftpanonymous.png)
+![FTP登录](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/ftpanonymous.png)

-### Step 2: Login with User and Password ###
+### 步骤 2: 使用用户名密码登录 ###

-Most FTP servers logins are password protected, so the server will ask us for a '**username**' and a '**password**'.
+绝大多数的 FTP 服务器是有密码保护的,因此服务器会询问你的'**用户名**'和'**密码**'。

-If you connect to a so-called anonymous FTP server, then try to use "anonymous" as user name and a nempty password:
+如果你连接的是所谓的匿名 FTP 服务器,可以尝试用 "anonymous" 作为用户名,密码留空:

    Name: anonymous

    Password:

-The terminal will return a message like this:
+之后,终端会返回如下的信息:

    230 Login successful.
    Remote system type is UNIX.
    Using binary mode to transfer files.
    ftp>

-When you are logged in successfully.
+登录成功。

-![Successful FTP login.](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/login.png)
+![FTP登录成功](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/login.png)

-### Step 3: Working with Directories ###
+### 步骤 3: 目录操作 ###

-The commands to list, move and create folders on an FTP server are almost the same as we would use locally on our computer, ls for list, cd to change directories, mkdir to create directories...
+在 FTP 服务器上列出、切换和创建目录的命令与本地电脑上几乎一样:ls 打印目录列表,cd 切换目录,mkdir 创建目录。

+#### 使用安全设置列出目录 ####

-#### Listing directories with security settings: ####

 ftp> ls

-The server will return:
+服务器将返回:

    200 PORT command successful. Consider using PASV.
    150 Here comes the directory listing.
    directory list
    ....
    226 Directory send OK.
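这些交互式操作(连接、登录、ls,以及后面会讲到的 get)也可以写成脚本自动完成。下面是一个使用 Python 标准库 ftplib 的示意性片段;其中主机名、目录和文件名都是假设的占位符,仅演示与交互式命令等价的调用方式:

```python
from ftplib import FTP
import os

def ftp_get(host, remote_file, local_dir, user="anonymous", passwd=""):
    """相当于依次执行:ftp host、输入用户名密码、lcd local_dir、get remote_file。"""
    local_path = os.path.join(local_dir, os.path.basename(remote_file))
    with FTP(host) as conn:               # 相当于命令行中的 `ftp host`
        conn.login(user, passwd)          # 相当于输入用户名和密码
        with open(local_path, "wb") as f:
            # RETR 以二进制模式取回文件,相当于交互式的 `get`
            conn.retrbinary("RETR " + remote_file, f.write)
    return local_path

# 不连接服务器也可以演示本地保存路径的计算方式:
print(os.path.join("/home/user/downloads", os.path.basename("pub/file.xls")))
```

print 输出的 /home/user/downloads/file.xls 就是文件将要保存到的本地路径;真正的传输只有在能连上 FTP 服务器时才会发生。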
-![List directories](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/listing.png)
+![打印目录](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/listing.png)

-#### Changing Directories: ####
+#### 改变目录: ####

-To change the directory we can type:
+改变目录可以输入:

    ftp> cd directory

-The server will return:
+服务器将会返回:

    250 Directory succesfully changed.

-![Change a directory in FTP.](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/directory.png)
+![FTP中改变目录](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/directory.png)

-### Step 4: Downloading files with FTP ###
+### 步骤 4: 使用FTP下载文件 ###

-Before downloading a file, we should set the local ftp file download directory by using 'lcd' command:
+在下载文件之前,我们首先需要用 'lcd' 命令设定本地的文件接收目录:

    lcd /home/user/yourdirectoryname

-If you dont specify the download directory, the file will be downloaded to the current directory where you were at the time you started the FTP session.
+如果不指定下载目录,文件会被下载到启动 FTP 会话时所在的当前目录。

-Now, we can use the command 'get' command to download a file, the usage is:
+现在,我们可以用 'get' 命令来下载文件,用法是:

    get file

-The file will be downloaded to the directory previously set with the 'lcd' command.
+文件会被保存到之前用 'lcd' 命令设置的目录中。

-The server will return the next message:
+服务器返回消息:

    local: file remote: file
    200 PORT command successful. Consider using PASV.
    226 File send OK.
    XXX bytes received in x.xx secs (x.xxx MB/s).

-![Download a file with FTP.](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/gettingfile.png)
+![使用FTP下载文件](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/gettingfile.png)

-To download several files we can use wildcards. In this example I will download all files with the .xls file extension.
+下载多个文件可以使用通配符。例如,下面的例子会下载所有以 .xls 结尾的文件。

    mget *.xls

-### Step 5: Uploading Files with FTP ###
+### 步骤 5: 使用FTP上传文件 ###

-We can upload files that are in the local directory where we made the FTP connection.
+我们可以把建立 FTP 连接时所在的本地目录中的文件上传到服务器。

-To upload a file, we can use 'put' command.
+上传文件可以使用 'put' 命令:

    put file

-When the file that you want to upload is not in the local directory, you can use the absolute path starting with "/" as well:
+当要上传的文件不在当前本地目录下时,也可以使用以 "/" 开头的绝对路径:

    put /path/file

-To upload several files we can use the mput command similar to the mget example from above:
+与上面的 mget 类似,可以用 mput 命令上传多个文件:

    mput *.xls

-### Step 6: Closing the FTP connection ###
+### 步骤 6: 关闭FTP连接 ###

-Once we have done the FTP work, we should close the connection for security reasons. There are three commands that we can use to close the connection:
+完成 FTP 工作后,为了安全起见需要关闭连接。有三个命令可以关闭连接:

    bye

    exit

    quit

-Any of them will disconnect our PC from the FTP server and will return:
+任意一个命令都会断开与 FTP 服务器的连接并返回:

    221 Goodbye

![](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/goodbye.png)

-If you need some additional help, once you are connected to the FTP server, type 'help' and this will show you all the available FTP commands.
+需要更多帮助,在使用ftp命令连接到服务器后,可以使用“help”获得更多帮助。 ![](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/helpwindow.png) @@ -142,7 +142,7 @@ If you need some additional help, once you are connected to the FTP server, type via: https://www.howtoforge.com/tutorial/how-to-use-ftp-on-the-linux-shell/ -译者:[译者ID](https://github.com/译者ID) +译者:[VicYu](http://vicyu.net) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 7a7b85199776f9fcde71058c673bd6803e055863 Mon Sep 17 00:00:00 2001 From: Vic020 Date: Tue, 8 Dec 2015 19:01:46 +0800 Subject: [PATCH 153/160] Translated and Moved --- ...he Linux ftp command to up- and download files on the shell.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md (100%) diff --git a/sources/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md b/translated/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md similarity index 100% rename from sources/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md rename to translated/tech/20151202 How to use the Linux ftp command to up- and download files on the shell.md From 1659dda4ff32ab45ead30ffd0b972d55435fb4be Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 8 Dec 2015 19:52:42 +0800 Subject: [PATCH 154/160] Update 20151208 Install Wetty on Centos or RHEL 6.X.md --- sources/tech/20151208 Install Wetty on Centos or RHEL 6.X.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151208 Install Wetty on Centos or RHEL 6.X.md b/sources/tech/20151208 Install Wetty on Centos or RHEL 6.X.md index 6856b1d71e..e675537c90 100644 --- a/sources/tech/20151208 Install Wetty on Centos or RHEL 6.X.md +++ b/sources/tech/20151208 Install 
Wetty on Centos or RHEL 6.X.md @@ -1,3 +1,4 @@ +translation by strugglingyouth Install Wetty on Centos/RHEL 6.X ================================================================================ ![](http://www.unixmen.com/wp-content/uploads/2015/11/Terminal.png) @@ -58,4 +59,4 @@ via: http://www.unixmen.com/install-wetty-centosrhel-6-x/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://www.unixmen.com/author/debjyoti/ \ No newline at end of file +[a]:http://www.unixmen.com/author/debjyoti/ From 5e3ef89bfea721c5087f7e93f0e5a65391222ce5 Mon Sep 17 00:00:00 2001 From: runningwater Date: Tue, 8 Dec 2015 23:05:43 +0800 Subject: [PATCH 155/160] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...l series 2--Regular Expressions In grep.md | 290 ------------------ ...l series 2--Regular Expressions In grep.md | 288 +++++++++++++++++ 2 files changed, 288 insertions(+), 290 deletions(-) delete mode 100644 sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md create mode 100755 translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md diff --git a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md deleted file mode 100644 index 8bac50fe25..0000000000 --- a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md +++ /dev/null @@ -1,290 +0,0 @@ -(translating by runningwater) -Regular Expressions In grep 
-================================================================================ -How do I use the Grep command with regular expressions on a Linux and Unix-like operating systems? - -Linux comes with GNU grep, which supports extended regular expressions. GNU grep is the default on all Linux systems. The grep command is used to locate information stored anywhere on your server or workstation. - -### Regular Expressions ### - -Regular Expressions is nothing but a pattern to match for each input line. A pattern is a sequence of characters. Following all are examples of pattern: - - ^w1 - w1|w2 - [^ ] - -#### grep Regular Expressions Examples #### - -Search for 'vivek' in /etc/passswd - - grep vivek /etc/passwd - -Sample outputs: - - vivek:x:1000:1000:Vivek Gite,,,:/home/vivek:/bin/bash - vivekgite:x:1001:1001::/home/vivekgite:/bin/sh - gitevivek:x:1002:1002::/home/gitevivek:/bin/sh - -Search vivek in any case (i.e. case insensitive search) - - grep -i -w vivek /etc/passwd - -Search vivek or raj in any case - - grep -E -i -w 'vivek|raj' /etc/passwd - -The PATTERN in last example, used as an extended regular expression. - -### Anchors ### - -You can use ^ and $ to force a regex to match only at the start or end of a line, respectively. The following example displays lines starting with the vivek only: - - grep ^vivek /etc/passwd - -Sample outputs: - - vivek:x:1000:1000:Vivek Gite,,,:/home/vivek:/bin/bash - vivekgite:x:1001:1001::/home/vivekgite:/bin/sh - -You can display only lines starting with the word vivek only i.e. 
do not display vivekgite, vivekg etc: - - grep -w ^vivek /etc/passwd - -Find lines ending with word foo: -grep 'foo$' filename - -Match line only containing foo: - - grep '^foo$' filename - -You can search for blank lines with the following examples: - - grep '^$' filename - -### Character Class ### - -Match Vivek or vivek: - - grep '[vV]ivek' filename - -OR - - grep '[vV][iI][Vv][Ee][kK]' filename - -You can also match digits (i.e match vivek1 or Vivek2 etc): - - grep -w '[vV]ivek[0-9]' filename - -You can match two numeric digits (i.e. match foo11, foo12 etc): - - grep 'foo[0-9][0-9]' filename - -You are not limited to digits, you can match at least one letter: - - grep '[A-Za-z]' filename - -Display all the lines containing either a "w" or "n" character: - - grep [wn] filename - -Within a bracket expression, the name of a character class enclosed in "[:" and ":]" stands for the list of all characters belonging to that class. Standard character class names are: - -- [:alnum:] - Alphanumeric characters. -- [:alpha:] - Alphabetic characters -- [:blank:] - Blank characters: space and tab. -- [:digit:] - Digits: '0 1 2 3 4 5 6 7 8 9'. -- [:lower:] - Lower-case letters: 'a b c d e f g h i j k l m n o p q r s t u v w x y z'. -- [:space:] - Space characters: tab, newline, vertical tab, form feed, carriage return, and space. -- [:upper:] - Upper-case letters: 'A B C D E F G H I J K L M N O P Q R S T U V W X Y Z'. - -In this example match all upper case letters: - - grep '[:upper:]' filename - -### Wildcards ### - -You can use the "." for a single character match. In this example match all 3 character word starting with "b" and ending in "t": - - grep '\' filename - -Where, - -- \< Match the empty string at the beginning of word -- \> Match the empty string at the end of word. 
- -Print all lines with exactly two characters: - - grep '^..$' filename - -Display any lines starting with a dot and digit: - - grep '^\.[0-9]' filename - -#### Escaping the dot #### - -The following regex to find an IP address 192.168.1.254 will not work: - - grep '192.168.1.254' /etc/hosts - -All three dots need to be escaped: - - grep '192\.168\.1\.254' /etc/hosts - -The following example will only match an IP address: - - egrep '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}' filename - -The following will match word Linux or UNIX in any case: - - egrep -i '^(linux|unix)' filename - -### How Do I Search a Pattern Which Has a Leading - Symbol? ### - -Searches for all lines matching '--test--' using -e option Without -e, grep would attempt to parse '--test--' as a list of options: - - grep -e '--test--' filename - -### How Do I do OR with grep? ### - -Use the following syntax: - - grep 'word1|word2' filename - -OR - - grep 'word1\|word2' filename - -### How Do I do AND with grep? ### - -Use the following syntax to display all lines that contain both 'word1' and 'word2' - - grep 'word1' filename | grep 'word2' - -### How Do I Test Sequence? ### - -You can test how often a character must be repeated in sequence using the following syntax: - - {N} - {N,} - {min,max} - -Match a character "v" two times: - - egrep "v{2}" filename - -The following will match both "col" and "cool": - - egrep 'co{1,2}l' filename - -The following will match any row of at least three letters 'c'. - - egrep 'c{3,}' filename - -The following example will match mobile number which is in the following format 91-1234567890 (i.e twodigit-tendigit) - - grep "[[:digit:]]\{2\}[ -]\?[[:digit:]]\{10\}" filename - -### How Do I Hightlight with grep? ### - -Use the following syntax: - - grep --color regex filename - -How Do I Show Only The Matches, Not The Lines? 
- -Use the following syntax: - - grep -o regex filename - -### Regular Expression Operator ### - -注:表格 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Regex operatorMeaning
.Matches any single character.
?The preceding item is optional and will be matched, at most, once.
*The preceding item will be matched zero or more times.
+The preceding item will be matched one or more times.
{N}The preceding item is matched exactly N times.
{N,}The preceding item is matched N or more times.
{N,M}The preceding item is matched at least N times, but not more than M times.
-Represents the range if it's not first or last in a list or the ending point of a range in a list.
^Matches the empty string at the beginning of a line; also represents the characters not in the range of a list.
$Matches the empty string at the end of a line.
\bMatches the empty string at the edge of a word.
\BMatches the empty string provided it's not at the edge of a word.
\<Match the empty string at the beginning of word.
\> Match the empty string at the end of word.
- -#### grep vs egrep #### - -egrep is the same as **grep -E**. It interpret PATTERN as an extended regular expression. From the grep man page: - - In basic regular expressions the meta-characters ?, +, {, |, (, and ) lose their special meaning; instead use the backslashed versions \?, \+, \{, - \|, \(, and \). - Traditional egrep did not support the { meta-character, and some egrep implementations support \{ instead, so portable scripts should avoid { in - grep -E patterns and should use [{] to match a literal {. - GNU grep -E attempts to support traditional usage by assuming that { is not special if it would be the start of an invalid interval specification. - For example, the command grep -E '{1' searches for the two-character string {1 instead of reporting a syntax error in the regular expression. - POSIX.2 allows this behavior as an extension, but portable scripts should avoid it. - -References: - -- man page grep and regex(7) -- info page grep` - --------------------------------------------------------------------------------- - -via: http://www.cyberciti.biz/faq/grep-regular-expressions/ - -作者:Vivek Gite -译者:[runningwater](https://github.com/runningwater) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file diff --git a/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md new file mode 100755 index 0000000000..8389f4c339 --- /dev/null +++ b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md @@ -0,0 +1,288 @@ +## grep 中的正则表达式 +================================================================================ +在 Linux 、类 Unix 
系统中我该如何使用 Grep 命令的正则表达式呢?
+
+Linux 附带有 GNU grep 命令工具,它支持正则表达式,而且 GNU grep 在所有的 Linux 系统中都是默认安装的。grep 命令用于搜索定位存储在您的服务器或工作站上的信息。
+
+### 正则表达式 ###
+
+正则表达式就是用来匹配每个输入行的一种模式,即一个字符序列。下面都是模式的例子:
+
+    ^w1
+    w1|w2
+    [^ ]
+
+#### grep 正则表达式示例 ####
+
+在 /etc/passwd 文件中搜索 'vivek':
+
+    grep vivek /etc/passwd
+
+输出例子:
+
+    vivek:x:1000:1000:Vivek Gite,,,:/home/vivek:/bin/bash
+    vivekgite:x:1001:1001::/home/vivekgite:/bin/sh
+    gitevivek:x:1002:1002::/home/gitevivek:/bin/sh
+
+搜索任何大小写形式的 vivek(即不区分大小写的搜索):
+
+    grep -i -w vivek /etc/passwd
+
+搜索任何大小写形式的 vivek 或 raj:
+
+    grep -E -i -w 'vivek|raj' /etc/passwd
+
+上面最后一个例子中的模式是作为扩展正则表达式使用的。
+
+### 锚 ###
+
+你可以分别使用 ^ 和 $ 符号来强制正则表达式只在行首或行尾匹配。下面的例子只显示以 vivek 开头的输入行:
+
+    grep ^vivek /etc/passwd
+
+输出例子:
+
+    vivek:x:1000:1000:Vivek Gite,,,:/home/vivek:/bin/bash
+    vivekgite:x:1001:1001::/home/vivekgite:/bin/sh
+
+你也可以只搜索出以单词 vivek 开头的行,即不显示 vivekgite、vivekg 等:
+
+    grep -w ^vivek /etc/passwd
+
+找出以单词 foo 结尾的行:
+
+    grep 'foo$' 文件名
+
+匹配只包含 foo 的行:
+
+    grep '^foo$' 文件名
+
+如下所示的例子可以搜索空行:
+
+    grep '^$' 文件名
+
+### 字符类 ###
+
+匹配 Vivek 或 vivek:
+
+    grep '[vV]ivek' 文件名
+
+或者
+
+    grep '[vV][iI][Vv][Ee][kK]' 文件名
+
+也可以匹配数字(即匹配 vivek1 或 Vivek2 等等):
+
+    grep -w '[vV]ivek[0-9]' 文件名
+
+可以匹配两个数字字符(即 foo11、foo12 等):
+
+    grep 'foo[0-9][0-9]' 文件名
+
+不仅仅局限于数字,也可以匹配至少含有一个字母的行:
+
+    grep '[A-Za-z]' 文件名
+
+显示含有 "w" 或 "n" 字符的所有行:
+
+    grep [wn] 文件名
+
+在方括号表达式中,包在 "[:" 和 ":]" 之间的字符类名称代表属于该类的所有字符。标准的字符类名称如下:
+
+- [:alnum:] - 字母数字字符。
+- [:alpha:] - 字母字符。
+- [:blank:] - 空白字符:空格和制表符。
+- [:digit:] - 数字:'0 1 2 3 4 5 6 7 8 9'。
+- [:lower:] - 小写字母:'a b c d e f g h i j k l m n o p q r s t u v w x y z'。
+- [:space:] - 空白字符:制表符、换行符、垂直制表符、换页符、回车符和空格。
+- [:upper:] - 大写字母:'A B C D E F G H I J K L M N O P Q R S T U V W X Y Z'。
+
+下面的例子匹配所有大写字母(注意字符类要写在一对方括号之内):
+
+    grep '[[:upper:]]' 文件名
+
+### 通配符 ###
+
+你可以使用 "."
来匹配单个字符。例子中匹配以"b"开头以"t"结尾的3个字符的单词: + + grep '\' 文件名 + +在这儿, + +- \< 匹配单词前面的空字符串 +- \> 匹配单词后面的空字符串 + +打印出只有两个字符的所有行: + + grep '^..$' 文件名 + +显示以一个点和一个数字开头的行: + + grep '^\.[0-9]' 文件名 + +#### 点号转义 #### + +下面要匹配到 IP 地址为 192.168.1.254 的正则式是不会工作的: + + egrep '192.168.1.254' /etc/hosts + +三个点符号都需要转义: + + grep '192\.168\.1\.254' /etc/hosts + +下面的例子仅仅匹配出 IP 地址: + + egrep '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}' 文件名 + +下面的例子会匹配任意大小写的 Linux 或 UNIX 这两个单词: + + egrep -i '^(linux|unix)' 文件名 + +### 怎么样搜索以 - 符号开头的匹配模式? ### + +要使用 -e 选项来搜索匹配 '--test--' 字符串,如果不使用 -e 选项,grep 命令会试图把 '--test--' 当作自己的选项参数来解析: + + grep -e '--test--' 文件名 + +### 怎么使用 grep 的 OR 匹配? ### + +使用如下的语法: + + grep 'word1|word2' 文件名 + +或者是 + + grep 'word1\|word2' 文件名 + +### 怎么使用 grep 的 AND 匹配? ### + +使用下面的语法来显示既包含 'word1' 又包含 'word2' 的所有行 + + grep 'word1' 文件名 | grep 'word2' + +### 怎么样使用序列检测? ### + +使用如下的语法,您可以检测一个字符在序列中重复出现次数: + + {N} + {N,} + {min,max} + +要匹配字符 “v" 出现两次: + + egrep "v{2}" 文件名 + +下面的命令能匹配到 "col" 和 "cool" : + + egrep 'co{1,2}l' 文件名 + +下面的命令将会匹配出至少有三个 'c' 字符的所有行。 + + egrep 'c{3,}' 文件名 + +下面的例子会匹配 91-1234567890(即二个数字-十个数字) 这种格式的手机号。 + + grep "[[:digit:]]\{2\}[ -]\?[[:digit:]]\{10\}" 文件名 + +### 怎么样使 grep 命令突出显示?### + +使用如下的语法: + + grep --color regex 文件名 + +### 怎么样仅仅只显示匹配出的字符,而不是匹配出的行? ### + +使用如下语法: + + grep -o regex 文件名 + +### 正则表达式限定符### + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
限定符描述
.匹配任意的一个字符.
?匹配前面的子表达式,最多一次。
*匹配前面的子表达式零次或多次。
+匹配前面的子表达式一次或多次。
{N}匹配前面的子表达式 N 次。
{N,}匹配前面的子表达式 N 次到多次。
{N,M}匹配前面的子表达式 N 到 M 次,至少 N 次至多 M 次。
-只要不是在序列开始、结尾或者序列的结束点上,表示序列范围
^匹配一行开始的空字符串;也表示字符不在要匹配的列表中。
$匹配一行末尾的空字符串。
\b匹配一个单词前后的空字符串。
\B匹配一个单词中间的空字符串
\<匹配单词前面的空字符串。
\> 匹配单词后面的空字符串。
+ +#### grep 和 egrep #### + +egrep 跟 **grep -E** 是一样的。他会以正则表达式的模式来解释。下面是 grep 的帮助页(man): + + 基本的正则表达式元字符 ?、+、 {、 |、 ( 和 ) 已经失去了他们特殊的意义,要使用的话用反斜线的版本 \?、\+、\{、\|、\( 和 \) 来代替。 + 传统的 egrep 不支持 { 元字符,一些 egrep 的实现是以 \{ 替代的,所以有 grep -E 的通用脚本应该避免使用 { 符号,要匹配字面的 { 应该使用 [}]。 + GNU grep -E 试图支持传统的用法,如果 { 出在在无效的间隔规范字符串这前,它就会假定 { 不是特殊字符。 + 例如,grep -E '{1' 命令搜索包含 {1 两个字符的串,而不会报出正则表达式语法错误。 + POSIX.2 标准允许对这种操作的扩展,但在可移植脚本文件里应该避免这样使用。 + +引用: + +- grep 和 regex 帮助手册页(7) +- grep 的 info 页` + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/faq/grep-regular-expressions/ + +作者:Vivek Gite +译者:[runningwater](https://github.com/runningwater) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file From 3951fe4a0d4ad6da3771fcfde8f9ebed2313c9e0 Mon Sep 17 00:00:00 2001 From: runningwater Date: Tue, 8 Dec 2015 23:22:51 +0800 Subject: [PATCH 156/160] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E4=B8=AD=20by=20runn?= =?UTF-8?q?ingwater?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...rch Multiple Words or String Pattern Using grep Command.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 3--Search Multiple Words or String Pattern Using grep Command.md b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 3--Search Multiple Words or String Pattern Using grep Command.md index bb12d2e1b3..bd58e78535 100644 --- a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 3--Search Multiple Words or String Pattern Using grep Command.md +++ b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 3--Search Multiple 
Words or String Pattern Using grep Command.md @@ -1,4 +1,4 @@ -Search Multiple Words / String Pattern Using grep Command +(翻译中 by runningwater)Search Multiple Words / String Pattern Using grep Command ================================================================================ How do I search multiple strings or words using the grep command? For example I'd like to search word1, word2, word3 and so on within /path/to/file. How do I force grep to search multiple words? @@ -33,7 +33,7 @@ Fig.01: Linux / Unix egrep Command Search Multiple Words Demo Output via: http://www.cyberciti.biz/faq/searching-multiple-words-string-using-grep/ 作者:Vivek Gite -译者:[译者ID](https://github.com/译者ID) +译者:[runningwater](https://github.com/runningwater) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 03c4e1a402e9dd593ba5848b8b34f5bbaa3f7fa8 Mon Sep 17 00:00:00 2001 From: ezio Date: Wed, 9 Dec 2015 09:55:49 +0800 Subject: [PATCH 157/160] translate finished --- ... Doubly linked list in the Linux Kernel.md | 51 ++++++++++++++----- 1 file changed, 38 insertions(+), 13 deletions(-) diff --git a/sources/tech/20151122 Doubly linked list in the Linux Kernel.md b/sources/tech/20151122 Doubly linked list in the Linux Kernel.md index 96a515fe93..00da2d9d00 100644 --- a/sources/tech/20151122 Doubly linked list in the Linux Kernel.md +++ b/sources/tech/20151122 Doubly linked list in the Linux Kernel.md @@ -1,6 +1,4 @@ -translating by Ezio - -Data Structures in the Linux Kernel +Data Structures in the Linux Kernel——Doubly linked list ================================================================================ 双向链表 @@ -43,12 +41,14 @@ struct nmi_desc { ``` Let's look at some examples to understand how `list_head` is used in the kernel. As I already wrote about, there are many, really many different places where lists are used in the kernel. Let's look for an example in miscellaneous character drivers. 
Misc character drivers API from the [drivers/char/misc.c](https://github.com/torvalds/linux/blob/master/drivers/char/misc.c) is used for writing small drivers for handling simple hardware or virtual devices. Those drivers share same major number: +让我们看几个例子来理解一下在内核里是如何使用`list_head` 的。如上所述,在内核里有实在很多不同的地方用到了链表。我们来看一个在杂项字符驱动里面的使用的例子。在 [drivers/char/misc.c](https://github.com/torvalds/linux/blob/master/drivers/char/misc.c) 的杂项字符驱动API 被用来编写处理小型硬件和虚拟设备的小驱动。这些驱动共享相同的主设备号: ```C #define MISC_MAJOR 10 ``` but have their own minor number. For example you can see it with: +但是都有各自不同的次设备号。比如: ``` ls -l /dev | grep 10 @@ -75,6 +75,7 @@ crw------- 1 root root 10, 137 Mar 21 12:01 vhci ``` Now let's have a close look at how lists are used in the misc device drivers. First of all, let's look on `miscdevice` structure: +现在让我们看看它是如何使用链表的。首先看一下结构体`miscdevice`: ```C struct miscdevice @@ -91,12 +92,14 @@ struct miscdevice ``` We can see the fourth field in the `miscdevice` structure - `list` which is a list of registered devices. In the beginning of the source code file we can see the definition of misc_list: +可以看到结构体的第四个变量`list` 是所有注册过的设备的链表。在源代码文件的开始可以看到这个链表的定义: ```C static LIST_HEAD(misc_list); ``` which expands to the definition of variables with `list_head` type: +它实际上是对用`list_head` 类型定义的变量的扩展。 ```C #define LIST_HEAD(name) \ @@ -104,18 +107,21 @@ which expands to the definition of variables with `list_head` type: ``` and initializes it with the `LIST_HEAD_INIT` macro, which sets previous and next entries with the address of variable - name: +然后使用宏`LIST_HEAD_INIT` 进行初始化,这会使用变量`name` 的地址来填充`prev`和`next` 结构体的两个变量。 ```C #define LIST_HEAD_INIT(name) { &(name), &(name) } ``` Now let's look on the `misc_register` function which registers a miscellaneous device. 
At the start it initializes `miscdevice->list` with the `INIT_LIST_HEAD` function: +现在来看看注册杂项设备的函数`misc_register`。它在开始就用 `INIT_LIST_HEAD` 初始化了`miscdevice->list`。 ```C INIT_LIST_HEAD(&misc->list); ``` which does the same as the `LIST_HEAD_INIT` macro: +作用和宏`LIST_HEAD_INIT`一样。 ```C static inline void INIT_LIST_HEAD(struct list_head *list) { list->next = list; list->prev = list; } ``` In the next step after a device is created by the `device_create` function, we add it to the miscellaneous devices list with: +在函数`device_create` 创建了设备后我们就用下面的语句将设备添加到设备链表: ``` list_add(&misc->list, &misc_list); ``` Kernel `list.h` provides this API for the addition of a new entry to the list. Let's look at its implementation: +内核文件`list.h` 提供了向链表添加新项的API 接口。我们来看看它的实现: + ```C static inline void list_add(struct list_head *new, struct list_head *head) { __list_add(new, head, head->next); } ``` It just calls internal function `__list_add` with the 3 given parameters: +实际上就是使用3个指定的参数来调用了内部函数`__list_add`: -* new - new entry. -* head - list head after which the new item will be inserted. -* head->next - next item after list head. +* new - 新项。 +* head - 新项将会被添加到`head`之后。 +* head->next - `head` 之后的项。 Implementation of the `__list_add` is pretty simple: +`__list_add`的实现非常简单: ```C static inline void __list_add(struct list_head *new, struct list_head *prev, struct list_head *next) { next->prev = new; new->next = next; new->prev = prev; prev->next = new; } ``` Here we add a new item between `prev` and `next`. So `misc` list which we defined at the start with the `LIST_HEAD_INIT` macro will contain previous and next pointers to the `miscdevice->list`. +我们会在`prev`和`next` 之间添加一个新项。所以我们用宏`LIST_HEAD_INIT`定义的`misc` 链表会包含指向`miscdevice->list` 的向前指针和向后指针。 There is still one question: how to get list's entry. 
There is a special macro: +这里有一个问题:如何得到列表的内容呢?这里有一个特殊的宏: ```C #define list_entry(ptr, type, member) \ @@ -170,25 +183,29 @@ There is still one question: how to get list's entry. There is a special macro: ``` which gets three parameters: +使用了三个参数: -* ptr - the structure list_head pointer; -* type - structure type; -* member - the name of the list_head within the structure; +* ptr - 指向链表头的指针; +* type - 结构体类型; +* member - 在结构体内类型为`list_head` 的变量的名字; For example: +比如说: ```C const struct miscdevice *p = list_entry(v, struct miscdevice, list) ``` After this we can access to any `miscdevice` field with `p->minor` or `p->name` and etc... Let's look on the `list_entry` implementation: - +然后我们就可以使用`p->minor` 或者 `p->name`来访问`miscdevice`。让我们来看看`list_entry` 的实现: + ```C #define list_entry(ptr, type, member) \ container_of(ptr, type, member) ``` As we can see it just calls `container_of` macro with the same arguments. At first sight, the `container_of` looks strange: +如我们所见,它仅仅使用相同的参数调用了宏`container_of`。初看这个宏挺奇怪的: ```C #define container_of(ptr, type, member) ({ \ @@ -197,8 +214,10 @@ As we can see it just calls `container_of` macro with the same arguments. At fir ``` First of all you can note that it consists of two expressions in curly brackets. The compiler will evaluate the whole block in the curly braces and use the value of the last expression. +首先你可以注意到花括号内包含两个表达式。编译器会执行花括号内的全部语句,然后返回最后的表达式的值。 For example: +举个例子来说: ``` #include @@ -211,8 +230,10 @@ int main() { ``` will print `2`. +最终会打印`2` The next point is `typeof`, it's simple. As you can understand from its name, it just returns the type of the given variable. When I first saw the implementation of the `container_of` macro, the strangest thing I found was the zero in the `((type *)0)` expression. Actually this pointer magic calculates the offset of the given field from the address of the structure, but as we have `0` here, it will be just a zero offset along with the field width. 
Let's look at a simple example: +下一点就是`typeof`,它也很简单。就如你从名字所理解的,它仅仅返回了给定变量的类型。当我第一次看到宏`container_of`的实现时,让我觉得最奇怪的就是`container_of`中的0.实际上这个指针巧妙的计算了从结构体特定变量的偏移,这里的`0`刚好就是位宽里的零偏移。让我们看一个简单的例子: ```C #include @@ -220,7 +241,7 @@ The next point is `typeof`, it's simple. As you can understand from its name, it struct s { int field1; char field2; - char field3; + char field3; }; int main() { @@ -230,16 +251,20 @@ int main() { ``` will print `0x5`. +结果显示`0x5`。 The next `offsetof` macro calculates offset from the beginning of the structure to the given structure's field. Its implementation is very similar to the previous code: +下一个宏`offsetof` 会计算从结构体的某个变量的相对于结构体起始地址的偏移。它的实现和上面类似: ```C #define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER) ``` Let's summarize all about `container_of` macro. The `container_of` macro returns the address of the structure by the given address of the structure's field with `list_head` type, the name of the structure field with `list_head` type and type of the container structure. At the first line this macro declares the `__mptr` pointer which points to the field of the structure that `ptr` points to and assigns `ptr` to it. Now `ptr` and `__mptr` point to the same address. Technically we don't need this line but it's useful for type checking. The first line ensures that the given structure (`type` parameter) has a member called `member`. In the second line it calculates offset of the field from the structure with the `offsetof` macro and subtracts it from the structure address. That's all. +现在我们来总结一下宏`container_of`。只需要知道结构体里面类型为`list_head` 的变量的名字和结构体容器的类型,它可以通过结构体的变量`list_head`获得结构体的起始地址。在宏定义的第一行,声明了一个指向结构体成员变量`ptr`的指针`__mptr`,并且把`ptr` 的地址赋给它。现在`ptr` 和`__mptr` 指向了同一个地址。从技术上讲我们并不需要这一行,但是它可以方便的进行类型检查。第一行保证了特定的结构体(参数`type`)包含成员变量`member`。第二行代码会用宏`offsetof`计算成员变量相对于结构体起始地址的偏移,然后从结构体的地址减去这个偏移,最后就得到了结构体。 Of course `list_add` and `list_entry` is not the only functions which `` provides. 
Implementation of the doubly linked list provides the following API: +当然了`list_add` 和 `list_entry`不是``提供的唯一功能。双向链表的实现还提供了如下API: * list_add * list_add_tail @@ -254,11 +279,11 @@ Of course `list_add` and `list_entry` is not the only functions which ` Date: Wed, 9 Dec 2015 09:58:27 +0800 Subject: [PATCH 158/160] clean --- ... Doubly linked list in the Linux Kernel.md | 37 ++----------------- 1 file changed, 3 insertions(+), 34 deletions(-) diff --git a/sources/tech/20151122 Doubly linked list in the Linux Kernel.md b/sources/tech/20151122 Doubly linked list in the Linux Kernel.md index 00da2d9d00..631d918813 100644 --- a/sources/tech/20151122 Doubly linked list in the Linux Kernel.md +++ b/sources/tech/20151122 Doubly linked list in the Linux Kernel.md @@ -1,13 +1,12 @@ -Data Structures in the Linux Kernel——Doubly linked list +Linux 内核里的数据结构——双向链表 ================================================================================ 双向链表 -------------------------------------------------------------------------------- -Linux kernel provides its own implementation of doubly linked list, which you can find in the [include/linux/list.h](https://github.com/torvalds/linux/blob/master/include/linux/list.h). We will start `Data Structures in the Linux kernel` from the doubly linked list data structure. Why? 
Because it is very popular in the kernel, just try to [search](http://lxr.free-electrons.com/ident?i=list_head) + Linux 内核自己实现了双向链表,可以在[include/linux/list.h](https://github.com/torvalds/linux/blob/master/include/linux/list.h)找到定义。我们将会从双向链表数据结构开始`内核的数据结构`。为什么?因为它在内核里使用得很广泛,你只需要在[free-electrons.com](http://lxr.free-electrons.com/ident?i=list_head) 检索一下就知道了。 -First of all, let's look on the main structure in the [include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h): 首先让我们看一下在[include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h) 里的主结构体: ```C @@ -16,7 +15,6 @@ struct list_head { }; ``` -You can note that it is different from many implementations of doubly linked list which you have seen. For example, this doubly linked list structure from the [glib](http://www.gnu.org/software/libc/) library looks like : 你可能注意到这和你以前见过的双向链表的实现方法是不同的。举个例子来说,在[glib](http://www.gnu.org/software/libc/) 库里是这样实现的: ```C @@ -27,10 +25,8 @@ struct GList { }; ``` -Usually a linked list structure contains a pointer to the item. The implementation of linked list in Linux kernel does not. So the main question is - `where does the list store the data?`. The actual implementation of linked list in the kernel is - `Intrusive list`. An intrusive linked list does not contain data in its nodes - A node just contains pointers to the next and previous node and list nodes part of the data that are added to the list. This makes the data structure generic, so it does not care about entry data type anymore. 通常来说一个链表会包含一个指向某个项目的指针。但是内核的实现并没有这样做。所以问题来了:`链表在哪里保存数据呢?`。实际上,内核里实现的链表是`侵入式链表`。侵入式链表并不在节点内保存数据:节点仅仅包含指向前后节点的指针,数据则是通过在其中内嵌节点的方式挂到链表上的。这就使得这个数据结构是通用的,使用起来就不需要考虑节点数据的类型了。 -For example: 比如: ```C @@ -40,14 +36,12 @@ struct nmi_desc { }; ``` -Let's look at some examples to understand how `list_head` is used in the kernel. As I already wrote about, there are many, really many different places where lists are used in the kernel. 
Let's look for an example in miscellaneous character drivers. Misc character drivers API from the [drivers/char/misc.c](https://github.com/torvalds/linux/blob/master/drivers/char/misc.c) is used for writing small drivers for handling simple hardware or virtual devices. Those drivers share same major number: 让我们看几个例子来理解一下在内核里是如何使用`list_head` 的。如上所述,在内核里有非常多不同的地方用到了链表。我们来看一个在杂项字符驱动里面使用链表的例子。[drivers/char/misc.c](https://github.com/torvalds/linux/blob/master/drivers/char/misc.c) 中的杂项字符驱动 API 被用来编写处理简单硬件或虚拟设备的小驱动。这些驱动共享相同的主设备号: ```C #define MISC_MAJOR 10 ``` -but have their own minor number. For example you can see it with: 但是都有各自不同的次设备号。比如: ``` @@ -74,7 +68,6 @@ crw------- 1 root root 10, 63 Mar 21 12:01 vga_arbiter crw------- 1 root root 10, 137 Mar 21 12:01 vhci ``` -Now let's have a close look at how lists are used in the misc device drivers. First of all, let's look on `miscdevice` structure: 现在让我们看看它是如何使用链表的。首先看一下结构体`miscdevice`: ```C @@ -91,14 +84,12 @@ struct miscdevice }; ``` -We can see the fourth field in the `miscdevice` structure - `list` which is a list of registered devices. In the beginning of the source code file we can see the definition of misc_list: 可以看到结构体的第四个成员 `list` 是所有注册过的设备的链表。在源代码文件的开始可以看到这个链表的定义: ```C static LIST_HEAD(misc_list); ``` -which expands to the definition of variables with `list_head` type: 它实际上会展开成一个 `list_head` 类型变量的定义: ```C @@ -106,21 +97,18 @@ which expands to the definition of variables with `list_head` type: struct list_head name = LIST_HEAD_INIT(name) ``` -and initializes it with the `LIST_HEAD_INIT` macro, which sets previous and next entries with the address of variable - name: 然后使用宏`LIST_HEAD_INIT` 进行初始化,这会使用变量 `name` 的地址来填充结构体的 `prev` 和 `next` 两个成员: ```C #define LIST_HEAD_INIT(name) { &(name), &(name) } ``` -Now let's look on the `misc_register` function which registers a miscellaneous device. 
At the start it initializes `miscdevice->list` with the `INIT_LIST_HEAD` function: 现在来看看注册杂项设备的函数`misc_register`。它在开始就用 `INIT_LIST_HEAD` 初始化了`miscdevice->list`。 ```C INIT_LIST_HEAD(&misc->list); ``` -which does the same as the `LIST_HEAD_INIT` macro: 作用和宏`LIST_HEAD_INIT`一样。 ```C @@ -131,14 +119,12 @@ static inline void INIT_LIST_HEAD(struct list_head *list) } ``` -In the next step after a device is created by the `device_create` function, we add it to the miscellaneous devices list with: 在函数`device_create` 创建了设备后我们就用下面的语句将设备添加到设备链表: ``` list_add(&misc->list, &misc_list); ``` -Kernel `list.h` provides this API for the addition of a new entry to the list. Let's look at its implementation: 内核文件`list.h` 提供了向链表添加新项的API 接口。我们来看看它的实现: @@ -149,14 +135,12 @@ static inline void list_add(struct list_head *new, struct list_head *head) } ``` -It just calls internal function `__list_add` with the 3 given parameters: 实际上就是使用3个指定的参数来调用了内部函数`__list_add`: * new - 新项。 * head - 新项将会被添加到`head`之后。 * head->next - `head` 之后的项。 -Implementation of the `__list_add` is pretty simple: `__list_add`的实现非常简单: ```C @@ -171,10 +155,8 @@ static inline void __list_add(struct list_head *new, } ``` -Here we add a new item between `prev` and `next`. So `misc` list which we defined at the start with the `LIST_HEAD_INIT` macro will contain previous and next pointers to the `miscdevice->list`. 我们会在`prev`和`next` 之间添加一个新项。所以我们用宏`LIST_HEAD_INIT`定义的`misc` 链表会包含指向`miscdevice->list` 的向前指针和向后指针。 -There is still one question: how to get list's entry. 这里有一个问题:如何得到列表的内容呢?这里有一个特殊的宏: ```C @@ -182,21 +164,18 @@ There is still one question: how to get list's entry. 
There is a special macro: container_of(ptr, type, member) ``` -which gets three parameters: 使用了三个参数: * ptr - 指向链表头的指针; * type - 结构体类型; * member - 在结构体内类型为`list_head` 的变量的名字; -For example: 比如说: ```C const struct miscdevice *p = list_entry(v, struct miscdevice, list) ``` -After this we can access to any `miscdevice` field with `p->minor` or `p->name` and etc... Let's look on the `list_entry` implementation: 然后我们就可以使用`p->minor` 或者 `p->name`来访问`miscdevice`。让我们来看看`list_entry` 的实现: ```C @@ -204,7 +183,6 @@ After this we can access to any `miscdevice` field with `p->minor` or `p->name` container_of(ptr, type, member) ``` -As we can see it just calls `container_of` macro with the same arguments. At first sight, the `container_of` looks strange: 如我们所见,它仅仅使用相同的参数调用了宏`container_of`。初看这个宏挺奇怪的: ```C @@ -213,10 +191,8 @@ As we can see it just calls `container_of` macro with the same arguments. At fir (type *)( (char *)__mptr - offsetof(type,member) );}) ``` -First of all you can note that it consists of two expressions in curly brackets. The compiler will evaluate the whole block in the curly braces and use the value of the last expression. 首先你可以注意到花括号内包含两个表达式。编译器会执行花括号内的全部语句,然后返回最后的表达式的值。 -For example: 举个例子来说: ``` @@ -229,10 +205,8 @@ int main() { } ``` -will print `2`. 最终会打印`2`。 -The next point is `typeof`, it's simple. As you can understand from its name, it just returns the type of the given variable. When I first saw the implementation of the `container_of` macro, the strangest thing I found was the zero in the `((type *)0)` expression. Actually this pointer magic calculates the offset of the given field from the address of the structure, but as we have `0` here, it will be just a zero offset along with the field width. Let's look at a simple example: 下一点就是`typeof`,它也很简单。就如你从名字所理解的,它仅仅返回了给定变量的类型。当我第一次看到宏`container_of`的实现时,让我觉得最奇怪的就是`container_of`中的 0。实际上这个指针魔法计算的是指定成员相对于结构体起始地址的偏移,由于这里的结构体地址是 `0`,得到的就正好是该成员在结构体内的偏移量。让我们看一个简单的例子: ```C @@ -250,20 +224,16 @@ int main() { } ``` -will print `0x5`. 
结果显示`0x5`。 -The next `offsetof` macro calculates offset from the beginning of the structure to the given structure's field. Its implementation is very similar to the previous code: 下一个宏`offsetof` 会计算出结构体的某个成员相对于结构体起始地址的偏移。它的实现和上面类似: ```C #define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER) ``` -Let's summarize all about `container_of` macro. The `container_of` macro returns the address of the structure by the given address of the structure's field with `list_head` type, the name of the structure field with `list_head` type and type of the container structure. At the first line this macro declares the `__mptr` pointer which points to the field of the structure that `ptr` points to and assigns `ptr` to it. Now `ptr` and `__mptr` point to the same address. Technically we don't need this line but it's useful for type checking. The first line ensures that the given structure (`type` parameter) has a member called `member`. In the second line it calculates offset of the field from the structure with the `offsetof` macro and subtracts it from the structure address. That's all. 现在我们来总结一下宏`container_of`。只要知道某个 `list_head` 类型成员的地址、它在结构体里的名字以及容器结构体的类型,`container_of` 就能返回容器结构体的起始地址。在宏定义的第一行,声明了一个指针 `__mptr`,它指向 `ptr` 所指向的那个结构体成员,并把 `ptr` 赋值给它。现在 `ptr` 和 `__mptr` 指向了同一个地址。从技术上讲我们并不需要这一行,但是它可以方便地进行类型检查。第一行保证了特定的结构体(参数`type`)包含成员变量`member`。第二行代码会用宏`offsetof`计算成员变量相对于结构体起始地址的偏移,然后从成员的地址中减去这个偏移,最后就得到了容器结构体的起始地址。 -Of course `list_add` and `list_entry` is not the only functions which `` provides. 
Implementation of the doubly linked list provides the following API: 当然了`list_add` 和 `list_entry`不是``提供的唯一功能。双向链表的实现还提供了如下API: * list_add @@ -278,8 +248,7 @@ Of course `list_add` and `list_entry` is not the only functions which ` Date: Wed, 9 Dec 2015 09:59:42 +0800 Subject: [PATCH 159/160] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= =?UTF-8?q?=EF=BC=8C=E7=A7=BB=E5=8A=A8=E5=88=B0translated?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20151122 Doubly linked list in the Linux Kernel.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20151122 Doubly linked list in the Linux Kernel.md (100%) diff --git a/sources/tech/20151122 Doubly linked list in the Linux Kernel.md b/translated/tech/20151122 Doubly linked list in the Linux Kernel.md similarity index 100% rename from sources/tech/20151122 Doubly linked list in the Linux Kernel.md rename to translated/tech/20151122 Doubly linked list in the Linux Kernel.md From 6fe786d73564184151d07193f67fb85e4f345cf2 Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 9 Dec 2015 10:10:55 +0800 Subject: [PATCH 160/160] Delete 20151203 Getting started with Docker by Dockerizing this Blog.md --- ...ed with Docker by Dockerizing this Blog.md | 375 ------------------ 1 file changed, 375 deletions(-) delete mode 100644 sources/tech/20151203 Getting started with Docker by Dockerizing this Blog.md diff --git a/sources/tech/20151203 Getting started with Docker by Dockerizing this Blog.md b/sources/tech/20151203 Getting started with Docker by Dockerizing this Blog.md deleted file mode 100644 index 1f69a4adba..0000000000 --- a/sources/tech/20151203 Getting started with Docker by Dockerizing this Blog.md +++ /dev/null @@ -1,375 +0,0 @@ -Getting started with Docker by Dockerizing this Blog -====================== ->This article covers the basic concepts of Docker and how to Dockerize an application by creating a custom Dockerfile ->Written by 
Benjamin Cane on 2015-12-01 10:00:00 - -Docker is an interesting technology that over the past 2 years has gone from an idea to being used by organizations all over the world to deploy applications. In today's article I am going to cover how to get started with Docker by "Dockerizing" an existing application. The application in question is actually this very blog! - -What is Docker -============ - -Before we dive into learning the basics of Docker let's first understand what Docker is and why it is so popular. Docker is an operating system container management tool that allows you to easily manage and deploy applications by making it easy to package them within operating system containers. - -### Containers vs. Virtual Machines - -Containers may not be as familiar as virtual machines but they are another method to provide Operating System Virtualization. However, they differ quite a bit from standard virtual machines. - -Standard virtual machines generally include a full Operating System, OS Packages and eventually an Application or two. This is made possible by a Hypervisor which provides hardware virtualization to the virtual machine. This allows for a single server to run many standalone operating systems as virtual guests. - -Containers are similar to virtual machines in that they allow a single server to run multiple operating environments; these environments, however, are not full operating systems. Containers generally only include the necessary OS Packages and Applications. They do not generally contain a full operating system or hardware virtualization. This also means that containers have a smaller overhead than traditional virtual machines. - -Containers and Virtual Machines are often seen as conflicting technologies; however, this is often a misunderstanding. Virtual Machines are a way to take a physical server and provide a fully functional operating environment that shares those physical resources with other virtual machines. 
A Container is generally used to isolate a running process within a single host to ensure that the isolated processes cannot interact with other processes within that same system. In fact containers are closer to BSD Jails and chroot'ed processes than full virtual machines. - -### What Docker provides on top of containers - -Docker itself is not a container runtime environment; in fact Docker is actually container technology agnostic with efforts planned for Docker to support Solaris Zones and BSD Jails. What Docker provides is a method of managing, packaging, and deploying containers. While these types of functions may exist to some degree for virtual machines they traditionally have not existed for most container solutions and the ones that existed were not as easy to use or fully featured as Docker. - -Now that we know what Docker is, let's start learning how Docker works by first installing Docker and deploying a public pre-built container. - -## Starting with Installation -As Docker is not installed by default, step 1 will be to install the Docker package; since our example system is running Ubuntu 14.04 we will do this using the Apt package manager. - -# apt-get install docker.io -Reading package lists... Done -Building dependency tree -Reading state information... Done -The following extra packages will be installed: - aufs-tools cgroup-lite git git-man liberror-perl -Suggested packages: - btrfs-tools debootstrap lxc rinse git-daemon-run git-daemon-sysvinit git-doc - git-el git-email git-gui gitk gitweb git-arch git-bzr git-cvs git-mediawiki - git-svn -The following NEW packages will be installed: - aufs-tools cgroup-lite docker.io git git-man liberror-perl -0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded. -Need to get 7,553 kB of archives. -After this operation, 46.6 MB of additional disk space will be used. -Do you want to continue? [Y/n] y -To check if any containers are running we can execute the docker command using the ps option. 
- -# docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -The ps function of the docker command works similar to the Linux ps command. It will show available Docker containers and their current status. Since we have not started any Docker containers yet, the command shows no running containers. - -## Deploying a pre-built nginx Docker container -One of my favorite features of Docker is the ability to deploy a pre-built container in the same way you would deploy a package with yum or apt-get. To explain this better let's deploy a pre-built container running the nginx web server. We can do this by executing the docker command again, however, this time with the run option. - -# docker run -d nginx -Unable to find image 'nginx' locally -Pulling repository nginx -5c82215b03d1: Download complete -e2a4fb18da48: Download complete -58016a5acc80: Download complete -657abfa43d82: Download complete -dcb2fe003d16: Download complete -c79a417d7c6f: Download complete -abb90243122c: Download complete -d6137c9e2964: Download complete -85e566ddc7ef: Download complete -69f100eb42b5: Download complete -cd720b803060: Download complete -7cc81e9a118a: Download complete -The run function of the docker command tells Docker to find a specified Docker image and start a container running that image. By default, Docker containers run in the foreground, meaning when you execute docker run your shell will be bound to the container's console and the process running within the container. In order to launch this Docker container in the background I included the -d (detach) flag. - -By executing docker ps again we can see the nginx container running. - -# docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -f6d31ab01fc9 nginx:latest nginx -g 'daemon off 4 seconds ago Up 3 seconds 443/tcp, 80/tcp desperate_lalande -In the above output we can see the running container desperate_lalande and that this container has been built from the nginx:latest image. 
- -## Docker Images -Images are one of Docker's key features and is similar to a virtual machine image. Like virtual machine images, a Docker image is a container that has been saved and packaged. Docker however, doesn't just stop with the ability to create images. Docker also includes the ability to distribute those images via Docker repositories which are a similar concept to package repositories. This is what gives Docker the ability to deploy an image like you would deploy a package with yum. To get a better understanding of how this works let's look back at the output of the docker run execution. - -# docker run -d nginx -Unable to find image 'nginx' locally -The first message we see is that docker could not find an image named nginx locally. The reason we see this message is that when we executed docker run we told Docker to startup a container, a container based on an image named nginx. Since Docker is starting a container based on a specified image it needs to first find that image. Before checking any remote repository Docker first checks locally to see if there is a local image with the specified name. - -Since this system is brand new there is no Docker image with the name nginx, which means Docker will need to download it from a Docker repository. - -Pulling repository nginx -5c82215b03d1: Download complete -e2a4fb18da48: Download complete -58016a5acc80: Download complete -657abfa43d82: Download complete -dcb2fe003d16: Download complete -c79a417d7c6f: Download complete -abb90243122c: Download complete -d6137c9e2964: Download complete -85e566ddc7ef: Download complete -69f100eb42b5: Download complete -cd720b803060: Download complete -7cc81e9a118a: Download complete -This is exactly what the second part of the output is showing us. By default, Docker uses the Docker Hub repository, which is a repository service that Docker (the company) runs. - -Like GitHub, Docker Hub is free for public repositories but requires a subscription for private repositories. 
It is possible however, to deploy your own Docker repository, in fact it is as easy as docker run registry. For this article we will not be deploying a custom registry service. - -## Stopping and Removing the Container -Before moving on to building a custom Docker container let's first clean up our Docker environment. We will do this by stopping the container from earlier and removing it. - -To start a container we executed docker with the run option, in order to stop this same container we simply need to execute the docker with the kill option specifying the container name. - -# docker kill desperate_lalande -desperate_lalande -If we execute docker ps again we will see that the container is no longer running. - -# docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -However, at this point we have only stopped the container; while it may no longer be running it still exists. By default, docker ps will only show running containers, if we add the -a (all) flag it will show all containers running or not. - -# docker ps -a -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -f6d31ab01fc9 5c82215b03d1 nginx -g 'daemon off 4 weeks ago Exited (-1) About a minute ago desperate_lalande -In order to fully remove the container we can use the docker command with the rm option. - -# docker rm desperate_lalande -desperate_lalande -While this container has been removed; we still have a nginx image available. If we were to re-run docker run -d nginx again the container would be started without having to fetch the nginx image again. This is because Docker already has a saved copy on our local system. - -To see a full list of local images we can simply run the docker command with the images option. - -# docker images -REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE -nginx latest 9fab4090484a 5 days ago 132.8 MB -## Building our own custom image -At this point we have used a few basic Docker commands to start, stop and remove a common pre-built image. 
In order to "Dockerize" this blog however, we are going to have to build our own Docker image and that means creating a Dockerfile. - -With most virtual machine environments if you wish to create an image of a machine you need to first create a new virtual machine, install the OS, install the application and then finally convert it to a template or image. With Docker however, these steps are automated via a Dockerfile. A Dockerfile is a way of providing build instructions to Docker for the creation of a custom image. In this section we are going to build a custom Dockerfile that can be used to deploy this blog. - -### Understanding the Application -Before we can jump into creating a Dockerfile we first need to understand what is required to deploy this blog. - -The blog itself is actually static HTML pages generated by a custom static site generator that I wrote named; hamerkop. The generator is very simple and more about getting the job done for this blog specifically. All the code and source files for this blog are available via a public GitHub repository. In order to deploy this blog we simply need to grab the contents of the GitHub repository, install Python along with some Python modules and execute the hamerkop application. To serve the generated content we will use nginx; which means we will also need nginx to be installed. - -So far this should be a pretty simple Dockerfile, but it will show us quite a bit of the Dockerfile Syntax. To get started we can clone the GitHub repository and creating a Dockerfile with our favorite editor; vi in my case. - -# git clone https://github.com/madflojo/blog.git -Cloning into 'blog'... -remote: Counting objects: 622, done. -remote: Total 622 (delta 0), reused 0 (delta 0), pack-reused 622 -Receiving objects: 100% (622/622), 14.80 MiB | 1.06 MiB/s, done. -Resolving deltas: 100% (242/242), done. -Checking connectivity... done. 
-# cd blog/ -# vi Dockerfile -### FROM - Inheriting a Docker image -The first instruction of a Dockerfile is the FROM instruction. This is used to specify an existing Docker image to use as our base image. This basically provides us with a way to inherit another Docker image. In this case we will be starting with the same nginx image we were using before, if we wanted to start with a blank slate we could use the Ubuntu Docker image by specifying ubuntu:latest. - -## Dockerfile that generates an instance of http://bencane.com - -FROM nginx:latest -MAINTAINER Benjamin Cane -In addition to the FROM instruction, I also included a MAINTAINER instruction which is used to show the Author of the Dockerfile. - -As Docker supports using # as a comment marker, I will be using this syntax quite a bit to explain the sections of this Dockerfile. - -### Running a test build -Since we inherited the nginx Docker image our current Dockerfile also inherited all the instructions within the Dockerfile used to build that nginx image. What this means is even at this point we are able to build a Docker image from this Dockerfile and run a container from that image. The resulting image will essentially be the same as the nginx image but we will run through a build of this Dockerfile now and a few more times as we go to help explain the Docker build process. - -In order to start the build from a Dockerfile we can simply execute the docker command with the build option. - -# docker build -t blog /root/blog -Sending build context to Docker daemon 23.6 MB -Sending build context to Docker daemon -Step 0 : FROM nginx:latest - ---> 9fab4090484a -Step 1 : MAINTAINER Benjamin Cane - ---> Running in c97f36450343 - ---> 60a44f78d194 -Removing intermediate container c97f36450343 -Successfully built 60a44f78d194 -In the above example I used the -t (tag) flag to "tag" the image as "blog". 
This essentially allows us to name the image; without specifying a tag, the image would only be callable via an Image ID that Docker assigns. In this case the Image ID is 60a44f78d194, which we can see in the docker command's build success message.

In addition to the -t flag, I also specified the directory /root/blog. This directory is the "build directory", which is the directory that contains the Dockerfile and any other files necessary to build this container.

Now that we have run through a successful build, let's start customizing this image.

### Using RUN to execute apt-get

The static site generator used to generate the HTML pages is written in Python, and because of this the first custom task we should perform within this Dockerfile is to install Python. To install the Python package we will use the Apt package manager. This means we will need to specify within the Dockerfile that apt-get update and apt-get install python-dev are executed; we can do this with the RUN instruction.

```
## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip
```

In the above we are simply using the RUN instruction to tell Docker that when it builds this image it will need to execute the specified apt-get commands. The interesting part of this is that these commands are only executed within the context of this container. What this means is that even though python-dev and python-pip are being installed within the container, they are not being installed for the host itself. To put it more simply: within the container the pip command will execute; outside the container, the pip command does not exist.

It is also important to note that the Docker build process does not accept user input during the build. This means that any commands being executed by the RUN instruction must complete without user input.
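As an aside, a common convention on Debian-based images (and the nginx image is one) is to force the package manager into a fully non-interactive mode so RUN commands can never stall waiting for a prompt. The sketch below uses apt's standard -y flag and Debian's DEBIAN_FRONTEND environment variable; it is a generic illustration, not part of this blog's actual Dockerfile.

```
## Illustration only: forcing apt-get to run without prompts.
## DEBIAN_FRONTEND=noninteractive suppresses debconf questions,
## and -y auto-answers "yes" to install confirmations.
RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y python-dev python-pip
```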
This adds a bit of complexity to the build process, as many applications require user input during installation. For our example, none of the commands executed by RUN require user input.

### Installing Python modules

With Python installed, we now need to install some Python modules. To do this outside of Docker, we would generally use the pip command and reference a file within the blog's Git repository named requirements.txt. In an earlier step we used the git command to "clone" the blog's GitHub repository to the /root/blog directory; this directory also happens to be the directory in which we created the Dockerfile. This is important, as it means the contents of the Git repository are accessible to Docker during the build process.

When executing a build, Docker will set the context of the build to the specified "build directory". This means that any files within that directory and below can be used during the build process; files outside of that directory (outside of the build context) are inaccessible.

In order to install the required Python modules we will need to copy the requirements.txt file from the build directory into the container. We can do this using the COPY instruction within the Dockerfile.

```
## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip

## Create a directory for required files
RUN mkdir -p /build/

## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt
```

Within the Dockerfile we added three instructions. The first instruction uses RUN to create a /build/ directory within the container. This directory will be used to copy any application files needed to generate the static HTML pages.
The second instruction is the COPY instruction, which copies the requirements.txt file from the "build directory" (/root/blog) into the /build directory within the container. The third uses the RUN instruction to execute the pip command, installing all the modules specified within the requirements.txt file.

COPY is an important instruction to understand when building custom images. Without specifically copying the file within the Dockerfile, this Docker image would not contain the requirements.txt file. With Docker containers everything is isolated; unless specifically added within a Dockerfile, a container is not likely to include required dependencies.

### Re-running a build

Now that we have a few customization tasks for Docker to perform, let's try another build of the blog image.

```
# docker build -t blog /root/blog
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
 ---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane
 ---> Using cache
 ---> 8e0f1899d1eb
Step 2 : RUN apt-get update
 ---> Using cache
 ---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
 ---> Using cache
 ---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
 ---> Running in bde05cf1e8fe
 ---> f4b66e09fa61
Removing intermediate container bde05cf1e8fe
Step 5 : COPY requirements.txt /build/
 ---> cef11c3fb97c
Removing intermediate container 9aa8ff43f4b0
Step 6 : RUN pip install -r /build/requirements.txt
 ---> Running in c50b15ddd8b1
Downloading/unpacking jinja2 (from -r /build/requirements.txt (line 1))
Downloading/unpacking PyYaml (from -r /build/requirements.txt (line 2))

Successfully installed jinja2 PyYaml mistune markdown MarkupSafe
Cleaning up...
 ---> abab55c20962
Removing intermediate container c50b15ddd8b1
Successfully built abab55c20962
```

From the above build output we can see the build was successful, but we can also see another interesting message: ---> Using cache. What this message is telling us is that Docker was able to use its build cache during the build of this image.

#### Docker build cache

When Docker is building an image, it doesn't just build a single image; it actually builds multiple images throughout the build process. In fact, we can see from the above output that after each "Step" Docker is creating a new image.

```
Step 5 : COPY requirements.txt /build/
 ---> cef11c3fb97c
```

The last line from the above snippet is actually Docker informing us of the creation of a new image; it does this by printing the Image ID, cef11c3fb97c. The useful thing about this approach is that Docker is able to use these images as cache during subsequent builds of the blog image. This is useful because it allows Docker to speed up the build process for new builds of the same container. If we look at the example above, we can see that rather than installing the python-dev and python-pip packages again, Docker was able to use a cached image. However, since Docker was unable to find a build that executed the mkdir command, each subsequent step was executed.

The Docker build cache is a bit of a gift and a curse; the reason for this is that the decision to use the cache or to rerun the instruction is made within a very narrow scope. For example, if there was a change to the requirements.txt file, Docker would detect this change during the build and start fresh from that point forward. It does this because it can view the contents of the requirements.txt file. The execution of the apt-get commands, however, is another story.
If the Apt repository that provides the Python packages were to contain a newer version of the python-pip package, Docker would not be able to detect the change and would simply use the build cache. This means that an older package may be installed. While this may not be a major issue for the python-pip package, it could be a problem if the installation were caching a package with a known vulnerability.

For this reason it is useful to periodically rebuild the image without using Docker's cache. To do this you can simply specify --no-cache=True when executing a Docker build.

### Deploying the rest of the blog

With the Python packages and modules installed, this leaves us at the point of copying the required application files and running the hamerkop application. To do this we will simply use more COPY and RUN instructions.

```
## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip

## Create a directory for required files
RUN mkdir -p /build/

## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt

## Add blog code and required files
COPY static /build/static
COPY templates /build/templates
COPY hamerkop /build/
COPY config.yml /build/
COPY articles /build/articles

## Run Generator
RUN /build/hamerkop -c /build/config.yml
```

Now that we have the rest of the build instructions, let's run through another build and verify that the image builds successfully.
```
# docker build -t blog /root/blog/
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
 ---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane
 ---> Using cache
 ---> 8e0f1899d1eb
Step 2 : RUN apt-get update
 ---> Using cache
 ---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
 ---> Using cache
 ---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
 ---> Using cache
 ---> f4b66e09fa61
Step 5 : COPY requirements.txt /build/
 ---> Using cache
 ---> cef11c3fb97c
Step 6 : RUN pip install -r /build/requirements.txt
 ---> Using cache
 ---> abab55c20962
Step 7 : COPY static /build/static
 ---> 15cb91531038
Removing intermediate container d478b42b7906
Step 8 : COPY templates /build/templates
 ---> ecded5d1a52e
Removing intermediate container ac2390607e9f
Step 9 : COPY hamerkop /build/
 ---> 59efd1ca1771
Removing intermediate container b5fbf7e817b7
Step 10 : COPY config.yml /build/
 ---> bfa3db6c05b7
Removing intermediate container 1aebef300933
Step 11 : COPY articles /build/articles
 ---> 6b61cc9dde27
Removing intermediate container be78d0eb1213
Step 12 : RUN /build/hamerkop -c /build/config.yml
 ---> Running in fbc0b5e574c5
Successfully created file /usr/share/nginx/html//2011/06/25/checking-the-number-of-lwp-threads-in-linux
Successfully created file /usr/share/nginx/html//2011/06/checking-the-number-of-lwp-threads-in-linux

Successfully created file /usr/share/nginx/html//archive.html
Successfully created file /usr/share/nginx/html//sitemap.xml
 ---> 3b25263113e1
Removing intermediate container fbc0b5e574c5
Successfully built 3b25263113e1
```

### Running a custom container

With a successful build we can now start our custom container by running the docker command with the run option, similar to how we started the nginx container earlier.
```
# docker run -d -p 80:80 --name=blog blog
5f6c7a2217dcdc0da8af05225c4d1294e3e6bb28a41ea898a1c63fb821989ba1
```

Once again the -d (detach) flag was used to tell Docker to run the container in the background. However, there are also two new flags. The first new flag is --name, which is used to give the container a user-specified name. In the earlier example we did not specify a name, and because of that Docker randomly generated one. The second new flag is -p; this flag allows users to map a port from the host machine to a port within the container.

The base nginx image we used exposes port 80 for the HTTP service. By default, ports bound within a Docker container are not bound on the host system as a whole. In order for external systems to access ports exposed within a container, the ports must be mapped from a host port to a container port using the -p flag. The command above maps port 80 from the host to port 80 within the container. If we wished to map port 8080 from the host to port 80 within the container, we could do so by specifying the ports in the following syntax: -p 8080:80.

From the above command it appears that our container was started successfully; we can verify this by executing docker ps.

```
# docker ps
CONTAINER ID  IMAGE       COMMAND               CREATED        STATUS        PORTS                        NAMES
d264c7ef92bd  blog:latest nginx -g 'daemon off  3 seconds ago  Up 3 seconds  443/tcp, 0.0.0.0:80->80/tcp  blog
```

## Wrapping up

At this point we now have a running custom Docker container. While we touched on a few Dockerfile instructions within this article, we have yet to discuss all the instructions. For a full list of Dockerfile instructions you can check out Docker's reference page, which explains the instructions very well.

Another good resource is their Dockerfile Best Practices page, which contains quite a few best practices for building custom Dockerfiles. Some of these tips are very useful, such as strategically ordering the commands within the Dockerfile.
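To illustrate that ordering tip, here is a rough sketch of how such a Dockerfile can be arranged (a generic example assuming the same file layout used above, not the blog's full Dockerfile); the rarely changing steps sit at the top so their cached layers survive most rebuilds:

```
## Sketch: order instructions from least to most frequently changed

FROM nginx:latest

## System packages: change rarely, so this layer is almost always cached
RUN apt-get update
RUN apt-get install -y python-dev python-pip

## Python dependencies: change occasionally
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt

## Site content: changes most often, so it goes last
COPY articles /build/articles
```

With this layout, editing an article only invalidates the final COPY layer; everything above it is served from the build cache.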
In the above examples our Dockerfile has the COPY instruction for the articles directory as the last COPY instruction. The reason for this is that the articles directory will change quite often. It's best to put instructions that will change often at the lowest point possible within the Dockerfile to optimize the steps that can be cached.

In this article we covered how to start a pre-built container and how to build, then deploy, a custom container. While there is quite a bit to learn about Docker, this article should give you a good idea of how to get started. Of course, as always, if you think there is anything that should be added, drop it in the comments below.