Merge pull request #121 from LCTT/master

更新至2016年4月24日
This commit is contained in:
struggling 2016-04-24 09:46:56 +08:00
commit fcef1d07b2
98 changed files with 2889 additions and 4389 deletions

View File

@ -0,0 +1,276 @@
将程序性能提高十倍的10条建议
================================================================================
提高 web 应用的性能从来没有像现在这样重要过。网络经济的比重一直在增长;全球经济超过 5% 的价值是在因特网上产生的(数据参见下面的资料)。这个时刻在线的超连接世界意味着用户的期望值也处于历史上的最高点。如果你的网站不能及时响应,或者你的 app 不能无延时的工作,用户就会很快的投奔到你的竞争对手那里。
举一个例子:一份亚马逊十年前做过的研究表明,即使在那个时候,网页加载时间每减少 100 毫秒,收入就会增加 1%。另一个最近的研究特别强调一个事实:在调查中,超过一半的网站拥有者承认他们会因为应用程序性能的问题而流失用户。
网站到底需要多快呢?页面加载时间每增加 1 秒钟,就有 4% 的用户放弃使用。顶级的电子商务站点的页面在第一次交互时可以做到 1 秒到 3 秒的加载时间,而这是提供最高舒适度的速度。很明显,这种利害关系对于 web 应用来说很高,而且还在不断增加。
想要提高效率很简单,但是看到实际结果很难。为了在你的探索之旅上帮助到你,这篇文章会给你提供 10 条最高可以将网站性能提升 10 倍的建议。这是本系列文章的第一篇,介绍如何借助一些经过充分测试的优化技术和一点来自 NGINX 的帮助来提高应用程序的性能。这个系列也会涉及可能的安全性提升。
### Tip #1: 通过反向代理来提高性能和增加安全性 ###
如果你的 web 应用运行在单个机器上,那么这个办法会明显的提升性能:只需要换一个更快的机器,更好的处理器,更多的内存,更快的磁盘阵列,等等。然后新机器就可以更快的运行你的 WordPress 服务器, Node.js 程序, Java 程序,以及其它程序。(如果你的程序要访问数据库服务器,那么解决方法依然很简单:添加两个更快的机器,以及在两台电脑之间使用一个更快的链路。)
问题是,机器速度可能并不是问题所在。web 程序运行慢,经常是因为计算机一直在不同的任务之间切换:通过成千上万的连接与用户交互、从磁盘访问文件、运行代码,等等。应用服务器可能会发生抖动thrashing比如说内存不足、将内存数据交换到磁盘以及让多个请求等待某个任务如磁盘 I/O完成。
你可以采取一个完全不同的方案来替代升级硬件:添加一个反向代理服务器来分担部分任务。[反向代理服务器][1] 位于运行应用的机器的前端,是用来处理网络流量的。只有反向代理服务器是直接连接到互联网的;和应用服务器的通讯都是通过一个快速的内部网络完成的。
使用反向代理服务器可以将应用服务器从等待用户与 web 程序交互的工作中解放出来,这样应用服务器就可以专注于为反向代理服务器构建网页,让反向代理服务器将其传输到互联网上。应用服务器不需要再等待客户端的响应,其运行速度就可以接近优化后的性能水平。
添加反向代理服务器还可以给你的 web 服务器安装带来灵活性。比如,如果某种类型的服务器已经超载了,那么就可以轻松的添加另一个相同的服务器;如果某个机器宕机了,也可以很容易地换上一个新的。
因为反向代理带来的灵活性,所以反向代理也是一些性能加速功能的必要前提,比如:
- **负载均衡** (参见 [Tip #2][2]) 负载均衡运行在反向代理服务器上,用来将流量均衡分配给一批应用服务器。有了合适的负载均衡,你就可以添加应用服务器而根本不用修改应用。
- **缓存静态文件** (参见 [Tip #3][3]) 直接读取的文件,比如图片或者客户端代码,可以保存在反向代理服务器,然后直接发给客户端,这样就可以提高速度、分担应用服务器的负载,可以让应用运行的更快。
- **网站安全** 反向代理服务器可以提高网站安全性,以及快速的发现和响应攻击,保证应用服务器处于被保护状态。
NGINX 软件是为用作反向代理服务器而专门设计的也包含了上述的多种功能。NGINX 使用事件驱动的方式处理请求这会比传统的服务器更加有效率。NGINX Plus 添加了更多高级的反向代理特性,比如应用的[健康度检查][4],以及专门的请求路由、高级缓冲和相关支持。
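下面是一个最简化的反向代理配置示意(仅为草图,其中的域名和上游地址 192.168.1.10 都是假设值):

```
# 反向代理配置示意(域名与上游地址为假设值)
server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://192.168.1.10:8080;      # 转发给内部网络中的应用服务器
        proxy_set_header Host $host;              # 保留原始请求的主机名
        proxy_set_header X-Real-IP $remote_addr;  # 把客户端真实 IP 传给应用服务器
    }
}
```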
![NGINX Worker Process helps increase application performance](https://www.nginx.com/wp-content/uploads/2015/10/Graph-11.png)
### Tip #2: 添加负载平衡 ###
添加一个[负载均衡服务器][5] 是一个相当简单的用来提高性能和网站安全性的的方法。与其将核心 Web 服务器变得越来越大和越来越强,不如使用负载均衡将流量分配到多个服务器。即使程序写的不好,或者在扩容方面有困难,仅是使用负载均衡服务器就可以很好的提高用户体验。
负载均衡服务器首先是一个反向代理服务器(参见 [Tip #1][6])——它接受来自互联网的流量,然后转发请求给另一个服务器。特别的是,负载均衡服务器支持两个或多个应用服务器,使用[分配算法][7]将请求转发给不同的服务器。最简单的负载均衡方法是轮转法round robin每个新的请求都会发给列表里的下一个服务器。其它的负载均衡方法包括将请求发给活动连接最少的服务器。NGINX Plus 拥有将特定用户的会话分配给同一个服务器的[能力][8]。
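作为示意,下面是一个使用“最少连接”算法的 NGINX 负载均衡配置草图(上游服务器地址均为假设值;去掉 least_conn 一行即为默认的轮转法):

```
# 负载均衡配置示意(服务器地址为假设值)
upstream backend {
    least_conn;             # 将请求发给活动连接最少的服务器
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;   # 流量被均衡分配给上面的一组服务器
    }
}
```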
负载均衡可以很好的提高性能是因为它可以避免某个服务器过载而另一些服务器却没有需要处理的流量。它也可以简单的扩展服务器规模,因为你可以添加多个价格相对便宜的服务器并且保证它们被充分利用了。
可以进行负载均衡的协议包括 HTTP、HTTPS、SPDY、HTTP/2、WebSocket、[FastCGI][9]、SCGI、uwsgi、 memcached 等,以及几种其它的应用类型,包括基于 TCP 的应用和其它的第4层协议的程序。分析你的 web 应用来决定你要使用哪些以及哪些地方性能不足。
相同的服务器或服务器群可以被用来进行负载均衡,也可以用来处理其它的任务,如 SSL 末端服务器,支持客户端的 HTTP/1.x 和 HTTP/2 请求,以及缓存静态文件。
NGINX 经常被用于进行负载均衡;要想了解更多的情况,可以下载我们的电子书[《选择软件负载均衡器的五个理由》][10]。你也可以从[《使用 NGINX 和 NGINX Plus 配置负载均衡,第一部分》][11]中了解基本的配置指导,在 NGINX Plus 管理员指南中有完整的 [NGINX 负载均衡][12]文档。我们的商业版本 [NGINX Plus][15] 支持更多优化的负载均衡特性,如基于服务器响应时间的负载路由,以及在 Microsoft NTLM 协议上的负载均衡。
### Tip #3: 缓存静态和动态的内容 ###
缓存可以通过加速内容的传输速度来提高 web 应用的性能。它可以采用以下几种策略:当需要的时候预处理要传输的内容,保存数据到速度更快的设备,把数据存储在距离客户端更近的位置,或者将这几种方法结合起来使用。
有两种不同类型的数据缓存:
- **静态内容缓存**。不经常变化的文件,比如图像(JPEG、PNG) 和代码(CSS,JavaScript),可以保存在外围服务器上,这样就可以快速的从内存和磁盘上提取。
- **动态内容缓存**。很多 web 应用会针对每次网页请求生成一个新的 HTML 页面。在短时间内简单的缓存生成的 HTML 内容,就可以很好的减少要生成的内容的数量,而且这些页面足够新,可以满足你的需要。
举个例子,如果一个页面每秒会被浏览 10 次,而你将它缓存 1 秒,那么 90% 请求的页面都会直接从缓存提取。如果你将静态内容分开缓存,那么甚至新生成的页面也可能大部分由缓存的内容构成。
下面是 web 应用所使用的三种主要的缓存技术:
- **缩短数据与用户的网络距离**。把一份内容的拷贝放的离用户更近的节点来减少传输时间。
- **提高内容服务器的速度**。内容可以保存在一个更快的服务器上来减少提取文件的时间。
- **从过载服务器上移走数据**。机器经常因为要完成某些其它的任务,而使某个任务的执行速度比基准测试结果要差。将数据缓存在不同的机器上,可以同时提高缓存资源和非缓存资源的性能,因为主机没有被过度使用。
对 web 应用的缓存机制可以在 web 应用服务器内部实现。首先,缓存动态内容是用来减少应用服务器加载动态内容的时间。其次,缓存静态内容(包括动态内容的临时拷贝)是为了更进一步的分担应用服务器的负载。而且缓存之后会从应用服务器转移到对用户而言更快、更近的机器,从而减少应用服务器的压力,减少提取数据和传输数据的时间。
改进过的缓存方案可以极大的提高应用的速度。对于大多数网页来说静态数据比如大图像文件构成了超过一半的内容。如果没有缓存那么这可能会花费几秒的时间来提取和传输这类数据但是采用了缓存之后不到1秒就可以完成。
举一个实际使用缓存的例子NGINX 和 NGINX Plus 使用了两条指令来[设置缓存机制][16]proxy_cache_path 和 proxy_cache。你可以指定缓存的位置和大小、文件在缓存中保存的最长时间和其它一些参数。使用第三条而且是相当受欢迎的一条指令 proxy_cache_use_stale如果提供新鲜内容的服务器忙碌或者挂掉了你甚至可以让缓存提供较旧的内容这样客户端就不会一无所得。从用户的角度来看这可以很好的提高你的网站或者应用的可用时间。
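把这三条指令放在一起,大致是下面这个样子(示意配置:缓存路径、大小和时间均为示意值,并假设已经按 Tip #2 定义了名为 backend 的上游服务器组):

```
# 内容缓存配置示意(路径与参数为示意值)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mycache:10m max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache mycache;
        proxy_cache_valid 200 1s;                      # 对应正文中“缓存 1 秒”的例子
        proxy_cache_use_stale error timeout updating;  # 上游出错或忙碌时提供较旧的内容
        proxy_pass http://backend;
    }
}
```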
NGINX Plus 有一些[高级缓存特性][17],包括对[缓存清除][18]的支持,以及在[仪表盘][19]上显示缓存状态信息。
要想获得更多关于 NGINX 的缓存机制的信息可以浏览 NGINX Plus 管理员指南中的 [参考文档][20] 和 [NGINX 内容缓存][21] 。
**注意**:缓存机制横跨了应用开发者、投资决策者以及实际的系统运维人员的职责范围。本文提到的一些复杂的缓存机制从 [DevOps 的角度][23]来看很有价值,即对集应用开发、架构设计以及运维操作的能力为一体的工程师来说,可以满足他们对站点功能性、响应时间、安全性和商业结果(如完成的交易数)等的需要。
### Tip #4: 压缩数据 ###
压缩是一个具有巨大潜力的性能加速方法。现在已经有一些针对照片JPEG 和 PNG、视频MPEG-4和音乐MP3等各类文件精心设计的高压缩率标准每一个标准都或多或少的减少了文件的大小。
文本数据——包括 HTML包含了纯文本和 HTML 标签、CSS 和代码(比如 Javascript——经常是未经压缩就传输的。压缩这类数据对应用程序性能的感观影响更为显著特别是对于处在慢速或受限的移动网络上的客户端。
这是因为文本数据经常是用户与网页交互的有效数据,而多媒体数据可能更多的是起提供支持或者装饰的作用。智能的内容压缩可以减少 HTML、Javascript、CSS 和其它文本内容对带宽的要求,通常可以减少 30% 甚至更多的带宽,并相应地缩短页面加载时间。
如果你使用 SSL压缩还可以减少需要进行 SSL 编码的数据量,从而补偿了压缩数据所占用的一部分 CPU 时间。
压缩文本数据的方法很多。举个例子,[HTTP/2][24] 中定义的一种新的文本压缩模式,就专门为压缩头部数据做了调整。另一个例子是可以在 NGINX 里打开 GZIP 压缩。如果你在你的服务里[预先压缩过文本数据][25],那么就可以直接使用 gzip_static 指令来提供压缩好的 .gz 版本。
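以 NGINX 为例,启用 GZIP 压缩和提供预先压缩好的文件大致如下(压缩级别和类型列表均为示意值gzip_static 需要编译了对应模块):

```
# 文本压缩配置示意(参数为示意值)
gzip on;
gzip_comp_level 5;                                       # 压缩级别,需在 CPU 开销和压缩率之间权衡
gzip_types text/plain text/css application/javascript;   # 只压缩文本类内容
gzip_static on;                                          # 若存在预先压缩的 .gz 文件则直接提供
```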
### Tip #5: 优化 SSL/TLS ###
安全套接字层([SSL][26])协议和它的下一代版本,即传输层安全TLS协议正在被越来越多的网站采用。SSL/TLS 对从原始服务器发往用户的数据进行加密提高了网站的安全性。推动这个趋势的部分原因是Google 把使用 SSL/TLS 作为搜索引擎排名中的一个正面因素。
尽管 SSL/TLS 越来越流行但是使用加密对速度的影响也让很多网站望而却步。SSL/TLS 之所以让网站变的更慢,原因有二:
1. 任何一个连接第一次连接时的握手过程都需要传递密钥。而采用 HTTP/1.x 协议的浏览器在建立多个连接时会对每个连接重复上述操作。
2. 数据在传输过程中需要不断的在服务器端加密、在客户端解密。
为了鼓励使用 SSL/TLSHTTP/2 和 SPDY在[下一章][27]会描述)的设计者让浏览器在一次会话中只使用一个连接。这会大大减少上述第一个原因所浪费的时间。然而现在可以用来提高应用程序使用 SSL/TLS 传输数据的性能的方法不止这些。
web 服务器有对应的机制优化 SSL/TLS 传输。举个例子NGINX 使用 [OpenSSL][28] 运行在普通的硬件上提供了接近专用硬件的传输性能。NGINX 的 [SSL 性能][29] 有详细的文档,而且把对 SSL/TLS 数据进行加解密的时间和 CPU 占用率降低了很多。
更进一步,参考这篇[文章][30]了解如何提高 SSL/TLS 性能的更多细节,可以总结为以下几点(随后给出一个综合的配置示意):
- **会话缓冲**。使用指令 [ssl_session_cache][31] 可以缓存每个新的 SSL/TLS 连接使用的参数。
- **会话票据或者 ID**。把 SSL/TLS 的信息保存在一个票据或者 ID 里,可以复用会话而不需要重新握手。
- **OCSP 装订OCSP stapling**。通过由服务器缓存 SSL/TLS 证书的状态信息来减少握手时间。
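把这几项优化合在一起,配置大致如下(示意:证书路径和超时时间均为假设值):

```
# SSL/TLS 优化配置示意(路径与参数为假设值)
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    ssl_session_cache   shared:SSL:10m;   # 会话缓存:多个工人进程共享的 10MB 缓存
    ssl_session_timeout 10m;
    ssl_session_tickets on;               # 会话票据:复用会话参数,避免重新握手
    ssl_stapling        on;               # OCSP 装订:由服务器缓存证书状态信息
    ssl_stapling_verify on;
}
```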
NGINX 和 NGINX Plus 可以被用作 SSL/TLS 服务端,用于处理客户端流量的加密和解密,而同时以明文方式和其它服务器进行通信。要设置 NGINX 和 NGINX Plus 作为 SSL/TLS 服务端,参看 [HTTPS 连接][32] 和[加密的 TCP 连接][33]。
### Tip #6: 使用 HTTP/2 或 SPDY ###
对于已经使用了 SSL/TLS 的站点HTTP/2 和 SPDY 可以很好的提高性能,因为每个连接只需要一次握手。而对于还没有使用 SSL/TLS 的站点来说HTTP/2 和 SPDY 可以抵消迁移到 SSL/TLS 原本会带来的性能损失,让这个迁移在响应速度上没有什么压力。
Google 在2012年开始把 SPDY 作为一个比 HTTP/1.x 更快速的协议来推荐。HTTP/2 是目前 IETF 通过的标准,是基于 SPDY 的。SPDY 已经被广泛的支持了,但是很快就会被 HTTP/2 替代。
SPDY 和 HTTP/2 的关键是用单一连接来替代多路连接。单个连接是被复用的,所以它可以同时携带多个请求和响应的分片。
通过使用单一连接,这些协议可以避免像实现了 HTTP/1.x 的浏览器那样建立和管理多个连接。单一连接对 SSL 特别有效,因为它可以最小化 SSL/TLS 建立安全连接时的握手时间。
SPDY 协议需要使用 SSL/TLS而 HTTP/2 官方标准并不需要,但是目前所有支持 HTTP/2 的浏览器只有在启用了 SSL/TLS 的情况下才能使用它。这就意味着支持 HTTP/2 的浏览器只有在网站使用了 SSL 并且服务器接收 HTTP/2 流量的情况下才会启用 HTTP/2。否则的话浏览器就会使用 HTTP/1.x 协议。
当你实现 SPDY 或者 HTTP/2 时,你不再需要那些常规的 HTTP 性能优化方案,比如按域分割、资源聚合,以及图像拼合。这些改变可以让你的代码和部署变得更简单和更易于管理。要了解 HTTP/2 带来的这些变化可以浏览我们的[白皮书][34]。
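以 NGINX1.9.5 及之后的版本)为例,启用 HTTP/2 大致只需要在监听指令上加一个参数(示意,证书路径为假设值):

```
# 启用 HTTP/2 的监听配置示意
server {
    listen 443 ssl http2;      # 在启用了 SSL/TLS 的端口上同时启用 HTTP/2
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
}
```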
![NGINX Supports SPDY and HTTP/2 for increased web application performance](https://www.nginx.com/wp-content/uploads/2015/10/http2-27.png)
作为支持这些协议的一个样例NGINX 从一开始就支持了 SPDY而且[大部分使用 SPDY 协议的网站][35]都运行的是 NGINX。NGINX 同时也[很早][36]就对 HTTP/2 提供了支持:从 2015 年 9 月开始,开源版 NGINX 和 NGINX Plus 就[支持][37]它了。
随着时间的推移,我们 NGINX 希望更多的站点完全启用 SSL 并且向 HTTP/2 迁移。这将会提高安全性;同时,随着新的优化手段不断被发现和实现,简化后的代码也会表现得更加优异。
### Tip #7: 升级软件版本 ###
一个提高应用性能的简单办法是根据软件的稳定性和性能评价来选择你的软件栈。进一步说,因为高性能组件的开发者更愿意追求更高的性能和解决 bug所以值得使用最新版本的软件。新版本往往更受开发者和用户社区的关注更新的版本也往往会利用到新的编译器优化包括对新硬件的调优。
稳定的新版本通常比旧版本具有更好的兼容性和更高的性能。一直进行软件更新,可以非常简单地让软件保持最佳的优化状态,解决掉 bug并提高安全性。
一直使用旧版软件也会让你无法利用新的特性。比如上面说到的 HTTP/2目前要求 OpenSSL 1.0.1;从 2016 年中期开始,它将会要求 2015 年 1 月才发布的 OpenSSL 1.0.2。
NGINX 用户可以开始迁移到 [NGINX 最新的开源软件][38] 或者 [NGINX Plus][39];它们都包含了最新的能力,如 socket 分割和线程池(见下文),这些都已经为性能做过优化了。然后,好好看看你的软件栈,把它们升级到你能升级到的最新版本吧。
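动手升级之前,可以先确认一下当前软件栈的版本,比如(输出因系统而异):

```
# nginx -v            # 查看 NGINX 版本
# openssl version     # 查看 OpenSSL 版本
# uname -r            # 查看内核版本
```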
### Tip #8: Linux 系统性能调优 ###
Linux 是大多数 web 服务器使用的操作系统而且作为你的架构的基础Linux 显然有不少提高性能的可能。默认情况下,很多 Linux 系统都被设置为使用很少的资源,以符合典型的桌面应用使用。这就意味着 web 应用需要一些微调才能达到最大效能。
这里的 Linux 优化是专门针对 web 服务器方面的。以 NGINX 为例,这里有一些在加速 Linux 时需要强调的变化:
- **缓冲队列**。如果你有挂起的连接,那么你应该考虑增加 net.core.somaxconn 的值,它代表了可以排队等待的连接的最大数量。如果连接限制太小,那么你会看到错误信息,可以逐渐增加这个参数,直到错误信息不再出现。
- **文件描述符**。NGINX 对一个连接最多使用 2 个文件描述符。如果你的系统要服务很多连接,你可能就需要提高 fs.file-max 这个对文件描述符数量的系统级限制,以支持不断增加的负载。
- **临时端口**。当用作代理时NGINX 会为每个上游服务器创建临时端口。你可以设置 net.ipv4.ip_local_port_range 来扩大这些端口的可用范围。你也可以通过 net.ipv4.tcp_fin_timeout 来缩短非活动端口被复用前的超时时间,从而加快流量的周转。(下面给出用 sysctl 调整这些参数的示意。)
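这些内核参数都可以用 sysctl 来调整,大致如下(数值仅为示意,应根据实际负载逐步测试确定):

```
# 内核参数调整示意(数值为示意值)
# sysctl -w net.core.somaxconn=4096                    # 增大连接缓冲队列
# sysctl -w fs.file-max=2097152                        # 提高系统级文件描述符上限
# sysctl -w net.ipv4.ip_local_port_range="1024 65000"  # 扩大临时端口范围
# sysctl -w net.ipv4.tcp_fin_timeout=15                # 缩短非活动端口的超时时间
# 将相应条目写入 /etc/sysctl.conf 再执行 sysctl -p 可以持久化这些设置
```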
对于 NGINX 来说,可以查阅 [NGINX 性能调优指南][40]来学习如何优化你的 Linux 系统,这样它就可以很好的适应大规模网络流量而不会超过工作极限。
### Tip #9: web 服务器性能调优 ###
无论你是用哪种 web 服务器,你都需要对它进行优化来提高性能。下面的推荐手段可以用于任何 web 服务器,但是一些设置是针对 NGINX 的。关键的优化手段包括:
- **访问日志**。不要把每个请求的日志都直接写到磁盘,你可以在内存中将日志缓冲起来,然后批量写入磁盘。对于 NGINX 来说,给指令 **access_log** 添加参数 **buffer=size** 可以让系统在缓冲区满了的情况下才把日志写到磁盘。如果你添加了参数 **flush=time**,那么缓冲区的内容也会每隔一段时间写入磁盘。
- **缓冲**。缓冲会在内存中存放部分响应,直到装满,这可以让与客户端的通信更加高效。内存放不下的响应会写回磁盘,而这会降低效能。当 NGINX [启用][42]了缓冲机制后,你可以使用指令 **proxy_buffer_size****proxy_buffers** 来管理缓冲区。
- **客户端保活**。保活连接可以减少开销,特别是使用 SSL/TLS 时。对于 NGINX 来说,你可以从 **keepalive_requests** 的默认值 100 开始增加最大连接数,这样一个客户端就可以在一个指定的连接上请求多次,而且你也可以通过增加 **keepalive_timeout** 的值来允许保活连接存活更长时间,这样就可以让后来的请求处理的更快速。
- **上游保活**。上游的连接——即连接到应用服务器、数据库服务器等机器的连接——同样也会受益于连接保活。对于上游连接来说,你可以增加 **keepalive**,即每个工人进程的空闲保活连接个数。这就可以提高连接的复用次数,减少需要重新打开全新连接的次数。更多关于保活连接的信息可以参见[这篇“ HTTP 保活连接和性能”][41]。
- **限制**。限制客户端使用的资源可以提高性能和安全性。对于 NGINX 来说,指令 **limit_conn****limit_conn_zone** 限制了给定来源的连接数量,而 **limit_rate** 限制了带宽。这些限制都可以阻止合法用户*扒取*资源,同时也避免了攻击。指令 **limit_req****limit_req_zone** 限制了客户端请求。对于上游服务器来说,可以在 upstream 配置块里的 server 指令上使用 max_conns 参数来限制连接到上游服务器的连接数,这样可以避免服务器过载。关联的 queue 指令会创建一个队列,在连接数抵达 **max_conns** 限制时,将指定数量的请求保存指定长度的时间。
- **工人进程**。工人进程负责处理请求。NGINX 采用事件驱动模型和操作系统特定的机制来有效地将请求分发给不同的工人进程。这条建议推荐设置 **worker_processes** 为每个 CPU 一个。worker_connections 的默认值是 512大部分系统可以根据需要增大这个值并通过实验找到最适合你的系统的值。
- **套接字分割**。通常,一个套接字监听器会把新连接分配给所有工人进程。套接字分割则会为每个工人进程创建一个套接字监听器,这样一来,当某个套接字监听器可用时,内核就会将连接分配给它。这可以减少锁竞争,并且提高多核系统的性能。要启用[套接字分割][43],需要在 **listen** 指令里面加上 **reuseport** 参数。
- **线程池**。计算机进程可能被一个单一的缓慢的操作所拖住。对于 web 服务器软件来说,磁盘访问会拖累很多更快的操作,比如计算或者在内存中拷贝。使用了线程池之后,慢操作可以分配到不同的任务集,而主进程可以一直运行快速操作。当磁盘操作完成后,结果会返回给主进程的循环。在 NGINX 里有两个操作——read() 系统调用和 sendfile()——被分配到了[线程池][44]。(图后给出一个将这些设置合在一起的配置示意。)
![Thread pools help increase application performance by assigning a slow operation to a separate set of tasks](https://www.nginx.com/wp-content/uploads/2015/10/Graph-17.png)
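把上面几条建议对应到 NGINX 配置里,大致如下(示意:所有数值都应结合自己系统的测试来调整,上游地址为假设值):

```
# web 服务器调优配置示意(数值为示意值)
worker_processes auto;                  # 每个 CPU 一个工人进程

events {
    worker_connections 4096;            # 从默认的 512 按需增大
}

http {
    access_log /var/log/nginx/access.log combined buffer=64k flush=5s;  # 缓冲访问日志
    keepalive_requests 1000;            # 客户端保活:单个连接上允许的请求次数
    keepalive_timeout  75s;
    aio threads;                        # 线程池(需编译时启用线程支持)

    upstream backend {
        server 10.0.0.1:8080 max_conns=500;   # 限制到上游服务器的连接数
        keepalive 32;                         # 上游保活连接
    }

    server {
        listen 80 reuseport;            # 套接字分割
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;           # 上游保活需要 HTTP/1.1
            proxy_set_header Connection "";
        }
    }
}
```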
**技巧**。当改变任何操作系统或支持服务的设置时,一次只改变一个参数然后测试性能。如果修改引起问题了,或者不能让你的系统更快,那么就改回去。
在[文章“调优 NGINX 性能”][45]里可以看到更详细的 NGINX 调优方法。
### Tip #10: 监视系统活动来解决问题和瓶颈 ###
在应用开发中,要使得系统变得非常高效,关键是监视你的系统在现实世界中运行的性能。你必须能够通过特定的设备和你的 web 基础设施上的监控工具来观察程序的活动。
监视活动大多是被动的——它只会告诉你发生了什么,而把发现问题和最终解决问题的工作留给你。
监视可以发现几种不同的问题。它们包括:
- 服务器宕机。
- 服务器不稳定,时常断开连接。
- 服务器出现大量的缓存未命中。
- 服务器没有发送正确的内容。
应用的总体性能监控工具,比如 New Relic 和 Dynatrace可以帮助你监控到从远程加载网页的时间而 NGINX 可以帮助你监控到应用交付端。当你需要考虑为基础设施添加容量以满足流量需求时,应用性能数据可以告诉你你的优化措施的确起作用了。
为了帮助开发者快速的发现、解决问题NGINX Plus 增加了[应用感知的健康度检查][46]——对重复出现的常规事件进行综合分析并在问题出现时向你发出警告。NGINX Plus 同时提供[会话排空session draining][47]功能,可以在已有任务完成之前阻止新的连接进入;另一个功能是慢启动,允许一个从错误中恢复过来的服务器在负载均衡服务器群中逐步追上来。使用得当时,健康度检查可以让你在问题变得严重到影响用户体验之前就发现它,而会话排空和慢启动可以让你替换服务器,并且这个过程不会对性能和正常运行时间产生负面影响。下图展示了内建的 NGINX Plus 模块[实时活动监视][48]的仪表盘包括了服务器群、TCP 连接和缓存信息等 Web 架构信息。
![Use real-time application performance monitoring tools to identify and resolve issues quickly](https://www.nginx.com/wp-content/uploads/2015/10/Screen-Shot-2015-10-05-at-4.16.32-PM.png)
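对于开源版 NGINX可以先用 stub_status 模块暴露一组基础计数器,作为监控的起点(示意:监听端口为假设值,访问控制应按需收紧;需要编译了 ngx_http_stub_status_module 模块):

```
# 基础状态监控配置示意
server {
    listen 8080;
    location /nginx_status {
        stub_status;        # 输出活动连接数、已处理请求数等基础指标
        allow 127.0.0.1;    # 仅允许本机访问
        deny all;
    }
}
```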
### 总结: 看看10倍性能提升的效果 ###
这些性能提升方案对任何一个 web 应用都可用并且效果都很好,而实际效果取决于你的预算、你能投入的时间以及目前实现方案中的不足。所以,你该如何对你自己的应用实现 10 倍性能提升呢?
为了指导你了解每种优化手段的潜在影响,这里是上面详述的每个优化方法的关键点,虽然你的情况肯定大不相同:
- **反向代理服务器和负载均衡**。没有负载均衡或者负载均衡很差都会造成间歇性的性能低谷。增加一个反向代理,比如 NGINX可以避免 web 应用程序在内存和磁盘之间抖动;负载均衡可以将过载服务器的任务转移到空闲的服务器,还可以轻松地进行扩容。这些改变都可以产生巨大的性能提升,很容易就可以比你现在的实现方案的最差性能提高 10 倍,对于总体性能来说可能提高的不多,但是也有实质性的提升。
- **缓存动态和静态数据**。如果你有一个负担过重的 web 服务器,那么毫无疑问你的应用服务器也是如此。仅通过缓存动态数据就可以在峰值时间提高 10 倍的性能,缓存静态文件还可以再提高几倍的性能。
- **压缩数据**。使用照片JPEG、图形PNG、视频MPEG-4和音乐MP3等媒体文件的压缩格式可以极大的提高性能。在这些都用上之后再压缩文本数据可以将初始页面加载速度提高两倍。
- **优化 SSL/TLS**。安全握手会对性能产生巨大的影响对它们的优化可能会对初始响应产生2倍的提升特别是对于大量文本的站点。优化 SSL/TLS 下媒体文件只会产生很小的性能提升。
- **使用 HTTP/2 或 SPDY**。在已经使用了 SSL/TLS 的站点上,这些协议很可能可以提高整个站点的性能。
- **对 Linux 和 web 服务器软件进行调优**。比如优化缓存机制,使用保活连接,分配时间敏感型任务到不同的线程池可以明显的提高性能;举个例子,线程池可以加速对磁盘敏感的任务[近一个数量级][49]。
我们希望你亲自尝试这些技术,也希望知道你所取得的各种性能提升案例。请在下面的评论栏分享你的结果,或者用标签 #NGINX 和 #webperf 发推分享你的故事。
### 网上资源 ###
[Statista.com Share of the internet economy in the gross domestic product in G-20 countries in 2016][50]
[Load Impact How Bad Performance Impacts Ecommerce Sales][51]
[Kissmetrics How Loading Time Affects Your Bottom Line (infographic)][52]
[Econsultancy Site speed: case studies, tips and tools for improving your conversion rate][53]
--------------------------------------------------------------------------------
via: https://www.nginx.com/blog/10-tips-for-10x-application-performance/
作者:[Floyd Smith][a]
译者:[Ezio](https://github.com/oska874)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.nginx.com/blog/author/floyd/
[1]:https://www.nginx.com/resources/glossary/reverse-proxy-server
[2]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip2
[3]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip3
[4]:https://www.nginx.com/products/application-health-checks/
[5]:https://www.nginx.com/solutions/load-balancing/
[6]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip1
[7]:https://www.nginx.com/resources/admin-guide/load-balancer/
[8]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
[9]:https://www.digitalocean.com/community/tutorials/understanding-and-implementing-fastcgi-proxying-in-nginx
[10]:https://www.nginx.com/resources/library/five-reasons-choose-software-load-balancer/
[11]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
[12]:https://www.nginx.com/resources/admin-guide/load-balancer//
[15]:https://www.nginx.com/products/
[16]:https://www.nginx.com/blog/nginx-caching-guide/
[17]:https://www.nginx.com/products/content-caching-nginx-plus/
[18]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&_ga=1.95342300.1348073562.1438712874#proxy_cache_purge
[19]:https://www.nginx.com/products/live-activity-monitoring/
[20]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&&&_ga=1.61156076.1348073562.1438712874#proxy_cache
[21]:https://www.nginx.com/resources/admin-guide/content-caching
[22]:https://www.nginx.com/blog/network-vs-devops-how-to-manage-your-control-issues/
[23]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6
[24]:https://www.nginx.com/resources/admin-guide/compression-and-decompression/
[25]:http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html
[26]:https://www.digicert.com/ssl.htm
[27]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6
[28]:http://openssl.org/
[29]:https://www.nginx.com/blog/nginx-ssl-performance/
[30]:https://www.nginx.com/blog/improve-seo-https-nginx/
[31]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache
[32]:https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/
[33]:https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-termination/
[34]:https://www.nginx.com/resources/datasheet/datasheet-nginx-http2-whitepaper/
[35]:http://w3techs.com/blog/entry/25_percent_of_the_web_runs_nginx_including_46_6_percent_of_the_top_10000_sites
[36]:https://www.nginx.com/blog/how-nginx-plans-to-support-http2/
[37]:https://www.nginx.com/blog/nginx-plus-r7-released/
[38]:http://nginx.org/en/download.html
[39]:https://www.nginx.com/products/
[40]:https://www.nginx.com/blog/tuning-nginx/
[41]:https://www.nginx.com/blog/http-keepalives-and-web-performance/
[42]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
[43]:https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/
[44]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
[45]:https://www.nginx.com/blog/tuning-nginx/
[46]:https://www.nginx.com/products/application-health-checks/
[47]:https://www.nginx.com/products/session-persistence/#session-draining
[48]:https://www.nginx.com/products/live-activity-monitoring/
[49]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
[50]:http://www.statista.com/statistics/250703/forecast-of-internet-economy-as-percentage-of-gdp-in-g-20-countries/
[51]:http://blog.loadimpact.com/blog/how-bad-performance-impacts-ecommerce-sales-part-i/
[52]:https://blog.kissmetrics.com/loading-time/?wide=1
[53]:https://econsultancy.com/blog/10936-site-speed-case-studies-tips-and-tools-for-improving-your-conversion-rate/

View File

@ -1,39 +1,38 @@
如何在Debian中配置Tripewire IDS
如何在 Debian 中配置 Tripwire IDS
================================================================================
本文是一篇关于Debian中安装和配置Tripewire的文章。它是Linux环境下基于主机的入侵检测系统IDS。tripwire的高级功能有检测并报告任何Linux中未授权的更改文件和目录。tripewire安装之后会先创建一个基本的数据库tripewire监控并检测新文件的创建修改和谁修改了它等等。如果修改是合法的你可以接受修改并更新tripwire的数据库。
本文是一篇关于在 Debian 中安装和配置 tripwire 的文章。它是 Linux 环境下基于主机的入侵检测系统IDS。tripwire 的高级功能有检测并报告 Linux 中任何未授权的(文件和目录)更改。tripwire 安装之后会先创建一个基本的数据库tripwire 会监控并检测新文件的创建、文件的修改和修改者等等。如果修改是合法的,你可以接受修改并更新 tripwire 的数据库。
### 安装和配置 ###
tripwire在Debian VM中的安装如下。
tripwire Debian VM 中的安装如下。
# apt-get install tripwire
![installation](http://blog.linoxide.com/wp-content/uploads/2015/11/installation.png)
安装中tripwire会有下面的配置提示。
安装中tripwire 会有下面的配置提示。
#### 站点密钥创建 ####
tripwire需要一个站点口令来加密tripwire的配置文件tw.cfg和策略文件tw.pol。tripewire使用指定的密码加密两个文件。一个tripewire实例必须指定站点口令。
tripwire 需要一个站点口令site passphrase来加密 tripwire 的配置文件 tw.cfg 和策略文件 tw.pol。tripwire 会使用指定的口令加密这两个文件。每一个 tripwire 实例都必须指定站点口令。
![site key1](http://blog.linoxide.com/wp-content/uploads/2015/11/site-key1.png)
#### 本地密钥口令 ####
本地口令用来保护tripwire数据库和报告文件。本地密钥用于阻止非授权的tripewire数据库修改。
本地口令用来保护 tripwire 数据库和报告文件。本地密钥用于阻止对 tripwire 数据库的非授权修改。
![local key1](http://blog.linoxide.com/wp-content/uploads/2015/11/local-key1.png)
#### Tripwire配置路径 ####
#### tripwire 配置路径 ####
tripewire配置存储在/etc/tripwire/twcfg.txt。它用于生成加密的配置文件tw.cfg。
tripwire 的配置存储在 /etc/tripwire/twcfg.txt 中。它用于生成加密的配置文件 tw.cfg。
![configuration file](http://blog.linoxide.com/wp-content/uploads/2015/11/configuration-file.png)
**Tripwire策略路径**
**tripwire 策略路径**
tripwire在/etc/tripwire/twpol.txt中保存策略文件。它用于生成加密的策略文件tw.pol。
tripwire /etc/tripwire/twpol.txt 中保存策略文件。它用于生成加密的策略文件 tw.pol。
![tripwire policy](http://blog.linoxide.com/wp-content/uploads/2015/11/tripwire-policy.png)
@ -41,9 +40,9 @@ tripwire在/etc/tripwire/twpol.txt中保存策略文件。它用于生成加密
![installed tripewire1](http://blog.linoxide.com/wp-content/uploads/2015/11/installed-tripewire1.png)
#### Tripwire配置文件 (twcfg.txt) ####
#### tripwire 配置文件 (twcfg.txt) ####
tripewire配置文件twcfg.txt细节如下图所示。加密策略文件tw.pol,站点密钥site.key和本地密钥hostname-local.key如下所示。
tripwire 配置文件twcfg.txt的细节如下图所示。加密策略文件tw.pol、站点密钥site.key和本地密钥hostname-local.key在后面展示。
ROOT =/usr/sbin
@ -79,9 +78,9 @@ tripewire配置文件twcfg.txt细节如下图所示。加密策略文件
TEMPDIRECTORY =/tmp
#### Tripwire策略配置 ####
#### tripwire 策略配置 ####
在生成基础数据库之前先配置tripwire配置。有必要经用一些策略如/dev、 /proc 、/root/mail等。详细的twpol.txt策略文件如下所示。
在生成基础数据库之前,先进行 tripwire 的配置。有必要禁用一些策略,如 /dev、/proc、/root/mail 等。详细的 twpol.txt 策略文件如下所示。
@@section GLOBAL
TWBIN = /usr/sbin;
@ -121,10 +120,10 @@ tripewire配置文件twcfg.txt细节如下图所示。加密策略文件
# vulnerability
#
# Tripwire Binaries
# tripwire Binaries
#
(
rulename = "Tripwire Binaries",
rulename = "tripwire Binaries",
severity = $(SIG_HI)
)
{
@ -237,9 +236,9 @@ tripewire配置文件twcfg.txt细节如下图所示。加密策略文件
#/proc -> $(Device) ;
}
#### Tripwire 报告 ####
#### tripwire 报告 ####
**tripwire check** 命令检查twpol.txt文件并基于此文件生成tripwire报告如下。如果twpol.txt中有任何错误tripwire不会生成报告。
**tripwire-check** 命令检查 twpol.txt 文件并基于此文件生成 tripwire 报告如下。如果 twpol.txt 中有任何错误tripwire 不会生成报告。
![tripwire report](http://blog.linoxide.com/wp-content/uploads/2015/11/tripwire-report.png)
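作为参考,从生成加密策略文件、初始化基础数据库到执行检测,大致流程如下(示意,各命令会提示输入站点或本地口令):

```
# twadmin --create-polfile /etc/tripwire/twpol.txt   # 由明文策略生成加密的 tw.pol
# tripwire --init                                    # 创建基础数据库
# tripwire --check                                   # 与基础数据库对比,生成检测报告
# tripwire --update-policy /etc/tripwire/twpol.txt   # 修改策略后更新数据库
```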
@ -255,7 +254,7 @@ tripewire配置文件twcfg.txt细节如下图所示。加密策略文件
Wrote report file: /var/lib/tripwire/report/VMdebian-20151024-122322.twr
Open Source Tripwire(R) 2.4.2.2 Integrity Check Report
Open Source tripwire(R) 2.4.2.2 Integrity Check Report
Report generated by: root
@ -299,13 +298,13 @@ tripewire配置文件twcfg.txt细节如下图所示。加密策略文件
Other binaries 66 0 0 0
Tripwire Binaries 100 0 0 0
tripwire Binaries 100 0 0 0
Other libraries 66 0 0 0
Root file-system executables 100 0 0 0
Tripwire Data Files 100 0 0 0
tripwire Data Files 100 0 0 0
System boot changes 100 0 0 0
@ -351,9 +350,9 @@ tripewire配置文件twcfg.txt细节如下图所示。加密策略文件
*** End of report ***
Open Source Tripwire 2.4 Portions copyright 2000 Tripwire, Inc. Tripwire is a registered
Open Source tripwire 2.4 Portions copyright 2000 tripwire, Inc. tripwire is a registered
trademark of Tripwire, Inc. This software comes with ABSOLUTELY NO WARRANTY;
trademark of tripwire, Inc. This software comes with ABSOLUTELY NO WARRANTY;
for details use --version. This is free software which may be redistributed
@ -365,7 +364,7 @@ tripewire配置文件twcfg.txt细节如下图所示。加密策略文件
### 总结 ###
本篇中我们学习安装配置开源入侵检测软件tripwire。首先生成基础数据库并通过比较检测出任何改动文件/文件夹。然而tripwire并不是实时监测的IDS。
本篇中,我们学习了开源入侵检测软件 tripwire 的安装和配置首先生成基础数据库然后通过与之对比来检测出任何改动文件/文件夹。然而tripwire 并不是实时监测的 IDS。
--------------------------------------------------------------------------------
@ -373,7 +372,7 @@ via: http://linoxide.com/security/configure-tripwire-ids-debian/
作者:[nido][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,317 @@
如何在 linux 上配置持续集成服务 - Drone
==============================================================
如果你对一次又一次的克隆、构建、测试和部署代码感到厌倦了,可以考虑一下持续集成。持续集成简称 CI是一种对我们频繁提交到代码库中的代码进行构建、测试和部署的软件工程实践。CI 可以帮助我们快速的集成新代码到已有的代码库。如果这个过程是自动化进行的,那么就会提高开发的速度,因为这可以减少开发人员手工构建和测试的时间。[Drone][1] 是一个自由开源项目,用来提供一个非常棒的持续集成服务的环境,采用 Apache 2.0 协议发布。它已经集成了很多代码库提供商,比如 Github、Bitbucket 以及 Google Code它可以从代码库提取代码对包括 PHP、Node、Ruby、Go、Dart、Python、C/C++、Java 等等在内的各种语言编译构建。它是一个如此强大的平台,它使用了容器和 docker 技术,这让用户每次构建都可以在保证隔离的条件下完全控制他们自己的构建环境。
### 1. 安装 Docker ###
首先,我们要安装 docker因为这是 Drone 的工作流的最关键的元素。Drone 合理的利用了 docker 来构建和测试应用。容器技术提高了应用部署的效率。要安装 docker ,我们需要在不同的 linux 发行版本运行下面对应的命令,我们这里会说明 Ubuntu 14.04 和 CentOS 7 两个版本。
#### Ubuntu ####
要在 Ubuntu 上安装 Docker ,我们只需要运行下面的命令。
# apt-get update
# apt-get install docker.io
安装之后我们需要使用`service` 命令重启 docker 引擎。
# service docker restart
然后我们让 docker 在系统启动时自动启动。
# update-rc.d docker defaults
Adding system startup for /etc/init.d/docker ...
/etc/rc0.d/K20docker -> ../init.d/docker
/etc/rc1.d/K20docker -> ../init.d/docker
/etc/rc6.d/K20docker -> ../init.d/docker
/etc/rc2.d/S20docker -> ../init.d/docker
/etc/rc3.d/S20docker -> ../init.d/docker
/etc/rc4.d/S20docker -> ../init.d/docker
/etc/rc5.d/S20docker -> ../init.d/docker
#### CentOS ####
首先,我们要更新机器上已经安装的软件包。我们可以使用下面的命令。
# sudo yum update
要在 centos 上安装 docker我们可以简单的运行下面的命令。
# curl -sSL https://get.docker.com/ | sh
安装好 docker 引擎之后,我们只需要简单使用下面的`systemd` 命令启动 docker因为 centos 7 的默认初始化系统是 systemd。
# systemctl start docker
然后我们要让 docker 在系统启动时自动启动。
# systemctl enable docker
ln -s '/usr/lib/systemd/system/docker.service' '/etc/systemd/system/multi-user.target.wants/docker.service'
### 2. 安装 SQLite 驱动 ###
Drone 默认使用 SQLite3 数据库服务器来保存数据和信息。它会在 /var/lib/drone/ 下自动创建名为 drone.sqlite 的数据库来处理数据库模式的创建和迁移。要安装 SQLite3 我们要完成以下几步。
#### Ubuntu 14.04 ####
因为 SQLite3 存在于 Ubuntu 14.04 的默认软件库中,我们只需要简单的使用 apt 命令安装它。
# apt-get install libsqlite3-dev
#### CentOS 7 ####
要在 Centos 7 上安装需要使用下面的 yum 命令。
# yum install sqlite-devel
### 3. 安装 Drone ###
我们已经安装好了依赖的软件,现在离安装 Drone 更进一步了。在这一步里,我们只需简单的从官方链接下载对应的二进制软件包,然后使用默认软件包管理器安装 Drone。
#### Ubuntu ####
我们将使用 wget 从官方的 [Debian 文件下载链接][2]下载 drone 的 debian 软件包。下面就是下载命令。
# wget downloads.drone.io/master/drone.deb
Resolving downloads.drone.io (downloads.drone.io)... 54.231.48.98
Connecting to downloads.drone.io (downloads.drone.io)|54.231.48.98|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7722384 (7.4M) [application/x-debian-package]
Saving to: 'drone.deb'
100%[======================================>] 7,722,384 1.38MB/s in 17s
2015-11-06 14:09:28 (456 KB/s) - 'drone.deb' saved [7722384/7722384]
下载好之后,我们将使用 dpkg 软件包管理器安装它。
# dpkg -i drone.deb
Selecting previously unselected package drone.
(Reading database ... 28077 files and directories currently installed.)
Preparing to unpack drone.deb ...
Unpacking drone (0.3.0-alpha-1442513246) ...
Setting up drone (0.3.0-alpha-1442513246) ...
Your system ubuntu 14: using upstart to control Drone
drone start/running, process 9512
#### CentOS ####
在 CentOS 机器上我们要使用 wget 命令从[下载链接][3]下载 RPM 包。
# wget downloads.drone.io/master/drone.rpm
--2015-11-06 11:06:45-- http://downloads.drone.io/master/drone.rpm
Resolving downloads.drone.io (downloads.drone.io)... 54.231.114.18
Connecting to downloads.drone.io (downloads.drone.io)|54.231.114.18|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7763311 (7.4M) [application/x-redhat-package-manager]
Saving to: drone.rpm
100%[======================================>] 7,763,311 1.18MB/s in 20s
2015-11-06 11:07:06 (374 KB/s) - drone.rpm saved [7763311/7763311]
然后我们使用 yum 安装 rpm 包。
# yum localinstall drone.rpm
### 4. 配置端口 ###
安装完成之后,我们要先进行配置才能让它工作起来。drone 的配置文件在 **/etc/drone/drone.toml**。默认情况下drone 的 web 界面使用的是 80 端口,而这也是 http 默认的端口,如果我们想修改它,可以按下面所示修改配置文件里 server 块对应的值。
[server]
port=":80"
### 5. 集成 Github ###
为了运行 Drone我们必须设置至少一个和 GitHub、GitHub 企业版、Gitlab、Gogs、Bitbucket 关联的集成点。在本文里我们只集成了 github但是如果我们要集成其他的服务可以对配置文件做相应的修改。为了集成 github我们需要在 github 的设置里创建一个新的应用https://github.com/settings/developers。
![Registering App Github](http://blog.linoxide.com/wp-content/uploads/2015/11/registering-app-github.png)
要创建一个应用,我们需要在 `New Application` 页面点击 `Register`,然后如下所示填表。
![Registering OAuth app github](http://blog.linoxide.com/wp-content/uploads/2015/11/registering-OAuth-app-github.png)
我们应该保证在应用的配置项里设置了**授权回调链接**,链接看起来类似 `http://drone.linoxide.com/api/auth/github.com`。然后我们点击注册应用。所有都做好之后我们会看到我们需要在我们的 Drone 配置文件里配置的客户端 ID 和客户端密钥。
![Client ID and Secret Token](http://blog.linoxide.com/wp-content/uploads/2015/11/client-id-secret-token.png)
在这些都完成之后我们需要使用文本编辑器编辑 drone 配置文件,比如使用下面的命令。
# nano /etc/drone/drone.toml
然后我们会在 drone 的配置文件里面找到`[github]` 部分,紧接着的是下面所示的配置内容
[github]
client="3dd44b969709c518603c"
secret="4ee261abdb431bdc5e96b19cc3c498403853632a"
# orgs=[]
# open=false
![Configuring Github Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-github-drone-e1446835124465.png)
### 6. 配置 SMTP 服务器 ###
如果我们想让 drone 使用 email 发送通知,那么我们需要在 SMTP 配置里面设置我们的 SMTP 服务器。如果我们已经有了一个 SMTP 服务,那就只需要简单的使用它的配置文件就行了,但是因为我们没有一个 SMTP 服务器,我们需要安装一个 MTA 比如 Postfix然后在 drone 配置文件里配置好 SMTP。
#### Ubuntu ####
在 ubuntu 里使用下面的 apt 命令安装 postfix。
# apt-get install postfix
#### CentOS ####
在 CentOS 里使用下面的 yum 命令安装 postfix。
# yum install postfix
安装好之后,我们需要编辑我们的 postfix 配置文件。
# nano /etc/postfix/main.cf
然后我们要把 myhostname 的值替换为我们自己的 FQDN比如 drone.linoxide.com。
myhostname = drone.linoxide.com
现在开始配置 drone 配置文件里的 SMTP 部分。
# nano /etc/drone/drone.toml
找到`[smtp]` 部分补充上下面的内容。
[smtp]
host = "drone.linoxide.com"
port = "587"
from = "root@drone.linoxide.com"
user = "root"
pass = "password"
![Configuring SMTP Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-smtp-drone.png)
注意:这里的 **user****pass** 参数强烈推荐一定要改成某个具体用户的配置。
### 7. 配置 Worker ###
如我们所知的 drone 利用了 docker 完成构建、测试任务,我们需要把 docker 配置为 drone 的 worker。要完成这些需要修改 drone 配置文件里的`[worker]` 部分。
# nano /etc/drone/drone.toml
然后取消底下几行的注释并且补充上下面的内容。
[worker]
nodes=[
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock"
]
这里我们只设置了两个节点这意味着上面的配置文件只能同时执行2 个构建操作。要提高并发性可以增大节点的值。
[worker]
nodes=[
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock"
]
使用上面的配置文件 drone 被配置为使用本地的 docker 守护程序可以同时构建4个任务。
### 8. 重启 Drone ###
最后,当所有的安装和配置都准备好之后,我们现在要在本地的 linux 机器上启动 drone 服务器。
#### Ubuntu ####
因为 ubuntu 14.04 使用了 sysvinit 作为默认的初始化系统,所以只需要简单执行下面的 service 命令就可以启动 drone 了。
# service drone restart
要让 drone 在系统启动时也自动运行,需要运行下面的命令。
# update-rc.d drone defaults
#### CentOS ####
因为 CentOS 7使用 systemd 作为初始化系统,所以只需要运行下面的 systemd 命令就可以重启 drone。
# systemctl restart drone
要让 drone 自动运行只需要运行下面的命令。
# systemctl enable drone
### 9. 添加防火墙例外规则 ###
众所周知drone 默认使用了 80 端口而我们又没有修改它所以我们需要配置防火墙程序开放80 端口http并允许其他机器通过网络连接。
#### Ubuntu 14.04 ####
iptables 是最流行的防火墙程序,并且 ubuntu 默认安装了它。我们需要修改 iptables 以开放 80 端口,这样才能让 drone 的 web 界面在网络上被大家访问。
# iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
# /etc/init.d/iptables save
#### CentOS 7 ####
因为 CentOS 7 默认安装了 systemd它使用 firewalld 作为防火墙程序。为了在 firewalld 上打开80端口http 服务),我们需要执行下面的命令。
# firewall-cmd --permanent --add-service=http
success
# firewall-cmd --reload
success
### 10. 访问 web 界面 ###
现在我们将在我们最喜欢的浏览器上通过 web 界面打开 drone。要完成这些我们要把浏览器指向运行 drone 的服务器。因为 drone 默认使用 80 端口,而我们又没有修改过,所以我们只需要在浏览器里根据我们的配置输入`http://ip-address/` 或 `http://drone.linoxide.com` 就行了。在我们正确的完成了上述操作后,我们就可以看到登录界面了。
![Login Github Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/login-github-drone-e1446834688394.png)
因为在上面的步骤里配置了 Github我们现在只需要简单的选择 github 然后进入应用授权步骤,这些完成后我们就可以进入工作台了。
![Drone Dashboard](http://blog.linoxide.com/wp-content/uploads/2015/11/drone-dashboard.png)
这里它会同步我们在 github 上的代码库,然后询问我们要在 drone 上构建哪个代码库。
![Activate Repository](http://blog.linoxide.com/wp-content/uploads/2015/11/activate-repository-e1446835574595.png)
这一步完成后,它会要求我们在代码库里添加一个名为`.drone.yml` 的新文件,在这个文件里定义构建的过程和配置项,比如使用哪个 docker 镜像,执行哪些命令和脚本来编译,等等。
我们按照下面的内容来配置我们的`.drone.yml`。
image: python
script:
- python helloworld.py
- echo "Build has been completed."
这一步完成后,我们就可以使用 drone 应用里这个 YAML 格式的配置文件来构建我们的应用了。此后所有对代码库的提交和改变都会被同步到这个仓库一旦有了新的提交drone 就会自动开始构建。
![Building Application Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/building-application-drone.png)
所有操作都完成后,我们就能在终端看到构建的结果了。
![Build Success Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/build-success-drone.png)
### 总结 ###
在本文中我们学习了如何安装一个可以工作的使用 drone 的持续集成平台。如果我们愿意我们甚至可以从 drone.io 官方提供的服务开始工作。我们可以根据自己的需求从免费的服务或者收费服务开始。它通过漂亮的 web 界面和强大的功能改变了持续集成的世界。它可以集成很多第三方应用和部署平台。如果你有任何问题、建议可以直接反馈给我们,谢谢。
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/setup-drone-continuous-integration-linux/
作者:[Arun Pyasi][a]
译者:[ezio](https://github.com/oska874)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://drone.io/
[2]:http://downloads.drone.io/master/drone.deb
[3]:http://downloads.drone.io/master/drone.rpm
[4]:https://github.com/settings/developers

View File

@ -0,0 +1,77 @@
微软和 Linux :真正的浪漫还是有毒的爱情?
================================================================================
时不时的我们会读到一个能让你喝咖啡呛到或者把热拿铁喷到你显示器上的新闻故事。微软最近宣布的对 Linux 的钟爱就是这样一个鲜明的例子。
从常识来讲微软和自由开源软件FOSS运动就是恒久的敌人。在很多人眼里微软体现了过分的贪婪而这正为自由开源软件运动FOSS所拒绝。另外之前微软就已经给自由开源软件社区贴上了"一伙强盗"的标签。
我们能够理解为什么微软一直以来都害怕免费的操作系统。免费操作系统结合挑战微软核心产品线的开源应用时,就威胁到了微软在台式机和笔记本电脑市场的控制地位。
尽管微软有对在台式机主导地位的担忧,在网络服务器市场 Linux 却有着最高的影响力。今天,大多数的服务器都是 Linux 系统。包括世界上最繁忙的站点服务器。对微软来说,看到这么多无法装到兜里的许可证的营收一定是非常痛苦的。
掌上设备是微软输给自由软件的另一个领域。曾几何时,微软的 Windows CE 和 Pocket PC 操作系统走在移动计算的前沿Windows PDA 设备是最闪亮、最豪华的产品。但是这一切在苹果公司发布了 iPhone 之后都结束了。从那时起,安卓开始进入公众视野Windows 的移动产品开始被忽略、被遗忘。而安卓平台正是建立在自由开源组件的基础上的。
由于安卓平台的开放性,安卓的市场份额在迅速扩大。不像 iOS任何一个手机制造商都可以发布安卓手机也不像 Windows 手机安卓没有许可费用。这对消费者来说是件好事这也使得许多强大而又价格低廉的手机制造商在世界各地涌现。这非常明确地证明了自由开源软件FOSS的价值。
在服务器和移动计算的角逐中失利对微软来说是非常惨重的损失。考虑一下服务器和移动计算这两个加起来所占有的市场大小,台式机市场似乎是死水一潭。没有人喜欢失败,尤其是涉及到金钱。并且,微软确实有许多东西正在慢慢失去。你可能期望着微软自尝苦果。在过去,确实如此。
微软使用了各种可以支配的手段来对 Linux 和自由开源软件FOSS进行反击从宣传到专利威胁。尽管这种攻击确实减慢了适配 Linux 的步伐,但却从来没有让 Linux 的脚步停下。
所以当微软在开源大会和重大事件上拿出印有“Microsoft Loves Linux”的T恤和徽章时请原谅我们表现出来的震惊。这是真的吗微软真的爱 Linux
当然公关的口号和免费的T恤并不代表真理。行动胜于雄辩。当你思考一下微软的行动时微软的立场就变得有点模棱两可了。
一方面,微软招募了几百名 Linux 开发者和系统管理员,将 .NET 核心框架作为一个开源项目发布,并提供了跨平台的支持(这样 .NET 就可以跑在 OS X 和 Linux 上了并且微软与多家 Linux 公司合作,把最流行的发行版本放到了 Azure 平台上。事实上,微软已经走得如此之远,以至于要为 Azure 数据中心开发自己的 Linux 发行版了。
另一方面,微软继续直接通过法律或者傀儡公司来对开源项目进行攻击。很明显,微软在与自由软件的所有权较量上并没有发自内心的进行大的道德转变。那为什么要公开申明对 Linux 的钟爱之情呢?
一个显而易见的事实:微软是一个经营性实体。对股东来说是一个投资工具,对雇员来说是收入来源。微软所做的只有一个终极目标:盈利。微软并没有表现出来爱或者恨(尽管这是一个最常见的指控)。
所以问题不应该是"微软真的爱 Linux 吗?"相反,我们应该问,微软是怎么从这一切中获利的。
让我们以 .NET 核心框架的开源发行为例。这一举动使得 .NET 的运行时环境移植到任何平台都很轻松。这使得微软的 .NET 框架所涉及到的范围远远大于 Windows 平台。
开放 .NET 的核心包,最终使得 .NET 开发者开发跨平台的 app 成为可能,比如 OS X、Linux 甚至安卓——都基于同一个核心代码库。
从开发者角度来讲,这使得 .NET 框架比之前更有吸引力了。能够从单一的代码库触及到多个平台,使得使用 .NET 框架开发的任何 app 戏剧性的扩大了潜在的目标市场。
另外,一个强大的开源社区能够提供给开发者一些代码来在他们自己的项目中进行复用。所以,开源项目的可利用性也将会成就 .NET 框架。
更进一步讲,开放 .NET 的核心代码能够减少跨越不同平台所产生的碎片,意味着对消费者来说有对 app 更广的选择。无论是开源软件还是专用的 app都有更多的选择。
从微软的角度来讲,会得到一队开发者大军。微软可以通过销售培训、证书、技术支持、开发者工具(包括 Visual Studio和应用扩展来获利。
我们应该自问的是,这对自由软件社区有利还是有弊?
.NET 框架的大范围适用意味着许多参与竞争的开源项目的消亡,迫使我们会跟着微软的节奏走下去。
先抛开 .NET 不谈,微软正在花费大量的精力在 Azure 云计算平台对 Linux 的支持上。要记得Azure 最初是 Windows 的 Azure。Windows 服务器是唯一能够支持 Azure 的操作系统。今天Azure 也提供了对多个 Linux 发行版的支持。
关于此,有一个原因:付费给需要或者想要 Linux 服务的顾客。如果微软不提供 Linux 虚拟机,那些顾客就会跟别人合作了。
看上去好像是微软意识到“Linux 就在这里”的这样一个现实。微软不能真正的消灭它,所以必须接收它。
这又把我们带回到那个问题:关于微软和 Linux 为什么有这么多的流言?我们在谈论这个问题,因为微软希望我们思考这个问题。毕竟,所有这些谈资都会追溯到微软,不管是在新闻稿、博客还是会议上的公开声明。微软在努力吸引大家对其在 Linux 专业知识方面的注意力。
首席架构师 Kamala Subramaniam 的博文声明 Azure Cloud Switch 背后的其他企图会是什么ACS 是一个定制的 Linux 发行版。微软用它来对 Azure 数据中心的交换机硬件进行自动配置。
ACS 不是公开的。它是用于 Azure 内部使用的。别人也不太可能找到这个发行版其他的用途。事实上Subramaniam 在她的博文中也表述了同样的观点。
所以,微软不会通过卖 ACS 来获利,也不会通过赠送它而增加用户基数。相反,微软在 Linux 和 Azure 上花费精力,以加强其在 Linux 云计算平台方面的地位。
微软最近迷上 Linux 对社区来说是好消息吗?
我们不应该慢慢忘记微软的“拥抱、扩展、消灭Embrace, Extend and Exterminate”的诅咒。现在微软处在拥抱 Linux 的初期阶段。微软会通过定制扩展和专有“标准”来分裂社区吗?
发表评论吧,让我们知道你是怎么想的。
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/microsoft-and-linux-true-romance-or-toxic-love-0
作者:[James Darvell][a]
译者:[sonofelice](https://github.com/sonofelice)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxjournal.com/users/james-darvell

View File

@ -1,11 +1,11 @@
如何在CentOS 7 中添加新磁盘而不用重启系统
如何在 CentOS 7 中添加新磁盘而不用重启系统
================================================================================
对大多数系统管理员来说扩充 Linux 服务器的磁盘空间是日常的工作之一。因此这篇文章会通过使用 Linux 命令,在 CentOS 7 系统上演示一些简单的操作步骤来扩充您的磁盘空间而不需要重启您的生产服务器。关于扩充和增加新的磁盘到 Linux 系统,我们会提及多种方法和多种可行性,所以可按您所需选择最适用的一种。
对大多数系统管理员来说扩充 Linux 服务器的磁盘空间是日常的工作之一。因此这篇文章会通过使用 Linux 命令,在 CentOS 7 系统上演示一些简单的操作步骤来扩充您的磁盘空间而不需要重启您的生产服务器。关于扩充和增加新的磁盘到 Linux 系统,我们会提及多种方法和多种可行性,可按您所需选择最适用的一种。
### 1. 虚拟机客户端扩充磁盘空间: ###
### 1. 虚拟机客户端扩充磁盘空间: ###
在为 Linux 系统增加磁盘卷之前,您需要添加一块新的物理磁盘或是从正使用的 VMware vShere、工作站或着其它的基础虚拟环境软件中进行设置从而扩充一块系统正使用的虚拟磁盘空间
在为 Linux 系统增加磁盘卷之前,您首先需要添加一块新的物理磁盘,或在 VMware vShere、VMware 工作站以及你使用的其它虚拟环境软件中进行设置来增加一块虚拟磁盘的容量
![Increase disk](http://blog.linoxide.com/wp-content/uploads/2016/02/1.png)
@ -22,7 +22,7 @@
### 3. 扩展空间而无需重启虚拟机 ###
现在运行如下命令就可以来扩展操作系统的物理卷磁盘空间,而且不需要重启虚拟机,系统会重新扫描 SCSI Small Computer System Interface 小型计算机系统接口)总线并添加 SCSI 设备。
现在运行如下命令,通过重新扫描 SCSI Small Computer System Interface 小型计算机系统接口)总线并添加 SCSI 设备,系统就可以扩展操作系统的物理卷磁盘空间,而且不需要重启虚拟机
# ls /sys/class/scsi_host/
# echo "- - -" > /sys/class/scsi_host/host0/scan
@ -35,7 +35,7 @@
# echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan
# echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan
如下图所示,会重新扫描 SCSI 总线,随后我们虚拟机客户端设置的磁盘大小会正常显示。
如下图所示,会重新扫描 SCSI 总线,随后我们虚拟机客户端设置的磁盘大小会正常显示。
![Rescan disk device](http://blog.linoxide.com/wp-content/uploads/2016/02/3.png)
@ -85,7 +85,7 @@
### 5. 创建物理卷: ###
根据提示运行 'partprob' 或 'kpartx' 命令以使分区表被真正使用,然后使用如下的命令来创建新的物理卷。
根据上述提示运行 'partprobe' 或 'kpartx' 命令以使分区表生效,然后使用如下的命令来创建新的物理卷。
# partprobe
# pvresize /dev/sda3
@ -107,7 +107,7 @@
# xfs_growfs /dev/mapper/centos-root
'/' 分区的大小已经成功的增加了,可以使用 'df' 命令来检查您磁盘驱动的大小。如图示。
'/' 分区的大小已经成功的增加了,可以使用 'df' 命令来检查您磁盘驱动的大小。如图示。
![Increase disk space](http://blog.linoxide.com/wp-content/uploads/2016/02/3C.png)
@ -129,7 +129,7 @@
# echo "- - -" > /sys/class/scsi_host/host1/scan
# echo "- - -" > /sys/class/scsi_host/host2/scan
列出您的 SCSI 设备的名称
列出您的 SCSI 设备的名称
# ls /sys/class/scsi_device/
# echo 1 > /sys/class/scsi_device/1\:0\:0\:0/device/rescan
@ -139,7 +139,7 @@
![Scanning new disk](http://blog.linoxide.com/wp-content/uploads/2016/02/3F.png)
一旦新增的磁盘可见就可以运行下面的命令来创建新的物理卷,然后增加到卷组,如下示。
一旦新增的磁盘可见就可以运行下面的命令来创建新的物理卷,然后增加到卷组,如下示。
# pvcreate /dev/sdb
# vgextend centos /dev/sdb
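将新磁盘加入卷组之后,通常还要扩展逻辑卷并放大文件系统,大致如下(示意,逻辑卷名称以实际系统为准):

```
# lvextend -l +100%FREE /dev/mapper/centos-root    # 把卷组中新增的空闲空间全部分配给该逻辑卷
# xfs_growfs /dev/mapper/centos-root               # 在线放大 XFS 文件系统
```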
@ -157,16 +157,15 @@
### 结论: ###
在 Linux CentOS 7 系统上,使用这篇文章所述的操作步骤来扩充您的任意逻辑卷的磁盘空间,此管理磁盘分区的操作过程是非常简单的。您不需要重启生产线上的服务器,只是简单的重扫描下 SCSI 设备,和扩展您想要的 LVM逻辑卷管理。我们希望这文章对您有用。可自由的发表有用的评论和建议。
在 Linux CentOS 7 系统上管理磁盘分区的操作过程是非常简单的可以使用这篇文章所述的操作步骤来扩充您的任意逻辑卷的磁盘空间。您不需要重启生产线上的服务器,只是简单的重扫描下 SCSI 设备,和扩展您想要的 LVM逻辑卷管理。我们希望这文章对您有用。请随意的发表有用的评论和建议。
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/add-new-disk-centos-7-without-rebooting/
作者:[Kashif S][a]
译者:[runningwater](https://github.com/runningwater
)
校对:[校对者ID](https://github.com/校对者ID)
译者:[runningwater](https://github.com/runningwater)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,58 @@
输错密码?这个 sudo 会“嘲讽”你
===========================================================
你在 Linux 终端中会有很多的乐趣。我今天要讲的不是在[终端中跑火车](http://itsfoss.com/ubuntu-terminal-train/)。
我今天要讲的技巧可以放松你的心情。前面一篇文章中,你学习了[如何在命令行中增加 sudo 命令的超时](http://itsfoss.com/change-sudo-password-timeout-ubuntu/)。今天的文章中,我会向你展示如何让 sudo 在输错密码的时候“嘲讽”你(或者其他人)。
对我讲的感到疑惑?这里,让我们看下这张 gif 来了解下 sudo 是如何在你输错密码之后“嘲讽”你的。
![](http://itsfoss.com/wp-content/uploads/2016/02/sudo-insults-Linux.gif)
那么,为什么要这么做?毕竟,“嘲讽”不会让你的一天开心,不是么?
对我来说,一点小技巧都是有趣的,并且要比以前的“密码错误”的错误提示更有趣。另外,我可以向我的朋友展示来逗弄他们(这个例子中是通过自由开源软件)。我很肯定你有你自己的理由来使用这个技巧的。
## 在 sudo 中启用“嘲讽”
你可以在`sudo`配置中增加下面的行来启用“嘲讽”功能:
```
Defaults insults
```
让我们看看该如何做。打开终端并使用下面的命令:
```
sudo visudo
```
这会在 [nano](http://www.nano-editor.org/)中打开配置文件。
> 是的,我知道传统的 visudo 应该在 vi 中打开 `/etc/sudoers` 文件,但是 Ubuntu 及基于它的发行版会使用 nano 打开。由于我们在讨论 vi这里有一份 [vi 速查表](http://itsfoss.com/download-vi-cheat-sheet),可以在你决定使用 vi 的时候使用。
回到编辑 sudoers 文件的界面,你需要找出 Defaults 所在的行。简单的很,只需要在文件的开头加上`Defaults insults`,就像这样:
![](http://itsfoss.com/wp-content/uploads/2016/02/sudo-insults-Linux-Mint.png)
如果你正在使用 nano使用`Ctrl+X`来退出编辑器。在退出的时候它会询问你是否保存更改。要保存更改按下“Y”。
一旦你保存了 sudoers 文件之后,打开终端并使用 sudo 运行各种命令。故意输错密码并享受嘲讽吧:)
sudo 可能会生气的。看见没,他甚至在我再次输错之后威胁我。哈哈。
![](http://itsfoss.com/wp-content/uploads/2016/02/sudo-insults-Linux-Mint-1.jpeg)
如果你喜欢这个终端技巧,你也可以查看[其他终端技巧的文章](http://itsfoss.com/category/terminal-tricks/)。如果你有其他有趣的技巧,在评论中分享。
------------------------------------------------------------------------------
via: http://itsfoss.com/sudo-insult-linux/
作者:[ABHISHEK][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/

View File

@ -3,25 +3,25 @@
![](http://insidehpc.com/wp-content/uploads/2015/08/beegfs.jpg)
今天(2月23日 ThinkParQ 宣布完整的 [BeeGFS 并行文件系统][1] 的源码现已开源。由于 BeeGFS 是专为要求性能的环境开发的,所以它在开发时十分注重安装的简单以及高度的灵活性,包括融合了在存储服务器同时做计算任务时需要的设置。随着系统中的服务器以及存储设备的增加,文件系统的容量以及性能将是需求的拓展点,无论是小型集群还是多达上千个节点的企业级系统。
2月23日 ThinkParQ 宣布完整的 [BeeGFS 并行文件系统][1] 的源码现已开源。由于 BeeGFS 是专为要求性能的环境开发的,所以它在开发时十分注重安装的简易性以及高度灵活性,包括融合了在存储服务器同时做计算任务时需要的设置。随着系统中的服务器以及存储设备的增加,文件系统的容量以及性能将是需求的拓展点,无论是小型集群还是多达上千个节点的企业级系统。
第一次官方声明开放 BeeGFS 的源码是在 2013 年的国际超级计算大会上发布的。这个声明是在欧洲的百亿亿次级超算项目 [DEEP-ER][2] 的背景下做出的,在这个项目里为了得到更好的 I/O 要求,一些微小的进步被设计并应用。对于运算量高达百亿亿次的系统,不同的软硬件必须有效的协同工作才能得到最佳的拓展性。因此,开源 BeeGFS 是让一个百亿亿次的集群的所有组成部分高效的发挥作用的一步。
官方第一次声明开放 BeeGFS 的源码是在 2013 年的国际超级计算大会上发布的。这个声明是在欧洲的百亿亿次级超算项目 [DEEP-ER][2] 的背景下做出的,在这个项目里为了得到更好的 I/O 要求,做出了一些新的改进。对于运算量高达百亿亿次的系统,不同的软硬件必须有效的协同工作才能得到最佳的拓展性。因此,开源 BeeGFS 是让一个百亿亿次的集群的所有组成部分高效的发挥作用的一步。
“当我们的一些用户对于 BeeGFS 十分容易安装并且不用费心管理而感到高兴时,另外一些用户则想要知道它是如何运行的以便于更好的优化他们的应用,使得他们可以监控它或者把它移植到其他的平台上,比如 BSD” Sven Breuner 说道,他是 ThinkParQ BeeGFS 背后的公司)的 CEO“而且把 BeeGFS 移植到其他的非 X86 架构,比如 ARM 或者 Power也是社区等着要做的一件事。”
“当我们的一些用户对于 BeeGFS 十分容易安装并且不用费心管理而感到高兴时,另外一些用户则想要知道它是如何运行的以便于更好的优化他们的应用,使得他们可以监控它或者把它移植到其他的平台上,比如 BSD” Sven Breuner 说道,他是 ThinkParQ BeeGFS 背后的公司)的 CEO“而且把 BeeGFS 移植到其他的非 X86 架构,比如 ARM 或者 Power也是社区即将要做的一件事。”
对于未来的采购来说ARM 技术的稳步发展确实使得它成为了一个越来越有趣的技术。因此, BeeGFS 的团队也参与了 [ExaNeSt][3],一个来自欧洲的新的百亿亿次级超算计划,这个计划致力于使 ARM 的生态能为高性能的工作负载做好准备。“尽管现在 BeeGFS 在 ARM 处理器上可以算是开箱即用,这个项目也将给我们机会来证明我们在这个架构上也能完全发挥其性能。”, Bernd Lietzow BeeGFS 中 ExaNeSt 的领导者补充道。
作为一个有着 25 K 行 C++ 代码的元数据服务以及约 15 K 行存储服务的项目BeeGFS 相对比较容易理解和拓展,不只是对于大神,对于对文件系统有兴趣的大学生也是这样。在 GitHub 上已经有很多的为 BeeGFS 写的项目,比如基于浏览器的监控或者 Docker 一体化。
作为一个有着 25 K 行 C++ 代码的元数据服务以及约 15 K 行存储服务的项目BeeGFS 相对比较容易理解和拓展,不只是对于大神,对于对文件系统有兴趣的大学生也是这样。在 GitHub 上已经有很多的为 BeeGFS 写的项目,比如基于浏览器的监控或者 Docker 一体化。
有关新闻显示, [BeeGFS 用户大会][4]将于 5 月 18-19 日在德国凯泽斯劳滕举行。
-----------------------------------------------------------------------------------------
via: http://insidehpc.com/2016/02/beegfs-parallel-file-system-now-open-source/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+InsideHPC+%28insideHPC.com%29
via: http://insidehpc.com/2016/02/beegfs-parallel-file-system-now-open-source/
作者:[staff][a]
译者:[name1e5s](https://github.com/name1e5s)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,43 @@
Ubuntu Budgie 将在 Ubuntu 16.10 中成为新官方分支发行版
===
> Budgie-Remix Beta 2 已经就绪以供测试。
上个月我们介绍了一个新 GNU/Linux 发行版 [Budgie-Remix][1],它的终极目标是成为一个 Ubuntu 官方分支发行版,可能会使用 Ubuntu Budgie 这个名字。
![Budgie-Remix 16.04 Beta 2](http://i1-news.softpedia-static.com/images/news2/ubuntu-budgie-could-be-the-new-flavor-of-ubuntu-linux-as-part-of-ubuntu-16-10-502573-2.jpg)
今天Budgie-Remix 的开发者 David Mohammed 向 Softpedia 通报了项目进度,以及为即将到来的 16.04 发布了第二个 Beta 版本。Canonical 的创始人 [Mark Shuttleworth 说过][2],如果能够形成围绕这个发行版的社区,它肯定会得到支持。
自我们[最初的报道][3]以来David Mohammed 似乎与 Ubuntu MATE 项目的领导者 Martin Wimpress 取得了联系,后者敦促他以 Ubuntu 16.10 作为他还未正式命名的 Ubuntu 分支的官方版本目标。这个分支发行版构建于 Budgie 桌面环境之上,该桌面环境是由超赞的 [Solus][4] 开发者团队创建的。
“我们本周完成了 Beta 2 版本的开发以及很多其它东西,而且我们还有 Martin Wimpress Ubuntu MATE 项目领导者的支持”David Mohammed 对 Softpedia 独家爆料。”他还敦促我们以 16.10 作为成为官方版本的目标——那当然是个主要的挑战——并且我们还需要社区的帮助/加入我们来让这一切成为现实!”
### Ubuntu Budgie 16.10 可能在 2016 年 10 月到来 ###
4 月 21 日Canonical 将会发布 Ubuntu Linux 的下一个 LTSLong Term Support长期支持版本 Xenial Xerus好客的非洲地松鼠也就是 Ubuntu 16.04。我们有可能提前体验到 Budgie-Remix 16.04,它以后也许会成为官方分支 Ubuntu Budgie。但在那之前你可以帮助开发者[测试 Beta 2 版本][5]。
在 Ubuntu 16.04 LTSXenial Xerus发布之后Ubuntu 的开发者们就会立即将注意力转移到下一个版本的开发上。下一个版本 Ubuntu 16.10 应该会在 10 月底到来,并且 Ubuntu Budgie 也可能宣布成为 Ubuntu 官方分支发行版。
![Budgie Applications Menu](http://i1-news.softpedia-static.com/images/news2/ubuntu-budgie-could-be-the-new-flavor-of-ubuntu-linux-as-part-of-ubuntu-16-10-502573-3.jpg)
![Budgie Raven notification and customization center](http://i1-news.softpedia-static.com/images/news2/ubuntu-budgie-could-be-the-new-flavor-of-ubuntu-linux-as-part-of-ubuntu-16-10-502573-4.jpg)
![Nautilus file manager](http://i1-news.softpedia-static.com/images/news2/ubuntu-budgie-could-be-the-new-flavor-of-ubuntu-linux-as-part-of-ubuntu-16-10-502573-5.jpg)
----------------------------------
via: http://news.softpedia.com/news/ubuntu-budgie-could-be-the-new-flavor-of-ubuntu-linux-as-part-of-ubuntu-16-10-502573.shtml
作者Marius Nestor
译者:[alim0x](https://github.com/alim0x)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]: https://launchpad.net/budgie-remix
[2]: https://plus.google.com/+programmerslab/posts/CSvbSvgcdcv
[3]: http://news.softpedia.com/news/budgie-remix-could-become-ubuntu-budgie-download-and-test-it-501231.shtml
[4]: https://solus-project.com/
[5]: https://sourceforge.net/projects/budgie-remix/files/beta2/

View File

@ -1,13 +1,11 @@
GHLandy Translated
LFCS 系列第四讲:分区存储设备、格式化文件系统和配置交换分区
LFCS 系列第四讲:对存储设备分区、格式化文件系统和配置交换分区
================================================================================
去年八月份Linux 基金会发起了 LFCSLinux Foundation Certified SysadminLinux 基金会认证系统管理员)认证,给所有系统管理员一个展现自己的机会。通过基础考试后,他们可以胜任在 Linux 上的整体运维工作:包括系统支持、一流水平的诊断和监控以及在必要之时向其他支持团队提交帮助请求等。
![Linux Foundation Certified Sysadmin Part 4](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-4.png)
LFCS 系列第四讲
*LFCS 系列第四讲*
需要注意的是Linux 基金会认证是非常严格的,通过与否完全要看个人能力。通过在线链接,你可以随时随地参加 Linux 基金会认证考试。所以,你再也不用到考试中心了,只需要不断提高自己的专业技能和经验就可去参加考试了。
@ -16,13 +14,13 @@ LFCS 系列第四讲
youtube 视频
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
本讲是《十套教程》系列中的第四讲。在本讲中,我们将涵盖分区存储设备、格式化文件系统和配置交换分区等内容,这些都是 LFCS 认证中的必备知识。
本讲是系列教程中的第四讲。在本讲中,我们将涵盖对存储设备进行分区、格式化文件系统和配置交换分区等内容,这些都是 LFCS 认证中的必备知识。
### 分区存储设备 ###
### 对存储设备分区 ###
分区是一种将单独的硬盘分成一个或多个区的手段。一个分区只是硬盘的一部分,我们可以认为这部分是独立的磁盘,里边包含一个单一类型的文件系统。分区表则是将硬盘上这些分区与分区标识符联系起来的索引。
在 Linux IBM PC 兼容系统里边用于管理传统 MBR最新到2009年分区的工具是 fdisk。对于 GPT2010年至今分区我们使用 gdisk。这两个工具都可以通过程序名后面加上设备名称如 /dev/sdb进行调用。
在 Linux IBM PC 兼容系统里边用于管理传统 MBR到2009年分区的工具是 fdisk。对于 GPT2010年至今分区我们使用 gdisk。这两个工具都可以通过程序名后面加上设备名称如 /dev/sdb进行调用。
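例如(示意,设备名称以实际系统为准):

```
# fdisk /dev/sdb     # 管理 MBR 分区表
# gdisk /dev/sdb     # 管理 GPT 分区表
```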
#### 使用 fdisk 管理 MBR 分区 ####
@ -34,17 +32,17 @@ LFCS 系列第四讲
![fdisk Help Menu](http://www.tecmint.com/wp-content/uploads/2014/10/fdisk-help.png)
fdisk 帮助菜单
*fdisk 帮助菜单*
上图中,使用频率最高的选项已高亮显示。你可以随时按下 “p” 显示分区表。
![Check Partition Table in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Show-Partition-Table.png)
显示分区表
*显示分区表*
Id 列显示由 fdisk 分配给每个分区的分区类型(分区 id。一个分区类型代表一种文件系统的标识符简单来说包括该分区上数据的访问方法。
请注意,每个分区类型的全面都全面讲解将超出了本教程的范围——本系列教材主要专注于 LFCS 测试,因能力为主。
请注意,对每个分区类型的全面讲解超出了本教程的范围——本系列教程主要专注于 LFCS 测试,以考试为主。
**下面列出一些 fdisk 常用选项:**
@ -58,25 +56,25 @@ Id 列显示由 fdisk 分配给每个分区的分区类型(分区 id。一
![fdisk Command Options](http://www.tecmint.com/wp-content/uploads/2014/10/fdisk-options.png)
fdisk 命令选项
*fdisk 命令选项*
按下 “n” 后接着按下 “p” 会创建新一个主分区。最后,你可以使用所有的默认值(这将占用所有的可用空间),或者像下面一样自定义分区大小。
![Create New Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-New-Partition.png)
创建新分区
*创建新分区*
若 fdisk 分配的分区 Id 并不是我们想用的,可以按下 “t” 来更改。
![Change Partition Name in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Change-Partition-Name.png)
更改分区类型
*更改分区类型*
全部设置好分区后,按下 “w” 将更改保存到硬盘分区表上。
![Save Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Partition-Changes.png)
保存分区更改
*保存分区更改*
#### 使用 gdisk 管理 GPT 分区 ####
@ -88,7 +86,7 @@ fdisk 命令选项
![Create GPT Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-GPT-Partitions.png)
创建 GPT 分区
*创建 GPT 分区*
使用 GPT 分区方案,我们可以在同一个硬盘上创建最多 128 个分区,单个分区最大可以 PB 计,而 MBR 分区方案最大只能支持 2TB。
@ -96,7 +94,7 @@ fdisk 命令选项
![gdisk Command Options](http://www.tecmint.com/wp-content/uploads/2014/10/gdisk-options.png)
gdisk 命令选项
*gdisk 命令选项*
### 格式化文件系统 ###
@ -106,14 +104,14 @@ gdisk 命令选项
![Check Filesystems Type in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Filesystems.png)
检查文件系统类型
*检查文件系统类型*
选择文件系统取决于你的需求。你应该考虑到每个文件系统的优缺点以及其特点。选择文件系统需要看的两个重要属性:
- 日志支持,允许从系统崩溃事件中快速恢复数据。
- 安全增强式 LinuxSELinux支持按照项目 wiki 所说,“安全增强式 Linux 允许用户和管理员更好的把握访问控制权限”。
- 安全增强式 LinuxSELinux支持按照项目 wiki 所说,“安全增强式 Linux 允许用户和管理员更好的控制访问控制权限”。
在接下来的例子中,我们通过 mkfs 在 /dev/sdb1上创建 ext4 文件系统(支持日志和 SELinux标卷为 Tecmint。mkfs 基本语法如下:
在接下来的例子中,我们通过 mkfs 在 /dev/sdb1 上创建 ext4 文件系统(支持日志和 SELinux标卷为 Tecmint。mkfs 基本语法如下:
# mkfs -t [filesystem] -L [label] device
或者使用 `mkfs.[文件系统类型]` 的等价写法(两种写法的具体示例见下面的示意)。
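以正文中的例子ext4 文件系统、标卷 Tecmint、设备 /dev/sdb1这两种写法大致如下示意

```
# mkfs -t ext4 -L Tecmint /dev/sdb1
# mkfs.ext4 -L Tecmint /dev/sdb1     # 等价写法
```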
@ -121,7 +119,7 @@ gdisk 命令选项
![Create ext4 Filesystems in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystems.png)
创建 ext4 文件系统
*创建 ext4 文件系统*
### 创建并启用交换分区 ###
@ -129,7 +127,7 @@ gdisk 命令选项
下面列出选择交换分区大小的经验法则:
物理内存不高于 2GB 时,取两倍物理内存大小即可;物理内存在 2GB 以上时,取一倍物理内存大小即可;并且所取大小应该大于 32MB。
> 物理内存不高于 2GB 时,取两倍物理内存大小即可;物理内存在 2GB 以上时,取一倍物理内存大小即可;并且所取大小应该大于 32MB。
所以,如果:
@ -142,7 +140,7 @@ M为物理内存大小S 为交换分区大小,单位 GB那么
记住,这只是基本的经验。对于作为系统管理员的你,才是决定是否使用交换分区及其大小的关键。
要配置交换分区,首先要划分一个常规分区,大小像我们之前演示的那样来选取。然后添加以下条目到 /etc/fstab 文件中其中的X要更改为对应的 b 或 c
要配置交换分区,首先要划分一个常规分区,大小像我们之前演示的那样来选取。然后添加以下条目到 /etc/fstab 文件中(其中的 X 要更改为对应的 b 或 c
/dev/sdX1 swap swap sw 0 0
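然后把该分区格式化为交换空间并启用,大致如下(示意,其中 X 以实际设备为准):

```
# mkswap /dev/sdX1     # 将分区格式化为交换空间
# swapon -a            # 启用 /etc/fstab 中所有的交换分区
```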
@ -163,15 +161,15 @@ M为物理内存大小S 为交换分区大小,单位 GB那么
![Create-Swap-Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Swap-Partition.png)
创建交换分区
*创建交换分区*
![Add Swap Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Swap-Partition.png)
启用交换分区
*启用交换分区*
### 结论 ###
在你的系统管理员之路上,创建分区(包括交换分区)和格式化文件系统是非常重要的一。我希望本文中所给出的技巧指导你到达你的管理员目标。随时在本讲评论区中发表你的技巧和想法,一起为社区做贡献。
在你的系统管理员之路上,创建分区(包括交换分区)和格式化文件系统是非常重要的一步。我希望本文中所给出的技巧能指导你达到你的管理员目标。随时在本讲评论区中发表你的技巧和想法,一起为社区做贡献。
参考链接
@ -185,7 +183,7 @@ via: http://www.tecmint.com/create-partitions-and-filesystems-in-linux/
作者:[Gabriel Cánepa][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,22 +1,18 @@
GHLandy Translated
LFCS 系列第五讲:如何在 Linux 中挂载/卸载本地文件系统和网络文件系统Samba 和 NFS
================================================================================
Linux 基金会已经发起了一个全新的 LFCSLinux Foundation Certified SysadminLinux 基金会认证系统管理员)认证,旨在让来自世界各地的人有机会参加到 LFCS 测试,获得关于有能力在 Linux 系统中执行中间系统管理任务的认证。该认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时上游团队请求支持的决策能力。
Linux 基金会已经发起了一个全新的 LFCSLinux Foundation Certified SysadminLinux 基金会认证系统管理员)认证,旨在让来自世界各地的人有机会参加到 LFCS 测试,获得关于有能力在 Linux 系统中执行中级系统管理任务的认证。该认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时向上游团队请求支持的决策能力。
![Linux Foundation Certified Sysadmin Part 5](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-5.png)
LFCS 系列第五讲
*LFCS 系列第五讲*
请看以下视频,这里边介绍了 Linux 基金会认证程序。
youtube 视频
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
本讲是《十套教程》系列中的第三讲,在这一讲里边,我们会解释如何在 Linux 中挂载/卸载本地和网络文件系统。这些都是 LFCS 认证中的必备知识。
本讲是系列教程中的第五讲,在这一讲里边,我们会解释如何在 Linux 中挂载/卸载本地和网络文件系统。这些都是 LFCS 认证中的必备知识。
### 挂载文件系统 ###
@ -26,20 +22,19 @@ LFCS 系列第五讲
换句话说,管理存储设备的第一步就是把设备关联到文件系统树。要完成这一步,通常可以这样:用 mount 命令来进行临时挂载(用完的时候,使用 umount 命令来卸载),或者通过编辑 /etc/fstab 文件之后重启系统来永久性挂载,这样每次开机都会进行挂载。
不带任何选项的 mount 命令,可以显示当前已挂载的文件系统。
# mount
![Check Mounted Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/check-mounted-filesystems.png)
检查已挂载的文件系统
*检查已挂载的文件系统*
另外mount 命令通常用来挂载文件系统。其基本语法如下:
# mount -t type device dir -o options
该命令会指引内核在设备上找到的文件系统如已格式化为指定类型的文件系统挂载到指定目录。像这样的形式mount 命令不会再到 /etc/fstab 文件中进行确认。
该命令会指引内核将设备上找到的文件系统(需已格式化为指定类型)挂载到指定目录。像这样的形式mount 命令不会再到 /etc/fstab 文件中进行确认。
除非像下面,挂载指定的目录或者设备:
@ -59,20 +54,17 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
读作:
设备 dev/mapper/debian-home 的格式为 ext4挂载在 /home 下,并且有以下挂载选项: rwrelatimeuser_xattrbarrier=1data=ordered。
设备 dev/mapper/debian-home 挂载在 /home 下,它被格式化为 ext4,并且有以下挂载选项: rwrelatimeuser_xattrbarrier=1data=ordered。
**mount 命令选项**
下面列出 mount 命令的常用选项
- async运许在将要挂载的文件系统上进行异步 I/O 操作
- auto标志文件系统通过 mount -a 命令挂载,与 noauto 相反。
- defaults该选项为 async,auto,dev,exec,nouser,rw,suid 的一个别名。注意多个选项必须由逗号隔开并且中间没有空格。倘若你不小心在两个选项中间输入了一个空格mount 命令会把后边的字符解释为另一个参数。
- async允许在将要挂载的文件系统上进行异步 I/O 操作
- auto标示该文件系统通过 mount -a 命令挂载,与 noauto 相反。
- defaults该选项相当于 `async,auto,dev,exec,nouser,rw,suid` 的组合。注意多个选项必须由逗号隔开并且中间没有空格。倘若你不小心在两个选项中间输入了一个空格mount 命令会把后边的字符解释为另一个参数。
- loop将镜像文件如 .iso 文件)挂载为 loop 设备。该选项可以用来模拟显示光盘中的文件内容。
- noexec阻止该文件系统中可执行文件的执行。与 exec 选项相反。
- nouser阻止任何用户除 root 用户外) 挂载或卸载文件系统。与 user 选项相反。
- remount重新挂载文件系统。
- ro只读模式挂载。
@ -91,7 +83,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount Device in Read Write Mode](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Device-Read-Write.png)
可读写模式挂载设备
*可读写模式挂载设备*
**以默认模式挂载设备**
@ -102,26 +94,25 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount Device in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Device.png)
挂载设备
*挂载设备*
在这个例子中,我们发现写入文件和命令都完美执行了。
### 卸载设备 ###
使用 umount 命令卸载设备,意味着将所有的“在使用”数据全部写入到文件系统,然后可以安全移除文件系统。请注意,倘若你移除一个没有事先正确卸载的文件系统,就会有造成设备损坏和数据丢失的风险。
使用 umount 命令卸载设备,意味着将所有的“在使用”数据全部写入到文件系统,然后可以安全移除文件系统。请注意,倘若你移除一个没有事先正确卸载的设备,就会有造成设备损坏和数据丢失的风险。
也就是说,你必须设备的盘符或者挂载点中退出,才能卸载设备。换言之,当前工作目录不能是需要卸载设备的挂载点。否则,系统将返回设备繁忙的提示信息。
也就是说,你必须“离开”设备的块设备描述符或者挂载点,才能卸载设备。换言之,你的当前工作目录不能是需要卸载设备的挂载点。否则,系统将返回设备繁忙的提示信息。
![Unmount Device in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Unmount-Device.png)
卸载设备
*卸载设备*
离开需卸载设备的挂载点最简单的方法就是,运行不带任何选项的 cd 命令,这样会回到当前用户的家目录。
### 挂载常见的网络文件系统 ###
最常用的两种网络文件系统是 SMBServer Message Block服务器消息块和 NFSNetwork File System网络文件系统。如果你只向类 Unix 客户端提供共享,用 NFS 就可以了,如果是向 Windows 和其他类 Unix客户端提供共享服务就需要用到 Samba 了。
最常用的两种网络文件系统是 SMBServer Message Block服务器消息块和 NFSNetwork File System网络文件系统。如果你只向类 Unix 客户端提供共享,用 NFS 就可以了,如果是向 Windows 和其他类 Unix 客户端提供共享服务,就需要用到 Samba 了。
扩展阅读
@ -130,13 +121,13 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
下面的例子中,假设 Samba 和 NFS 已经在地址为 192.168.0.10 的服务器上架设好了(请注意,架设 NFS 服务器也是 LFCS 考试中需要考核的能力,我们会在后边中提到)。
#### 在 Linux 中挂载 Samba 共享 ####
第一步:在 Red Hat 以 Debian 系发行版中安装 samba-client、samba-common 和 cifs-utils 软件包,如下:
# yum update && yum install samba-client samba-common cifs-utils
# aptitude update && aptitude install samba-client samba-common cifs-utils
然后运行下列命令,查看服务器上可用的 Samba 共享。
# smbclient -L 192.168.0.10
@ -145,7 +136,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount Samba Share in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Samba-Share.png)
挂载 Samba 共享
*挂载 Samba 共享*
上图中,已经对可以挂载到我们本地系统上的共享进行了高亮显示。你只需要有一个远程服务器上的合法用户名及密码就可以访问共享了(挂载命令的大致形式见下面的示意)。
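挂载命令的形式大致如下(示意:共享名、挂载点和凭据文件路径均为假设值;凭据文件内容形如 username=... 和 password=... 两行,应妥善设置其权限):

```
# mount -t cifs //192.168.0.10/share_name /media/samba -o credentials=/root/.smbcredentials
```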
@ -164,7 +155,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount Password Protect Samba Share](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Password-Protect-Samba-Share.png)
挂载有密码保护的 Samba 共享
*挂载有密码保护的 Samba 共享*
#### 在 Linux 系统中挂载 NFS 共享 ####
@ -185,7 +176,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount NFS Share in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-NFS-Share.png)
挂载 NFS 共享
*挂载 NFS 共享*
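NFS 共享的挂载命令大致如下(示意,导出目录和挂载点为假设值):

```
# mount -t nfs 192.168.0.10:/NFS-SHARE /mnt/nfs
```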
### 永久性挂载文件系统 ###
@ -197,13 +188,12 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
其中:
- <file system>: 第一个字段指定挂载的设备。大多数发行版本都通过分区的标卷label或者 UUID 来指定。这样做可以避免分区号改变是带来的错误。
- <mount point>: 第二字段指定挂载点。
- <type> :文件系统的类型代码与 mount 命令挂载文件系统时使用的类型代码是一样的。通过 auto 类型代码可以让内核自动检测文件系统,这对于可移动设备来说非常方便。注意,该选项可能不是对所有文件系统可用。
- <options>: 一个(或多个)挂载选项。
- <dump>: 你可能把这个字段设置为 0否则设置为 1使得系统启动时禁用 dump 工具dump 程序曾经是一个常用的备份工具,但现在越来越少用了)对文件系统进行备份。
- <pass>: 这个字段指定启动系统是是否通过 fsck 来检查文件系统的完整性。0 表示 fsck 不对文件系统进行检查。数字越大,优先级越低。因此,根分区(/)最可能使用数字 1其他所有需要检查的分区则是以数字 2.
- \<file system>: 第一个字段指定挂载的设备。大多数发行版本都通过分区的标卷label或者 UUID 来指定。这样做可以避免分区号改变时带来的错误。
- \<mount point>: 第二个字段指定挂载点。
- \<type> :文件系统的类型代码与 mount 命令挂载文件系统时使用的类型代码是一样的。通过 auto 类型代码可以让内核自动检测文件系统,这对于可移动设备来说非常方便。注意,该选项可能不是对所有文件系统可用。
- \<options>: 一个(或多个)挂载选项。
- \<dump>: 你可能把这个字段设置为 0否则设置为 1使得系统启动时禁用 dump 工具dump 程序曾经是一个常用的备份工具,但现在越来越少用了)对文件系统进行备份。
- \<pass>: 这个字段指定启动系统时是否通过 fsck 来检查文件系统的完整性。0 表示 fsck 不对文件系统进行检查。数字越大,优先级越低。因此,根分区(/)最可能使用数字 1其他所有需要检查的分区则使用数字 2。
**Mount 命令例示**
@ -211,7 +201,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
LABEL=TECMINT /mnt ext4 rw,noexec 0 0
2. 若你想在系统启动时挂载 DVD 光驱中的内容,添加下语句。
2. 若你想在系统启动时挂载 DVD 光驱中的内容,添加以下语句。
/dev/sr0 /media/cdrom0 iso9660 ro,user,noauto 0 0
@ -219,7 +209,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
### 总结 ###
可以放心,在命令行中挂载/卸载本地和网络文件系统将是你作为系统管理员的日常责任的一部分。同时,你需要掌握 /etc/fstab 文件的编写。希望本文对你有帮助。随时在下边发表评论(或者提问),并分享本文到你的朋友圈。
不用怀疑,在命令行中挂载/卸载本地和网络文件系统将是你作为系统管理员的日常责任的一部分。同时,你需要掌握 /etc/fstab 文件的编写。希望本文对你有帮助。随时在下边发表评论(或者提问),并分享本文到你的朋友圈。
参考链接
@ -234,7 +224,7 @@ via: http://www.tecmint.com/mount-filesystem-in-linux/
作者:[Gabriel Cánepa][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,283 @@
LFCS 系列第六讲组装分区为RAID设备——创建和管理系统备份
=========================================================
Linux 基金会已经发起了一个全新的 LFCSLinux Foundation Certified SysadminLinux 基金会认证系统管理员)认证,旨在让来自世界各地的人有机会参加到 LFCS 测试,获得关于有能力在 Linux 系统中执行中级系统管理任务的认证。该认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时向上游团队请求支持的决策能力。
![Linux Foundation Certified Sysadmin Part 6](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-6.png)
*LFCS 系列第六讲*
以下视频介绍了 Linux 基金会认证程序。
youtube 视频
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
本讲是系列教程中的第六讲,在这一讲里,我们将会解释如何将分区组装为 RAID 设备——创建和管理系统备份。这些都是 LFCS 认证中的必备知识。
### 了解RAID ###
这种被称为独立磁盘冗余阵列Redundant Array of Independent Disks(RAID)的技术是将多个硬盘组合成一个单独逻辑单元的存储解决方案,它提供了数据冗余功能并且改善硬盘的读写操作性能。
然而,实际的容错能力和磁盘 I/O 性能取决于如何将多个硬盘组装成磁盘阵列。根据可用的设备和容错/性能的需求RAID 被分为不同的级别,你可以参考 RAID 系列文章以获得每个 RAID 级别更详细的解释。
- [在 Linux 下使用 RAID介绍 RAID 的级别和概念][1]
我们选择用于创建、组装、管理、监视软件 RAID 的工具,叫做 mdadm (multiple disk admin 的简写)。
```
---------------- Debian 及衍生版 ----------------
# aptitude update && aptitude install mdadm
```
```
---------------- Red Hat 和基于 CentOS 的系统 ----------------
# yum update && yum install mdadm
```
```
---------------- openSUSE 上 ----------------
# zypper refresh && zypper install mdadm #
```
#### 将分区组装成 RAID 设备 ####
组装已有分区作为 RAID 设备的过程由以下步骤组成。
**1. 使用 mdadm 创建阵列**
如果先前其中一个分区已经格式化,或者作为了另一个 RAID 阵列的一部分,你会被提示以确认创建一个新的阵列。假设你已经采取了必要的预防措施以避免丢失重要数据,那么可以安全地输入 Y 并且按下回车。
```
# mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1
```
![Creating RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Creating-RAID-Array.png)
*创建 RAID 阵列*
**2. 检查阵列的创建状态**
在创建了 RAID 阵列之后,你可以使用以下命令检查阵列的状态。
# cat /proc/mdstat
or
# mdadm --detail /dev/md0 [More detailed summary]
![Check RAID Array Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Array-Status.png)
*检查 RAID 阵列的状态*
**3. 格式化 RAID 设备**
如本系列[第四讲][2]所介绍的,按照你的需求/要求采用某种文件系统格式化你的设备。
**4. 监控 RAID 阵列服务**
让监控服务时刻监视你的 RAID 阵列。把 `# mdadm --detail --scan` 命令的输出结果添加到 `/etc/mdadm/mdadm.conf`Debian 及其衍生版)或者 `/etc/mdadm.conf`CentOS/openSUSE如下。
```
# mdadm --detail --scan
```
![Monitor RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Monitor-RAID-Array.png)
*监控 RAID 阵列*
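如果不想手动复制粘贴,也可以把扫描结果直接追加到配置文件末尾(下面以 Debian 及其衍生版的路径为例,仅为操作示意):

```
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```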
随后,可以用下面的命令根据配置文件组装阵列:

```
# mdadm --assemble --scan    # 组装阵列
```
为了确保服务能够开机启动,需要以 root 权限运行以下命令。
**Debian 及其衍生版**
Debian 及其衍生版能够通过下面步骤使服务默认开机启动:
```
# update-rc.d mdadm defaults
```
在 `/etc/default/mdadm` 文件中添加下面这一行:

```
AUTOSTART=true
```
**CentOS 和 openSUSE基于 systemd**
```
# systemctl start mdmonitor
# systemctl enable mdmonitor
```
**CentOS 和 openSUSE基于 SysVinit**
```
# service mdmonitor start
# chkconfig mdmonitor on
```
**5. 检查 RAID 磁盘故障**
在支持冗余的 RAID 级别中,需要时可以替换故障的驱动器。当磁盘阵列中的设备出现故障时,仅当存在我们创建阵列时预留的备用设备时,磁盘阵列才会自动启动重建。
![Check RAID Faulty Disk](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Faulty-Disk.png)
*检查 RAID 故障磁盘*
否则,我们需要手动将一个额外的物理驱动器插入系统,并且运行:
```
# mdadm /dev/md0 --add /dev/sdX1
```
其中 /dev/md0 是出现了问题的阵列,而 /dev/sdX1 是新添加的设备。
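作为参考,下面给出一次手动处理故障磁盘的常见流程(设备名仅为示例):

```
# mdadm /dev/md0 --fail /dev/sdc1      # 将 /dev/sdc1 标记为故障(也可用来模拟故障)
# mdadm /dev/md0 --remove /dev/sdc1    # 将故障设备从阵列中移除
# mdadm /dev/md0 --add /dev/sdd1       # 将新的替换设备加入阵列
```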
**6. 拆解一个工作阵列**
(可选步骤)如果你需要使用正常工作阵列中的设备来创建一个新的阵列,可能需要先拆解已有的阵列:
```
# mdadm --stop /dev/md0                 # 停止阵列
# mdadm --remove /dev/md0               # 移除该 RAID 设备
# mdadm --zero-superblock /dev/sdX1     # 用零覆盖已有的 md 超级块
```
**7. 设置邮件通知**
(可选步骤)你可以配置一个用于接收通知的有效邮件地址或者系统账号(确保 mdadm.conf 文件中有下面这一行):
```
MAILADDR root
```
在这种情况下,来自 RAID 后台监控程序所有的通知将会发送到你的本地 root 账号的邮件箱中。其中一个类似的通知如下。
说明此次通知事件和第5步中的例子相关。此处一个设备被标志为错误并且一个空闲的设备自动地被 mdadm 加入到阵列。我们用完了所有“健康的”空闲设备,因此我们得到了通知。
![RAID Monitoring Alerts](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Monitoring-Alerts.png)
*RAID 监控通知*
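配置好邮件地址之后,可以让 mdadm 立即发出一封测试通知,验证邮件设置是否生效(一次性运行,属操作示意):

```
# mdadm --monitor --scan --test --oneshot    # 为每个阵列生成一条测试通知后即退出
```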
#### 了解 RAID 级别 ####
**RAID 0**
阵列总大小是最小分区大小的 n 倍n 是阵列中独立磁盘的个数(你至少需要两个驱动器/磁盘)。运行下面命令,使用 /dev/sdb1 和 /dev/sdc1 分区组装一个 RAID 0 阵列。
```
# mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1
```
常见用途:用于支持性能比容错更重要的实时应用程序的设置
**RAID 1 (又名镜像)**
阵列总大小等于最小分区大小(你至少需要两个驱动器/磁盘)。运行下面命令,使用 /dev/sdb1 和 /dev/sdc1 分区组装一个 RAID 1 阵列。
```
# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
```
常见用途:操作系统的安装或者重要的子文件夹,例如 /home
**RAID 5 (又名奇偶校验码盘)**
阵列总大小将是最小分区大小的 (n-1) 倍,所减少的部分用于奇偶校验(冗余)计算(你至少需要 3 个驱动器/磁盘)。例如3 块 1TB 磁盘组成的 RAID 5 阵列,其可用容量为 (3-1)×1TB = 2TB。
说明:你可以指定一个空闲设备 (/dev/sde1),用于在出现问题时替换故障的部件(分区)。运行下面命令,使用 /dev/sdb1、/dev/sdc1 和 /dev/sdd1 组装一个 RAID 5 阵列,其中 /dev/sde1 作为空闲分区。
```
# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1
```
常见用途Web 和文件服务
**RAID 6 (又名双重奇偶校验码盘)**
阵列总大小为 (n*s)-2*s其中 n 为阵列中独立磁盘的个数s 为最小磁盘大小(你至少需要 4 个驱动器/磁盘)。例如4 块 1TB 磁盘组成的 RAID 6 阵列,可用容量为 4×1TB-2×1TB = 2TB。
说明:你可以指定一个空闲分区(在这个例子中为 /dev/sdf1),用于在出现问题时替换故障的部件(分区)。
运行下面命令,使用 /dev/sdb1、/dev/sdc1、/dev/sdd1 和 /dev/sde1 组装 RAID 6 阵列,其中 /dev/sdf1 作为空闲分区。
```
# mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --spare-devices=1 /dev/sdf1
```
常见用途:大容量、高可用性要求的文件服务器和备份服务器。
**RAID 1+0 (又名镜像条带)**
因为 RAID 1+0 是 RAID 0 和 RAID 1 的组合,所以阵列总大小基于两者的公式计算:先计算每一组镜像的大小,再按条带求和。例如4 块 1TB 磁盘先两两组成两组 1TB 的镜像,再对其条带化,总容量为 2TB。
```
# mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 --spare-devices=1 /dev/sdf1
```
常见用途:需要快速 IO 操作的数据库和应用服务器
#### 创建和管理系统备份 ####
记住RAID 虽有种种优点,但并不能替代备份!如果需要的话,你可以在黑板上写上 1000 遍,但无论何时都一定要记住这个事实。在开始之前我们必须注意,系统备份并没有放之四海而皆准的解决方案,但在规划备份策略时,你需要考虑以下几点。
- 你的系统将用于什么?(桌面还是服务器?如果是服务器,那么最重要的服务是什么?哪些配置一旦丢失会带来最大的麻烦?)
- 你每隔多久备份你的系统?
- 你需要备份的数据是什么(比如文件/文件夹/数据库转储)?你还可以考虑是否需要备份大型文件(比如音频和视频文件)。
- 这些备份将会存储在哪里(物理位置和存储介质)?
**备份你的数据**
方法 1使用 dd 命令备份整个磁盘。你可以在任意时间点,通过创建一份精确的镜像来备份一整块硬盘或者一个分区。注意,这种方法在设备离线时效果最好,也就是说设备没有被挂载,并且没有任何进程对其进行 I/O 访问。
这种备份方法的缺点是,镜像文件与磁盘或分区的大小完全相同,即使实际数据只占用其中很小的比例。比如,如果你想为一个 20GB 但只使用了 10% 空间的分区创建镜像,镜像文件仍将是 20GB。换句话说备份的不仅是实际数据还包括整个分区本身。如果你需要对设备做完整备份可以考虑使用这种方法。
**从现有的设备创建一个镜像文件**
```
# dd if=/dev/sda of=/system_images/sda.img
```
或者
```
--------------------- 可选地,你可以压缩镜像文件 -------------------
# dd if=/dev/sda | gzip -c > /system_images/sda.img.gz
```
**从镜像文件恢复备份**
```
# dd if=/system_images/sda.img of=/dev/sda
```
或者
```
--------------------- 根据你创建镜像文件时的选择(译者注:比如压缩) ----------------
# gzip -dc /system_images/sda.img.gz | dd of=/dev/sda
```
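如果要镜像的是大容量设备,可以加大块大小并显示进度status=progress 需要 GNU coreutils 8.24 及以上版本的 dd此处仅为操作示意

```
# dd if=/dev/sda of=/system_images/sda.img bs=4M status=progress
```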
方法 2使用 tar 命令备份指定的文件/文件夹,这已经在本系列[第三讲][3]中介绍过。如果你想要备份指定的文件/文件夹(配置文件、用户主目录等等),可以使用这种方法。
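下面是一个用 tar 备份 /etc 目录的简单示意(备份文件的路径与命名均为假设):

```
# tar -czvpf /backups/etc-$(date +%Y%m%d).tar.gz /etc    # 创建 gzip 压缩包并保留文件权限
```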
方法 3使用 rsync 命令同步文件。rsync 是一个多功能的远程和本地文件复制工具。如果你想要通过网络备份或同步文件rsync 是一个不错的选择。
无论你是在同步两个本地文件夹,还是在本地文件夹与挂载到本地文件系统的远程文件夹之间同步,其基本语法都是一样的。
```
# rsync -av source_directory destination_directory
```
在这里,-a 选项会递归遍历子目录(如果存在的话),并保留符号链接、时间戳、权限以及原有的属主/属组;-v 选项则显示详细过程。
![rsync Synchronizing Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronizing-Files.png)
*rsync 同步文件*
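另外,在正式同步之前,可以先加上 -n--dry-run选项预览 rsync 将要执行的操作,确认无误后再实际运行(属操作示意):

```
# rsync -avn source_directory destination_directory    # 只列出将要同步的内容,并不真正复制
```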
除此之外,如果你想增加在网络上传输数据的安全性,你可以通过 ssh 协议使用 rsync。
**通过 ssh 同步本地到远程文件夹**
```
# rsync -avzhe ssh backups root@remote_host:/remote_directory/
```
在这个示例中,本地主机上的 backups 文件夹将与远程主机上的 /root/remote_directory 的内容同步。
在这里,-h 选项表示以易读的格式显示文件大小,-e 选项用于指定远程 shell此处为 ssh-z 则会在传输时压缩数据。
![rsync Synchronize Remote Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronize-Remote-Files.png)
*rsync 同步远程文件*
**通过 ssh 同步远程文件夹到本地**
在这种情况下,将前面示例中的源文件夹和目标文件夹互换即可。
```
# rsync -avzhe ssh root@remote_host:/remote_directory/ backups
```
请注意,这些只是 rsync 用法的三个示例而已(也是你可能遇到的最常见的情形)。对于更多有关 rsync 命令的示例和用法,你可以查看下面的文章。
- [在 Linux 下同步文件的 10 个 rsync 命令][4]
### 总结 ###
作为一个系统管理员,你需要确保系统尽可能运行良好。如果你做好了充分的准备,并借助 RAID 等存储技术和日常系统备份来保障数据完整性,那么你就可以高枕无忧了。
如果你有关于完善这篇文章的问题、评论或者进一步的想法,欢迎在下面畅所欲言。除此之外,也请考虑通过你的社交网络分享这一系列文章。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/creating-and-managing-raid-backups-in-linux/
作者:[Gabriel Cánepa][a]
译者:[cpsoture](https://github.com/cposture)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:https://linux.cn/article-6085-1.html
[2]:https://linux.cn/article-7187-1.html
[3]:https://linux.cn/article-7171-1.html
[4]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/

View File

@ -1,53 +0,0 @@
# Recognizing correct code
Automatic bug-repair system fixes 10 times as many errors as its predecessors.
------
DongShuaike is translating.
MIT researchers have developed a machine-learning system that can comb through repairs to open-source computer programs and learn their general properties, in order to produce new repairs for a different set of programs.
The researchers tested their system on a set of programming errors, culled from real open-source applications, that had been compiled to evaluate automatic bug-repair systems. Where those earlier systems were able to repair one or two of the bugs, the MIT system repaired between 15 and 18, depending on whether it settled on the first solution it found or was allowed to run longer.
While an automatic bug-repair tool would be useful in its own right, professor of electrical engineering and computer science Martin Rinard, whose group developed the new system, believes that the work could have broader ramifications.
“One of the most intriguing aspects of this research is that weve found that there are indeed universal properties of correct code that you can learn from one set of applications and apply to another set of applications,” Rinard says. “If you can recognize correct code, that has enormous implications across all software engineering. This is just the first application of what we hope will be a brand-new, fabulous technique.”
Fan Long, a graduate student in electrical engineering and computer science at MIT, presented a paper describing the new system at the Symposium on Principles of Programming Languages last week. He and Rinard, his advisor, are co-authors.
Users of open-source programs catalogue bugs they encounter on project websites, and contributors to the projects post code corrections, or “patches,” to the same sites. So Long was able to write a computer script that automatically extracted both the uncorrected code and patches for 777 errors in eight common open-source applications stored in the online repository GitHub.
**Feature performance**
As with [all][1] machine-learning systems, the crucial aspect of Long and Rinards design was the selection of a “[feature set][2]” that the system would analyze. The researchers concentrated on values stored in memory — either variables, which can be modified during a programs execution, or constants, which cant. They identified 30 prime characteristics of a given value: It might be involved in an operation, such as addition or multiplication, or a comparison, such as greater than or equal to; it might be local, meaning it occurs only within a single block of code, or global, meaning that its accessible to the program as a whole; it might be the variable that represents the final result of a calculation; and so on.
Long and Rinard wrote a computer program that evaluated all the possible relationships between these characteristics in successive lines of code. More than 3,500 such relationships constitute their feature set. Their machine-learning algorithm then tried to determine what combination of features most consistently predicted the success of a patch.
“All the features were trying to look at are relationships between the patch you insert and the code you are trying to patch,” Long says. “Typically, there will be good connections in the correct patches, corresponding to useful or productive program logic. And there will be bad patterns that mean disconnections in program logic or redundant program logic that are less likely to be successful.”
**Ranking candidates**
In earlier work, Long had developed an algorithm that attempts to repair program bugs by systematically modifying program code. The modified code is then subjected to a suite of tests designed to elicit the buggy behavior. This approach may find a modification that passes the tests, but it could take a prohibitively long time. Moreover, the modified code may still contain errors that the tests dont trigger.
Long and Rinards machine-learning system works in conjunction with this earlier algorithm, ranking proposed modifications according to the probability that they are correct before subjecting them to time-consuming tests.
The researchers tested their system, which they call Prophet, on a set of 69 program errors that had cropped up in eight popular open-source programs. Of those, 19 are amenable to the type of modifications that Longs algorithm uses; the other 50 have more complicated problems that involve logical inconsistencies across larger swaths of code.
When Long and Rinard configured their system to settle for the first solution that passed the bug-eliciting tests, it was able to correctly repair 15 of the 19 errors; when they allowed it to run for 12 hours per problem, it repaired 18.
Of course, that still leaves the other 50 errors in the test set untouched. In ongoing work, Long is working on a machine-learning system that will look at more coarse-grained manipulation of program values across larger stretches of code, in the hope of producing a bug-repair system that can handle more complex errors.
“A revolutionary aspect of Prophet is how it leverages past successful patches to learn new ones,” says Eran Yahav, an associate professor of computer science at the Technion in Israel. “It relies on the insight that despite differences between software projects, fixes — patches — applied to projects often have commonalities that can be learned from. Using machine learning to learn from big code holds the promise to revolutionize many programming tasks — code completion, reverse-engineering, et cetera.”
--------------------------------------------------------------------------------
via: http://news.mit.edu/2016/faster-automatic-bug-repair-code-errors-0129
作者Larry Hardesty
译者:[译者ID](https://github.com/翻译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://news.mit.edu/2013/teaching-computers-to-see-by-learning-to-see-like-computers-0919
[2]:http://news.mit.edu/2015/automating-big-data-analysis-1016

View File

@ -1,72 +0,0 @@
Zephyr Project for Internet of Things, releases from Facebook, IBM, Yahoo, and more news
===========================================================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/weekly_news_roundup_tv.png?itok=eqUoW1gU)
In this week's edition of our open source news roundup, we take a look at the new IoT project from the Linux Foundation, three big corporations releasing open source, and more.
**News roundup for February 21 - 26, 2016**
### Linux Foundation unveils the Zephyr Project
The Internet of Things (IoT) is shaping up to be the next big thing in consumer technology. At the moment, most IoT solutions are proprietary and closed source. Open source is making numerous in-roads into the IoT world, and that's undoubtedly going to accelerate now that the Linux Foundation has [announced the Zephyr Project][1].
The Zephyr Project, according to ZDNet, "hopes to bring vendors and developers together under a single operating system which could make the development of connected devices an easier, less expensive and more stable process." The Project "aims to incorporate input from the open source and embedded developer communities and to encourage collaboration on the RTOS (real-time operating system)," according to the [Linux Foundation's press release][2].
Currently, Intel Corporation, NXP Semiconductors N.V., Synopsys, Inc., and UbiquiOS Technology Limited are the main supporters of the project. The Linux Foundation intends to attract other IoT vendors to this effort as well.
### Releases from Facebook, IBM, Yahoo
As we all know, open source isn't just about individuals or small groups hacking on code and hardware. Quite a few large corporations have significant investments in open source. This past week, three of them affirmed their commitment to open source.
Yahoo again waded into open source waters this week with the [release of CaffeOnSpark][3] artificial intelligence software under an Apache 2.0 license. CaffeOnSpark performs "a popular type of AI called 'deep learning' on the vast swaths of data kept in its Hadoop open-source file system for storing big data," according to VentureBeat. If you're curious, you can [find the source code on GitHub][4].
Earlier this week, Facebook "[unveiled a new project that seeks not only to accelerate the evolution of technologies that drive our mobile networks, but to freely share this work with the worlds telecoms][5]," according to Wired. The company plans to build "everything from new wireless radios to new optical fiber equipment." The designs, according to Facebook, will be open source so any telecom firm can use them.
As part of the [Open Mainframe Project][6], IBM has open sourced the code for its Anomaly Detection Engine (ADE) for Linux logs. [According to IBM][7], "ADE detects anomalous time slices and messages in Linux logs using statistical learning" to detect suspicious behaviour. You can grab the [source code for ADE][8] from GitHub.
### European Union to fund research
The European Research Council, the European Union's science and technology funding body, is [funding four open source research projects][9] to the tune of about €2 million. According to joinup.ec.europa.eu, the projects being funded are:
- A code audit of Mozilla's open source Rust programming language
- An initiative at INRIA (France's national computer science research center) studying secure programming
- A project at Austria's Technische Universitat Graz testing "ways to secure code against attacks that exploit certain properties of the computer hardware"
- The "development of techniques to prove popular cryptographic protocols and schemes" at IST Austria
### In other news
- [Infosys' newest weapon: open source][10]
- [Intel demonstrates Android smartphone running a Linux desktop][11]
- [BeeGFS file system goes open source][12]
A big thanks, as always, to the Opensource.com moderators and staff for their help this week.
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/2/weekly-news-feb-26
作者:[Scott Nesbitt][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[1]: http://www.zdnet.com/article/the-linux-foundations-zephyr-project-building-an-operating-system-for-iot-devices/
[2]: http://www.linuxfoundation.org/news-media/announcements/2016/02/linux-foundation-announces-project-build-real-time-operating-system
[3]: http://venturebeat.com/2016/02/24/yahoo-open-sources-caffeonspark-deep-learning-framework-for-hadoop/
[4]: https://github.com/yahoo/CaffeOnSpark
[5]: http://www.wired.com/2016/02/facebook-open-source-wireless-gear-forge-5g-world/
[6]: https://www.openmainframeproject.org/
[7]: http://openmainframeproject.github.io/ade/
[8]: https://github.com/openmainframeproject/ade
[9]: https://joinup.ec.europa.eu/node/149541
[10]: http://www.businessinsider.in/Exclusive-Infosys-is-using-Open-Source-as-its-mostlethal-weapon-yet/articleshow/51109129.cms
[11]: http://www.theregister.co.uk/2016/02/23/move_over_continuum_intel_shows_android_smartphone_powering_bigscreen_linux/
[12]: http://insidehpc.com/2016/02/beegfs-parallel-file-system-now-open-source/

View File

@ -1,42 +0,0 @@
Node.js 5.7 released ahead of impending OpenSSL updates
=============================================================
![](http://images.techhive.com/images/article/2014/09/nodejs-100449932-primary.idge.jpg)
>Once again, OpenSSL fixes must be evaluated by keepers of the popular server-side JavaScript platform
The Node.js Foundation is gearing up this week for fixes to OpenSSL that could mean updates to Node.js itself.
Releases to OpenSSL due on Tuesday will fix defects deemed to be of "high" severity, Rod Vagg, foundation technical steering committee director, said [in a blog post][1] on Monday. Within a day of the OpenSSL releases, the Node.js crypto team will assess their impacts, saying, "Please be prepared for the possibility of important updates to Node.js v0.10, v0.12, v4 and v5 soon after Tuesday, the 1st of March."
[ Deep Dive: [How to rethink security for the new world of IT][2]. | Discover how to secure your systems with InfoWorld's [Security newsletter][3]. ]
The high severity status actually means the issues are of lower risks than critical, perhaps affecting less-common configurations or less likely to be exploitable. Due to an embargo, the exact nature of these fixes and their impact on Node.js remain uncertain, said Vagg. "Node.js v0.10 and v0.12 both use OpenSSL v1.0.1, and Node.js v4 and v5 both use OpenSSL v1.0.2, and releases from nodejs.org and some other popular distribution sources are statically compiled. Therefore, all active release lines are impacted by this update." OpenSSL also impacted Node.js in December, [when two critical vulnerabilities were fixed][4].
The latest OpenSSL developments follow [the release of Node.js 5.7.0][5], which is clearing a path for the upcoming Node.js 6. Version 5 is the main focus for active development, said foundation representative Mikeal Rogers, "However, v5 won't be supported long-term, and most users will want to wait for v6, which will be released by the end of April, for the new features that are landing in v5."
Release 5.7 has more predictability for C++ add-ons' interactions with JavaScript. Node.js can invoke JavaScript code from C++ code, and in version 5.7, the C++ node::MakeCallback() API is now re-entrant; calling it from inside another MakeCallback() call no longer causes the nextTick queue or Promises microtask queue to be processed out of order, [according to release notes][6].
Also fixed is an HTTP bug where handling headers mistakenly trigger an "upgrade" event where the server just advertises protocols. The bug can prevent HTTP clients from communicating with HTTP2-enabled servers. Version 5.7 performance improvements are featured in the path, querystring, streams, and process.nextTick modules.
This story, "Node.js 5.7 released ahead of impending OpenSSL updates" was originally published by [InfoWorld][7].
--------------------------------------------------------------------------------
via: http://www.itworld.com/article/3039005/security/nodejs-57-released-ahead-of-impending-openssl-updates.html
作者:[Paul Krill][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.itworld.com/author/Paul-Krill/
[1]: https://nodejs.org/en/blog/vulnerability/openssl-march-2016/
[2]: http://www.infoworld.com/resources/17273/security-management/how-to-rethink-security-for-the-new-world-of-it#tk.ifw-infsb
[3]: http://www.infoworld.com/newsletters/signup.html#tk.ifw-infsb
[4]: http://www.infoworld.com/article/3012157/security/why-nodejs-waited-for-openssl-security-update-before-patching.html
[5]: https://nodejs.org/en/blog/release/v5.7.0/
[6]: https://nodejs.org/en/blog/release/v5.7.0/
[7]: http://www.infoworld.com/

View File

@ -1,40 +0,0 @@
Robolinux 8.4 LTS "Raptor" Series Announced, Based on Debian GNU/Linux 8 Jessie
====================================================================================
keyword : Robolinux 8.4 LTS , Robolinux 8.4 Cinnamon , Robolinux 8.4 MATE , Robolinux 8.4 Xfce , Debian 8
> It runs Windows 7 and 10 virus-free in stealth VMs
### The developer of the Robolinux project has announced the release of his latest Robolinux 8.4 LTS "Raptor" series of Debian-based operating systems, which includes numerous software updates and performance improvements.
Usually, the Robolinux developer [announces][1] only one edition at a time for a new major release of the GNU/Linux distribution, but today's announcement includes details about the availability for download of the Robolinux 8.4 LTS Cinnamon, MATE, Xfce, and LXDE editions, as both 64-bit and 32-bit variants.
The long-term supported Robolinux 8.4 series of distributions has been in development for the last three and a half months, during which it has been synchronized with the upstream Debian GNU/Linux 8 (Jessie) repositories, thus adding all the latest security patches and software updates.
"Three and a half months of hard work went into finding every way possible to optimize and speed up our series 8 Robolinux 'Raptor' operating systems," say the devs. "The result is we have significantly decreased the time it takes to load applications, bootup and shutdown all four of our upgraded Robolinux Raptor series versions.
### The Raptor series is supported until 2020
Powered by Debian GNU/Linux 8's Linux 3.16 kernel, all the Robolinux 8.4 LTS "Raptor" editions have been rebased on the current stable Debian 8.3 source code, including over 180 upstream security and application updates. As Google ended support for the 32-bit version of its Google Chrome web browser, Robolinux now switches to Chromium.
Other important software updates include the Mozilla Firefox 45.0 web browser, Mozilla Thunderbird 38.7.0 email and news client, Tor Browser 5.5, and VirtualBox 5.0. As usual, all Robolinux flavors come with numerous popular apps, including but not limited to Google Earth, Skype, Tor, I2P, Kazam, and a collection of useful security and privacy apps.
Robolinux is a distribution targeted at new Linux users, so as expected, it includes the stealth virtual machine technology that lets them run the Microsoft Windows XP, Windows 7, and Windows 10 operating systems virus-free. Best of all, the Robolinux 8 "Raptor" LTS series is supported with software updates and security patches until the year 2020.
While newcomers can [download the Robolinux 8.4 LTS Cinnamon, MATE][2], [Xfce][3], and [LXDE][4] editions right now from our website, current Robolinux 8 users can upgrade to the 8.4 release using the built-in "Robolinux Auto Upgrade" button in the Applications Menu.
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/robolinux-8-4-lts-raptor-series-announced-based-on-debian-gnu-linux-8-jessie-501899.shtml
作者:[Marius Nestor][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://news.softpedia.com/editors/browse/marius-nestor
[1]: https://robolinux.org/downloads/v8.4-details.html
[2]: http://linux.softpedia.com/get/System/Operating-Systems/Linux-Distributions/Robolinux-102332.shtml
[3]: http://linux.softpedia.com/get/Linux-Distributions/Robolinux-Xfce-103540.shtml
[4]: http://linux.softpedia.com/get/Linux-Distributions/Robolinux-LXDE-103691.shtml

View File

@ -0,0 +1,32 @@
ownCloud 9.0 Enterprise Edition Arrives with Extensive File Control Capabilities
==================================================================================
>ownCloud, Inc. has had the great pleasure of [announcing][1] the availability of the Enterprise Edition (EE) of its powerful ownCloud 9.0 self-hosting cloud server solution.
Engineered exclusively for small- and medium-sized business, as well as major organizations and enterprises, [ownCloud 9.0 Enterprise Edition][2] is now available with extensive file control capabilities and all the cool new features that made the open-source version of the project famous amongst Linux users.
Prominent new features in ownCloud 9.0 Enterprise Edition are built-in Auto-Tagging and File Firewall apps, which have been based on some of the new features of ownCloud 9.0, such as file tags, file comments, as well as notifications and activities enhancements. This offers system administrators the ability to set rules and classifications for shared documents based on user- or system-applied tags.
"To illustrate how this can work, imagine working in a publicly traded company which has to be very careful not to release financial information ahead of official disclosure," reads the [announcement][3]. "While sharing this information internally it could end up in a folder which is shared through a public link. By assigning a special system tag, admins can configure the system to ensure the files are not available for download despite this mistake."
### Used by over 8 million people around the globe
ownCloud 9.0 is the best release of the open-source self-hosting cloud server software so far, which is currently used by over 8 million users around the globe. The release has brought a huge number of new features, such as code signing, as well as dozens of under-the-hood improvements and cosmetic changes. ownCloud 9.0 also got its first point release, version 9.0.1, last week, which [introduced even more enhancements][4].
And now, enterprises can take advantage of ownCloud 9.0's new features to differentiate between storage type and location, as well as the location of users, groups, and clients. ownCloud 9.0 Enterprise Edition gives them extensive access control, which is perfect if they have strict company guidelines or work with all sorts of regulations and rules. Below, you can see the File Firewall and Auto-Tagging apps in action.
------------------------------------------------------------------------------
via: http://news.softpedia.com/news/owncloud-9-0-enterprise-edition-arrives-with-extensive-file-control-capabilities-502985.shtml
作者:[Marius Nestor][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://news.softpedia.com/editors/browse/marius-nestor
[1]: https://owncloud.com/blog-introducing-owncloud-9-enterprise-edition/
[2]: https://owncloud.com/
[3]: https://owncloud.org/blog/owncloud-9-0-enterprise-edition-is-now-available/
[4]: http://news.softpedia.com/news/owncloud-9-0-gets-its-first-point-release-over-120-improvements-introduced-502698.shtml

View File

@ -1,194 +0,0 @@
5 best open source board games to play online
================================================================================
I have always had a fascination with board games, in part because they are a device of social interaction, they challenge the mind and, most importantly, they are great fun to play. In my misspent youth, myself and a group of friends gathered together to escape the horrors of the classroom, and indulge in a little escapism. The time provided an outlet for tension and rivalry. Board games help teach diplomacy, how to make and break alliances, bring families and friends together, and learn valuable lessons.
I had a panache for abstract strategy games such as chess and draughts, as well as word games. I can still never resist a game of Escape from Colditz, a strategy card and dice-based board game, or Risk; two timeless multi-player strategy board games. But Catan remains my favourite board game.
Board games have seen a resurgence in recent years, and Linux has a good range of board games to choose from. There is a credible implementation of Catan called Pioneers. But for my favourite implementations of classic board games to play online, check out the recommendations below.
----------
### TripleA ###
![TripleA in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-TripleA.png)
TripleA is an open source online turn based strategy game. It allows people to implement and play various strategy board games (ie. Axis & Allies). The TripleA engine has full networking support for online play, support for sounds, XML support for game files, and has its own imaging subsystem that allows for customized user editable maps to be used. TripleA is versatile, scalable and robust.
TripleA started out as a World War II simulation, but now includes different conflicts, as well as variations and mods of popular games and maps. TripleA comes with multiple games and over 100 more games can be downloaded from the user community.
Features include:
- Good interface and attractive graphics
- Optional scenarios
- Multiplayer games
- TripleA comes with the following supported games that use its game engine (just to name a few):
- Axis & Allies : Classic edition (2nd, 3rd with options enabled)
- Axis & Allies : Revised Edition
- Pact of Steel A&A Variant
- Big World 1942 A&A Variant
- Four if by Sea
- Battle Ship Row
- Capture The Flag
- Minimap
- Hot-seat
- Play By EMail mode allows persons to play a game via EMail without having to be connected to each other online
- More time to think out moves
- Only need to come online to send your turn to the next player
- Dice rolls are done by a dedicated dice server that is independent of TripleA
- All dice rolls are PGP Verified and email to every player
- Every move and every dice roll is logged and saved in TripleA's History Window
- An online game can be later continued under PBEM mode
- Hard for others to cheat
- Hosted online lobby
- Utilities for editing maps
- Website: [triplea.sourceforge.net][1]
- Developer: Sean Bridges (original developer), Mark Christopher Duncan
- License: GNU GPL v2
- Version Number: 1.8.0.7
----------
### Domination ###
![Domination in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-Domination.png)
Domination is an open source game that shares common themes with the hugely popular Risk board game. It has many game options and includes many maps.
In the classic “World Domination” game of military strategy, you are battling to conquer the world. To win, you must launch daring attacks, defend yourself to all fronts, and sweep across vast continents with boldness and cunning. But remember, the dangers, as well as the rewards, are high. Just when the world is within your grasp, your opponent might strike and take it all away!
Features include:
- Simple to learn
- Domination - you must occupy all countries on the map, and thereby eliminate all opponents. These can be long, drawn out games
- Capital - each player has a country they have selected as a Capital. To win the game, you must occupy all Capitals
- Mission - each player draws a random mission. The first to complete their mission wins. Missions may include the elimination of a certain colour, occupation of a particular continent, or a mix of both
- Map editor
- Simple map format
- Multiplayer network play
- Single player
- Hotseat
- 5 user interfaces
- Game types:
- Play online
- Website: [domination.sourceforge.net][2]
- Developer: Yura Mamyrin, Christian Weiske, Mike Chaten, and many others
- License: GNU GPL v3
- Version Number: 1.1.1.5
----------
### PyChess ###
![Micro-Max in action](http://www.linuxlinks.com/portal/content/reviews/Games/Screenshot-Pychess.jpg)
PyChess is a Gnome inspired chess client written in Python.
The goal of PyChess, is to provide a fully featured, nice looking, easy to use chess client for the gnome-desktop.
The client should be usable both by those totally new to chess, those who want to play an occasional game, and those who want to use the computer to further enhance their play.
Features include:
- Attractive interface
- Chess Engine Communication Protocol (CECP) and Univeral Chess Interface (UCI) Engine support
- Free online play on the Free Internet Chess Server (FICS)
- Read and writes PGN, EPD and FEN chess file formats
- Built-in Python based engine
- Undo and pause functions
- Board and piece animation
- Drag and drop
- Tabbed interface
- Hints and spyarrows
- Opening book sidepanel using sqlite
- Score plot sidepanel
- "Enter game" in pgn dialog
- Optional sounds
- Legal move highlighting
- Internationalised or figure pieces in notation
- Website: [www.pychess.org][3]
- Developer: Thomas Dybdahl Ahle
- License: GNU GPL v2
- Version Number: 0.12 Anderssen rc4
----------
### Scrabble ###
![Scrabble in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-Scrabble3D.png)
Scrabble3D is a highly customizable Scrabble game that not only supports Classic Scrabble and Superscrabble but also 3D games and own boards. You can play local against the computer or connect to a game server to find other players.
Scrabble is a board game with the goal to place letters crossword like. Up to four players take part and get a limited amount of letters (usually 7 or 8). Consecutively, each player tries to compose his letters to one or more word combining with the placed words on the game array. The value of the move depends on the letters (rare letter get more points) and bonus fields which multiply the value of a letter or the whole word. The player with most points win.
This idea is extended with Scrabble3D to the third dimension. Of course, a classic game with 15x15 fields or Superscrabble with 21x21 fields can be played and you may configure any field setting by yourself. The game can be played by the provided freeware program against Computer, other local players or via internet. Last but not least it's possible to connect to a game server to find other players and to obtain a rating. Most options are configurable, including the number and valuation of letters, the used dictionary, the language of dialogs and certainly colors, fonts etc.
Features include:
- Configurable board, letterset and design
- Board in OpenGL graphics with user-definable wavefront model
- Game against computer with support of multithreading
- Post-hoc game analysis with calculation of best move by computer
- Match with other players connected on a game server
- NSA rating and highscore at game server
- Time limit of games
- Localization; use of non-standard digraphs like CH, RR, LL and right to left reading
- Multilanguage help / wiki
- Network games are buffered and asynchronous games are possible
- Running games can be kibitzed
- International rules including italian "Cambio Secco"
- Challenge mode, What-if-variant, CLABBERS, etc
- Website: [sourceforge.net/projects/scrabble][4]
- Developer: Heiko Tietze
- License: GNU GPL v3
- Version Number: 3.1.3
----------
### Backgammon ###
![Backgammon in action](http://www.linuxlinks.com/portal/content/reviews/Games/Screenshot-gnubg.png)
GNU Backgammon (gnubg) is a strong backgammon program (world-class with a bearoff database installed) usable either as an engine by other programs or as a standalone backgammon game. It is able to play and analyze both money games and tournament matches, evaluate and roll out positions, and more.
In addition to supporting simple play, it also has extensive analysis features, a tutor mode, adjustable difficulty, and support for exporting annotated games.
It currently plays at about the level of a championship flight tournament player and is gradually improving.
gnubg can be played on numerous on-line backgammon servers, such as the First Internet Backgammon Server (FIBS).
Features include:
- A command line interface (with full command editing features if GNU readline is available) that lets you play matches and sessions against GNU Backgammon with a rough ASCII representation of the board on text terminals
- Support for a GTK+ interface with a graphical board window. Both 2D and 3D graphics are available
- Tournament match and money session cube handling and cubeful play
- Support for both 1-sided and 2-sided bearoff databases: 1-sided bearoff database for 15 checkers on the first 6 points and optional 2-sided database kept in memory. Optional larger 1-sided and 2-sided databases stored on disk
- Automated rollouts of positions, with lookahead and race variance reduction where appropriate. Rollouts may be extended
- Functions to generate legal moves and evaluate positions at varying search depths
- Neural net functions for giving cubeless evaluations of all other contact and race positions
- Automatic and manual annotation (analysis and commentary) of games and matches
- Record keeping of statistics of players in games and matches (both native inside GNU Backgammon and externally using relational databases and Python)
- Loading and saving analyzed games and matches as .sgf files (Smart Game Format)
- Exporting positions, games and matches to: (.eps) Encapsulated Postscript, (.gam) Jellyfish Game, (.html) HTML, (.mat) Jellyfish Match, (.pdf) PDF, (.png) Portable Network Graphics, (.pos) Jellyfish Position, (.ps) PostScript, (.sgf) Gnu Backgammon File, (.tex) LaTeX, (.txt) Plain Text, (.txt) Snowie Text
- Import of matches and positions from a number of file formats: (.bkg) Hans Berliner's BKG Format, (.gam) GammonEmpire Game, (.gam) PartyGammon Game, (.mat) Jellyfish Match, (.pos) Jellyfish Position, (.sgf) Gnu Backgammon File, (.sgg) GamesGrid Save Game, (.tmg) TrueMoneyGames, (.txt) Snowie Text
- Python Scripting
- Native language support; 10 languages complete or in progress
- Website: [www.gnubg.org][5]
- Developer: Joseph Heled, Oystein Johansen, Jonathan Kinsey, David Montgomery, Jim Segrave, Joern Thyssen, Gary Wong and contributors
- License: GPL v2
- Version Number: 1.05.000
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20150830011533893/BoardGames.html
作者Frazer Kline
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://triplea.sourceforge.net/
[2]:http://domination.sourceforge.net/
[3]:http://www.pychess.org/
[4]:http://sourceforge.net/projects/scrabble/
[5]:http://www.gnubg.org/

View File

@ -1,336 +0,0 @@
Bossie Awards 2015: The best open source application development tools
================================================================================
InfoWorld's top picks among platforms, frameworks, databases, and all the other tools that programmers use
![](http://images.techhive.com/images/article/2015/09/bossies-2015-app-dev-100613767-orig.jpg)
### The best open source development tools ###
There must be a better way, right? The developers are the ones who find it. This year's winning projects in the application development category include client-side frameworks, server-side frameworks, mobile frameworks, databases, languages, libraries, editors, and yeah, Docker. These are our top picks among all of the tools that make it faster and easier to build better applications.
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-docker-100613773-orig.jpg)
### Docker ###
The darling of container fans almost everywhere, [Docker][2] provides a low-overhead way to isolate an application or services environment, which serves its stated goal of being an open platform for building, shipping, and running distributed applications. Docker has been widely supported, even among those seeking to replace the Docker container format with an alternative, more secure runtime and format, specifically Rkt and AppC. Heck, Microsoft Visual Studio now supports deploying into a Docker container too.
Dockers biggest impact has been on virtual machine environments. Since Docker containers run inside the operating system, many more Docker containers than virtual machines can run in a given amount of RAM. This is important because RAM is usually the scarcest and most expensive resource in a virtualized environment.
There are hundreds of thousands of runnable public images on Docker Hub, of which a few hundred are official, and the rest are from the community. You describe Docker images with a Dockerfile and build images locally from the Docker command line. You can add both public and private image repositories to Docker Hub.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-nodejs-iojs-100613778-orig.jpg)
### Node.js and io.js ###
[Node.js][2] -- and its recently reunited fork [io.js][3] -- is a platform built on [Google Chrome's V8 JavaScript runtime][4] for building fast, scalable, network applications. Node uses an event-driven, nonblocking I/O model without threads. In general, Node tends to take less memory and CPU resources than other runtime engines, such as Java and the .Net Framework. For example, a typical Node.js Web server can run well in a 512MB instance on Cloud Foundry or a 512MB Docker container.
The Node repository on GitHub has more than 35,000 stars and more than 8,000 forks. The project, sponsored primarily by Joyent, has more than 600 contributors. Some of the more famous Node applications are 37Signals, [Ancestry.com][5], Chomp, the Wall Street Journal online, FeedHenry, [GE.com][6], Mockingbird, [Pearson.com][7], Shutterstock, and Uber. The popular IoT back-end Node-RED is built on Node, as are many client apps, such as Brackets and Nuclide.
-- Martin Heller
![](rticle/2015/09/bossies-2015-angularjs-100613766-orig.jpg)
### AngularJS ###
[AngularJS][8] (or simply Angular, among friends) is a Model-View-Whatever (MVW) JavaScript AJAX framework that extends HTML with markup for dynamic views and data binding. Angular is especially good for developing single-page Web applications and linking HTML forms to models and JavaScript controllers.
The weird sounding Model-View-Whatever pattern is an attempt to include the Model-View-Controller, Model-View-ViewModel, and Model-View-Presenter patterns under one moniker. The differences among these three closely related patterns are the sorts of topics that programmers love to argue about fiercely; the Angular developers decided to opt out of the discussion.
Basically, Angular automatically synchronizes data from your UI (view) with your JavaScript objects (model) through two-way data binding. To help you structure your application better and make it easy to test, AngularJS teaches the browser how to do dependency injection and inversion of control.
Angular was created by Google and open-sourced under the MIT license; there are currently more than 1,200 contributors to the project on GitHub, and the repository has more than 40,000 stars and 18,000 forks. The Angular site lists [210 “neat things” built with Angular][9].
-- Martin Heller
![](http://images.techhive.com/images/article/2015/09/bossies-2015-react-100613782-orig.jpg)
### React ###
[React][10] is a JavaScript library for building a UI or view, typically for single-page applications. Note that React does not implement anything having to do with a model or controller. React pages can render on the server or the client; rendering on the server (with Node.js) is typically much faster. People often combine React with AngularJS to create complete applications.
React combines JavaScript and HTML in a single file, optionally a JSX component. React fans like the way JSX components combine views and their related functionality in one file, though that flies in the face of the last decade of Web development trends, which were all about separating the markup and the code. React fans also claim that you cant understand it until youve tried it. Perhaps you should; the React repository on GitHub has 26,000 stars.
[React Native][11] implements React with native iOS controls; the React Native command line uses Node and Xcode. [ReactJS.Net][12] integrates React with [ASP.Net][13] and C#. React is available under a BSD license with a patent license grant from Facebook.
-- Martin Heller
![](http://images.techhive.com/images/article/2015/09/bossies-2015-atom-100613768-orig.jpg)
### Atom ###
[Atom][14] is an open source, hackable desktop editor from GitHub, based on Web technologies. Its a full-featured tool with a fuzzy finder; fast projectwide search and replace; multiple cursors and selections; multiple panes, snippets, code folding; and the ability to import TextMate grammars and themes. Out of the box, Atom displayed proper syntax highlighting for every programming language on which I tried it, except for F# and C#; I fixed that easily by loading those packages from within Atom. Not surprising, Atom has tight integration with GitHub.
The skeleton of Atom has been separated from the guts and called the Electron shell, providing an open source way to build cross-platform desktop apps with Web technologies. Visual Studio Code is built on the Electron shell, as are a number of proprietary and open source apps, including Slack and Kitematic. Facebook Nuclide adds significant functionality to Atom, including remote development and support for Flow, Hack, and Mercurial.
On the downside, updating Atom packages can become painful, especially if you have many of them installed. The Nuclide packages seem to be the worst offenders -- they not only take a long time to update, they run CPU-intensive Node processes to do so.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-brackets-100613769-orig.jpg)
### Brackets ###
[Brackets][15] is a lightweight editor for Web design that Adobe developed and open-sourced, drawing heavily on other open source projects. The idea is to build better tooling for JavaScript, HTML, CSS, and related open Web technologies. Brackets itself is written in JavaScript, HTML, and CSS, and the developers use Brackets to build Brackets. The editor portion is based on another open source project, CodeMirror, and the Brackets native shell is based on Googles Chromium Embedded Framework.
Brackets features a clean UI, with the ability to open a quick inline editor that displays all of the related CSS for some HTML, or all of the related JavaScript for some scripting, and a live preview for Web pages that you are editing. New in Brackets 1.4 is instant search in files, easier preferences editing, the ability to enable and disable extensions individually, improved text rendering on Macs, and Greek and Cyrillic character support. Last November, Adobe started shipping a preview version of Extract for Brackets, which can pull out design information from Photoshop files, as part of the default download for Brackets.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-typescript-100613786-orig.jpg)
### TypeScript ###
[TypeScript][16] is a portable, duck-typed superset of JavaScript that compiles to plain JavaScript. The goal of the project is to make JavaScript usable for large applications. In pursuit of that goal, TypeScript adds optional types, classes, and modules to JavaScript, and it supports tools for large-scale JavaScript applications. Typing gets rid of some of the nonsensical and potentially buggy default behavior in JavaScript, for example:
> 1 + "1"
'11'
“Duck” typing means that the type checking focuses on the shape of the data values; TypeScript describes basic types, interfaces, and classes. While the current version of JavaScript does not support traditional, class-based, object-oriented programming, the ECMAScript 6 specification does. TypeScript compiles ES6 classes into plain, compatible JavaScript, with prototype-based objects, unless you enable ES6 output using the `--target` compiler option.
Visual Studio includes TypeScript in the box, starting with Visual Studio 2013 Update 2. You can also edit TypeScript in Visual Studio Code, WebStorm, Atom, Sublime Text, and Eclipse.
When using an external JavaScript library, or new host API, you'll need to use a declaration file (.d.ts) to describe the shape of the library. You can often find declaration files in the [DefinitelyTyped][17] repository, either by browsing, using the [TSD definition manager][18], or using NuGet.
TypeScripts GitHub repository has more than 6,000 stars.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-swagger-100613785-orig.jpg)
### Swagger ###
[Swagger][19] is a language-agnostic interface to RESTful APIs, with tooling that gives you interactive documentation, client SDK generation, and discoverability. Its one of several recent attempts to codify the description of RESTful APIs, in the spirit of WSDL for XML Web Services (2000) and CORBA for distributed object interfaces (1991).
The tooling makes Swagger especially interesting. [Swagger-UI][20] automatically generates beautiful documentation and a live API sandbox from a Swagger-compliant API. The [Swagger codegen][21] project allows generation of client libraries automatically from a Swagger-compliant server.
[Swagger Editor][22] lets you edit Swagger API specifications in YAML inside your browser and preview documentations in real time. Valid Swagger JSON descriptions can then be generated and used with the full Swagger tooling.
The [Swagger JS][23] library is a fast way to enable a JavaScript client to communicate with a Swagger-enabled server. Additional clients exist for Clojure, Go, Java, .Net, Node.js, Perl, PHP, Python, Ruby, and Scala.
The [Amazon API Gateway][24] is a managed service for API management at scale. It can import Swagger specifications using an open source [Swagger Importer][25] tool.
Swagger and friends use the Apache 2.0 license.
-- Martin Heller
![](http://images.techhive.com/images/article/2015/09/bossies-2015-polymer-100613781-orig.jpg)
### Polymer ###
The [Polymer][26] library is a lightweight, “sugaring” layer on top of the Web components APIs to help in building your own Web components. It adds several features for greater ease in building complex elements, such as creating custom element registration, adding markup to your element, configuring properties on your element, setting the properties with attributes, data binding with mustache syntax, and internal styling of elements.
Polymer also includes libraries of prebuilt elements. The Iron library includes elements for working with layout, user input, selection, and scaffolding apps. The Paper elements implement Google's Material Design. The Gold library includes elements for credit card input fields for e-commerce, the Neon elements implement animations, the Platinum library implements push messages and offline caching, and the Google Web Components library is exactly what it says; it includes wrappers for YouTube, Firebase, Google Docs, Hangouts, Google Maps, and Google Charts.
Polymer Molecules are elements that wrap other JavaScript libraries. The only Molecule currently implemented is for marked, a Markdown library. The Polymer repository on GitHub currently has 12,000 stars. The software is distributed under a BSD-style license.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-ionic-100613775-orig.jpg)
### Ionic ###
The [Ionic][27] framework is a front-end SDK for building hybrid mobile apps, using Angular.js and Cordova, PhoneGap, or Trigger.io. Ionic was designed to be similar in spirit to the Android and iOS SDKs, and to do a minimum of DOM manipulation and use hardware-accelerated transitions to keep the rendering speed high. Ionic is focused mainly on the look and feel and UI interaction of your app.
In addition to the framework, Ionic encompasses an ecosystem of mobile development tools and resources. These include Chrome-based tools, Angular extensions for Cordova capabilities, back-end services, a development server, and a shell View App to enable testers to use your Ionic code on their devices without the need for you to distribute beta apps through the App Store or Google Play.
Appery.io integrated Ionic into its low-code builder in July 2015. Ionics GitHub repository has more than 18,000 stars and more than 3,000 forks. Ionic is distributed under an MIT license and currently runs in UIWebView for iOS 7 and later, and in Android 4.1 and up.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-cordova-100613771-orig.jpg)
### Cordova ###
[Apache Cordova][28] is the open source project spun off when Adobe acquired PhoneGap from Nitobi. Cordova is a set of device APIs, plus some tooling, that allows a mobile app developer to access native device functionality like the camera and accelerometer from JavaScript. When combined with a UI framework like Angular, it allows a smartphone app to be developed with only HTML, CSS, and JavaScript. By using Cordova plug-ins for multiple devices, you can generate hybrid apps that share a large portion of their code but also have access to a wide range of platform capabilities. The HTML5 markup and code runs in a WebView hosted by the Cordova shell.
Cordova is one of the cross-platform mobile app options supported by Visual Studio 2015. Several companies offer online builders for Cordova apps, similar to the Adobe PhoneGap Build service. Online builders save you from having to install and maintain most of the device SDKs on which Cordova relies.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-famous-100613774-orig.jpg)
### Famous Engine ###
The high-performance Famo.us JavaScript framework introduced last year has become the [Famous Engine][29] and [Famous Framework][30]. The Famous Engine runs in a mixed mode, with the DOM and WebGL under a single coordinate system. As before, Famous structures applications in a scene graph hierarchy, but now it produces very little garbage (reducing the garbage collector overhead) and sustains 60FPS animations.
The Famous Physics engine has been refactored to its own, fine-grained module so that you can load only the features you need. Other improvements since last year include streamlined eventing, improved sizing, decoupling the scene graph from the rendering pipeline by using a draw command buffer, and switching to a fully open MIT license.
The new Famous Framework is an alpha-stage developer preview built on the Famous Engine; its goal is creating reusable, composable, and interchangeable UI widgets and applications. Eventually, Famous hopes to replace the jQuery UI widgets with Famous Framework widgets, but while it's promising, the Famous Framework is nowhere near production-ready.
-- Martin Heller
![](http://images.techhive.com/images/article/2015/09/bossies-2015-mongodb-rev-100614248-orig.jpg)
### MongoDB ###
[MongoDB][31] is no stranger to the Bossies or to the ever-growing and ever-competitive NoSQL market. If you still aren't familiar with this very popular technology, here's a brief overview: MongoDB is a cross-platform document-oriented database, favoring JSON-like documents with dynamic schemas that make data integration easier and faster.
MongoDB has attractive features, including but not limited to ad hoc queries, flexible indexing, replication, high availability, automatic sharding, load balancing, and aggregation.
The big, bold move with [version 3.0 this year][32] was the new WiredTiger storage engine. We can now have document-level locking. This makes “normal” applications a whole lot more scalable and makes MongoDB available to more use cases.
MongoDB has a growing open source ecosystem with such offerings as the [TokuMX engine][33], from the famous MySQL bad boys Percona. The long list of MongoDB customers includes heavy hitters such as Craigslist, eBay, Facebook, Foursquare, Viacom, and the New York Times.
-- Andrew Oliver
![](http://images.techhive.com/images/article/2015/09/bossies-2015-couchbase-100614851-orig.jpg)
### Couchbase ###
[Couchbase][34] is another distributed, document-oriented database that has been making waves in the NoSQL world for quite some time now. Couchbase and MongoDB often compete, but they each have their sweet spots. Couchbase tends to outperform MongoDB when more of the workload can be kept in memory.
Additionally, Couchbase's mobile features allow you to disconnect and ship a database in compact format. This allows you to scale down as well as up. This is useful not just for mobile devices but also for specialized applications, like shipping medical records across radio waves in Africa.
This year Couchbase added N1QL, a SQL-based query language that did away with Couchbase's biggest obstacle: the requirement to define static views. The new release also introduced multidimensional scaling. This allows individual scaling of services such as querying, indexing, and data storage to improve performance, instead of adding an entire, duplicate node.
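To give a feel for N1QL, here is a hedged sketch using the Couchbase Python SDK (2.x-era imports; exact names vary by SDK version, and `travel-sample` is the sample bucket that ships with the server):

```python
# A hedged sketch with the Couchbase Python SDK (2.x-era API; imports
# differ in later SDK versions).
from couchbase.bucket import Bucket
from couchbase.n1ql import N1QLQuery

bucket = Bucket("couchbase://localhost/travel-sample")

# N1QL queries JSON documents with SQL-like syntax -- no static views.
query = N1QLQuery(
    "SELECT name, country FROM `travel-sample` "
    "WHERE type = 'airline' LIMIT 5")
for row in bucket.n1ql_query(query):
    print(row)
```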
-- Andrew C. Oliver
![](http://images.techhive.com/images/article/2015/09/bossies-2015-cassandra-100614852-orig.jpg)
### Cassandra ###
[Cassandra][35] is the other white meat of column family databases. HBase might be included with your favorite Hadoop distribution, but Cassandra is the one people deliberately deploy for specialized applications. There are good reasons for this.
Cassandra was designed for high workloads of both writes and reads where millisecond consistency isn't as important as throughput. HBase is optimized for reads and greater write consistency. To a large degree, Cassandra tends to be used for operational systems and HBase more for data warehouse and batch-system-type use cases.
While Cassandra has not received as much attention as other NoSQL databases and slipped into a quiet period a couple years back, it is widely used and deployed, and it's a great fit for time series, product catalog, recommendations, and other applications. If you want to keep a cluster up “no matter what” with multiple masters and multiple data centers, and you need to scale with lots of reads and lots of writes, Cassandra might just be your Huckleberry.
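As a taste of the write-heavy, partition-keyed style Cassandra favors, here is a minimal sketch with the DataStax cassandra-driver; the keyspace and table are hypothetical:

```python
# A minimal sketch with the DataStax cassandra-driver; the keyspace and
# table are invented, but the pattern -- wide rows keyed for write-heavy
# time series -- is typical Cassandra usage.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS metrics
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS metrics.readings (
        sensor_id text, ts timestamp, value double,
        PRIMARY KEY (sensor_id, ts))
""")
# High-throughput writes; reads by partition key stay fast.
session.execute(
    "INSERT INTO metrics.readings (sensor_id, ts, value) "
    "VALUES (%s, toTimestamp(now()), %s)", ("s1", 23.5))
```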
-- Andrew C. Oliver
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-orientdb-100613780-orig.jpg)
### OrientDB ###
[OrientDB][36] is an interesting hybrid in the NoSQL world, combining features from a document database, where individual documents can have multiple fields without necessarily defining a schema, and a graph database, which consists of a set of nodes and edges. At a basic level, OrientDB considers the document as a vertex, and relationships between fields as graph edges. Because the relationships between elements are part of the record, no costly joins are required when querying data.
Like most databases today, OrientDB offers linear scalability via a distributed architecture. Adding capacity is a matter of simply adding more nodes to the cluster. Queries are written in a variant of SQL that is extended to support graph concepts. It's not exactly SQL, but data analysts shouldn't have too much trouble adapting. Language bindings are available for most commonly used languages, such as R, Scala, .Net, and C, and those integrating OrientDB into their applications will find an active user community to get help from.
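A hedged sketch of that SQL-with-graph flavor, using the community pyorient driver (the database, class, and edge names are invented):

```python
# A hedged sketch using the community pyorient driver; database, class,
# and edge names are made up. The point is the graph extension to SQL:
# out('FriendOf') traverses edges with no join.
import pyorient

client = pyorient.OrientDB("localhost", 2424)
client.connect("root", "root_passwd")
client.db_open("social", "admin", "admin")

for record in client.command(
        "SELECT expand(out('FriendOf')) FROM Person WHERE name = 'Ada'"):
    print(record.oRecordData)
```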
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-rethinkdb-100613783-orig.jpg)
### RethinkDB ###
[RethinkDB][37] is a scalable, real-time JSON database with the ability to continuously push updated query results to applications that subscribe to changes. There are official RethinkDB drivers for Ruby, Python, and JavaScript/Node.js, and community-supported drivers for more than a dozen other languages, including C#, Go, and PHP.
It's tempting to confuse RethinkDB with real-time sync APIs, such as Firebase and PubNub. RethinkDB can be run as a cloud service like Firebase and PubNub, but you can also install it on your own hardware or Docker containers. RethinkDB does more than synchronize: You can run arbitrary RethinkDB queries, including table joins, subqueries, geospatial queries, and aggregation. Finally, RethinkDB is designed to be accessed from an application server, not a browser.
Where MongoDB requires you to poll the database to see changes, RethinkDB lets you subscribe to a stream of changes to a query result. You can shard and scale RethinkDB easily, unlike MongoDB. Also unlike relational databases, RethinkDB does not give you full ACID support or strong schema enforcement, although it can perform joins.
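Here is a minimal changefeed sketch with the official Python driver; the `scores` table is hypothetical:

```python
# A minimal changefeed sketch with the official rethinkdb driver; the
# cursor blocks and yields a diff (old_val/new_val) every time the
# query result changes.
import rethinkdb as r

conn = r.connect(host="localhost", port=28015, db="test")
for change in r.table("scores").changes().run(conn):
    print(change["old_val"], "->", change["new_val"])
```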
The RethinkDB repository has 10,000 stars on GitHub, a remarkably high number for a database. It is licensed with the Affero GPL 3.0; the drivers are licensed with Apache 2.0.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-rust-100613784-orig.jpg)
### Rust ###
[Rust][38] is a syntactically C-like systems programming language from Mozilla Research that guarantees memory safety and offers painless concurrency (that is, no data races). It does not have a garbage collector and has minimal runtime overhead. Rust is strongly typed with type inference. This is all promising.
Rust was designed for performance. It doesn't yet demonstrate great performance, however, so now the mantra seems to be that it runs as fast as C++ code that implements all the safety checks built into Rust. I'm not sure whether I believe that, as in many cases the strictest safety checks for C/C++ code are done by static and dynamic analysis and testing, which don't add any runtime overhead. Perhaps Rust performance will come with time.
So far, the only tools for Rust are the Cargo package manager and the rustdoc documentation generator, plus a couple of simple Rust plug-ins for programming editors. As far as we have heard, there is no shipping software that was actually built with Rust. Now that Rust has reached the 1.0 milestone, we might expect that to change.
Rust is distributed with a dual Apache 2.0 and MIT license. With 13,000 stars on its GitHub repository, Rust is certainly attracting attention, but when and how it will deliver real benefits remains to be seen.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opencv-100613779-orig.jpg)
### OpenCV ###
[OpenCV][39] (Open Source Computer Vision Library) is a computer vision and machine learning library that contains about 500 algorithms, such as face detection, moving object tracking, image stitching, red-eye removal, machine learning, and eye movement tracking. It runs on Windows, Mac OS X, Linux, Android, and iOS.
OpenCV has official C++, C, Python, Java, and MATLAB interfaces, and wrappers in other languages such as C#, Perl, and Ruby. CUDA and OpenCL interfaces are under active development. OpenCV was originally (1999) an Intel Research project in Russia; from there it moved to the robotics research lab Willow Garage (2008) and finally to [OpenCV.org][39] (2012) with a core team at Itseez, current source on GitHub, and stable snapshots on SourceForge.
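A minimal face-detection sketch using the Python interface; the image path is a placeholder, and the Haar cascade XML ships with the OpenCV distribution:

```python
# A minimal face-detection sketch with OpenCV's Python bindings; the
# image path is a placeholder, and the cascade file is bundled with
# the OpenCV distribution.
import cv2

cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
img = cv2.imread("group_photo.jpg")            # placeholder image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces and draw a rectangle around each one.
faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", img)
```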
Users of OpenCV include Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, and Toyota. There are currently more than 6,000 stars and 5,000 forks on the GitHub repository. The project uses a BSD license.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-llvm-100613777-orig.jpg)
### LLVM ###
The [LLVM Project][40] is a collection of modular and reusable compiler and tool chain technologies, which originated at the University of Illinois. LLVM has grown to include a number of subprojects, several of which are interesting in their own right. LLVM is distributed with Debian, Ubuntu, and Apple Xcode, among others, and it's used in commercial products from the likes of Adobe (including After Effects), Apple (including Objective-C and Swift), Cray, Intel, NVIDIA, and Siemens. A few of the open source projects that depend on LLVM are PyPy, Mono, Rubinius, Pure, Emscripten, Rust, and Julia. Microsoft has recently contributed LLILC, a new LLVM-based compiler for .Net, to the .Net Foundation.
The main LLVM subprojects are the core libraries, which provide optimization and code generation; Clang, a C/C++/Objective-C compiler that's about three times faster than GCC; LLDB, a much faster debugger than GDB; libc++, an implementation of the C++ 11 Standard Library; and OpenMP, for parallel programming.
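For a taste of programmatic IR generation, here is a sketch using llvmlite, a third-party Python binding from the Numba project (an add-on, not part of LLVM proper):

```python
# A sketch of LLVM IR generation via llvmlite (a third-party Python
# binding from the Numba project); the function built here is trivial.
from llvmlite import ir

i32 = ir.IntType(32)
module = ir.Module(name="demo")
func = ir.Function(module, ir.FunctionType(i32, (i32, i32)), name="add")

builder = ir.IRBuilder(func.append_basic_block("entry"))
a, b = func.args
builder.ret(builder.add(a, b))

print(module)   # emits textual LLVM IR for the core libraries to optimize
```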
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100613823-orig.jpg)
### Read about more open source winners ###
InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
[Bossie Awards 2015: The best open source applications][41]
[Bossie Awards 2015: The best open source application development tools][42]
[Bossie Awards 2015: The best open source big data tools][43]
[Bossie Awards 2015: The best open source data center and cloud software][44]
[Bossie Awards 2015: The best open source desktop and mobile software][45]
[Bossie Awards 2015: The best open source networking and security software][46]
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2982920/open-source-tools/bossie-awards-2015-the-best-open-source-application-development-tools.html
作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://www.docker.com/
[2]:https://nodejs.org/en/
[3]:https://iojs.org/en/
[4]:https://developers.google.com/v8/?hl=en
[5]:http://www.ancestry.com/
[6]:http://www.ge.com/
[7]:https://www.pearson.com/
[8]:https://angularjs.org/
[9]:https://builtwith.angularjs.org/
[10]:https://facebook.github.io/react/
[11]:https://facebook.github.io/react-native/
[12]:http://reactjs.net/
[13]:http://asp.net/
[14]:https://atom.io/
[15]:http://brackets.io/
[16]:http://www.typescriptlang.org/
[17]:http://definitelytyped.org/
[18]:http://definitelytyped.org/tsd/
[19]:http://swagger.io/
[20]:https://github.com/swagger-api/swagger-ui
[21]:https://github.com/swagger-api/swagger-codegen
[22]:https://github.com/swagger-api/swagger-editor
[23]:https://github.com/swagger-api/swagger-js
[24]:http://aws.amazon.com/cn/api-gateway/
[25]:https://github.com/awslabs/aws-apigateway-importer
[26]:https://www.polymer-project.org/
[27]:http://ionicframework.com/
[28]:https://cordova.apache.org/
[29]:http://famous.org/
[30]:http://famous.org/framework/
[31]:https://www.mongodb.org/
[32]:http://www.infoworld.com/article/2878738/nosql/first-look-mongodb-30-for-mature-audiences.html
[33]:http://www.infoworld.com/article/2929772/nosql/mongodb-crossroads-growth-or-openness.html
[34]:http://www.couchbase.com/nosql-databases/couchbase-server
[35]:https://cassandra.apache.org/
[36]:http://orientdb.com/
[37]:http://rethinkdb.com/
[38]:https://www.rust-lang.org/
[39]:http://opencv.org/
[40]:http://llvm.org/
[41]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[42]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[43]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[44]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[45]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[46]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html

View File

@@ -1,238 +0,0 @@
Bossie Awards 2015: The best open source applications
================================================================================
InfoWorld's top picks in open source business applications, enterprise integration, and middleware
![](http://images.techhive.com/images/article/2015/09/bossies-2015-applications-100614669-orig.jpg)
### The best open source applications ###
Applications -- ERP, CRM, HRM, CMS, BPM -- are not only fertile ground for three-letter acronyms, they're the engines behind every modern business. Our top picks in the category include back- and front-office solutions, marketing automation, lightweight middleware, heavyweight middleware, and other tools for moving data around, mixing it together, and magically transforming it into smarter business decisions.
![](http://images.techhive.com/images/article/2015/09/bossies-2015-xtuple-100614684-orig.jpg)
### xTuple ###
Small and midsize companies with light manufacturing or distribution needs have a friend in [xTuple][1]. This modular ERP/CRM combo bundles operations and financial control, product and inventory management, and CRM and sales support. Its relatively simple install lets you deploy all of the modules or only what you need today -- helping trim support costs without sacrificing customization later.
This summer's release brought usability improvements to the UI and a generous number of bug fixes. Recent updates also yielded barcode scanning and label printing for mobile warehouse workers, an enhanced workflow module (built with Plv8, a wrapper around Google's V8 JavaScript engine that lets you write stored procedures for PostgreSQL in JavaScript), and quality management tools that are sure to get mileage on shop floors.
The xTuple codebase is JavaScript from stem to stern. The server components can all be installed locally, in xTuple's cloud, or deployed as an appliance. A mobile Web client, and mobile CRM features, augment a good native desktop client.
-- James R. Borck
![](http://images.techhive.com/images/article/2015/09/bossies-2015-odoo-100614678-orig.jpg)
### Odoo ###
[Odoo][2] used to be known as OpenERP. Last year the company raised private capital and broadened its scope. Today Odoo is a one-stop shop for back office and customer-facing applications -- replete with content management, business intelligence, and e-commerce modules.
Odoo 8 fronts accounting, invoicing, project management, resource planning, and customer relationship management tools with a flexible Web interface that can be tailored to your company's workflow. Add-on modules for warehouse management and HR, as well as for live chat and analytics, round out the solution.
This year saw Odoo focused primarily on usability updates. A recently released sales planner helps sales groups track KPIs, and a new tips feature lends in-context help. Odoo 9 is right around the corner with alpha builds showing customer portals, Web form creation tools, mobile and VoIP services, and integration hooks to eBay and Amazon.
Available for Windows and Linux, and as a SaaS offering, Odoo gives small and midsized companies an accessible set of tools to manage virtually every aspect of their business.
-- James R. Borck
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-idempiere-100614673-orig.jpg)
### iDempiere ###
Small and midsize companies have great choices in Odoo and xTuple. Larger manufacturing and distribution companies will need something more. For them, there's [iDempiere][3] -- a well maintained offshoot of ADempiere with OSGi modularity.
iDempiere implements a fully loaded ERP, supply chain, and CRM suite right out of the box. Built with Java, iDempiere supports both PostgreSQL and Oracle Database, and it can be customized extensively through modules built to the OSGi specification. iDempiere is perfectly suited to managing complex business scenarios involving multiple partners, requiring dynamic reporting, or employing point-of-sale and warehouse services.
Being enterprise-ready comes with a price. iDempiere's feature-rich tools and complexity impose a steep learning curve and require a commitment to integration support. Of course, those costs are offset by savings from the software's free GPL2 licensing. iDempiere's easy install script, small resource footprint, and clean interface also help alleviate some of the startup pains. There's even a virtual appliance available on Sourceforge to get you started.
-- James R. Borck
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-suitecrm-100614680-orig.jpg)
### SuiteCRM ###
SugarCRM held the sweet spot in open source CRM since, well, forever. Then last year Sugar announced it would no longer contribute to the open source Community Edition. Into the ensuing vacuum rushed [SuiteCRM][4], a fork of the final Sugar code.
SuiteCRM 7.2 creates an experience on a par with SugarCRM Professional's marketing, sales, and service tools. With add-on modules for workflow, reporting, and security, as well as new innovations like Lucene-driven search, taps for social media, and a beta reveal of new desktop notifications, SuiteCRM is on solid footing.
The Advanced Open Sales module provides a familiar migration path from Sugar, while commercial support is available from the likes of [SalesAgility][5], the company that forked SuiteCRM in the first place. In little more than a year, SuiteCRM rescued the code, rallied an inspired community, and emerged as a new leader in open source CRM. Who needs Sugar?
-- James R. Borck
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-civicrm-100614671-orig.jpg)
### CiviCRM ###
We typically focus attention on CRM vis-à-vis small and midsize business requirements. But nonprofit and advocacy groups need to engage with their “customers” too. Enter [CiviCRM][6].
CiviCRM addresses the needs of nonprofits with tools for fundraising and donation processing, membership management, email tracking, and event planning. Granular access control and security bring role-based permissions to views, keeping paid staff and volunteers partitioned and productive. This year CiviCRM continued to develop with new features like simple A/B testing and monitoring for email campaigns.
CiviCRM deploys as a plug-in to your WordPress, Drupal, or Joomla content management system -- a dead-simple install if you already have one of these systems in place. If you don't, CiviCRM is an excellent reason to deploy the CMS. It's a niche-filling solution that allows nonprofits to start using smarter, tailored tools for managing constituencies, without steep hurdles and training costs.
-- James R. Borck
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-mautic-100614677-orig.jpg)
### Mautic ###
For marketers, the Internet -- Web, email, social, all of it -- is the stuff dreams are made on. [Mautic][7] allows you to create Web and email campaigns that track and nurture customer engagement, then roll all of the data into detailed reports to gain insight into customer needs and wants and how to meet them.
Open source options in marketing automation are few, but Mautic's extensibility stands out even against closed solutions like IBM's Silverpop. Mautic even integrates with popular third-party email marketing solutions (MailChimp, Constant Contact) and social media platforms (Facebook, Twitter, Google+, Instagram) with quick-connect widgets.
The developers of Mautic could stand to broaden the features for list segmentation and improve the navigability of their UI. Usability is also hindered by sparse documentation. But if you're willing to rough it out long enough to learn your way, you'll find a gem -- and possibly even gold -- in Mautic.
-- James R. Borck
![](http://images.techhive.com/images/article/2015/09/bossies-2015-orangehrm-100614679-orig.jpg)
### OrangeHRM ###
The commercial software market in the human resource management space is rather fragmented, with Talent, HR, and Workforce Management startups all vying for a slice of the pie. It's little wonder the open source world hasn't found much direction either, with the most ambitious HRM solutions often locked inside larger ERP distributions. [OrangeHRM][8] is a standout.
OrangeHRM tackles employee administration from recruitment and applicant tracking to performance reviews, with good audit trails throughout. An employee portal provides self-serve access to personal employment information, time cards, leave requests, and personnel documents, helping reduce demands on HR staff.
OrangeHRM doesn't yet address niche aspects like talent management (social media, collaboration, knowledge banks), but it's remarkably full-featured. Professional and Enterprise options offer more advanced functionality (in areas such as recruitment, training, on/off-boarding, document management, and mobile device access), while community modules are available for the likes of Active Directory/LDAP integration, advanced reporting, and even insurance benefit management.
-- James R. Borck
![](http://images.techhive.com/images/article/2015/09/bossies-2015-libreoffice-100614675-orig.jpg)
### LibreOffice ###
[LibreOffice][9] is the easy choice for best open source office productivity suite. Originally forked from OpenOffice, Libre has been moving at a faster clip than OpenOffice ever since, drawing more developers and producing more new features than its rival.
LibreOffice 5.0, released only last month, offers UX improvements that truly enhance usability (like visual previews to style changes in the sidebar), brings document editing to Android devices (previously a view-only prospect), and finally delivers on a 64-bit Windows codebase.
LibreOffice still lacks a built-in email client and a personal information manager, not to mention the real-time collaborative document editing available in Microsoft Office. But Libre can run off of a USB flash disk for portability, natively supports a greater number of graphic and file formats, and creates hybrid PDFs with embedded ODF files for full-on editing. Libre even imports Apple Pages documents, in addition to opening and saving all Microsoft Office formats.
LibreOffice has done a solid job of tightening its codebase and delivering enhancements at a regular clip. With a new cloud version under development, LibreOffice will soon be more liberating than ever.
-- James R. Borck
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-bonita-100614672-orig.jpg)
### Bonita BPM ###
Open source BPM has become a mature, cost-effective alternative to the top proprietary solutions. Having led the charge since 2009, Bonitasoft continues to raise the bar. The new [Bonita BPM 7][10] release impresses with innovative features that simplify code generation and shorten development cycles for BPM app creation.
Most important to the new version, though, is better abstraction of underlying core business logic from UI and data components, allowing UIs and processes to be developed independently. This new MVC approach reduces downtime for live upgrades (no more recompilation!) and eases application maintenance.
Bonita contains a winning set of connectors to a broad range of enterprise systems (ERP, CRM, databases) as well as to Web services. Complementing its process weaving tools, a new form designer (built on AngularJS/Bootstrap) goes a long way toward improving UI creation for the Web-centric and mobile workforce.
-- James R. Borck
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-camunda-100614670-orig.jpg)
### Camunda BPM ###
Many open source solutions, like Bonita BPM, offer solid, drop-in functionality. Dig into the code base, though, and you may find it's not the cleanest to build upon. Enterprise Java developers who hang out under the hood should check out [Camunda BPM][11].
Forked from Alfresco Activiti (a creation of former Red Hat jBPM developers), Camunda BPM delivers a tight, Java-based BPMN 2.0 engine in support of human workflow activities, case management, and systems process automation that can be embedded in your Java apps or run as a container service in Tomcat. Camunda's ecosystem offers an Eclipse plug-in for process modeling and the Cockpit dashboard brings real-time monitoring and management over running processes.
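Beyond embedding, the distribution also exposes a REST API; here is a hedged sketch that starts a process instance over HTTP (the `invoice` process key is illustrative):

```python
# A hedged sketch against Camunda's REST API; the engine-rest endpoint
# is standard in the distribution, but the "invoice" process key and
# the variable here are invented.
import requests

resp = requests.post(
    "http://localhost:8080/engine-rest/process-definition/key/invoice/start",
    json={"variables": {"amount": {"value": 250, "type": "Integer"}}})
print(resp.json()["id"])   # id of the new process instance
```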
The Enterprise version adds WebSphere and WebLogic Server support. Additional incentives for the Enterprise upgrade include Saxon-driven XSLT templating (sidestepping the scripting engine) and add-ons to improve process management and exception handling.
Camunda is a solid BPM engine ready for build-out and one of the first open source process managers to introduce DMN (Decision Model and Notation) support, which helps to simplify complex rules-based modeling alongside BPMN. DMN support is currently at the alpha stage.
-- James R. Borck
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-talend-100614681-orig.jpg)
### Talend Open Studio ###
No open source ETL or EAI solution comes close to [Talend Open Studio][12] in functionality, performance, or support of modern integration trends. This year Talend unleashed Open Studio 6, a new version with a streamlined UI and smarter tooling that brings it more in line with Talend's cloud-based offering.
Using Open Studio you can visually design, test, and debug orchestrations that connect, transform, and synchronize data across a broad range of real-time applications and data resources. Talend's wealth of connectors provides support for most any endpoint -- from flat files to Hadoop to Amazon S3. Packaged editions focus on specific scenarios such as big data integration, ESB, and data integrity monitoring.
New support for Java 8 brings a speed boost. The addition of support for MariaDB and for in-memory processing with MemSQL, as well as updates to the ESB engine, keep Talend in step with the community's needs. Version 6 was a long time coming, but no less welcome for that. Talend Open Studio is still first in managing complex data integration -- in-house, in the cloud, or increasingly, a combination of the two.
-- James R. Borck
![](http://images.techhive.com/images/article/2015/09/bossies-2015-warewolf-100614683-orig.jpg)
### Warewolf ESB ###
Complex integration patterns may demand the strengths of a Talend to get the job done. But for many lightweight microservices, the overhead of a full-fledged enterprise integration solution is extreme overkill.
[Warewolf ESB][13] combines a streamlined .Net-based process engine with visual development tools to provide for dead simple messaging and application payload routing in a native Windows environment. The Warewolf ESB is an “easy service bus,” not an enterprise service bus.
Drag-and-drop tooling in the design studio makes quick work of configuring connections and logic flows. Built-in wizardry handles Web services definitions and database calls, and it can even tap Windows DLLs and the command line directly. Using the visual debugger, you can inspect execution streams (if not yet actually step through them), then package everything for remote deployment.
Warewolf is still a 0.40.5 release and undergoing major code changes. It also lacks native connectors, easy transforms, and any means of scalability management. Be aware that the precompiled install demands collection of some usage statistics (I wish they would stop that). But Warewolf ESB is fast, free, and extensible. It's a quirky, upstart project that offers definite benefits to Windows integration architects.
-- James R. Borck
![](http://images.techhive.com/images/article/2015/09/bossies-2015-knime-100614674-orig.jpg)
### KNIME ###
[KNIME][14] takes a code-free approach to predictive analytics. Using a graphical workbench, you wire together workflows from an abundant library of processing nodes, which handle data access, transformation, analysis, and visualization. With KNIME, you can pull data from databases and big data platforms, run ETL transformations, perform data mining with R, and produce custom reports in the end.
The company was busy this year rolling out the KNIME 2.12 update. The new release introduces MongoDB support, XPath nodes with autoquery creation, and a new view controller (based on the D3 JavaScript library) that creates interactive data visualizations on the fly. It also includes additional statistical nodes and a REST interface (KNIME Server edition) that provides services-based access to workflows.
KNIME's core analytics engine is free open source. The company offers several fee-based extensions for clustering and collaboration. (A portion of your licensing fee actually funds the open source project.) KNIME Server (on-premise or cloud) ups the ante with security, collaboration, and workflow repositories -- all serving to inject analytics more productively throughout your business lines.
-- James R. Borck
![](http://images.techhive.com/images/article/2015/09/bossies-2015-teiid-100614682-orig.jpg)
### Teiid ###
[Teiid][15] is a data virtualization system that allows applications to use data from multiple, heterogeneous data stores. Currently a JBoss project, Teiid is backed by years of development from MetaMatrix and a long history of addressing the data access needs of the largest enterprise environments. I even see [uses for Teiid in Hadoop and big data environments][16].
In essence, Teiid allows you to connect all of your data sources into a “virtual” mega data source. You can define caching semantics, transforms, and other “configuration not code” transforms to load from multiple data sources using plain old SQL, XQuery, or procedural queries.
Teiid is primarily accessible through JDBC and has built-in support for Web services. Red Hat sells Teiid as [JBoss Data Virtualization][17].
-- Andrew C. Oliver
![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100614676-orig.jpg)
### Read about more open source winners ###
InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
[Bossie Awards 2015: The best open source applications][18]
[Bossie Awards 2015: The best open source application development tools][19]
[Bossie Awards 2015: The best open source big data tools][20]
[Bossie Awards 2015: The best open source data center and cloud software][21]
[Bossie Awards 2015: The best open source desktop and mobile software][22]
[Bossie Awards 2015: The best open source networking and security software][23]
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2982622/open-source-tools/bossie-awards-2015-the-best-open-source-applications.html
作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:http://xtuple.org/
[2]:http://odoo.com/
[3]:http://idempiere.org/
[4]:http://suitecrm.com/
[5]:http://salesagility.com/
[6]:http://civicrm.org/
[7]:https://www.mautic.org/
[8]:http://www.orangehrm.com/
[9]:http://libreoffice.org/
[10]:http://www.bonitasoft.com/
[11]:http://camunda.com/
[12]:http://talend.com/
[13]:http://warewolf.io/
[14]:http://www.knime.org/
[15]:http://teiid.jboss.org/
[16]:http://www.infoworld.com/article/2922180/application-development/database-virtualization-or-i-dont-want-to-do-etl-anymore.html
[17]:http://www.jboss.org/products/datavirt/overview/
[18]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[19]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[20]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[21]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[22]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[23]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html

View File

@@ -1,287 +0,0 @@
Bossie Awards 2015: The best open source big data tools
================================================================================
InfoWorld's top picks in distributed data processing, streaming analytics, machine learning, and other corners of large-scale data analytics
![](http://images.techhive.com/images/article/2015/09/bossies-2015-big-data-100613944-orig.jpg)
### The best open source big data tools ###
How many Apache projects can sit on a pile of big data? Fire up your Hadoop cluster, and you might be able to count them. Among this year's Bossies in big data, you'll find the fastest, widest, and deepest newfangled solutions for large-scale SQL, stream processing, sort-of stream processing, and in-memory analytics, not to mention our favorite maturing members of the Hadoop ecosystem. It seems everyone has a nail to drive into MapReduce's coffin.
![](http://images.techhive.com/images/article/2015/09/bossies-2015-spark-100613962-orig.jpg)
### Spark ###
With hundreds of contributors, [Spark][1] is one of the most active and fastest-growing Apache projects, and with heavyweights like IBM throwing their weight behind the project and major corporations bringing applications into large-scale production, the momentum shows no signs of letting up.
The sweet spot for Spark continues to be machine learning. Highlights since last year include the replacement of the SchemaRDD with a Dataframes API, similar to those found in R and Pandas, making data access much simpler than with the raw RDD interface. Also new are ML pipelines for building repeatable machine learning workflows, expanded and optimized support for various storage formats, simpler interfaces to machine learning algorithms, improvements in the display of cluster resources usage, and task tracking.
On by default in Spark 1.5 is the off-heap memory manager, Tungsten, which offers much faster processing by fine-tuning data structure layout in memory. Finally, the new website, [spark-packages.org][2], with more than 100 third-party libraries, adds many useful features from the community.
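A minimal DataFrames sketch in PySpark, using the Spark 1.5-era SQLContext entry point; the JSON file and column names are placeholders:

```python
# A minimal DataFrames sketch in PySpark (Spark 1.5-era SQLContext
# entry point); "events.json" and the column names are invented.
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="bossie-demo")
sqlContext = SQLContext(sc)

df = sqlContext.read.json("events.json")   # schema inferred from the JSON
df.filter(df.status == "error") \
  .groupBy("service") \
  .count() \
  .show()
```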
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-storm-100614149-orig.jpg)
### Storm ###
[Apache Storm][3] is a Clojure-based distributed computation framework primarily for streaming real-time analytics. Storm is based on the [disruptor pattern][4] for low-latency complex event processing created by LMAX. Unlike Spark, Storm can process single events as opposed to “micro-batches,” and it has a lower memory footprint. In my experience, it scales better for streaming, especially when you're mainly streaming to ingest data into other data sources.
Storm's profile has been eclipsed by Spark, but Spark is inappropriate for many streaming applications. Storm is frequently used with Apache Kafka.
-- Andrew C. Oliver
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-h2o-100613950-orig.jpg)
### H2O ###
[H2O][5] is a distributed, in-memory processing engine for machine learning that boasts an impressive array of algorithms. Previously only available for R users, version 3.0 adds Python and Java language bindings, as well as a Spark execution engine for the back end. The best way to view H2O is as a very large memory extension of your R environment. Instead of working directly on large data sets, the R extensions communicate via a REST API with the H2O cluster, where H2O does the heavy lifting.
Several useful R packages such as ddply have been wrapped, allowing you to use them on data sets larger than the amount of RAM on the local machine. You can run H2O on EC2, on a Hadoop/YARN cluster, and on Docker containers. With Sparkling Water (Spark plus H2O) you can access Spark RDDs on the cluster side by side to, for example, process a data frame with Spark before passing it to an H2O machine learning algorithm.
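A short sketch of that division of labor with the h2o Python package; the CSV and column names are invented, and the cluster, not the client, does the parsing and training:

```python
# A sketch with the h2o Python package: h2o.init() attaches to (or
# starts) a local cluster, which does the heavy lifting; the CSV and
# column names are placeholders.
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator

h2o.init()
frame = h2o.import_file("churn.csv")       # parsed in the cluster, not locally
frame["churned"] = frame["churned"].asfactor()

model = H2OGradientBoostingEstimator(ntrees=50)
model.train(x=["tenure", "plan", "usage"], y="churned", training_frame=frame)
print(model.auc())
```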
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-apex-100613943-orig.jpg)
### Apex ###
[Apex][6] is an enterprise-grade, big data-in-motion platform that unifies stream processing as well as batch processing. A native YARN application, Apex processes streaming data in a scalable, fault-tolerant manner and provides all the common stream operators out of the box. One of the best things about Apex is that it natively supports the common event processing guarantees (exactly once, at least once, at most once). Formerly a commercial product by DataTorrent, Apex's roots show in the quality of the documentation, examples, code, and design. Devops and application development are cleanly separated, and user code generally doesn't have to be aware that it is running in a streaming cluster.
A related project, [Malhar][7], offers more than 300 commonly used operators and application templates that implement common business logic. The Malhar libraries significantly reduce the time it takes to develop an Apex application, and there are connectors (operators) for storage, file systems, messaging systems, databases, and nearly anything else you might want to connect to from an application. The operators can all be extended or customized to meet individual business's requirements. All Malhar components are available under the Apache license.
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-druid-100613947-orig.jpg)
### Druid ###
[Druid][8], which moved to a commercially friendly Apache license in February of this year, is best described as a hybrid, “event streams meet OLAP” solution. Originally developed to analyze online events for ad markets, Druid allows users to do arbitrary and interactive exploration of time series data. Some of the key features include low-latency ingest of events, fast aggregations, and approximate and exact calculations.
At the heart of Druid is a custom data store that uses specialized nodes to handle each part of the problem. Real-time ingest is managed by real-time nodes (JVMs) that eventually flush data to historical nodes that are responsible for data that has aged. Broker nodes direct queries in a scatter-gather fashion to both real-time and historical nodes to give the user a complete picture of events. Benchmarked at a sustained 500K events per second and 1 million events per second peak, Druid is ideal as a real-time dashboard for ad-tech, network traffic, and other activity streams.
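Druid's native queries are JSON documents posted over HTTP to a broker node; here is a hedged sketch (port 8082 is the default broker port; the data source and interval are invented):

```python
# A hedged sketch of a Druid native timeseries query, posted as JSON
# to a broker node; data source and interval are made up.
import json
import requests

query = {
    "queryType": "timeseries",
    "dataSource": "ad_events",
    "granularity": "hour",
    "intervals": ["2015-09-01/2015-09-02"],
    "aggregations": [{"type": "count", "name": "events"}],
}
resp = requests.post("http://localhost:8082/druid/v2/",
                     data=json.dumps(query),
                     headers={"Content-Type": "application/json"})
print(resp.json())
```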
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-flink-100613949-orig.jpg)
### Flink ###
At its core, [Flink][9] is a data flow engine for event streams. Although superficially similar to Spark, Flink takes a different approach to in-memory processing. First, Flink was designed from the start as a stream processor. Batch is simply a special case of a stream with a beginning and an end, and Flink offers APIs for dealing with each case, the DataSet API (batch) and the DataStream API. Developers coming from the MapReduce world should feel right at home working with the DataSet API, and porting applications to Flink should be straightforward. In many ways Flink mirrors the simplicity and consistency that helped make Spark so popular. Like Spark, Flink is written in Scala.
The developers of Flink clearly thought out usage and operations too: Flink works natively with YARN and Tez, and it uses an off-heap memory management scheme to work around some of the JVM limitations. A peek at the Flink JIRA site shows a healthy pace of development, and you'll find an active community on the mailing lists and on StackOverflow as well.
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-elastic-100613948-orig.jpg)
### Elasticsearch ###
[Elasticsearch][10] is a distributed document search server based on [Apache Lucene][11]. At its heart, Elasticsearch builds indices on JSON-formatted documents in nearly real time, enabling fast, full-text, schema-free queries. Combined with the open source Kibana dashboard, you can create impressive visualizations of your real-time data in a simple point-and-click fashion.
Elasticsearch is easy to set up and easy to scale, automatically making use of new hardware by rebalancing shards as required. The query syntax isn't at all SQL-like, but it is intuitive enough for anyone familiar with JSON. Most users won't be interacting at that level anyway. Developers can use the native JSON-over-HTTP interface or one of the several language bindings available, including Ruby, Python, PHP, Perl, .Net, Java, and JavaScript.
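A minimal sketch with the official Python client (exact parameters vary a bit across client and server versions; the index and fields are invented):

```python
# A minimal sketch with the official elasticsearch-py client; index
# and field names are invented, and parameter details vary by version.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.index(index="logs", body={"service": "api", "msg": "timeout talking to db"})
es.indices.refresh(index="logs")           # make the doc searchable now

hits = es.search(index="logs",
                 body={"query": {"match": {"msg": "timeout"}}})
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["msg"])
```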
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-slamdata-100613961-orig.jpg)
### SlamData ###
If you are seeking a user-friendly tool to visualize and understand your newfangled NoSQL data, take a look at [SlamData][12]. SlamData allows you to query nested JSON data using familiar SQL syntax, without relocation or transformation.
One of the technology's main features is its connectors. From MongoDB to HBase, Cassandra, and Apache Spark, SlamData taps external data sources with the industry's most advanced “pushdown” processing technology, performing transformations and analytics close to the data.
While you might ask, “Wouldn't I be better off building a data lake or data warehouse?” consider the companies that were born in NoSQL. Skipping the ETL and simply connecting a visualization tool to a replica offers distinct advantages -- not only in terms of how up-to-date the data is, but in how many moving parts you have to maintain.
-- Andrew C. Oliver
![](http://images.techhive.com/images/article/2015/09/bossies-2015-drill-100613946-orig.jpg)
### Drill ###
[Drill][13] is a distributed system for interactive analysis of large-scale data sets, inspired by [Google's Dremel][14]. Designed for low-latency analysis of nested data, Drill has a stated design goal of scaling to 10,000 servers and querying petabytes of data and trillions of records.
Nested data can be obtained from a variety of data sources (such as HDFS, HBase, Amazon S3, and Azure Blobs) and in multiple formats (including JSON, Avro, and protocol buffers), and you don't need to specify a schema up front (“schema on read”).
Drill uses ANSI SQL:2003 for its query language, so there's no learning curve for data engineers to overcome, and it allows you to join data across multiple data sources (for example, joining a table in HBase with logs in HDFS). Finally, Drill offers ODBC and JDBC interfaces to connect your favorite BI tools.
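Besides ODBC and JDBC, Drill also exposes a REST endpoint; here is a hedged sketch querying a JSON file in place, with no schema declared up front (the file path and field are placeholders):

```python
# A hedged sketch against Drill's REST interface (port 8047 is the
# default); the JSON file path and field name are invented.
import requests

resp = requests.post(
    "http://localhost:8047/query.json",
    json={"queryType": "SQL",
          "query": "SELECT t.userId AS uid, COUNT(*) AS n "
                   "FROM dfs.`/data/events.json` t GROUP BY t.userId"})
print(resp.json()["rows"])
```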
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-hbase-100613951-orig.jpg)
### HBase ###
[HBase][15] reached the 1.x milestone this year and continues to improve. Like other nonrelational distributed datastores, HBase excels at returning search results very quickly and for this reason is often used to back search engines, such as the ones at eBay, Bloomberg, and Yahoo. As a stable and mature software offering, HBase does not get fresh features as frequently as newer projects, but that's often good for enterprises.
Recent improvements include the addition of high-availability region servers, support for rolling upgrades, and YARN compatibility. Features in the works include scanner updates that promise to improve performance and the ability to use HBase as a persistent store for streaming applications like Storm and Spark. HBase can also be queried SQL style via the [Phoenix][16] project, now out of incubation, whose SQL compatibility is steadily improving. Phoenix recently added a Spark connector and the ability to add custom user-defined functions.
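For programmatic access outside Phoenix, one common route is the community happybase library, which talks to HBase's Thrift gateway; a hedged sketch (it assumes the gateway is running, and the table and column family are invented):

```python
# A sketch using the community happybase library over HBase's Thrift
# gateway (assumed to be running); table and column family names are
# invented.
import happybase

conn = happybase.Connection("localhost")
conn.create_table("pages", {"cf": dict()})

table = conn.table("pages")
table.put(b"row-example.com", {b"cf:title": b"Example Domain"})

# Fast point reads and ordered scans are HBase's strengths.
print(table.row(b"row-example.com"))
for key, data in table.scan(row_prefix=b"row-"):
    print(key, data)
```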
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-hive-100613952-orig.jpg)
### Hive ###
Although stable and mature for several years, [Hive][17] reached the 1.0 version milestone this year and continues to be the best solution when really heavy SQL lifting (many petabytes) is required. The community continues to focus on improving the speed, scale, and SQL compliance of Hive. Currently at version 1.2, significant improvements since its last Bossie include full ACID semantics, cross-data center replication, and a cost-based optimizer.
Hive 1.2 also brought improved SQL compliance, making it easier for organizations to use it to off-load ETL jobs from their existing data warehouses. In the pipeline are speed improvements with an in-memory cache called LLAP (which, from the looks of the JIRAs, is about ready for release), the integration of Spark machine learning libraries, and improved SQL constructs like nonequi joins, interval types, and subqueries.
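A hedged sketch of running HiveQL from Python via the community PyHive package, assuming a HiveServer2 instance on the default port; the table is hypothetical:

```python
# A hedged sketch using the community PyHive package against
# HiveServer2 (default port 10000); the web_logs table is invented.
from pyhive import hive

cursor = hive.connect(host="localhost", port=10000).cursor()
cursor.execute("""
    SELECT dt, COUNT(*) AS clicks
    FROM web_logs
    GROUP BY dt
    ORDER BY dt
""")
for row in cursor.fetchall():
    print(row)
```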
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-kylin-100613955-orig.jpg)
### Kylin ###
[Kylin][18] is an application developed at eBay for processing very large OLAP cubes via ANSI SQL, a task familiar to most data analysts. If you think about how many items are on sale now and in the past at eBay, and all the ways eBay might want to slice and dice data related to those items, you will begin to understand the types of queries Kylin was designed for.
Like most other analysis applications, Kylin supports multiple access methods, including JDBC, ODBC, and a REST API for programmatic access. Although Kylin is still in incubation at Apache, and the community nascent, the project is well documented and the developers are responsive and eager to understand customer use cases. Getting up and running with a starter cube was a snap. If you have a need for analysis of extremely large cubes, you should take a look at Kylin.
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-cdap-100613945-orig.jpg)
### CDAP ###
[CDAP][19] (Cask Data Access Platform) is a framework running on top of Hadoop that abstracts away the complexity of building and running big data applications. CDAP is organized around two core abstractions: data and applications. CDAP Datasets are logical representations of data that behave uniformly regardless of the underlying storage layer; CDAP Streams provide similar support for real-time data.
Applications use CDAP services for things such as distributed transactions and service discovery to shield developers from the low-level details of Hadoop. CDAP comes with a data ingestion framework and a few prebuilt applications and “packs” for common tasks like ETL and website analytics, along with support for testing, debugging, and security. Like most formerly commercial (closed source) projects, CDAP benefits from good documentation, tutorials, and examples.
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-ranger-100613960-orig.jpg)
### Ranger ###
Security has long been a sore spot with Hadoop. It isn't (as is frequently reported) that Hadoop is “insecure” or “has no security.” Rather, the truth was more that Hadoop had too much security, though not in a good way. I mean that every component had its own authentication and authorization implementation that wasn't integrated with the rest of the platform.
Hortonworks acquired XA/Secure in May, and [a few renames later][20] we have [Ranger][21]. Ranger pulls many of the key components of Hadoop together under one security umbrella, allowing you to set a “policy” that ties your Hadoop security to your existing ACL-based Active Directory authentication and authorization. Ranger gives you one place to manage Hadoop access control, one place to audit, one place to manage the encryption, and a pretty Web page to do it from.
-- Andrew C. Oliver
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-mesos-100613957-orig.jpg)
### Mesos ###
[Mesos][22], developed at U.C. Berkeley's [AMPLab][23] (which also brought us Spark), takes a different approach to managing cluster computing resources. The best way to describe Mesos is as a distributed microkernel for the data center. Mesos provides a minimal set of operating system mechanisms like inter-process communications, disk access, and memory to higher-level applications, called “frameworks” in Mesos-speak, that run in what is analogous to user space. Popular frameworks for Mesos include [Chronos][24] and [Aurora][25] for building ETL pipelines and job scheduling, and a few big data processing applications including Hadoop, Storm, and Spark, which have been ported to run as Mesos frameworks.
Mesos applications (frameworks) negotiate for cluster resources using a two-level scheduling mechanism, so writing a Mesos application is unlikely to feel like a familiar experience to most developers. Although Mesos is a young project, momentum is growing, and with Spark being an exceptionally good fit for Mesos, we're likely to see more from Mesos in the coming years.
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-nifi-100613958-orig.jpg)
### NiFi ###
[NiFi][26] is an incubating Apache project to automate the flow of data between systems. It doesn't operate in the traditional space that Kafka and Storm do, but rather in the space between external devices and the data center. NiFi was originally developed by the NSA and donated to the open source community in 2014. It has a strong community of developers and users within various government agencies.
NiFi isn't like anything else in the current big data ecosystem. It is much closer to a traditional EAI (enterprise application integration) tool than a data processing platform, although simple transformations are possible. One interesting feature is the ability to debug and change data flows in real time. Although not quite a REPL (read, eval, print loop), this kind of paradigm dramatically shortens the development cycle by not requiring a compile-deploy-test-debug workflow. Other interesting features include a strong “chain of custody,” where each piece of data can be tracked from beginning to end, along with any changes made along the way. You can also prioritize data flows so that time-sensitive information can be received as quickly as possible, bypassing less time-critical events.
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-kafka-100613954-orig.jpg)
### Kafka ###
[Kafka][27] has emerged as the de facto standard for distributed publish-subscribe messaging in the big data space. Its design allows brokers to support thousands of clients at high rates of sustained message throughput, while maintaining durability through a distributed commit log. Kafka does this by maintaining partitioned, append-only logs on the brokers' local disks; because each partition is replicated across multiple brokers, Kafka is protected against the loss of any single node.
When consumers want to read messages, Kafka looks up their offset in the partition log and sends them. Because messages are not deleted immediately, adding consumers or replaying historical messages does not impose additional costs. Kafka has been benchmarked at 2 million writes per second by its developers at LinkedIn. Despite Kafka's sub-1.0 version number, Kafka is a mature and stable product, in use in some of the largest clusters in the world.
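A minimal publish/subscribe sketch with the community kafka-python client; the topic name is invented:

```python
# A minimal publish/subscribe sketch with the community kafka-python
# client; the "clickstream" topic is made up.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("clickstream", b'{"user": 42, "page": "/pricing"}')
producer.flush()

# Consumers track their own offsets, so replaying history is cheap.
consumer = KafkaConsumer("clickstream",
                         bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest")
for message in consumer:
    print(message.offset, message.value)
```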
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opentsdb-100613959-orig.jpg)
### OpenTSDB ###
[OpenTSDB][28] is a time series database built on HBase. It was designed specifically for analyzing data collected from applications, mobile devices, networking equipment, and other hardware devices. The custom HBase schema used to store the time series data has been designed for fast aggregations and minimal storage requirements.
By using HBase as the underlying storage layer, OpenTSDB gains the distributed and reliable characteristics of that system. Users don't interact with HBase directly; instead events are written to the system via the time series daemon (TSD), which can be scaled out as required to handle high-throughput situations. There are a number of prebuilt connectors to publish data to OpenTSDB, and clients to read data from Ruby, Python, and other languages. OpenTSDB isn't strong on creating interactive graphics, but several third-party tools fill that gap. If you are already using HBase and want a simple way to store event data, OpenTSDB might be just the thing.
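A hedged sketch of writing a data point through the TSD's HTTP endpoint (`/api/put` on the default port 4242); the metric and tags are invented:

```python
# A hedged sketch writing one data point through the time series
# daemon's HTTP API; metric name and tags are invented.
import time
import requests

point = {
    "metric": "sys.cpu.user",
    "timestamp": int(time.time()),
    "value": 42.5,
    "tags": {"host": "web01", "dc": "lga"},
}
resp = requests.post("http://localhost:4242/api/put", json=point)
resp.raise_for_status()   # the TSD answers 204 No Content on success
```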
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-jupyter-100613953-orig.jpg)
### Jupyter ###
Everybody's favorite notebook application went generic. [Jupyter][29] is “the language-agnostic parts of IPython” spun out into an independent package. Although Jupyter itself is written in Python, the system is modular. Now you can have an IPython-like interface, along with notebooks for sharing code, documentation, and data visualizations, for nearly any language you like.
At least [50 language][30] kernels are already supported, including LISP, R, Ruby, F#, Perl, and Scala. In fact, even IPython itself is simply a Python module for Jupyter. Communication with the language kernel is via a REPL (read, eval, print loop) protocol, similar to [nREPL][31] or [Slime][32]. It is nice to see such a useful piece of software receiving significant [nonprofit funding][33] to further its development, such as parallel execution and multi-user notebooks. Behold, open source at its best.
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-zeppelin-100613963-orig.jpg)
### Zeppelin ###
While still in incubation, [Apache Zeppelin][34] is nevertheless stirring the data analytics and visualization pot. The Web-based notebook enables users to ingest, discover, analyze, and visualize their data. The notebook also allows you to collaborate with others to make data-driven, interactive documents incorporating a growing number of programming languages.
This technology also boasts an integration with Spark and an interpreter concept allowing any language or data processing back end to be plugged into Zeppelin. Currently Zeppelin supports interpreters such as Scala, Python, SparkSQL, Hive, Markdown, and Shell.
Zeppelin is still immature. I wanted to put a demo up but couldn't find an easy way to disable “shell” as an execution option (among other things). However, it already looks better visually than IPython Notebook, which is the popular incumbent in this space. If you don't want to spring for Databricks Cloud or need something open source and extensible, this is the most promising distributed computing notebook around -- especially if you're a Sparky type.
-- Andrew C. Oliver
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100613956-orig.jpg)
### Read about more open source winners ###
InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
[Bossie Awards 2015: The best open source applications][35]
[Bossie Awards 2015: The best open source application development tools][36]
[Bossie Awards 2015: The best open source big data tools][37]
[Bossie Awards 2015: The best open source data center and cloud software][38]
[Bossie Awards 2015: The best open source desktop and mobile software][39]
[Bossie Awards 2015: The best open source networking and security software][40]
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2982429/open-source-tools/bossie-awards-2015-the-best-open-source-big-data-tools.html
作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://spark.apache.org/
[2]:http://spark-packages.org/
[3]:https://storm.apache.org/
[4]:https://lmax-exchange.github.io/disruptor/
[5]:http://h2o.ai/product/
[6]:https://www.datatorrent.com/apex/
[7]:https://github.com/DataTorrent/Malhar
[8]:https://druid.io/
[9]:https://flink.apache.org/
[10]:https://www.elastic.co/products/elasticsearch
[11]:http://lucene.apache.org/
[12]:http://teiid.jboss.org/
[13]:https://drill.apache.org/
[14]:http://research.google.com/pubs/pub36632.html
[15]:http://hbase.apache.org/
[16]:http://phoenix.apache.org/
[17]:https://hive.apache.org/
[18]:https://kylin.incubator.apache.org/
[19]:http://cdap.io/
[20]:http://www.infoworld.com/article/2973381/application-development/apache-ranger-chuck-norris-hadoop-security.html
[21]:https://ranger.incubator.apache.org/
[22]:http://mesos.apache.org/
[23]:https://amplab.cs.berkeley.edu/
[24]:http://nerds.airbnb.com/introducing-chronos/
[25]:http://aurora.apache.org/
[26]:http://nifi.apache.org/
[27]:https://kafka.apache.org/
[28]:http://opentsdb.net/
[29]:http://jupyter.org/
[30]:https://github.com/ipython/ipython/wiki/IPython-kernels-for-other-languages
[31]:https://github.com/clojure/tools.nrepl
[32]:https://github.com/slime/slime
[33]:http://blog.jupyter.org/2015/07/07/jupyter-funding-2015/
[34]:https://zeppelin.incubator.apache.org/
[35]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[36]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[37]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[38]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[39]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[40]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html


@@ -1,261 +0,0 @@
Bossie Awards 2015: The best open source data center and cloud software
================================================================================
InfoWorld's top picks of the year in open source platforms, infrastructure, management, and orchestration software
![](http://images.techhive.com/images/article/2015/09/bossies-2015-data-center-cloud-100613986-orig.jpg)
### The best open source data center and cloud software ###
You might have heard about this new thing called Docker containers. Developers love them because you can build them with a script, add services in layers, and push them right from your MacBook Pro to a server for testing. It works because they're superlightweight, unlike those now-archaic virtual machines. Containers -- and other lightweight approaches to deliver services -- are changing the shape of operating systems, applications, and the tools to manage them. Our Bossie winners in data center and cloud are leading the charge.
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-docker-100613987-orig.jpg)
### Docker Machine, Compose, and Swarm ###
Docker's open source container technology has been adopted by the major public clouds and is being built into the next version of Windows Server. Allowing developers and operations teams to separate applications from infrastructure, Docker is a powerful data center automation tool.
However, containers are only part of the Docker story. Docker also provides a series of tools that allow you to use the Docker API to automate the entire container lifecycle, as well as handling application design and orchestration.
[Machine][1] allows you to automate the provisioning of Docker containers. Starting with a command line, you can use a single line of code to target one or more hosts, deploy the Docker engine, and even join it to a Swarm cluster. There's support for most hypervisors and cloud platforms; all you need are your access credentials.
[Swarm][2] handles clustering and scheduling, and it can be integrated with Mesos for more advanced scheduling capabilities. You can use Swarm to build a pool of container hosts, allowing your apps to scale out as demand increases. Applications and all of their dependencies can be defined with [Compose][3], which lets you link containers together into a distributed application and launch them as a group. Compose descriptions work across platforms, so you can take a developer configuration and quickly deploy in production.
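The Docker API that Machine, Swarm, and Compose build on can also be scripted directly. Here is a minimal sketch using the Docker SDK for Python, assuming a local Docker daemon and the `docker` package:

```python
import docker  # the Docker SDK for Python

client = docker.from_env()  # connect to the local Docker daemon

# Run a throwaway nginx container, mapping container port 80 to host 8080.
container = client.containers.run("nginx:alpine", detach=True,
                                  ports={"80/tcp": 8080})
print(container.short_id, container.status)

# The same API covers the rest of the lifecycle.
container.stop()
container.remove()
```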
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-coreos-rkt-100613985-orig.jpg)
### CoreOS and Rkt ###
A thin, lightweight server OS, [CoreOS][4] is based on Google's Chromium OS. Instead of using a package manager to install functions, it's designed to be used with Linux containers. By using containers to extend a thin core, CoreOS allows you to quickly deploy applications, working well on cloud infrastructures.
CoreOS's container management tooling, fleet, is designed to treat a cluster of CoreOS servers as a single unit, with tools for managing high availability and for deploying containers to the cluster based on resource availability. A cross-cluster key/value store, etcd, handles device management and supports service discovery. If a node fails, etcd can quickly restore state on a new replica, giving you a distributed configuration management platform that's linked to CoreOS's automated update service.
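As a sketch of the service discovery idea, etcd's HTTP key/value API can be driven from any language. This example uses Python's `requests` against the v2 API and assumes an etcd member listening on the default client port:

```python
import requests

BASE = "http://127.0.0.1:2379/v2/keys"  # assumed local etcd member

# A service registers where it can be reached...
requests.put(BASE + "/services/web", data={"value": "10.0.0.5:8080"})

# ...and any other node in the cluster can look it up.
node = requests.get(BASE + "/services/web").json()["node"]
print(node["key"], "->", node["value"])
```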
While CoreOS is perhaps best known for its Docker support, the CoreOS team is developing its own container runtime, rkt, with its own container format, the App Container Image. Also compatible with Docker containers, rkt has a modular architecture that allows different containerization systems (even hardware virtualization, in a proof of concept from Intel) to be plugged in. However, rkt is still in the early stages of development, so it isn't quite production-ready.
-- Simon Bisson
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-rancheros-100613997-orig.jpg)
### RancherOS ###
As we abstract more and more services away from the underlying operating system using containers, we can start thinking about what tomorrow's operating system will look like. Similar to our applications, it's going to be a modular set of services running on a thin kernel, self-configuring to offer only the services our applications need.
[RancherOS][5] is a glimpse of what that OS might look like. Blending the Linux kernel with Docker, RancherOS is a minimal OS suitable for hosting container-based applications in cloud infrastructures. Instead of using standard Linux packaging techniques, RancherOS leverages Docker to host Linux user-space services and applications in separate container layers. A low-level Docker instance is first to boot, hosting system services in their own containers. Users' applications run in a higher-level Docker instance, separate from the system containers. If one of your containers crashes, the host keeps running.
RancherOS is only 20MB in size, so it's easy to replicate across a data center. It's also designed to be managed using automation tools, not manually, with API-level access that works with Docker's management tools as well as with Rancher Labs' own cloud infrastructure and management tools.
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-kubernetes-100613991-orig.jpg)
### Kubernetes ###
Google's [Kubernetes][6] container orchestration system is designed to manage and run applications built in Docker and Rocket containers. Focused on managing microservice applications, Kubernetes lets you distribute your containers across a cluster of hosts, while handling scaling and ensuring managed services run reliably.
With containers providing an application abstraction layer, Kubernetes is an application-centric management service that supports many modern development paradigms, with a focus on user intent. That means you launch applications, and Kubernetes will manage the containers to run within the parameters you set, using the Kubernetes scheduler to make sure it gets the resources it needs. Containers are grouped into pods and managed by a replication engine that can recover failed containers or add more pods as applications scale.
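That application-centric API is easy to poke at from code. A minimal sketch with the official Kubernetes Python client, assuming a working kubeconfig and the `kubernetes` package:

```python
from kubernetes import client, config

config.load_kube_config()  # read credentials from ~/.kube/config
v1 = client.CoreV1Api()

# List every pod the scheduler is currently running, across namespaces.
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```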
Kubernetes powers Google's own Container Engine, and it runs on a range of other cloud and data center services, including AWS and Azure, as well as vSphere and Mesos. Containers can be either loosely or tightly coupled, so applications not designed for cloud PaaS operations can be migrated to the cloud as a tightly coupled set of containers. Kubernetes also supports rapid deployment of applications to a cluster, giving you an endpoint for a continuous delivery process.
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-mesos-100613993-orig.jpg)
### Mesos ###
Turning a data center into a private or public cloud requires more than a hypervisor. It requires a new operating layer that can manage the data center resources as if they were a single computer, handling resources and scheduling. Described as a “distributed systems kernel,” [Apache Mesos][7] allows you to manage thousands of servers, using containers to host applications and APIs to support parallel application development.
At the heart of Mesos is a set of daemons that expose resources to a central scheduler. Tasks are distributed across nodes, taking advantage of available CPU and memory. One key approach is the ability for applications to reject offered resources if they don't meet requirements. It's an approach that works well for big data applications, and you can use Mesos to run Hadoop and Cassandra distributed databases, as well as Apache's own Spark data processing engine. There's also support for the Jenkins continuous integration server, allowing you to run build and test workers in parallel on a cluster of servers, dynamically adjusting the tasks depending on workload.
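The offer/decline cycle is easy to picture. The loop below is a toy illustration in plain Python, not the real Mesos framework API, and the numbers are made up:

```python
# Toy model of Mesos-style resource offers: a framework declines any
# offer that can't satisfy its next task (values are illustrative).
offers = [{"host": "node1", "cpus": 2, "mem": 4096},
          {"host": "node2", "cpus": 8, "mem": 32768}]
task = {"cpus": 4, "mem": 8192}

for offer in offers:
    if offer["cpus"] >= task["cpus"] and offer["mem"] >= task["mem"]:
        print("accept offer from", offer["host"], "and launch the task")
        break
    print("decline offer from", offer["host"])
```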
Designed to run on Linux and Mac OS X, Mesos has also recently been ported to Windows to support the development of scalable parallel applications on Azure.
-- Simon Bisson
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-smartos-100614849-orig.jpg)
### SmartOS and SmartDataCenter ###
Joyent's [SmartDataCenter][8] is the software that runs its public cloud, adding a management platform on top of its [SmartOS][9] thin server OS. A descendant of OpenSolaris that combines Zones containers and the KVM hypervisor, SmartOS is an in-memory operating system, quick to boot from a USB stick and run on bare-metal servers.
Using SmartOS, you can quickly deploy a set of lightweight servers that can be programmatically managed via a set of JSON APIs, with functionality delivered via virtual machines, downloaded by built-in image management tools. Through the use of VMs, all userland operations are isolated from the underlying OS, reducing the security exposure of both the host and guests.
SmartDataCenter runs on SmartOS servers, with one server running as a dedicated management node, and the rest of a cluster operating as compute nodes. You can get started with a Cloud On A Laptop build (available as a VMware virtual appliance) that lets you experiment with the management server. In a live data center, you'll deploy SmartOS on your servers, using ZFS to handle storage, which includes your local image library. Services are deployed as images, with components stored in an object repository.
The combination of SmartDataCenter and SmartOS builds on the experience of Joyent's public cloud, giving you a tried and tested set of tools that can help you bootstrap your own cloud data center. It's an infrastructure focused on virtual machines today, but laying the groundwork for tomorrow. A related Joyent project, [sdc-docker][10], exposes an entire SmartDataCenter cluster as a single Docker host, driven by native Docker commands.
-- Simon Bisson
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-sensu-100614850-orig.jpg)
### Sensu ###
Managing large-scale data centers isn't about working with server GUIs; it's about automating scripts based on information from monitoring tools and services, routing information from sensors and logs, and then delivering actions to applications. One tool that's beginning to offer this functionality is [Sensu][11], often described as a "monitoring router."
Scripts running across your data center deliver information to Sensu, which then routes it to the appropriate handler, using a publish-and-subscribe architecture based on RabbitMQ. Servers can be distributed, delivering published check results to handler code. You might see results in email, or in a Slack room, or in Sensu's own dashboards. Message formats are defined in JSON files, with mutators used to format data on the fly, and messages can be filtered to one or more event handlers.
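The checks themselves are just scripts that print a status line and exit with a Nagios-style code (0 for OK, 1 for warning, 2 for critical). A minimal disk check in Python, with thresholds chosen purely for illustration:

```python
#!/usr/bin/env python
# Minimal Sensu-style check: print one line, exit 0/1/2 for OK/WARN/CRIT.
import shutil
import sys

usage = shutil.disk_usage("/")
pct = usage.used / usage.total * 100

if pct >= 95:  # illustrative thresholds, not defaults
    print("CheckDisk CRITICAL: %.0f%% used" % pct)
    sys.exit(2)
if pct >= 85:
    print("CheckDisk WARNING: %.0f%% used" % pct)
    sys.exit(1)
print("CheckDisk OK: %.0f%% used" % pct)
sys.exit(0)
```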
Sensu is still a relatively young tool, but it's one that shows a lot of promise. If you're going to automate your data center, you're going to need a tool like this not only to show you what's happening, but to deliver that information where it's most needed. A commercial option adds support for integration with third-party applications, but much of what you need to manage a data center is in the open source release.
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-prometheus-100613996-orig.jpg)
### Prometheus ###
Managing a modern data center is a complex task. Racks of servers need to be treated like cattle rather than pets, and you need a monitoring system designed to handle hundreds and thousands of nodes. Monitoring applications presents special challenges, and that's where [Prometheus][12] comes into play. A service monitoring system designed to deliver alerts to operators, Prometheus can run on everything from a single laptop to a highly available cluster of monitoring servers.
Time series data is captured and stored, then compared against patterns to identify faults and problems. You'll need to expose data on HTTP endpoints, using a YAML file to configure the server. A browser-based reporting tool handles displaying data, with an expression console where you can experiment with queries. Dashboards can be created with a GUI builder, or written using a series of templates, letting you deliver application consoles that can be managed using version control systems such as Git.
Captured data can be managed using expressions, which make it easy to aggregate data from several sources -- for example, letting you bring performance data from a series of Web endpoints into one store. An experimental alert manager module delivers alerts to common collaboration and devops tools, including Slack and PagerDuty. Official client libraries for common languages like Go and Java mean it's easy to add Prometheus support to your applications and services, while third-party options extend Prometheus to Node.js and .Net.
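Instrumenting a service takes only a few lines with a client library. A minimal sketch with the Python client, `prometheus_client`, counting simulated requests and exposing them for the server to scrape:

```python
import random
import time

from prometheus_client import Counter, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")

start_http_server(8000)  # metrics appear at http://localhost:8000/metrics
while True:              # simulate a workload for Prometheus to observe
    REQUESTS.inc()
    time.sleep(random.random())
```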
-- Simon Bisson
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-elk-100613988-orig.jpg)
### Elasticsearch, Logstash, and Kibana ###
Running a modern data center generates a lot of data, and it requires tools to get information out of that data. That's where the combination of Elasticsearch, Logstash, and Kibana, often referred to as the ELK stack, comes into play.
Designed to handle scalable search across a mix of content types, including structured and unstructured documents, [Elasticsearch][13] builds on Apache's Lucene information retrieval tools, with a RESTful JSON API. It's used to provide search for sites like Wikipedia and GitHub, using a distributed index with automated load balancing and routing.
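The JSON API is just as easy to exercise from code. A minimal sketch with recent versions of the official Python client, assuming a node on the default port:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local node

# Index a document, make it searchable, then query it back.
es.index(index="logs", document={"host": "web1", "msg": "disk almost full"})
es.indices.refresh(index="logs")

result = es.search(index="logs", query={"match": {"msg": "disk"}})
for hit in result["hits"]["hits"]:
    print(hit["_source"])
```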
Under the fabric of a modern cloud is a physical array of servers, running as VM hosts. Monitoring many thousands of servers needs centralized logs. [Logstash][14] harvests and filters the logs generated by those servers (and by the applications running on them), using a forwarder on each physical and virtual machine. Logstash-formatted data is then delivered to Elasticsearch, giving you a search index that can be quickly scaled as you add more servers.
At a higher level, [Kibana][15] adds a visualization layer to Elasticsearch, providing a Web dashboard for exploring and analyzing the data. Dashboards can be created around custom searches and shared with your team, providing a quick, easy-to-digest devops information feed.
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-ansible-100613984-orig.jpg)
### Ansible ###
Managing server configuration is a key element of any devops approach to managing a modern data center or a cloud infrastructure. Configuration management tooling that takes a desired-state approach simplifies systems management at cloud scale, using server and application descriptions to handle server and application deployment.
[Ansible][16] offers a minimal management service, using SSH to manage Unix nodes and PowerShell to work with Windows servers, with no need to deploy agents. An Ansible Playbook describes the state of a server or service in YAML, deploying Ansible modules to servers that handle configuration and removing them once the service is running. You can use Playbooks to orchestrate tasks -- for example, deploying several Web endpoints with a single script.
It's possible to make module creation and Playbook delivery part of a continuous delivery process, using build tools to deliver configurations and automate deployment. Ansible can pull in information from cloud service providers, simplifying management of virtual machines and networks. Monitoring tools in Ansible are able to trigger additional deployments automatically, helping manage and control cloud services, as well as working to manage resources used by large-scale data platforms like Hadoop.
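Modules themselves are small programs that take JSON arguments and return JSON results. A minimal custom module in Python, purely illustrative:

```python
#!/usr/bin/python
# Illustrative custom Ansible module: reports whether a path exists.
import os

from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(path=dict(type="str", required=True))
    )
    module.exit_json(changed=False,
                     exists=os.path.exists(module.params["path"]))


if __name__ == "__main__":
    main()
```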
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-jenkins-100613990-orig.jpg)
### Jenkins ###
Getting continuous delivery right requires more than a structured way of handling development; it also requires tools for managing test and build. That's where the [Jenkins][17] continuous integration server comes in. Jenkins works with your choice of source control, your test harnesses, and your build server. It's a flexible tool, initially designed for working with Java but now extended to support Web and mobile development and even to build Windows applications.
Jenkins is perhaps best thought of as a switching network, shunting files through a test and build process, and responding to signals from the various tools you're using, thanks to a library of more than 1,000 plug-ins. These include tools for integrating Jenkins with both local Git instances and GitHub so that it's possible to extend a continuous development model into your build and delivery processes.
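Jenkins also exposes a remote API, so builds can be triggered and inspected from scripts. A sketch using the third-party `python-jenkins` library; the server URL, credentials, and job name are placeholders:

```python
import jenkins  # the python-jenkins library

server = jenkins.Jenkins("http://localhost:8080",  # placeholder URL
                         username="ci", password="api-token")

server.build_job("my-app-build")  # queue a build of a placeholder job

# Inspect the most recent completed build (a freshly queued build may
# still be waiting for an executor).
info = server.get_job_info("my-app-build")
number = info["lastCompletedBuild"]["number"]
print(server.get_build_info("my-app-build", number)["result"])
```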
Using an automation tool like Jenkins is as much about adopting a philosophy as it is about implementing a build process. Once you commit to continuous integration as part of a continuous delivery model, you'll be running test and build cycles as soon as code is delivered to your source control release branch and delivering it to users as soon as it's in the main branch.
-- Simon Bisson
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-nodejs-iojs-100613995-orig.jpg)
### Node.js and io.js ###
Modern cloud applications are built using different design patterns from the familiar n-tier enterprise and Web apps. They're distributed, event-driven collections of services that can be quickly scaled and can support many thousands of simultaneous users. One key technology in this new paradigm is [Node.js][18], used by many major cloud platforms and easy to install as part of a thin server or container on cloud infrastructure.
Key to the success of Node.js is the Npm package format, which allows you to quickly install extensions to the core Node.js service. These include frameworks like Express and Seneca, which help build scalable applications. A central registry handles package distribution, and dependencies are automatically installed.
While the [io.js][19] fork exposed issues with project governance, it also allowed a group of developers to push forward adding ECMAScript 6 support to an Npm-compatible engine. After reconciliation between the two teams, the Node.js and io.js codebases have been merged, with new releases now coming from the io.js code repository.
Other forks, like Microsoft's io.js fork to add support for its 64-bit Chakra JavaScript engine alongside Google's V8, are likely to be merged back into the main branch over the next year, keeping the Node.js platform evolving and cementing its role as the preferred host for cloud-scale microservices.
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-seneca-100613998-orig.jpg)
### Seneca ###
The developers of the [Seneca][20] microservice framework have a motto: “Build it now, scale it later!” It's an apt maxim for anyone thinking about developing microservices, as it allows you to start small, then add functionality as your service grows.
Seneca is at heart an implementation of the [actor/message design pattern][21], focused on using Node.js as a switching engine that takes in messages, processes their contents, and sends an appropriate response, either to the message originator or to another service. By focusing on the message patterns that map to business use cases, it's relatively easy to take Seneca and quickly build a minimum viable product for your application. A plug-in architecture makes it easy to integrate Seneca with other tools and to quickly add functionality to your services.
You can easily add new patterns to your codebase or break existing patterns into separate services as the needs of your application grow or change. One pattern can also call another, allowing quick code reuse. It's also easy to add Seneca to a message bus, so you can use it as a framework for working with data from Internet of things devices, as all you need to do is define a listening port where JSON data is delivered.
Services may not be persistent, and Seneca gives you the option of using a built-in object relational mapping layer to handle data abstraction, with plug-ins for common databases.
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-netcore-aspnet-100613994-orig.jpg)
### .Net Core and ASP.Net vNext ###
Microsoft's [open-sourcing of .Net][22] is bringing much of the company's Web platform into the open. The new [.Net Core][23] release runs on Windows, on OS X, and on Linux. Currently migrating from Microsoft's Codeplex repository to GitHub, .Net Core offers a more modular approach to .Net, allowing you to install the functions you need as you need them.
Currently under development is [ASP.Net 5][24], an open source version of the Web platform, which runs on .Net Core. You can work with it as the basis of Web apps using Microsoft's MVC 6 framework. There's also support for the new SignalR libraries, which add support for WebSockets and other real-time communications protocols.
If you're planning on using Microsoft's new Nano server, you'll be writing code against .Net Core, as it's designed for thin environments. The new DNX, the .Net Execution environment, simplifies deployment of ASP.Net applications on a wide range of platforms, with tools for packaging code and for booting a runtime on a host. Features are added using the NuGet package manager, letting you use only the libraries you want.
Microsoft's open source .Net is still very young, but there's a commitment in Redmond to ensure it's successful. Support in Microsoft's own next-generation server operating systems means it has a place in both the data center and the cloud.
-- Simon Bisson
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-glusterfs-100613989-orig.jpg)
### GlusterFS ###
[GlusterFS][25] is a distributed file system. Gluster aggregates various storage servers into one large parallel network file system. You can [even use it in place of HDFS in a Hadoop cluster][26] or in place of an expensive SAN system -- or both. While HDFS is great for Hadoop, having a general-purpose distributed file system that doesn't require you to transfer data to another location to analyze it is a key advantage.
In an era of commoditized hardware, commoditized computing, and increased performance and latency requirements, buying a big, fat, expensive EMC SAN and hoping it fits all of your needs (it won't) is no longer your sole viable option. GlusterFS was acquired by Red Hat in 2011.
-- Andrew C. Oliver
![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100613992-orig.jpg)
### Read about more open source winners ###
InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
[Bossie Awards 2015: The best open source applications][27]
[Bossie Awards 2015: The best open source application development tools][28]
[Bossie Awards 2015: The best open source big data tools][29]
[Bossie Awards 2015: The best open source data center and cloud software][30]
[Bossie Awards 2015: The best open source desktop and mobile software][31]
[Bossie Awards 2015: The best open source networking and security software][32]
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2982923/open-source-tools/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
Author: [InfoWorld staff][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://www.docker.com/docker-machine
[2]:https://www.docker.com/docker-swarm
[3]:https://www.docker.com/docker-compose
[4]:https://coreos.com/
[5]:http://rancher.com/rancher-os/
[6]:http://kubernetes.io/
[7]:https://mesos.apache.org/
[8]:https://github.com/joyent/sdc
[9]:https://smartos.org/
[10]:https://github.com/joyent/sdc-docker
[11]:https://sensuapp.org/
[12]:http://prometheus.io/
[13]:https://www.elastic.co/products/elasticsearch
[14]:https://www.elastic.co/products/logstash
[15]:https://www.elastic.co/products/kibana
[16]:http://www.ansible.com/home
[17]:https://jenkins-ci.org/
[18]:https://nodejs.org/en/
[19]:https://iojs.org/en/
[20]:http://senecajs.org/
[21]:http://www.infoworld.com/article/2976422/application-development/how-to-use-actors-in-distributed-applications.html
[22]:http://www.infoworld.com/article/2846450/microsoft-net/microsoft-open-sources-server-side-net-launches-visual-studio-2015-preview.html
[23]:https://dotnet.github.io/core/
[24]:http://www.asp.net/vnext
[25]:http://www.gluster.org/
[26]:http://www.gluster.org/community/documentation/index.php/Hadoop
[27]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[28]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[29]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[30]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[31]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[32]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html


@@ -1,223 +0,0 @@
Bossie Awards 2015: The best open source desktop and mobile software
================================================================================
InfoWorld's top picks in open source productivity tools, desktop utilities, and mobile apps
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-desktop-mobile-100614439-orig.jpg)
### The best open source desktop and mobile software ###
Open source on the desktop has a long and distinguished history, and many of our Bossie winners in this category go back many years. Packed with features and still improving, some of these tools offer compelling alternatives to pricey commercial software. Others are utilities that we lean on daily for one reason or another -- the can openers and potato peelers of desktop productivity. One or two of them either plug holes in Windows, or they go the distance where Windows falls short.
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-libreoffice-100614436-orig.jpg)
### LibreOffice ###
With the major release of version 5 in August, the Document Foundation's [LibreOffice][1] offers a completely redesigned user interface, better compatibility with Microsoft Office (including good-but-not-great DOCX, XLSX, and PPTX file format support), and significant improvements to Calc, the spreadsheet application.
Set against a turbulent background, the LibreOffice effort split from OpenOffice.org in 2010. In 2011, Oracle announced it would no longer support OpenOffice.org, and handed the trademark to the Apache Software Foundation. Since then, it has become [increasingly clear][2] that LibreOffice is winning the race for developers, features, and users.
-- Woody Leonhard
![](http://images.techhive.com/images/article/2015/09/bossies-2015-firefox-100614426-orig.jpg)
### Firefox ###
In the battle of the big browsers, [Firefox][3] gets our vote over its longtime open source rival Chromium for two important reasons:
- **Memory use**. Chromium, like its commercial cousin Chrome, has a nasty propensity to glom onto massive amounts of memory.
- **Privacy**. Witness the [recent controversy][4] over Chromium automatically downloading a microphone snooping program to respond to “OK, Google.”
Firefox may not have the most features or the down-to-the-millisecond fastest rendering engine. But it's solid, stingy with resources, highly extensible, and most of all, it comes with no strings attached. There's no ulterior data-gathering motive.
-- Woody Leonhard
![](http://images.techhive.com/images/article/2015/09/bossies-2015-thunderbird-100614433-orig.jpg)
### Thunderbird ###
A longtime favorite email client, Mozilla's [Thunderbird][5], may be getting a bit long in the tooth, but it's still supported and showing signs of life. The latest version, 38.2, arrived in August, and there are plans for more development.
Mozilla officially pulled its people off the project back in July 2012, but a hardcore group of volunteers, led by Kent James and the all-volunteer Thunderbird Council, continues to toil away. While you won't find the latest email innovations in Thunderbird, you will find a solid core of basic functions based on local storage. If having mail in the cloud spooks you, it's a good, private alternative. And if James goes ahead with his idea of encrypting Thunderbird mail end-to-end, there may be significant new life in the old bird.
-- Woody Leonhard
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-notepad-100614432-orig.jpg)
### Notepad++ ###
If Windows Notepad handles all of your text editing (and source code editing and HTML editing) needs, more power to ya. For Windows users who yearn for a little bit more in a text editor, there's Don Ho's [Notepad++][6], which is the editor I turn to, over and over again.
With tabbed views, drag-and-drop, color-coded hints for completing HTML commands, bookmarks, macro recording, shortcut keys, and every text encoding format you're likely to encounter, Notepad++ takes text to a new level. We get frequent updates, too, with the latest in August.
-- Woody Leonhard
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-vlc-100614435-orig.jpg)
### VLC ###
The stalwart [VLC][7] (formerly known as VideoLan Client) runs almost any kind of media file on almost any platform. Yes, it even works as a remote control on Apple Watch.
The tiled Universal app version for Windows 10, in the Windows Store, draws some criticism for instability and lack of control, but in most cases VLC works, and it works well -- without external codecs. It even supports Blu-ray formats with two new libraries.
The desktop version is a must-have for Windows 10, unless you're ready to run the advertising gauntlets that are the Universal Groove Music and Movies & TV apps from Microsoft. VLC received a major [feature update][8] in February and a comprehensive bug fix in April.
-- Woody Leonhard
![](http://images.techhive.com/images/article/2015/09/bossies-2015-7-zip-100614429-orig.jpg)
### 7-Zip ###
Long recognized as the preeminent open source ZIP archive manager for Windows, [7-Zip][9] works like a champ, even on the Windows 10 desktop. Full coverage for RAR files, which can be problematic in Windows, combines with password-protected file creation and support for self-extracting ZIPs. It's one of those programs that just works.
Yes, it would be nice to get a more modern file picker. Yes, it would be interesting to see a tiled Universal app version. But even without the fancy bells and whistles, 7-Zip deserves a place on every Windows desktop.
-- Woody Leonhard
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-handbrake-100614427-orig.jpg)
### Handbrake ###
If you want to convert your DVDs (or video files in any commonly used format) into a file in some other format, or simply scrape them off a silver coaster, [Handbrake][10] is the way to do it. If you're a Windows user, Handbrake is almost indispensable, since Microsoft doesn't believe in ripping DVDs.
Handbrake presents a number of handy presets for optimizing conversions for your target device (iPod, iPad, Android tablet, and so on). It's simple, and it's fast. With the latest round of bug fixes released in June, Handbrake's keeping up on maintenance -- and it works fine on the Windows 10 desktop.
-- Woody Leonhard
![](http://images.techhive.com/images/article/2015/09/bossies-2015-keepass-100614430-orig.jpg)
### KeePass ###
I'll confess that I almost gave up on [KeePass][11] because the primary download site goes to Sourceforge. That means you have to be extremely careful which boxes are checked and what you click on (and when) as you attempt to download and install the software. While KeePass itself is 100 percent clean open source (GNU GPL), Sourceforge doesn't feel so constrained, and its [installers reek of crapware][12].
One of many local-file password storage programs, KeePass distinguishes itself with broad scope, as well as its ability to run on all sorts of platforms, no installation required. KeePass will save not only passwords, but also credit card information and freely structured information. It provides a strong random password generator, and the database itself is locked with AES and Twofish, so nobody's going to crack it. And it's kept up to date, with a new stable release last month.
-- Woody Leonhard
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-virtualbox-100614434-orig.jpg)
### VirtualBox ###
With a major release published in July, Oracle's open source [VirtualBox][13] -- available for Windows, OS X, Linux, even Solaris -- continues to give commercial counterparts VMware Workstation, VMware Fusion, Parallels Desktop, and Microsoft's Hyper-V a hard run for their money. The Oracle team is still getting the final Windows 10 bugs ironed out, but come to think of it, so is Microsoft.
VirtualBox doesn't quite match the performance or polish of the VMware and Parallels products, but it's getting closer. Version 5 brought long-awaited drag-and-drop support, making it easier to move files between VMs and host.
I prefer VirtualBox over Hyper-V because it's easy to control external devices. In Hyper-V, for example, getting sound to work is a pain in the neck, but in VirtualBox it only takes a click in setup. The shared clipboard between VM and host works wonders. Running speed on both is roughly the same, with a slight advantage to Hyper-V. But managing VirtualBox machines is much easier.
-- Woody Leonhard
![](http://images.techhive.com/images/article/2015/09/bossies-2015-inkscape-100614428-orig.jpg)
### Inkscape ###
If you stand in awe of the designs created with Adobe Illustrator (or even CorelDraw), take a close look at [Inkscape][14]. Scalable vector images never looked so good.
Version 0.91, released in January, uses a new internal graphics rendering engine called Cairo, sponsored by Google, to make the app run faster and allow for more accurate rendering. Inkscape will read and write SVG, PNG, PDF, even EPS, and many other formats. It can export Flash XML Graphics, HTML5 Canvas, and XAML, among others.
There's a strong community around Inkscape, and it's built for easy extensibility. It's available for Windows, OS X, and Linux.
-- Woody Leonhard
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-keepassdroid-100614431-orig.jpg)
### KeePassDroid ###
Trying to remember all of the passwords we need today is impossible, and creating new ones to meet stringent password policy requirements can be agonizing. A port of KeePass for Android, [KeePassDroid][15] brings sanity-preserving password management to mobile devices.
Like KeePass, KeePassDroid makes creating and accessing passwords easy, requiring you to recall only a single master password. It supports both AES and Twofish algorithms for encrypting all passwords, and it goes a step further by encrypting the entire password database, not only the password fields. Notes and other password-pertinent information are encrypted too.
While KeePassDroid's interface is minimal -- dated, some would say -- it gets the job done with bare-bones efficiency. Need to generate passwords that have certain character sets and lengths? KeePassDroid can do that with ease. With more than a million downloads on the Google Play Store, you could say this app definitely fills a need.
-- Victor R. Garza
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-prey-100615300-orig.jpg)
### Prey ###
Loss or theft of mobile devices is all too common these days. While there are many tools in the enterprise to manage and erase data either misplaced or stolen from an organization, [Prey][16] facilitates the recovery of the phone, laptop, or tablet, and not just the wiping of potentially sensitive information from the device.
Prey is a Web service that works with an open source installed agent for Linux, OS X, Windows, Android, and iOS devices. Prey tracks your lost or stolen device by using either the device's GPS, the native geolocation provided by newer operating systems, or an associated Wi-Fi hotspot to home in on the location.
If your smartphone is lost or stolen, send a text message to the device to activate Prey. For stolen tablets or laptops, use the Prey Project's cloud-based control panel to select the device as missing. The Prey agent on any device can then take a screenshot of the active applications, turn on the camera to catch a thief's image, reset the device to the factory settings, or fully lock down the device.
Should you want to retrieve your lost items, the Prey Project strongly suggests you contact your local police to have them assist you.
-- Victor R. Garza
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-orbot-100615299-orig.jpg)
### Orbot ###
The premier proxy application for Android, [Orbot][17] leverages the volunteer-operated network of virtual tunnels called Tor (The Onion Router) to keep all communications private. Orbot works with companion applications [Orweb][18] for secure Web browsing and [ChatSecure][19] for secure chat. In fact, any Android app that allows its proxy settings to be changed can be secured with Orbot.
One thing to remember about the Tor network is that it's designed for secure, lightweight communications, not for pulling down torrents or watching YouTube videos. Surfing media-rich sites like Facebook can be painfully slow. Your Orbot communications won't be blazing fast, but they will stay private and confidential.
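The same pattern works anywhere an HTTP client can speak SOCKS, which makes it easy to test Tor routing from code. A sketch in Python with `requests` (it needs the `requests[socks]` extra; 9050 is Tor's usual SOCKS port, so adjust if your setup differs):

```python
import requests  # install with: pip install "requests[socks]"

# socks5h:// resolves DNS through the proxy too, avoiding DNS leaks.
proxies = {"http": "socks5h://127.0.0.1:9050",
           "https": "socks5h://127.0.0.1:9050"}

resp = requests.get("https://check.torproject.org/", proxies=proxies)
print(resp.status_code)
```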
-- Victor R. Garza
![](http://images.techhive.com/images/article/2015/09/bossies-2015-tails-100615301-orig.jpg)
### Tails ###
[Tails][20], or The Amnesic Incognito Live System, is a Linux Live OS that can be booted from a USB stick, DVD, or SD card. It's often used covertly in the Deep Web to secure traffic when purchasing illicit substances, but it can also be used to avoid tracking, support freedom of speech, circumvent censorship, and promote liberty.
Leveraging Tor (The Onion Router), Tails keeps all communications secure and private and promises to leave no trace on any computer after it's used. It performs disk encryption with LUKS, protects instant messages with OTR, encrypts Web traffic with the Tor Browser and HTTPS Everywhere, and securely deletes files via Nautilus Wipe. Tails even has an office suite, image editor, and the like.
Now, it's always possible to be traced while using any system if you're not careful, so be vigilant when using Tails and follow good privacy practices, like turning off JavaScript while using Tor. And be aware that Tails isn't necessarily going to be speedy, even over a fiber connection, but that's the price you pay for anonymity.
-- Victor R. Garza
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100614438-orig.jpg)
### Read about more open source winners ###
InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
[Bossie Awards 2015: The best open source applications][21]
[Bossie Awards 2015: The best open source application development tools][22]
[Bossie Awards 2015: The best open source big data tools][23]
[Bossie Awards 2015: The best open source data center and cloud software][24]
[Bossie Awards 2015: The best open source desktop and mobile software][25]
[Bossie Awards 2015: The best open source networking and security software][26]
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2982630/open-source-tools/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
Author: [InfoWorld staff][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://www.libreoffice.org/download/libreoffice-fresh/
[2]:http://lwn.net/Articles/637735/
[3]:https://www.mozilla.org/en-US/firefox/new/
[4]:https://nakedsecurity.sophos.com/2015/06/24/not-ok-google-privacy-advocates-take-on-the-chromium-team-and-win/
[5]:https://www.mozilla.org/en-US/thunderbird/
[6]:https://notepad-plus-plus.org/
[7]:http://www.videolan.org/vlc/index.html
[8]:http://www.videolan.org/press/vlc-2.2.0.html
[9]:http://www.7-zip.org/
[10]:https://handbrake.fr/
[11]:http://keepass.info/
[12]:http://www.infoworld.com/article/2931753/open-source-software/sourceforge-the-end-cant-come-too-soon.html
[13]:https://www.virtualbox.org/
[14]:https://inkscape.org/en/download/windows/
[15]:http://www.keepassdroid.com/
[16]:http://preyproject.com/
[17]:https://www.torproject.org/docs/android.html.en
[18]:https://guardianproject.info/apps/orweb/
[19]:https://guardianproject.info/apps/chatsecure/
[20]:https://tails.boum.org/
[21]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[22]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[23]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[24]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[25]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[26]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html


@@ -1,186 +0,0 @@
[translating by ray]
Interviews: Linus Torvalds Answers Your Questions
================================================================================
Last Thursday you had a chance to [ask Linus Torvalds][1] about programming, hardware, and all things Linux. You can read his answers to those questions below. If you'd like to see what he had to say the last time we sat down with him, [you can do so here][2].
**Productivity**
by DoofusOfDeath
> You've somehow managed to originate two insanely useful pieces of software: Linux, and Git. Do you think there's anything in your work habits, your approach to choosing projects, etc., that have helped you achieve that level of productivity? Or is it just the traditional combination of talent, effort, and luck?
**Linus**: I'm sure it's pretty much always that "talent, effort and luck". I'll leave it to others to debate how much of each...
I'd love to point out some magical work habit that makes it all happen, but I doubt there really is any. Especially as the work habits I had wrt the kernel and Git have been so different.
With Git, I think it was a lot about coming at a problem with fresh eyes (not having ever really bought into the traditional SCM mindset), and really trying to think about the issues, and spending a fair amount of time thinking about what the real problems were and what I wanted the design to be. And then the initial self-hosting code took about a day to write (ok, that was "self-hosting" in only the weakest sense, but still).
And with Linux, obviously, things were very different - the big designs came from the outside, and it took half a year to host itself, and it hadn't even started out as a kernel to begin with. Clearly not a lot of thinking ahead and planning involved ;). So very different circumstances indeed.
What both the kernel and Git have, and what I think is really important (and I guess that counts as a "work habit"), is a maintainer that stuck to it, and was responsive, responsible and sane. Too many projects falter because they don't have people that stick with them, or have people who have an agenda that doesn't match reality or the user expectations.
But it's very important to point out that for Git, that maintainer was not me. Junio Hamano really should get pretty much all the credit for Git. Credit where credit is due. I'll take credit for the initial implementation and design of Git - it may not be perfect, but ten years on it still is very solid and very clearly the same basic design. But I'll take even _more_ credit for recognizing that Junio had his head screwed on right, and was the person to drive the project. And all the rest of the credit goes to him.
Of course, that kind of segues into something else the kernel and Git do have in common: while I still maintain the kernel, I did end up finding a lot of smart people to maintain all the different parts of it. So while one important work habit is that "stick to it" persistence that you need to really take a project from a not-quite-usable prototype to something bigger and better, another important work-habit is probably to also "let go" and not try to own and control the project too much. Let other people really help you - guide the process but don't get in their way.
**init system**
by lorinc
> There wasn't a decent unix-like kernel, you wrote one which ultimately became the most used. There wasn't a decent version control software, you wrote one which ultimately became the most loved. Do you think we already have a decent init system, or do you have plans to write one that will ultimately settle the world on that hot topic?
**Linus**: You can say the word "systemd". It's not a four-letter word. Seven letters. Count them.
I have to say, I don't really get the hatred of systemd. I think it improves a lot on the state of init, and no, I don't see myself getting into that whole area.
Yeah, it may have a few odd corners here and there, and I'm sure you'll find things to despise. That happens in every project. I'm not a huge fan of the binary logging, for example. But that's just an example. I much prefer systemd's infrastructure for starting services over traditional init, and I think that's a much bigger design decision.
Yeah, I've had some personality issues with some of the maintainers, but that's about how you handle bug reports and accept blame (or not) for when things go wrong. If people thought that meant that I dislike systemd, I will have to disappoint you guys.
**Can Valve change the Linux gaming market?**
by Anonymous Coward
> Do you think Valve is capable of making Linux a primary choice for gamers?
**Linus**: "Primary"? Probably not where it's even aiming. I think consoles (and all those handheld and various mobile platforms that "real gamers" seem to dismiss as toys) are likely much more primary, and will stay so.
I think Valve wants to make sure they can control their own future, and Linux and ValveOS is probably partly to explore a more "console-like" Valve experience (ie the whole "get a box set up for a single main purpose", as opposed to a more PC-like experience), and partly as a "second source" against Microsoft, who is a competitor in the console area. Keeping your infrastructure suppliers honest by making sure you have alternatives sounds like a good strategy, and particularly so when those suppliers may be competing with you directly elsewhere.
So I don't think the aim is really "primary". "Solid alternative" is I think the aim. Of course, let's see where it goes after that.
But I really have not been involved. People like Greg and the actual graphics driver guys have been in much more direct contact with Valve. I think it's great to see gaming on Linux, but at the same time, I'm personally not really much of a gamer.
**The future of RT-Linux?**
by nurhussein
> According to Thomas Gleixner, [the future of the realtime patchset to Linux is in doubt][2], as it is difficult to secure funding from interested parties on this functionality even though it is both useful and important. What are your thoughts on this, and what do you think we need to do to get more support behind the RT patchset, especially considering Linux's increasing use in embedded systems where realtime functionality is undoubtedly useful?
**Linus**: So I think this is one of those things where the markets decide how important rtLinux ends up being, and I suspect there are more than enough companies who end up wanting and using rtLinux that the project isn't really going anywhere. The complaints by Thomas were - I think - a wake-up call to the companies who end up wanting the extended hard realtime patches.
So I suspect there are companies and groups like OSADL that end up funding and helping with rtLinux, and that it isn't going away.
**Rigor and developments**
by hcs_$reboot
> The most complex program running on a machine is arguably its OS, especially the kernel. Linux (kernel) reached the top level in terms of performance, reliability and versatility. You have been criticized quite a few times for some virulent mails addressed to developers. Do you think Linux would be where it is without managing the project with an iron fist? To go further, do you think some other main OSS project would benefit from a more rigorous management approach?
**Linus**: One of the nice things about open source is how it allows people to really concentrate on what they are good at, and it has been a huge advantage for Linux that we've had people who are interested in the marketing side and selling Linux, as well as the legal side etc.
And that is all in addition, of course, to the original "we're motivated by the technology" people like me. And even within that "we're motivated by technology" group, you most certainly don't need to find _everything_ interesting, you can find the area you are passionate about and really care about and want to work on.
That's _fundamentally_ how open source works.
Now, if somebody is passionate about some "good management" thing, go wild, and try to get involved, and try to manage things. It's not what _I_ am interested in, but hey, the proof is in the pudding - anybody who thinks they have a new rigorous management approach that they think will help some part of the process, go wild.
Now, I personally suspect that it wouldn't work - not only are tech people an ornery lot to begin with (that whole "herding cats" thing), just look at all the crazy arguments on the internet. And ask yourself what actually holds an open source project like the kernel together? I think you need to be very oriented towards the purely technical solutions, simply because then you have tangible and real issues you can discuss (and argue about) with fairly clear-cut hard answers. It's the only thing people can really agree on in the big picture.
So the Linux approach to "management" has been to put technology first. That's rigorous enough for me. But as mentioned, it's a free-for-all. Anybody can come in and try to do better. Really.
And btw, it's worth noting that there are obviously specific smaller development teams where other management models work fine. Most of the individual developers are parts of teams inside particular companies, and within the confines of that company, there may well be a very strict rigorous management model. Similarly, within the confines of a particular productization effort there may be particular goals and models for that particular team that transcend that general "technical issues" thing.
Just to give a concrete example, the "development kernel" tree that I maintain works fundamentally differently and with very different rules from the "stable tree" that Greg does, which in turn is maintained very differently from what a distribution team within a Linux company does inside its maintenance kernel team.
So there's certainly room for different approaches to managing those very different groups. But do I think you can "rigorously manage" people on the internet? No.
**Functional languages?**
by EmeraldBot
> While historically you've been a C and Assembly guy (and the odd shell scripting and such), what do you think of functional languages such as Lisp, Clojure, Haskell, etc.? Do you see any advantages to them, or do you view them as frivolous and impractical? If you decide to do so, thanks for taking the time to answer my question! You're a legend at what you do, and I think it's awesome that the significantly less interesting me can ask you a question like this.
**Linus**: I may be a fan of C (with a certain fondness for assembly, just because it's so close to the machine), but that's very much about a certain context. I work at a level where those languages make sense. I certainly don't think that tools like Haskell etc are "frivolous and impractical" in general, although on a kernel level (or in a source control management system) I suspect they kind of are.
Many moons ago I worked on sparse (the C parser and analyzer), and one of my coworkers was a Haskell fan, and did incredible example transformations in very simple (well, to him) code - stuff that is just nasty to write in C because it's pretty high-level, there's tons of memory management, and you're really talking about implementing fairly abstract and high-level rules with pattern matching etc.
So I'm definitely not a functional language kind of guy - it's not how I learnt programming, and it really isn't very relevant to what I do, and I wouldn't recognize Haskell code if it bit me in the ass and called me names. But no, I wouldn't call them frivolous.
**Critical software to the use of Linux**
by TWX
> Mr. Torvalds, For many uses of Linux such as on the desktop, other software beyond the kernel and the base GNU tools are required. What other projects would you like to see given priority, and what would you like to see implemented or improved? Admittedly I thought most about X-Windows when asking this question; but I don't doubt that other daemons or systems can be just as important to the user experience. Thank you for your efforts all these years.
**Linus**: Hey, I don't really have any particular project I would want to champion, largely because we all have so different requirements on the desktop. There's just no single thing that stands out as being hugely more important than others to me.
What I do wish particularly desktop developers cared about is "consistency of experience". And by that I don't mean some kind of enforced visual consistency between different applications to make things "look coherent". No, I'm just talking about the pain and uncertainty users go through with upgrades, and understanding that while your project may be the most important project to *you* (because it's what you do), to your users, your project is likely just a fairly small and irrelevant part of their experience, and it's not very central at all, and they've learnt the quirks about that thing they don't even care about, and you really shouldn't break their expectations. Because it turns out that that is how you really make people hate their desktop.
This is not at all Linux-specific, of course - just look at the less than enthusiastic reception that other operating system redesigns have received. But I really wish that we hadn't had *both* of the major Linux desktop environments have to learn this (well, I hope they learnt) the hard way, and both of them ending up blaming their users rather than themselves.
**"anykernel"-style portable drivers?**
by staalmannen
> What do you think about the "anykernel" concept (invented by another Finn btw) used in NetBSD? Basically, they have modularized the code so that a driver can be built either in a monolithic kernel or for user space without source code changes ( rumpkernel.org ). The drivers are highly portable and used in Genode os (L4 type kernels), minix etc... Would this be possible or desirable for Linux? Apparently there is one attempt called "libos"...
**Linus**: So I have bad experiences with "portable" drivers. Writing drivers to some common environment tends to force some ridiculously nasty impedance matching abstractions that just get in the way and make things really hard to read and modify. It gets particularly nasty when everybody ends up having complicated - and differently so - driver subsystems to handle a lot of commonalities for a certain class of drivers (say a network driver, or a USB driver), and the different operating systems really have very different approaches and locking rules etc.
I haven't seen anykernel drivers, but from past experience my reaction to "portable device drivers" is to run away, screaming like a little girl. As they say in Swedish, "Bränt barn luktar illa".
**Processor Architecture**
by swv3752
> Several years ago, you were employed by Transmeta designing the Crusoe processor. I understand you are quite knowledgeable about CPU architecture. What are your thoughts on the current Intel and AMD x86 CPUs, particularly in comparison with ARM and IBM's Power8 CPUs? Where do you see the advantages of each one?
**Linus**: I'm no CPU architect, I just play one on TV.
But yes, I've been close to the CPU both as part of my kernel work, and as part of a processor company, and working at that level for a long time just means that you end up having fairly strong opinions. One of the things that my experiences at Transmeta convinced me of, for example, was that there's definitely very much a limit to what software should care about. I loved working at Transmeta, I loved the whole startup company environment, I loved working with really smart people, but in the end I ended up absolutely *not* loving to work with overly simple hardware (I also didn't love the whole IPO process, and what that did to the company culture, but that's a different thing).
Because there's only so much that software can do to compensate.
Something similar happened with my kernel work on the alpha architecture, which also started out as being an overly simplified implementation in the name of being small and supposedly running really fast. While I really started out liking the alpha architecture for being so clean, I ended up detesting how fragile the architecture implementations were (and by the time that got fixed in the 21264, I had given up on alpha).
So I've come to absolutely detest CPU's that need a lot of compiler smarts or special tuning to go fast. Life is too short to waste on in-order CPU's, or on hardware designers who think software should take care of the pieces that they find to be too complicated to handle themselves, and as a result just left undone. "Weak memory ordering" is just another example.
Thankfully, most of the industry these days seems to agree. Yes, there are still in-order cores, but nobody tries to make excuses for them any more: they are for the truly cheap and low-end market.
I tend to really like the modern Intel cores in particular, which tend to take that "let's not be stupid" really to heart. With the kernel being so threaded, I end up caring a lot about things like memory ordering etc, and the Intel big-core CPU's tend to be in a class of their own there. As a software person who cares about performance and looks at instruction profiles etc, it's just so *nice* to see that the CPU doesn't have some crazy glass jaw where you have to be very careful.
**GPU kernels**
by maraist
> Is there any inspiration that a GPU-based kernel/scheduler holds for you? How might Linux be improved to better take advantage of GPU-type batch execution models? Given that you worked at Transmeta on JIT-compiled, host-targeted runtimes, GPUs' 1,000-thread schedulers seem like the next great paradigm for the exact type of machines that Linux does best on.
**Linus**: I don't think we'll see the kernel ever treat GPU threads the way we treat CPU threads. Not with the current model of GPU's (and that model doesn't really seem to be changing all that much any more).
Yes, GPU's are getting much better, and now generally have virtual memory and the ability to preempt execution, and you could run an OS on them. But the scheduling latencies are pretty high, and the threads are not really "independent" (ie they tend to share a lot of state - like the virtual address space and a large shared register set), so GPU "threads" don't tend to work like CPU threads. You'd schedule them all-or-nothing, so if you were to switch processes, you'd treat the GPU as one entity where you switch all the threads at once.
So it really wouldn't look like a thousand threads to the kernel. The GPU would still be scheduled as one single entity (or maybe a couple of entities depending on how the GPU is partitioned). The fact that that single entity works by doing a lot of things in massive parallelism is kind of immaterial for the kernel that doesn't end up seeing that parallelism as separate threads.
**alleged danger of Artificial Intelligence**
by peter303
> Some computer experts like Marvin Minsky, Larry Page, and Ray Kurzweil think A.I. will be a great gift to Mankind. Others like Bill Joy and Elon Musk are fearful of potential danger. Where do you stand, Linus?
**Linus**: I just don't see the thing to be fearful of.
We'll get AI, and it will almost certainly be through something very much like recurrent neural networks. And the thing is, since that kind of AI will need training, it won't be "reliable" in the traditional computer sense. It's not the old rule-based prolog days, when people thought they'd *understand* what the actual decisions were in an AI.
And that all makes it very interesting, of course, but it also makes it hard to productize. Which will very much limit where you'll actually find those neural networks, and what kinds of network sizes and inputs and outputs they'll have.
So I'd expect just more of (and much fancier) rather targeted AI, rather than anything human-like at all. Language recognition, pattern recognition, things like that. I just don't see the situation where you suddenly have some existential crisis because your dishwasher is starting to discuss Sartre with you.
The whole "Singularity" kind of event? Yeah, it's science fiction, and not very good SciFi at that, in my opinion. Unending exponential growth? What drugs are those people on? I mean, really..
It's like Moore's law - yeah, it's very impressive when something can (almost) be plotted on an exponential curve for a long time. Very impressive indeed when it's over many decades. But it's _still_ just the beginning of the "S curve". Anybody who thinks any different is just deluding themselves. There are no unending exponentials.
**Is the kernel basically a finished project?**
by NaCh0
> Aside from adding drivers and refactoring algorithms when performance limits are discovered, is there anything left for the kernel? Maybe it's a failure of tech journalism but we never hear about the next big thing in kernel land anymore.
**Linus**: I don't think there's much of a "next big thing" in the kernel.
I wouldn't say that there is nothing but drivers (and architectures are kind of "CPU drivers") and improving scalability left, because I'm constantly amazed by how many new things people figure out are still good ideas. But they tend to still be pretty incremental improvements. An OS kernel doesn't look *that* radically different from what it was 40 years ago, and that's fine. I think radical new ideas are often overrated, and the thing that really matters in the end is that plodding detail work. That's how technology evolves.
And judging by how our kernel releases are going, there's no end in sight for that "plodding detail work". And it's still as interesting as it ever was.
--------------------------------------------------------------------------------
via: http://linux.slashdot.org/story/15/06/30/0058243/interviews-linus-torvalds-answers-your-question
Author: [samzenpus][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:samzenpus@slashdot.org
[1]:http://interviews.slashdot.org/story/15/06/24/1718247/interview-ask-linus-torvalds-a-question
[2]:http://meta.slashdot.org/story/12/10/11/0030249/linus-torvalds-answers-your-questions
[3]:https://lwn.net/Articles/604695/
View File
@ -1,3 +1,5 @@
vim-kakali is translating.
While the event had a certain amount of drama surrounding it, the [announcement][1] of the end for the [Debian Live project][2] seems likely to have less of an impact than it first appeared. The loss of the lead developer will certainly be felt—and the treatment he and the project received seems rather baffling—but the project looks like it will continue in some form. So Debian will still have tools to create live CDs and other media going forward, but what appears to be a long-simmering dispute between project founder and leader Daniel Baumann and the Debian CD and installer teams has been "resolved", albeit in an unfortunate fashion.
The November 9 announcement from Baumann was titled "An abrupt End to Debian Live". In that message, he pointed to a number of different events over the nearly ten years since the [project was founded][3] that indicated to him that his efforts on Debian Live were not being valued, at least by some. The final straw, it seems, was an "intent to package" (ITP) bug [filed][4] by Iain R. Learmonth that impinged on the namespace used by Debian Live.
View File
@ -1,61 +0,0 @@
alim0x translating
The history of Android
================================================================================
![Another Play Store redesign! This one is very close to the current design and uses cards that make layout changes a piece of cake.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/get-em-Kirill.jpg)
Another Play Store redesign! This one is very close to the current design and uses cards that make layout changes a piece of cake.
Photo by Ron Amadeo
### Out-of-cycle updates—who needs a new OS? ###
In between Android 4.2 and 4.3, Google went on an out-of-cycle update tear and showed just how much Android could be improved without having to fire up the arduous OTA update process. Thanks to the [Google Play Store and Play Services][1], all of these updates were able to be delivered without updating any core system components.
In April 2013, Google released a major redesign to the Google Play Store. Like most redesigns from here on out, the new Play Store fully adopted the Google Now aesthetic, with white cards on a gray background. The action bar changed color based on the current content section, and since the first screen featured content from all sections of the store, the action bar was a neutral gray. Buttons to navigate to the content sections were now given top billing, and below that was usually a promotional block or rows of recommended apps.
![The individual content sections are beautifully color-coded.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/content-rainbow.jpg)
The individual content sections are beautifully color-coded.
Photo by Ron Amadeo
The new Play Store showed off the real power of Google's card design language, which enabled a fully responsive layout across all screen sizes. One large card could be stuck next to several little cards, larger-screened devices could show more cards, and rather than stretch things in horizontal mode, more cards could just be added to a row. The Play Store content editors were free to play with the layout of the cards, too; a big release that needed to be highlighted could get a larger card. This design would eventually trickle down to the other Google Play content apps, finally resulting in a unified design.
![Hangouts replaced Google Talk and is now continually developed by the Google+ team.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/talkvhangouts2.jpg)
Hangouts replaced Google Talk and is now continually developed by the Google+ team.
Photo by Ron Amadeo
Google I/O, the company's annual developer conference, was usually where a new Android version was announced. But at the 2013 edition, Google made just as many improvements without having to update the OS.
One of the biggest things announced at the show was an update to Google Talk, Google's instant messaging platform. For a long time, Google shipped four text communication apps for Android: Google Talk, Google+ Messenger, Messaging (the SMS app), and Google Voice. Having four apps that accomplished the same task—sending a text message to someone—was very confusing for users. At I/O, Google killed Google Talk and started their messaging product over from scratch, creating [Google Hangouts][2]. While initially it only replaced Google Talk, the plan for Hangouts was to unify all of Google's various messaging apps into a single interface.
The layout of the Hangouts UI really wasn't drastically different from Google Talk. The main page contained your open conversations, and tapping on one opened a chat page. The design was updated: the chat page now used a card-style display for each paragraph, and the chat list was now a "drawer"-style interface, meaning you could open it with a horizontal swipe. Hangouts had read receipts and a typing status indicator, and group chat was now a primary feature.
Google+ was the center of Hangouts now, so much so that the full name of the product was actually "Google+ Hangouts." Hangouts was completely integrated with the Google+ desktop site so that video calls and chats could be made from one to the other. Identity and avatars were pulled from Google+, and tapping on an avatar would open that person's Google+ profile. And much like the change from Browser to Google Chrome, core Android functionality was passed off to a separate team—the Google+ team—as opposed to being a side product of the very busy Android engineers. With the Google+ takeover, Android's main IM client now became a continually developed application. It was placed into the Play Store and received fairly regular updates.
![The new navigation drawer interface.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/navigation_drawer_overview1.png)
The new navigation drawer interface.
Photo by [developer.android.com][3]
Google also introduced a new design element for the action bar: the navigation drawer. This drawer was shown as a set of three lines next to the app icon in the top-left corner. By tapping on it or dragging from the left edge of the screen to the right, a side-mounted menu would appear. As the name implies, this was used to navigate around the app, and it would show several top-level locations within the app. This allowed the first screen to show content, and it gave users a consistent, easy-to-access place for navigation elements. The nav drawer was basically a super-sized version of the normal menu, scrollable and docked to the left side.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/23/
Translator: [译者ID](https://github.com/译者ID) Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[1]:http://arstechnica.com/gadgets/2013/09/balky-carriers-and-slow-oems-step-aside-google-is-defragging-android/
[2]:http://arstechnica.com/information-technology/2013/05/hands-on-with-hangouts-googles-new-text-and-video-chat-architecture/
[3]:https://developer.android.com/design/patterns/navigation-drawer.html
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo
View File
@ -1,82 +0,0 @@
The history of Android
================================================================================
![The slick new Google Play Music app, which changed from Tron to a perfect match for the Play Store.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/Goooogleplaymusic.jpg)
The slick new Google Play Music app, which changed from Tron to a perfect match for the Play Store.
Photo by Ron Amadeo
Another app update pushed out at I/O was a new Google Music app. The app was completely redesigned, finally doing away with the blue-on-blue design introduced in Honeycomb. Play Music's design was unified with the new Play Store released a few months earlier, with a responsive white card layout. Music was also one of the first major apps to take advantage of the new navigation drawer style. Along with the new app, Google launched Google Play Music All Access, an all-you-can-eat subscription service for $10 a month. Google Music now had a subscription plan, à la carte purchasing, and a cloud music locker. This version also introduced "Instant Mix," a mode where Google would cloud-compute a playlist of similar songs.
![A game showing support for Google Play Games. This lineup shows the Play Store game feature descriptions, the permissions box triggered by signing into the game, a Play Games notification, and the achievements screen.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/gooooogleplaygames.jpg)
A game showing support for Google Play Games. This lineup shows the Play Store game feature descriptions, the permissions box triggered by signing into the game, a Play Games notification, and the achievements screen.
Photo by Ron Amadeo
Google also introduced "Google Play Games," a back-end service that developers could plug into their games. The service was basically an Android version of Xbox Live or Apple's Game Center. Developers could build Play Games support into their game, which would easily let them integrate achievements, leaderboards, multiplayer, matchmaking, user accounts, and cloud saves by using Google's back-end services.
Play Games was the start of Google's big push into gaming. Smartphone makers hoped that standalone gaming devices, just like standalone GPS units, flip phones, and MP3 players, would be turned into nothing more than a smartphone feature bullet point. Why buy a Nintendo DS or PS Vita when you had a smartphone with you? An easy-to-use multiplayer service would be a big part of this, and we've yet to see the final consequence of this move. Today, Google and Apple are both rumored to be planning living room gaming devices.
![Google Keep, Google's first note taking service since Google Notebook.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/goooglekeep.jpg)
Google Keep, Google's first note taking service since Google Notebook.
Photo by Ron Amadeo
It was clear some products were developed in time for presentation at Google I/O, [but the three-and-a-half hour keynote][1] was already so massive that some things were cut from being announced. Once the smoke cleared three days after Google I/O, Google introduced Google Keep, a note taking app for Android and the Web. Keep was a fairly straightforward affair, applying the responsive Google Now-style design to a note taking app. Users could change the size of the cards from a multi-column layout to a single column view. Notes could consist of plain text, checklists, voice notes with automatic transcription, or pictures. Note cards could be dragged around and rearranged on the main screen, and you could even assign a color to a note.
![Gmail 4.5, which switched to the new navigation drawer design and merged the action bars, thanks to some clever button elimination.](http://cdn.arstechnica.net/wp-content/uploads/2014/05/gmail.png)
Gmail 4.5, which switched to the new navigation drawer design and merged the action bars, thanks to some clever button elimination.
Photo by Ron Amadeo
After I/O, not much was safe from Google's out-of-cycle updating. In June 2013, Google released a redesigned version of Gmail. The headline feature of the new design was the new navigation drawer interface that was introduced a month earlier at Google I/O. The most eye-catching change was the addition of Google+ profile pictures instead of checkboxes. While the checkboxes were visually removed, they were still there functionally; you just tapped on a picture instead.
![The new Google Maps, which switched to an all-white Google Now-style theme.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/newmaps11.png)
The new Google Maps, which switched to an all-white Google Now-style theme.
Photo by Ron Amadeo
One month later, Google released a completely overhauled version of Google Maps to the Play Store. It was the first ground-up redesign of Google Maps since Ice Cream Sandwich. The new version fully adopted the Google Now white card aesthetic, and it greatly reduced the amount of stuff on the screen. The new Google Maps seemed to have a design mandate to always show a map on the screen somewhere, as you'll be hard pressed to find something other than the settings that fully covers the map.
This version of Google Maps seemed to live in its own little design world. The white search bar "floated" above the map, with the map showing around the sides and top of the bar, so it didn't really read as the traditional action bar design. The navigation drawer, in the top left on every other app, was in the bottom left. There was no up button, app icon, or overflow button on the main screen.
![The new Google Maps cut a lot of fat and displayed more information on a single screen.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/newmaps21.png)
The new Google Maps cut a lot of fat and displayed more information on a single screen.
Photo by Ron Amadeo
The left picture shows what popped up when you tapped on the search bar (along with the keyboard, which had been closed). In the past, Google would show an empty page below a blank search bar, but in Maps, Google used that space to link to the new "Local" page. The "blank" search results displayed links to common, browsable results like restaurant listings, gas stations, and attractions. At the bottom of the results page was a list of nearby results from your search history and an option to manually cache parts of the map.
The right set of images shows the location page. The map shown at the top of the Maps 7 screenshot isn't a thumbnail; that's the full map view. In the new version of Google Maps, a location was displayed as a card that "floats" on top of the main map, and the map was repositioned to center on the location. Scrolling up would move the card up and cover the map, and scrolling down would show the whole map with the result reduced to a small strip at the bottom. If the location was part of a list of search results, swiping left and right would move through the results.
The location pages were redesigned to be much more useful at a glance. On the first page, the new version added critical information, like the location on a map, the review score, and the number of reviews. Since this is a phone, and the software will be dialing for you, the phone number was deemed pointless and was removed. The old version showed the distance to the location in miles, while the new version of Google Maps showed the distance in terms of time, based on traffic and preferred mode of transportation—a much more useful metric. The new version also put a share button front and center, which made coordination over IM or text messaging a lot easier.
### Android 4.3, Jelly Bean—getting wearable support out early ###
Android 4.3 would have been an incredible update if Google had done the traditional thing and not released updates between 4.2 and 4.3 through the Play Store. If the new Play Store, Gmail, Maps, Books, Music, Hangouts, Keep, and Play Games were bundled into a big brick as a new version of Android, it would have been hailed as the biggest release ever. Google didn't need to hold back features anymore, though. With very little left that required an OS update, at the end of July 2013, Google released the seemingly insignificant update called "Android 4.3."
![Android Wear plugging into Android 4.3's Notification access screen.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-28-12.231.jpg)
Android Wear plugging into Android 4.3's Notification access screen.
Photo by Ron Amadeo
Google made no secret of 4.3's low importance, calling the newest release "Jelly Bean" (the third one in a row). Android 4.3's feature list read like a laundry list of things Google couldn't update from the Play Store or through Google Play Services, mostly consisting of low-level framework changes for developers.
Many of the additions seemed to fit a singular purpose, though—Android 4.3 was Google's trojan horse for wearable computing support. 4.3 added support for Bluetooth Low Energy, a way to wirelessly connect Android to another device and pass data back and forth while using a very small amount of power—an integral feature to a wearable device. Android 4.3 also added a "Notification Access" API, which allowed apps to completely replicate and control the notification panel. Apps could display notification text and pictures and interact with the notification the same way users do—namely pressing action buttons and dismissing notifications. Doing this from an on-board app when you have the notification panel is useless, but on a device that is separate from your phone, replicating the information in the notification panel becomes much more useful. One of the few apps that plugged into this was "Android Wear Preview," which used the notification API to power most of the interface for Android Wear.
The "4.3 is for wearables" theory explained the relatively low number of features in 4.3: it was pushed out the door to give OEMs time to update devices in time for the launch of [Android Wear][2]. The plan seems to have worked. Android Wear requires Android 4.3 and up, which has been out for so long now that most major flagships have updated.
Android 4.3 was not all that exciting, but Android releases from here on out didn't need to be all that exciting. Everything became so modularized that Google could push updates out as soon as they were done through Google Play, rather than drop everything in one huge brick as an OS update.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/24/
Translator: [译者ID](https://github.com/译者ID) Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[1]:http://live.arstechnica.com/liveblog-google-io-2013-keynote/
[2]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo
View File
@ -1,70 +0,0 @@
The history of Android
================================================================================
![The LG-made Nexus 5, the launch device for KitKat.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/nexus56.jpg)
The LG-made Nexus 5, the launch device for KitKat.
### Android 4.4, KitKat—more polish; less memory usage ###
Google got really cute with the launch of Android 4.4. The company [teamed up with Nestlé][1] to name the OS "KitKat," and it launched on Halloween, October 31, 2013. Nestlé produced limited-edition Android-shaped KitKat bars, and KitKat packaging in stores promoted the new OS while offering a chance to win a Nexus 7.
KitKat launched with a new Nexus device, the Nexus 5. The new flagship had the biggest display yet: a five-inch, 1920x1080 LCD. Despite the bigger screen size, LG—again the manufacturer for the device—was able to fit the Nexus 5 into the same dimensions as a Galaxy Nexus or Nexus 4.
The Nexus 5 was specced comparably to the highest-end phones of the time, with a 2.3GHz Snapdragon 800 processor and 2GB of RAM. The phone was again sold unlocked on the Play Store, but while most phones with specs like this would go for $600-$700, Google sold the Nexus 5 for only $350.
One of the most important improvements in KitKat was one you couldn't see: significantly lower memory usage. For KitKat, Google started a concerted effort to lower memory usage across the OS and bundled apps called "Project Svelte." After tons of optimization work and a "low memory" mode that disabled expensive graphical effects, Android could now run on as little as 340MB of RAM. Lower memory requirements were a big deal, because devices in the developing world—the biggest growth markets for smartphones—often ran on only 512MB of RAM. Ice Cream Sandwich's more advanced UI significantly raised the system requirements of Android devices, which left many low-end devices—even newly released low-end devices—stuck on Gingerbread. The lower system requirements of KitKat were meant to bring these cheap devices back into the fold. With KitKat, Google hoped to finally kill Gingerbread (which, at the time of writing, is around 20 percent of the market). Just in case the lower system requirements weren't enough, there have even been reports that Google will [no longer license][2] the Google apps to Gingerbread devices.
Besides bringing low-end phones to a modern version of the OS, Project Svelte's lower memory requirements were to be a boon to wearable computers, too. Google Glass [announced][3] it was also switching to the slimmer OS, and [Android Wear][4] ran on KitKat, too. The lower memory requirements in Android 4.4 and the notification API and Bluetooth LE support in 4.3 came together nicely to support wearable computing.
KitKat also featured a lot of polish to the core OS interfaces that couldn't be updated via the Play Store. The System UI, Dialer, Clock, and Settings all saw updates.
![KitKat's transparent bars on the Google Now Launcher.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/1homescreenz.png)
KitKat's transparent bars on the Google Now Launcher.
Photo by Ron Amadeo
KitKat not only got rid of the unpopular lines to the left and right sides of the lock screen—it completely disabled lock screen widgets by default! Google obviously felt multiple lock screens and multiple home screens were a little too complicated for new users, so lock screen widgets now needed to be enabled in the settings. The lopsided time here and in the clock app was switched to a symmetrical weight, which looked a lot nicer.
In KitKat, apps had the ability to make the system and status bars transparent, which significantly changed the look of the OS. The bars now blended into the wallpaper and any other app that chose to enable transparent bars. The bars could also be completely hidden by any app via a new feature called “immersive" mode.
KitKat was the final nail in the "Tron" coffin, removing almost all traces of blue from the operating system. The status bar icons were changed from blue to a neutral white. The status and system bars on the home screen weren't completely transparent; a dark gradient was added to the top and bottom of the screen so that the white icons would still be visible on a light background.
![Tweaks to Google Now and the folders.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/nowfolders.png)
Tweaks to Google Now and the folders.
Photo by Ron Amadeo
The home screen that shipped with KitKat on the Nexus 5 was actually exclusive to the Nexus 5 for a few months, but it could now be installed on any Nexus device. The new home screen was called the "Google Now Launcher," and it was actually [the Google Search app][5]. Yes, Google Search grew from a simple search box to an entire home screen, and in KitKat, it drew the wallpaper, icons, app drawer, widgets, home screen settings, Google Now, and, of course, the search box. Thanks to Search now running the entire home screen, any time the home screen was open and the screen was on, voice commands could be activated by saying "OK Google." This was pointed out to the user with an introductory "Say 'OK Google'" text in the search bar, which would fade away after a few uses.
Google Now was more integrated, too. Besides the usual swipe up from the system bar, Google Now was also the leftmost home screen. The new version brought some design tweaks as well. The Google logo was moved into the search bar, and the whole top area was compacted. A few card designs were cleaned up, and a new set of buttons at the bottom led to reminders, customization options, and an overflow button with settings, feedback, and help. Since Google Now was part of the home screen, it got transparent system and status bars, too.
Transparency and "brightening up" certain parts of the OS were design themes in KitKat. Black was removed from the status and system bars by switching to transparent, and the black background of the folders was switched to white.
![A screenshot showing the new, cleaner app screen layout, and a composite image of the app lineup.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/apps.png)
A screenshot showing the new, cleaner app screen layout, and a composite image of the app lineup.
Photo by Ron Amadeo
The KitKat icon lineup changed significantly from 4.3. To put it more dramatically, it was a bloodbath, with Google removing seven icons compared to the 4.3 loadout. Google Hangouts could handle SMS now, so the Messaging app was removed. Hangouts also took over Google+ Messenger duties, so that app shortcut was cut. Google Currents was removed as a default app, as it would soon be killed—along with Google Play Magazines—in favor of Google Play Newsstand. Google Maps was beaten back into a single icon, which meant the Local and Navigation shortcuts were removed. The impossible-to-understand Movie Studio was cut, too—Google must have realized no one wants to edit movies on a phone. Thanks to the home screen "OK Google" hotword detection, the Voice Search icon was rendered redundant and removed. Depressingly, the long-abandoned News & Weather app remained.
There was a new app called "Photos"—really the Google+ app—which took over picture management duties. On the Nexus 5, the Gallery and Google+ Photos were pretty similar, but in newer builds of KitKat present on Google Play Edition devices, the Gallery was completely replaced by Google+ Photos. Play Games was an interface for Google's back-end multiplayer service—a Googly version of Xbox Live or Apple's Game Center. Google Drive, which existed for years as a Play Store app, was finally made a default app. Google bought Quickoffice back in June 2012, and it finally deemed the app acceptable for inclusion by default. While Drive opened Google Documents, Quickoffice opened Microsoft Office documents. If you're keeping track, that was two document-editing apps and two photo-editing apps included in most KitKat loadouts.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/25/
Translator: [译者ID](https://github.com/译者ID) Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[1]:http://arstechnica.com/gadgets/2013/09/official-the-next-edition-of-android-is-kitkat-version-4-4/
[2]:http://www.androidpolice.com/2014/02/10/rumor-google-to-begin-forcing-oems-to-certify-android-devices-with-a-recent-os-version-if-they-want-google-apps/
[3]:http://www.androidpolice.com/2014/03/01/glass-xe14-delayed-until-its-ready-promises-big-changes-and-a-move-to-kitkat/
[4]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
[5]:http://arstechnica.com/gadgets/2013/11/google-just-pulled-a-facebook-home-kitkats-primary-interface-is-google-search/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo
View File
@ -1,87 +0,0 @@
The history of Android
================================================================================
![The new "add to home screen" interface was definitely inspired by Honeycomb.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/homesetupthrowback.png)
The new "add to home screen" interface was definitely inspired by Honeycomb.
Photo by Ron Amadeo
KitKat added a nice throwback to Honeycomb with the home screen configuration screen. On the massive 10-inch screen of a Honeycomb tablet (right picture, above), long pressing on the home screen background would present you with a zoomed-out view of all your home screens. Widgets could be dragged from the bottom widget drawer into any home screen—it was very handy. When it came time to bring the Honeycomb interface to phones, from Android 4.0 all the way to 4.3, Google skipped this design and left it to the larger screened devices, presenting only a list of options after a long press (center picture).
For KitKat though, Google finally came up with a solution. After a long press, 4.4 presented a slightly zoomed out view—you could see the current home screen and the screens to the left and right of it. Tapping on the "widgets" button would open a full-screen list of widget thumbnails, but after long-pressing on a widget, you were thrown back into the zoomed-out view and could scroll through home screen pages and place the icon where you wanted. By dragging an icon or widget all the way past the rightmost home page, you could create a new home page.
![Contacts and the Keyboard both removed any trace of blue.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/RIP33B5E5.png)
Contacts and the Keyboard both removed any trace of blue.
Photo by Ron Amadeo
KitKat was the end of the line for the Tron design. In most parts of the OS, any remaining blue highlights were removed in favor of gray. In the People app, blue was sucked out of the header and the letter separators in the contact list. The pictures swapped sides, and the bottom bar was changed to a light gray to match the top. The Keyboard, which injected the color blue into nearly every app, was changed to gray-on-gray-on-gray. That wasn't a bad thing. Apps should be allowed to have their own color scheme—forcing a potentially clashing color on them via the keyboard wasn't good design.
![The first three screenshots show KitKat's dialer, and the last one is 4.3.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/phone.png)
The first three screenshots show KitKat's dialer, and the last one is 4.3.
Photo by Ron Amadeo
Google completely revamped the dialer in KitKat, creating a wild new design that changed the way users thought about a phone. Actual numbers in the new dialer were hidden as much as possible—there wasn't even a dial pad on the main screen. The primary interface for making a phone call was now a search bar! If you wanted to call someone in your contacts, just type their name in; if you wanted to call a business, just type the business name in and the dialer would search through Google Maps' extensive database of phone numbers. It worked incredibly well and was something only Google could pull off.
If searching for numbers wasn't your thing, the app also intelligently displayed a listing for the previous phone call, your most-contacted people, and a link to all contacts. At the bottom were links to your call history, the now old-school number pad, and the usual overflow button containing a settings page.
![Office stuff: Google Drive, which was now packed in, and the printing support.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/googledrive-and-printing.png)
Office stuff: Google Drive, which was now packed in, and the printing support.
Photo by Ron Amadeo
It was amazing it took this long, but in KitKat, Google Drive was finally included as a default app. Drive allowed users to create and edit Google Docs spreadsheets and documents, scan documents with the camera and upload them as PDFs, or view (but not edit) presentations. Drive, by this point, had a great, modern design with a slide-out navigation drawer and a Google Now-style card design.
For even more mobile office fun, KitKat included an OS-level printing framework. At the bottom of the settings was a "Printing" screen, and any printer OEM could make a plugin for it. Google Cloud Print was, of course, one of the first supporters. Once your printer was hooked up to Cloud Print, either natively or through a computer with Chrome installed, you could print to it over the Internet. Apps needed to support the printing framework, too. Pressing the little "i" button on Google Drive would show information about the document and give you the option to print it. Just like a desktop OS, a print dialog would pop up with settings like copies, paper size, and page selection.
![The "Photos" section of the Google+ app, which replaced the Gallery.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/that-is-one-dead-gallery.png)
The "Photos" section of the Google+ app, which replaced the Gallery.
Photo by Ron Amadeo
Google+ Photos and the Gallery initially shipped together on the Nexus 5, but in a later build of KitKat on Google Play devices, the Gallery was axed and Google+ completely took over photo duties. The new app changed the photo app from a light theme to a dark theme, and Google+ Photos brought a modern navigation drawer design.
Android had long included an instant upload feature, which would automatically back up all pictures to Google's cloud storage, first on Picasa and later on Google+. The big benefit of G+ Photos over the Gallery was that it could finally manage those cloud-stored photos. Little cloud icons in the lower right of a photo indicated backup status, and they would fill from right to left to indicate an upload in progress. G+ Photos brought its own photo editor, along with support for a million other Google+ photo features, like highlights, auto awesome, and, of course, sharing to Google+.
![Tweaks to the Clock app, which added an alarms tab and changed the time input dialog.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/clocks.png)
Tweaks to the Clock app, which added an alarms tab and changed the time input dialog.
Photo by Ron Amadeo
Google changed the excellent time picker that was introduced in 4.2 to this strange clock interface, which was both slower and less precise than the old interface. First you were presented with a one-handed clock which you used to choose the hour, then that clock went away and another one-handed clock allowed you to choose the minute. Having to spin the minute hand or tap a spot on the clock face made it very difficult to pick times in non-five-minute increments. Unlike the old time picker, which required you to explicitly pick AM or PM, this one just defaulted to AM (again making it possible to accidentally be off by 12 hours).
### Today—Android everywhere ###
![](http://cdn.arstechnica.net/wp-content/uploads/2014/05/android-everywhere2.png)
Photo by Google/Sony/Motorola/Ron Amadeo
What started out as a curious BlackBerry clone from a search engine company became the most popular OS in the world from one of the biggest titans in the tech industry. Android has become Google's de facto consumer operating system, and it powers phones, tablets, Google Glass, Google TV, and more. [Parts of it][1] are even used in the Chromecast. In the future, Google will be bringing Android to watches and wearables with [Android Wear][2], and the [Open Automotive Alliance][3] will be bringing Android to cars. Google will be making a renewed commitment to the living room soon, too, with [Android TV][4]. The OS is such a core pillar of Google that events that are supposed to cover company-wide products, like Google I/O, end up becoming Android launch parties.
![Top row: the Google Play content stores. Bottom row: the Google Play Apps.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-30-03.08.jpg)
Top row: the Google Play content stores. Bottom row: the Google Play Apps.
Photo by Ron Amadeo
What was once the ugly duckling of the mobile industry has transformed so much that it now [wins design awards][5] for its user interface. The design of things like Google Now has affected everything the company produces, with even desktop sites like Search, Google+, YouTube, and Maps adopting the unified card design. The design keeps evolving as well. Google's next plan is to [unify design][6] across not just Android, but all of its products. The goal is to take something like Gmail and make it feel the same whether you're using it on Android, a desktop browser, or a watch.
Google outsourced so many pieces of Android to the Play Store that version releases are becoming less and less necessary. Google decided the best way to beat carrier and OEM update issues was to sidestep those roadblocks completely. From here on out, there isn't much left to include in an Android update other than core under-the-hood changes—but even many APIs have been pushed to Google Play Services. If you just look at version releases, it seems like Android development has slowed down from the peak 2.5-month release cycle. But the reality is that Google can now continually push out improvements to the Play Store in a never-ending, somewhat subtler stream of updates.
With 1.5 million activations per day, Android has nowhere to go but up. In the future, Android will be headed from phones and tablets to cars and watches, and the lower system requirements of KitKat will drive phones to even lower prices in the developing world. The bottom line? More and more people will get online. And for many of those people, Android will be not just their phone but their primary computing device. With Android leading the charge for Google in so many areas, the OS that started off as a tiny acquisition has become one of Google's most important products.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/26/
Translator: [译者ID](https://github.com/译者ID) Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[1]:http://blog.gtvhacker.com/2013/chromecast-exploiting-the-newest-device-by-google/
[2]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
[3]:http://arstechnica.com/information-technology/2014/01/open-automotive-alliance-aims-to-bring-android-inside-the-car/
[4]:http://arstechnica.com/gadgets/2014/04/documents-point-to-android-tv-googles-latest-bid-for-the-living-room/
[5]:http://userexperienceawards.com/uxa2012/
[6]:http://arstechnica.com/gadgets/2014/04/googles-next-design-challenge-unify-app-design-across-platforms/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo
View File
@ -1,5 +1,3 @@
translating by xiaoyu33
10 tools for visual effects in Linux with Kdenlive
================================================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life-uploads/kdenlivetoolssummary.png)
View File
@ -1,4 +1,3 @@
translated by bestony
How to Install OsTicket Ticketing System in Fedora 22 / Centos 7
================================================================================
In this article, we'll learn how to set up a help desk ticketing system with osTicket on a machine or server running Fedora 22 or CentOS 7. osTicket is a popular free and open source customer support ticketing system developed and maintained by [Enhancesoft][1] and its contributors. It is an excellent solution for help and support ticket management, improving communication and support assistance with clients and customers. It easily integrates inquiries created via email, phone, and web-based forms into a clean multi-user web interface, making it simple to manage, organize, and log all support requests and responses in one place. It is a simple, lightweight, reliable, open source, web-based help desk ticketing system that is easy to set up and use.
View File
@ -1,154 +0,0 @@
How to Install Pure-FTPd with TLS on FreeBSD 10.2
================================================================================
FTP, or File Transfer Protocol, is a standard application-layer network protocol used to transfer files between a client and a server over a TCP network such as the internet. FTP has been around far longer than peer-to-peer programs or the World Wide Web, and it remains a popular way to share files over the internet today. With SSL/TLS, FTP can provide secure transmission that protects the username and password and encrypts the content.
Pure-FTPd is a free FTP server with a strong focus on software security. It is a great choice if you want to provide fast, secure, lightweight, and feature-rich FTP services. Pure-FTPd can be installed on a variety of Unix-like operating systems, including Linux and FreeBSD. Pure-FTPd was created by Frank Denis in 2001, based on Troll-FTPd, and it is still actively developed by a team led by Denis.
In this tutorial, we will cover the installation and configuration of "**Pure-FTPd**" on the Unix-like operating system FreeBSD 10.2.
### Step 1 - Update system ###
The first thing to do is fetch and install the FreeBSD updates. Connect to your server with SSH and run the commands below as root (or with sudo):
freebsd-update fetch
freebsd-update install
### Step 2 - Install Pure-FTPd ###
You can install Pure-FTPd from the ports tree, but in this tutorial we will install it from the FreeBSD package repository with the "**pkg**" command:
pkg install pure-ftpd
Once the installation is finished, enable pure-ftpd to start at boot time with the sysrc command below:
sysrc pureftpd_enable=yes
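As an optional check, you can confirm the setting was written to /etc/rc.conf by running sysrc with just the variable name, which prints its current value:

    sysrc pureftpd_enable
    # pureftpd_enable: yes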
### Step 3 - Configure Pure-FTPd ###
The Pure-FTPd configuration file lives in the directory "/usr/local/etc/". Go to that directory and copy the sample configuration to "**pure-ftpd.conf**":
cd /usr/local/etc/
cp pure-ftpd.conf.sample pure-ftpd.conf
Now edit the configuration file with the nano editor:
nano -c pure-ftpd.conf
Note: the -c option makes nano show line numbers.
Go to line 59 and change the value of "VerboseLog" to "**yes**". This option lets you, as the administrator, log every command used by the users.
VerboseLog yes
Now look at line 126, "PureDB", for the virtual-user configuration. Virtual users are a simple mechanism for storing a list of users, with their password, name, UID, home directory, and so on, much like /etc/passwd, but in a separate file used only for FTP. In this tutorial we will store the user list in the files "**/usr/local/etc/pureftpd.passwd**" and "**/usr/local/etc/pureftpd.pdb**". Uncomment that line and change the path to "/usr/local/etc/pureftpd.pdb".
PureDB /usr/local/etc/pureftpd.pdb
Next, uncomment line 336, "**CreateHomeDir**". This option makes adding virtual users easier by automatically creating home directories when they are missing.
CreateHomeDir yes
Save and exit.
Next, start pure-ftpd with the service command:
service pure-ftpd start
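As an optional sanity check that the daemon is up and listening on the standard FTP control port (21), you can query the rc script and sockstat, which is part of the FreeBSD base system:

    service pure-ftpd status
    # look for a listener on port 21
    sockstat -4 -l | grep :21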
### Step 4 - Adding New Users ###
At this point the FTP server starts without errors, but you cannot log in to it yet, because anonymous users are disabled in the default pure-ftpd configuration. We need to create new users with a home directory and give them a password for login.
One thing you must do before adding a new pure-ftpd virtual user is to create a system user for it. Let's create a new system user "**vftp**", with a default group of the same name and the home directory "**/home/vftp/**":
pw useradd vftp -s /sbin/nologin -w no -d /home/vftp \
-c "Virtual User Pure-FTPd" -m
Now you can add a new user for the FTP server with the "**pure-pw**" command. As an example, we will create a new user named "**akari**":
pure-pw useradd akari -u vftp -g vftp -d /home/vftp/akari
Password: TYPE YOUR PASSWORD
That command creates the user "**akari**" and stores its data in the file "**/usr/local/etc/pureftpd.passwd**", not in /etc/passwd, which means you can easily create FTP-only accounts without messing up your system accounts.
Next, you must generate the PureDB user database with this command :
pure-pw mkdb
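As an optional check that the account made it into the database, pure-pw can print a single user's record:

    pure-pw show akari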
Now restart the pure-ftpd service and try to connect with the user "akari":
service pure-ftpd restart
Then try connecting as the user akari:
ftp SERVERIP
![FTP Connect user akari](http://blog.linoxide.com/wp-content/uploads/2015/10/FTP-Connect-user-akari.png)
**NOTE:**
If you want to add another user, use the "**pure-pw**" command again. To delete an existing user, run:
pure-pw userdel useryouwanttodelete
pure-pw mkdb
### Step 5 - Add SSL/TLS to Pure-FTPd ###
Pure-FTPd supports encryption using TLS. To enable TLS/SSL support, make sure the OpenSSL library is installed on your FreeBSD system.
Now generate a new "**self-signed certificate**" in the directory "**/etc/ssl/private**". Before you generate the certificate, create the "private" directory:
cd /etc/ssl/
mkdir private
cd private/
Now generate the self-signed certificate with the openssl command below:
openssl req -x509 -nodes -newkey rsa:2048 -sha256 -keyout \
/etc/ssl/private/pure-ftpd.pem \
-out /etc/ssl/private/pure-ftpd.pem
Fill in all the prompts with your own information.
![Generate Certificate pem](http://blog.linoxide.com/wp-content/uploads/2015/10/Generate-Certificate-pem.png)
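Optionally, you can inspect the certificate you just generated; for example, the following prints its subject and validity dates:

    openssl x509 -in /etc/ssl/private/pure-ftpd.pem -noout -subject -dates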
Next, restrict the permissions on the certificate file:
chmod 600 /etc/ssl/private/*.pem
Once the certificate is generated, edit the pure-ftpd configuration file:
nano -c /usr/local/etc/pure-ftpd.conf
Uncomment line **423** to enable TLS:
TLS 1
And uncomment line **439** to set the certificate file path:
CertFile /etc/ssl/private/pure-ftpd.pem
Save and exit, then restart the pure-ftpd service:
service pure-ftpd restart
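Before testing with a GUI client, you can optionally verify the TLS handshake from the command line with OpenSSL's s_client, which supports FTP's STARTTLS (replace SERVERIP with your server's address):

    openssl s_client -connect SERVERIP:21 -starttls ftp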
Now let's test that Pure-FTPd works with TLS/SSL. Here I use "**FileZilla**" to connect to the FTP server, with the user "**akari**" that was created earlier.
![Pure-FTPd with TLS SUpport](http://blog.linoxide.com/wp-content/uploads/2015/10/Pure-FTPd-with-TLS-SUpport.png)
Pure-FTPd with TLS is now working successfully on FreeBSD 10.2.
### Conclusion ###
FTP, or File Transfer Protocol, is the standard protocol for transferring files between clients and a server. Pure-FTPd is one of the best lightweight and secure FTP server programs: it supports the TLS/SSL encryption mechanism, it is easy to install and configure, and its virtual-user support makes it easy for a sysadmin to manage accounts on an FTP server with many users.
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-pure-ftpd-tls-freebsd-10-2/
Author: [Arul][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://linoxide.com/author/arulm/
View File
@ -1,108 +0,0 @@
8 things to do after installing openSUSE Leap 42.1
================================================================================
![Credit: Metropolitan Transportation/Flickr](http://images.techhive.com/images/article/2015/11/things-to-do-100626947-primary.idge.jpg)
Credit: [Metropolitan Transportation/Flickr][1]
> You've installed openSUSE on your PC. Here's what to do next.
[openSUSE Leap is indeed a huge leap][2], allowing users to run a distro that has the same DNA as SUSE Linux Enterprise. Like any other operating system, some work is needed to get it set up for optimal use.
Following are some of the things that I did after installing openSUSE Leap on my PC (these are not applicable for server installations). None of them are mandatory, and you may be fine with the basic install. But if you need more out of your openSUSE Leap, follow me.
### 1. Adding Packman repository ###
Due to software patents and licences, openSUSE, like many Linux distributions, doesn't offer many applications, codecs, and drivers through official repositories (repos). Instead, these are made available through third-party or community repos. The first and most important repository is 'Packman'. Since these repos are not enabled by default, we have to add them. You can do so either using YaST (one of the gems of openSUSE) or from the command line (instructions below).
![o42 yast repo](http://images.techhive.com/images/article/2015/11/o42-yast-repo-100626952-large970.idge.png)
Adding Packman repositories.
Using YaST, go to the Software Repositories section. Click on the 'Add' button and select 'Community Repositories.' Click 'Next.' Once the repos are loaded, select the Packman Repository. Click 'OK,' then import the trusted GnuPG key by clicking on the 'Trust' button.
Or, using the terminal you can add and enable the Packman repo using the following command:
zypper ar -f -n packman http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_42.1/ packman
Once the repo is added, you have access to many more packages. To install any application or package, open YaST Software Manager, search for the package and install it.
### 2. Install VLC ###
VLC is the Swiss Army knife of media players and can play virtually any media file. You can install VLC from YaST Software Manager or from software.opensuse.org. You will need to install two packages: vlc and vlc-codecs.
If using terminal, run the following command:
sudo zypper install vlc vlc-codecs
### 3. Install Handbrake ###
If you need to transcode or convert your video files from one format to another, [Handbrake is the tool for you][3]. Handbrake is available through the repositories we enabled, so just search for it in YaST and install it.
If you are using the terminal, run the following command:
sudo zypper install handbrake-cli handbrake-gtk
(Pro tip: VLC can also transcode audio and video files.)
### 4. Install Chrome ###
OpenSUSE comes with Firefox as the default browser. But since Firefox isn't capable of playing restricted media such as Netflix, I recommend installing Chrome. This takes some extra work. First you need to import the trusted key from Google. Open the terminal app and run the 'wget' command to download the key:
wget https://dl.google.com/linux/linux_signing_key.pub
Then import the key:
sudo rpm --import linux_signing_key.pub
Now head over to the [Google Chrome website][4] and download the 64-bit .rpm file. Once downloaded, run the following command to install the browser:
sudo zypper install /PATH_OF_GOOGLE_CHROME.rpm
### 5. Install Nvidia drivers ###
OpenSUSE Leap will work out of the box even if you have Nvidia or ATI graphics cards. However, if you do need the proprietary drivers for gaming or any other purpose, you can install such drivers, but some extra work is needed.
First you need to add the Nvidia repositories; it's the same procedure we used to add Packman repositories using YaST. The only difference is that you will choose Nvidia from the Community Repositories section. Once it's added, go to **Software Management > Extras** and select 'Extras/Install All Matching Recommended Packages'.
![o42 nvidia](http://images.techhive.com/images/article/2015/11/o42-nvidia-100626950-large.idge.png)
It will open a dialogue box showing all the packages it's going to install, click OK and follow the instructions. You can also run the following command after adding the Nvidia repository to install the needed Nvidia drivers:
sudo zypper inr
(Note: I have never used AMD/ATI cards so I have no experience with them.)
### 6. Install media codecs ###
Once you have VLC installed you won't need to install media codecs for it, but if you are using other apps for media playback you will need such codecs. Some developers have written scripts/tools which make this a much easier process. Just go to [this page][5] and install the entire pack by clicking on the appropriate button. It will open YaST and install the packages automatically (of course you will have to give the root password and trust the GnuPG key, as usual).
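If you prefer the terminal for this as well, a typical codec set pulled from the Packman repository looks something like the command below; treat the package list as illustrative, since exact package names can vary between releases:

    sudo zypper install ffmpeg lame gstreamer-plugins-good gstreamer-plugins-bad gstreamer-plugins-ugly gstreamer-plugins-libav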
### 7. Install your preferred email client ###
OpenSUSE comes with Kmail or Evolution, depending on the desktop environment you installed on the system. I run Plasma, which comes with Kmail, and this email client leaves a lot to be desired. I suggest trying Thunderbird or Evolution mail. All major email clients are available through official repositories. You can also check my [handpicked list of the best email clients for Linux][6].
### 8. Enable Samba services from Firewall ###
OpenSUSE offers a much more secure system out of the box, compared to other distributions, but it also requires a little more work from a new user. If you are using the Samba protocol to share files within your local network, then you will have to allow that service through the firewall.
![o42 firewall](http://images.techhive.com/images/article/2015/11/o42-firewall-100626948-large970.idge.png)
Allow Samba Client and Server from Firewall settings.
Open YaST and search for Firewall. Once in Firewall settings, go to 'Allowed Services' where you will see a drop down list under 'Service to allow.' Select 'Samba Client,' then click 'Add.' Do the same with the 'Samba Server' option. Once both are added, click 'Next,' then click 'Finish,' and now you will be able to share folders from your openSUSE system and also access other machines over the local network.
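If you would rather make this change from the terminal, the same services can be allowed through SuSEfirewall2, the firewall layer that Leap 42.1 uses; the variable below is the one YaST edits under the hood (this sketch assumes an otherwise default configuration):

    # In /etc/sysconfig/SuSEfirewall2, set:
    FW_CONFIGURATIONS_EXT="samba-client samba-server"

    # Then reload the firewall:
    sudo systemctl restart SuSEfirewall2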
That's pretty much all that I did on my new openSUSE system to set it up just the way I like it. If you have any questions, please feel free to ask in the comments below.
--------------------------------------------------------------------------------
via: http://www.itworld.com/article/3003865/open-source-tools/8-things-to-do-after-installing-opensuse-leap-421.html
作者:[Swapnil Bhartiya][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.itworld.com/author/Swapnil-Bhartiya/
[1]:https://www.flickr.com/photos/mtaphotos/11200079265/
[2]:https://www.linux.com/news/software/applications/865760-opensuse-leap-421-review-the-most-mature-linux-distribution
[3]:https://www.linux.com/learn/tutorials/857788-how-to-convert-videos-in-linux-using-the-command-line
[4]:https://www.google.com/intl/en/chrome/browser/desktop/index.html#brand=CHMB&utm_campaign=en&utm_source=en-ha-na-us-sk&utm_medium=ha
[5]:http://opensuse-community.org/
[6]:http://www.itworld.com/article/2875981/the-5-best-open-source-email-clients-for-linux.html

View File

@ -1,68 +0,0 @@
Back in early 2013, your editor [dedicated a sacrificial handset][2] to the testing of the then-new Ubuntu Touch distribution. At that time, things were so unbaked that the distribution came with mocked-up data for unready apps; it even came with a set of fake tweets. Nearly three years later, it seemed time to give Ubuntu Touch another try on another sacrificial device. This distribution has certainly made some progress in those years, but, sadly, it still seems far from being a competitive offering in this space.
In particular, your editor tested version 16.04r3 from the testing channel on a Nexus 4 handset. The Nexus 4 is certainly past its prime at the end of 2015, but it still functions as a credible Android device. It is, in any case, the only phone handset on [the list of supported devices][1] other than the three that were sold (in locations far from your editor's home) with Ubuntu Touch pre-installed. It is a bit discouraging that Ubuntu Touch is not supported on a more recent device; the Nexus 4 was discontinued over two years ago.
People who are accustomed to putting strange systems on Nexus devices know the drill fairly well: unlock the bootloader, install a new recovery image if necessary, then use the **fastboot** tool to flash a new image. Ubuntu Touch does not work that way; instead, one must use a set of tools available only on the Ubuntu desktop distribution. Your editor's current menagerie of systems does not include any of those, but, fortunately, running the Ubuntu 15.10 distribution off a USB drive works just fine. It must be said, though, that Ubuntu appears not to have gotten the memo regarding high-DPI laptop displays; 15.10 is an exercise in eyestrain on such a device.
Once the requisite packages have been installed, the **ubuntu-device-flash** command can be used to install Ubuntu Touch on the phone. It finds the installation image wherever Canonical hides them (it's not obvious where that is) and puts it onto the phone; the process, on the Nexus 4, took about three hours — a surprisingly long time. Among other things, it installs a Ubuntu-specific recovery image, regardless of whether that should be necessary or not. The installation takes up about 4.5GB of space on the device. At the end, the phone reboots and comes up with the Ubuntu Touch lock screen, which has changed little in the last three years. The first boot takes a discouragingly long time, but subsequent reboots are faster, perhaps faster than Android on the same device.
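For reference, the invocation looks roughly like the line below; "mako" is the Nexus 4's device codename, while the channel name is an assumption based on the testing-channel image described above:

    ubuntu-device-flash touch --device=mako --channel=ubuntu-touch/rc-proposed/ubuntu --bootstrap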
Alas, boot speed is about the only thing that is faster than Android. The phone starts out sluggish and gets worse as time goes on. At one point it took a solid minute to get the dialer screen up on the running device. Scrolling can be jerky and unpleasant to work with. At least once, the phone bogged down to the point that there was little alternative to shutting it down and starting over.
Logging into the device over the USB connection offers some clues as to why that might be. There were no less than 258 processes running on the system. A number of them have "evolution" in their name, which is never a good sign even on a heftier system. Daemons like NetworkManager and pulseaudio are running. In general, Ubuntu Touch seems to have a large number of relatively large moving parts, leading, seemingly, to memory pressure and a certain amount of thrashing.
Three years ago, Ubuntu Touch was built on an Android chassis. There are still bits of Android that show up here and there (it uses binder, for example), but a number of those components have been replaced. This release runs an Android-derived kernel that identifies itself as "3.4.0-7 #39-Ubuntu". 3.4.0 was released in May 2012, so it is getting a bit long in the tooth; the 3.4.0 number suggests this kernel hasn't even gotten the stable updates that followed that release. Finding the source for the kernel in this distribution is not easy; it must almost certainly be hidden somewhere in this Gerrit repository, but your editor ran out of time while trying to find it. The SurfaceFlinger display manager has been replaced by Ubuntu's own Mir, with Unity providing the interface. Upstart is the init system, despite the fact that Ubuntu has moved to systemd on desktop systems.
When one moves beyond the command-line interface and starts playing with the touchscreen, one finds that the basics of the interface resemble what was demonstrated three years ago. Swiping from the left edge brings up the Unity icon bar (but no longer switches to a home screen; the "home screen" concept doesn't really seem to exist anymore). Swiping from the right will either switch to another application or produce an overview of running applications; it's not clear how it decides which. The overview provides a cute oblique view of the running applications; it's sufficient to choose one, but seems somewhat wasteful of screen space. Swiping up from the bottom produces an application-specific menu — usually.
![][3]
The swipe gestures work well enough once one gets used to them, but there is scope for confusion. The camera app, for example, will instruct the user to "swipe left for photo roll," but, unless one is careful to avoid the right edge of the screen, that gesture will yield the overview screen instead. One can learn subtleties like "swipes involving the edge" and "swipes avoiding the edge," but one could argue that such an interface is more difficult than it needs to be and less discoverable than it could be.
![][4]
Speaking of the camera app, it takes pictures as one might expect, and it has gained a high-dynamic-range mode in recent years. It still has no support for stitching together photos in a panorama or "photo sphere" mode, though.
![][5]
The base distribution comes with a fairly basic set of apps. Many of them appear to be interfaces to an associated web page; the Amazon, GMail, and Facebook apps, for example. Something called "Shorts" appears to be an RSS reader, though it seems impervious to the addition of arbitrary feeds. There is a terminal app, but it prompts for a password — a bit surprising given that no password had ever been supplied for the device (it turns out that one should use the screen-lock PIN here). It's not clear that this extra level of "security" is helpful, given that the user involved is already able to install, launch, and run applications on the device, but so it goes.
Despite the presence of all those evolution processes, there is no IMAP-capable email app; there are also no mapping apps. There is a rudimentary web browser with Ubuntu branding; it appears that this browser is based on Chromium. The weather app is limited to a few dozen hardwired locations worldwide; the closest supported location to LWN headquarters was Houston, which, one assumes, is unlikely to be dealing with the foot of snow your editor had to shovel while partway through this article. One suspects we would have heard about that.
![][6]
Inevitably, there is a store from which one can obtain other apps. There are, for example, a couple of seemingly capable, OpenStreetMap-based mapping apps there, including one that claims turn-by-turn navigation, but nothing requiring GPS access worked in your editor's tests. Games abound, of course, but there is little in the way of apps that are well known in the Android or iOS worlds. The store will refuse to allow the installation of apps until one creates a "Ubuntu One" account; that is unfortunate, but most Android users never get anywhere near that far before having to create or supply a Google account.
![][7]
Canonical puts a fair amount of energy into promoting its "scopes," which are said to be better than apps for the aggregation of content. In truth, they seem to just be another type of app with a focus on gathering information from more than one source. Although, with "branded scopes," the "more than one source" part is often deliberately put by the wayside. Your editor played around with scopes for a while, but, in truth, could not find what was supposed to make them special.
Permissions management in Ubuntu Touch resembles that found in recent Android releases: the user will be prompted the first time an application tries to exercise a specific privilege. As with Android, the number of actions requiring privilege is relatively small, and "connect to any arbitrary site on the Internet" is not among them. Access to location information or the camera, though, will generate a prompt. There is also, again as with Android, a way to control which applications are allowed to place notifications on the screen.
Ubuntu Touch still seems to drain the battery far more quickly than Android does on the same device. Indeed, it is barely able to get through the night while sitting idle. There is a cute battery app that offers a couple of "ways to reduce battery use," but it lacks Android's ability to say which apps are actually draining the battery (though, it must be said, that information from Android is often less helpful than one might hope).
![][8]
The keyboard now has proper multi-lingual support (though there is no visual indication of which language is currently in effect) and, as with Android, one can switch between languages on the fly. It offers word suggestions, does spelling correction, and all the usual things. One missing feature, though, is "swipe" typing which, your editor has found, can speed the process of inputting text on a small keyboard considerably. There is also no voice input; no major loss from your editor's point of view, but others will probably see that differently.
There is a lot to like in Ubuntu Touch. There is some appeal to running something that looks like a proper Linux system, even if it still has a number of Ubuntu-specific components. One does not get the sense that the device is watching quite as closely as Android devices do, though it's not entirely clear, for example, what happens with location data or where it might be stored. In any case, a Ubuntu device clearly has more free software on it than most alternatives do; there is no proprietary "play services" layer maintaining control over the system.
Sadly, though, this distribution still is not up to the capabilities and the performance of the big alternatives. Switching to Ubuntu Touch means settling for a much slower system, running on a severely limited set of devices, with a relative scarcity of apps to choose from. Your editor would very much like to see a handset distribution that is more free and more open than the alternatives, but that distribution must also be competitive with those alternatives, and that does not seem to be the case here. Unless Canonical can find a way to close the performance and feature gaps with Android, it seems unlikely to have much hope of achieving uptake that is within a few orders of magnitude of Android's.
--------------------------------------
via: https://lwn.net/Articles/667983/
作者Jonathan Corbet
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]: https://developer.ubuntu.com/en/start/ubuntu-for-devices/devices/
[2]: https://lwn.net/Articles/540138/
[3]: https://static.lwn.net/images/2015/utouch/overview-sm.png
[4]: https://static.lwn.net/images/2015/utouch/camera-swipe-sm.png
[5]: https://static.lwn.net/images/2015/utouch/terminal.png
[6]: https://static.lwn.net/images/2015/utouch/gps-sm.png
[7]: https://static.lwn.net/images/2015/utouch/camera-perm.png
[8]: https://static.lwn.net/images/2015/utouch/schifo.png

View File

@ -1,4 +1,3 @@
bioIkke translating
7 Steps to Start Your Linux SysAdmin Career
===============================================

View File

@ -1,50 +0,0 @@
A Linux-powered microwave oven
================================================================================
Scratching an itch is a recurring theme in presentations at [linux.conf.au](http://linux.conf.au/). As the open-hardware movement gains strength, more and more of these itches relate to the physical world, not just the digital. David Tulloh used his [presentation [WebM]](http://mirror.linux.org.au/linux.conf.au/2016/04_Thursday/D4.303_Costa_Theatre/Linux_driven_microwave.webm) on the “Linux Driven Microwave” to discuss how annoying microwave ovens can be and to describe his project to build something less irritating.
Tulloh's story began when he obtained a microwave oven, admittedly an inexpensive one, with a user interface even worse than the norm. Setting the time required pressing buttons so hard that the microwave tended to get pushed away — a fact that was elegantly balanced by the door handle requiring a sufficiently hard tug to return the oven to its original position. While this is clearly an extreme case, Tulloh lamented that microwave ovens really hadn't improved noticeably in recent decades. They may have gotten a little cheaper and gained a few features that few people could use without poring over the instruction manual — the implied contrast to smartphones, which are widely used with little instruction, was clear.
This microwave oven was not a lost cause — it gave its life to the greater good and became the prototype for an idea that Tulloh hopes to turn into a crowd-funded project if he can find the right match between features and demand: a Linux-driven microwave oven.
![](https://static.lwn.net/images/2016/lca-oven-sm.jpg)
## Adding novelty
Adding a smartphone-like touchscreen and a network connection and encouraging a community to build innovative apps such as recipe sharing are fairly obvious ideas once you think to put “Linux” and “microwave oven” together, but Tulloh's vision and prototype lead well beyond there. Two novel features that have been fitted are a thermal camera and a scale for measuring weight.
The thermal camera provides an eight-by-eight-pixel image of the contents of the oven with a precision of about two degrees. This is enough to detect if a glass of milk is about to boil over, or if the steak being thawed is in danger of getting cooked. In either case, the power can be reduced or removed. If appropriate, an alert can be sounded. This would not be the first microwave to be temperature sensitive — GE sold microwave ovens with temperature probes decades ago — but an always-present sensor is much more useful than a manually inserted probe, especially when there is an accessible API behind it.
The second innovation is a built-in scale to weigh the food (and container) being cooked. Many recipes give cooking-time guidance based on weight and some microwave ovens allow you to enter the weight manually so it can do a calculation for you. With built-in scales, that can become automatic. Placing a scale reliably under the rotating plate typical of many microwave ovens would be a mechanical challenge that Tulloh did not think worth confronting. Instead his design is based on the “flat-plate” or “flat-bed” style of oven — placing a sensor at each of the four corners is mechanically straightforward and gives good results.
Once you have these extra sensors — weight and temperature — connected to a suitable logic engine, more interesting possibilities can be explored. A cup of cold milk from the fridge will have a particular weight and temperature profile with a modest degree of error. Tulloh suggested that situation could be detected and some relevant options such as “Boil” or “Warm” could be offered for easy selection (a mock-up of the interface is shown below; a clickable version is [here](http://mwgui.tulloh.id.au/)). Simple machine learning could extend this to create a personalized experience. It would be easy to collect a history of starting profiles and cooking choices; when those patterns are detected, the most likely cooking choices could be made the easiest to select.
![](https://static.lwn.net/images/2016/lca-ovengui-sm.png)
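As a toy illustration of that idea (the numbers, names, and tolerances below are invented for this sketch, not taken from Tulloh's prototype):

```
# Match the measured starting weight (grams) and temperature (Celsius)
# against known profiles, within a modest tolerance, and suggest actions.
PROFILES = [
    {"name": "cup of cold milk", "weight": 310, "temp": 5,
     "actions": ["Boil", "Warm"]},
    {"name": "frozen steak", "weight": 450, "temp": -15,
     "actions": ["Defrost"]},
]

def suggest(weight, temp, tol_weight=40, tol_temp=4):
    for profile in PROFILES:
        if (abs(weight - profile["weight"]) <= tol_weight
                and abs(temp - profile["temp"]) <= tol_temp):
            return profile["actions"]
    return []

print(suggest(305, 6))  # -> ['Boil', 'Warm']
```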
## Overcoming staleness
Beyond just new functionality, Tulloh wants to improve the functionality that already exists. Door handles as stiff as on Tulloh's cheap microwave may not be common, but few microwave oven doors seem designed to make life easy for people with physical handicaps. There are regulatory restrictions, particularly in the US, that require the oven to function only if there is positive confirmation that the door is actually shut. This confirmation must be resilient against simple fraud, so poking a stick in the hole must not trick the oven into working with the door open. In fact, there must be two independent confirmations and, if they disagree, a fuse must be blown so that a service call is required. Tulloh believes that a magnetic latch would provide much greater flexibility (including easy software control) and that magnetic keying similar to that used in a [magnetic keyed lock](https://en.wikipedia.org/wiki/Magnetic_keyed_lock) would allow the magnetic latch to pass certification.
Another pain point with microwave ovens is the annoying sounds they make. Tulloh has discarded the beeper and hooked up a speaker to the Banana Pi that is controlling his prototype. This allows for more pleasant and configurable alerts as well as for advice and guidance through a text-to-speech system. Adding a microphone for voice control is an obvious next step.
Many microwave ovens can do more than just set a time and a power level — they provide a range of power profiles for cooking, warming, defrosting, and so on. Adding precise temperature sensing will allow the community to extend this range substantially. A question from Andrew Tridgell in the audience wondered if tempering chocolate — a process that requires very precise temperature control — would be possible. Tulloh had no experience with the process, and couldn't make promises, but thought it was certainly worth looking into. Even if that doesn't work out, it shows clear potential for value to be gained from community input.
## Availability
Tulloh would very much like to get these Linux-enabled microwave ovens out into the world to create a community and see where it goes. Buying existing ovens and replacing the electronics is not seen as a viable option. The result would be ugly and, given that a small-run smart microwave will inevitably cost more, potential buyers are going to want something that doesn't look completely out of place in their kitchen.
Many components are available off-the-shelf (magnetron, processor board, thermal sensor) and others, such as a USB interface for the thermal sensor, are easily built. Prototype software is, of course, already available on [GitHub](https://github.com/lod?tab=repositories). The case and door are more of a challenge and would need to be made to order. Tulloh wants to turn this adversity into an opportunity by providing the option for left-handed microwave ovens and a variety of colors.
A quick survey of the audience suggested that few people would hastily commit to his target price of $AU1000 for a new, improved, open oven. Whether a bit more time for reflection and a wider audience might tip the balance is hard to know. The idea is intriguing, so it seems worth watching Tulloh's [blog](http://david.tulloh.id.au/category/microwave/) for updates.
------------------------------------------------------------------------------
via: https://lwn.net/Articles/674877/
作者Neil Brown
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,5 +1,3 @@
GHLandy Translating
How to Best Manage Encryption Keys on Linux
=============================================

View File

@ -1,5 +1,3 @@
Ricky Gong translating
Linux Systems Patched for Critical glibc Flaw
=================================================

View File

@ -1,136 +0,0 @@
[Translating by cposture 2016-03-12]
Best Cloud Services For Linux To Replace Copy
===============================================
![](http://itsfoss.com/wp-content/uploads/2016/02/Linux-cloud-services.jpg)
Cloud storage service Copy is shutting down and it is time for us Linux users to look for a worthy **cloud storage alternative to Copy for Linux**.
All files will be deleted on May 1st, 2016. If you are a Copy user, you should save your files and move them to another cloud storage service.
Copy has been my favorite cloud storage service for the past couple of years. It gave me plenty of free storage and came with native apps for desktop platforms, including Linux, and mobile platforms such as iOS and Android.
It was a perfect cloud storage service for me: I got plenty of free storage (380 GB) and a seamless experience between desktop and mobile OSes. But this easy free storage, 15 GB at signup and 5 GB for each referral, had me thinking that if Copy didn't get business customers, it would be out of business soon. Such huge free storage only meant that they were not targeting individual customers as Dropbox does.
My fear came true when I read about the shutting down of Copy.com. In fact, Copy is not alone. Its parent company [Barracuda Networks](https://www.barracuda.com/) is going through a rough patch and has [hired Morgan Stanley to look for suitable buyers](http://www.bloomberg.com/news/articles/2016-02-01/barracuda-networks-said-to-work-with-morgan-stanley-to-seek-sale).
Whatever the reason, all we know is that Copy will soon be history and we need to find similarly **good cloud services for Linux**. I am putting emphasis on Linux because other popular cloud storage services like [Microsoft's OneDrive](https://onedrive.live.com/about/en-us/) and [Google Drive](https://www.google.com/drive/) do not provide a native Linux client. This is something expected of Microsoft, but [Google's apathy towards Linux](http://itsfoss.com/google-hates-desktop-linux/) is shocking.
## Best Copy alternatives for Linux
Now, what do you want in a cloud storage service for Linux? Let me guess:
- Lots of free storage. After all, individuals cannot pay hefty amounts every month.
- Native Linux client. So that you can synchronize files easily with the server without doing special tweaking or running scripts at regular intervals.
- Desktop clients for other desktop OSes i.e. Windows and OS X. Portability is a necessity and syncing files between devices is such a good relief.
- Mobile apps for Android and iOS. In todays modern world, you need to be connected across all the devices.
I am not counting self-hosted cloud services like OwnCloud or [Seafile](https://www.seafile.com/en/home/) because they require you to set up and run a server. They are not suitable for home users who just want a Copy-like cloud service.
Let's see which services you could use to replace Copy.com on Linux.
## Mega
![](http://itsfoss.com/wp-content/uploads/2016/02/Mega-Linux.jpg)
If you are a regular It's FOSS reader, you might have come across my earlier article about [Mega on Linux](http://itsfoss.com/install-mega-cloud-storage-linux/). This cloud service is an offering from the infamous [Kim Dotcom](https://en.wikipedia.org/wiki/Kim_Dotcom) of the [Megaupload scandal](https://en.wikipedia.org/wiki/Megaupload). This also makes some users skeptical about it, because Kim Dotcom has been a target of US authorities for a long time.
Mega has everything that you would expect from a hassle-free cloud service. It provides 50 GB of free storage to individual users, offers native clients for Linux and other platforms, and has end-to-end encryption. The native Linux client works fine and sync across devices is seamless. You can also view and access your files in a web browser.
### Pros:
- 50 GB of free storage
- End to end encryption
- Native clients for Linux and other platforms such as Windows, Mac OS X, Android, iOS
### Cons:
- Shady past of the owner
[Mega](https://mega.nz/)
## Hubic
![](http://itsfoss.com/wp-content/uploads/2016/02/hubic.jpeg)
Hubic is a cloud service from the French company [OVH](https://www.ovh.com/fr/). Hubic offers 25 GB of free cloud storage at sign-up, which you can extend to 50 GB (for free users) by referring friends.
Hubic has an official Linux client, but it has been in beta for over two years now and is limited to the command line. I did not go on to test the mobile versions.
Hubic boasts some nice features though. Apart from a simple-to-use interface and file sharing, it has a Backup feature with which you can archive your important files regularly.
### Pros:
- 25 GB of free storage, extendable up to 50 GB
- Available on multiple platforms
- Backup feature
### Cons:
- Linux client in beta, only available in command line
[Hubic](https://hubic.com/)
## pCloud
![](http://itsfoss.com/wp-content/uploads/2016/02/pCloud-Linux.jpeg)
pCloud is another European offering, but this time from across the French border, in Switzerland. Focused on encryption and security, pCloud offers 10 GB of free storage for each signup. You can increase that up to 20 GB by inviting friends, sharing links on social media, and so on.
It has all the standard features of a cloud service, such as file sharing, synchronization, and selective sync. pCloud also has native clients across platforms, including Linux of course.
The Linux client is easy to use and worked well in my limited testing on Linux Mint 17.3.
### Pros:
- 10 GB of free storage, extendable up to 20 GB
- A good working Linux client with GUI
### Cons:
- Encryption is a premium feature
[pCloud](https://www.pcloud.com/)
## Yandex Disk
![](http://itsfoss.com/wp-content/uploads/2016/02/Yandex.jpg)
Russian internet giant Yandex has everything that Google has: a search engine, analytics and webmaster tools, email, a web browser, and a cloud storage service.
Yandex Disk offers 10 GB of free cloud storage on sign-up. It has native clients for multiple platforms, including Linux; however, the official Linux client is command line only. You can get an [unofficial GUI client for Yandex Disk](https://mintguide.org/tools/265-yd-tools-gui-indicator-for-yandexdisk-free-cloud-storage-in-linux-mint.html) though. File sharing via links is available, along with other standard cloud storage features.
### Pros:
- 10 GB of free storage, extendable up to 20 GB via referrals.
### Cons:
- Only command line client available
[Yandex Disk](https://disk.yandex.com/)
## Honorable and deliberate omissions
I have deliberately skipped [Dropbox](https://www.dropbox.com/) and [SpiderOak](https://spideroak.com/) from the list. Dropbox is excellent for Linux, but its free storage is limited to 2 GB. Over the past several years, I have managed to increase it to over 21 GB, but that's another story.
SpiderOak also provides only 2 GB of free storage, and you cannot access it in a web browser.
OwnCloud needs its own server and setup, and thus it is not everyone's cup of tea. And it certainly doesn't fit the criteria of a typical cloud service.
## Verdict
If you ask me what I am going to use in place of Copy, my answer is Mega. It has plenty of free cloud storage and a great Linux desktop client. What is your choice among this list of the **best cloud storage services for Linux**? Which one do you prefer?
------------------------------------------------------------------------------
via: http://itsfoss.com/cloud-services-linux/
作者:[ABHISHEK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/

View File

@ -1,37 +0,0 @@
NXP unveils a tiny 64-bit ARM processor for the Internet of Things
=========================================================================
**TAGS**:[ARM][1], [INTERNET OF THINGS][2], [NXP][3], [NXP SEMICONDUCTORS][4]
![](http://1u88jj3r4db2x4txp44yqfj1.wpengine.netdna-cdn.com/wp-content/uploads/2016/02/nxp-930x556.jpg)
[NXP Semiconductors][5] has unveiled what it calls the worlds smallest and lowest-power 64-bit ARM processor for the Internet of Things (IoT).
The tiny QorIQ LS1012A delivers networking-grade security and performance acceleration to battery-powered, space-constrained applications. This includes powering applications for the Internet of Things, or everyday objects that are smart and connected. If IoT is to reach its potential of $1.7 trillion by 2020 (as estimated by market researcher IDC), it's going to need processors like the new one from NXP, which was unveiled at the Embedded World 2016 event in Nuremberg, Germany.
The chip has a 64-bit ARMv8 processor with network packet acceleration and built-in security. It fits in a 9.6 mm-square space and draws about 1 watt of power. Potential applications include next-generation IoT gateways, portable entertainment platforms, high-performance portable storage applications, mobile hard disk drives, and mobile storage for cameras, tablets, and other rechargeable devices.
Additionally, the LS1012A is the first processor designed specifically for an emerging new storage solution, dubbed object-based storage. Object-based storage relies on a smart hard disk drive that is directly connected to the data centers Ethernet network. The processor must be small enough to be integrated directly on the circuit board for a hard disk drive.
“The groundbreaking combination of low power, tiny footprint and networking-grade performance of NXPs LS1012 processor is ideal for consumer, networking and Internet of Things applications alike,” said Tareq Bustami, senior vice president and general manager of NXPs Digital Networking division, in a statement. “This unique blend of capabilities unleashes embedded systems designers and developers to imagine and create radically innovative end-products across a broad spectrum of high-growth markets.”
NXP said it is the only 1-watt, 64-bit processor in the market to combine such a comprehensive set of high-speed peripherals in a single chip, thus enabling lower system-level costs. And due to innovative packaging, the processor can be routed on low-cost circuit boards.
NXPs LS1012A will be available in April 2016 and can be ordered now. NXP has more than 45,000 employees in 35 countries.
--------------------------------------------------------------------------------
via: http://venturebeat.com/2016/02/21/nxp-unveils-a-small-and-tiny-64-bit-arm-processor-for-the-internet-of-things/
作者:[DEAN TAKAHASHI][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://venturebeat.com/author/dean-takahashi/
[1]:http://venturebeat.com/tag/arm/
[2]:http://venturebeat.com/tag/internet-of-things/
[3]:http://venturebeat.com/tag/nxp/
[4]:http://venturebeat.com/tag/nxp-semiconductors/
[5]:http://www.nxp.com/

View File

@ -1,6 +1,5 @@
Achieving Enterprise-Ready Container Tools With Werckers Open Source CLI
===========================================
#CoderBOBO translating
For enterprises, containers offer more efficient build environments, cloud-native applications and migration from legacy systems to the cloud. But enterprise adoption of the technology -- Docker specifically -- has been hampered by, among other issues, [a lack of mature developer tools][1].

View File

@ -1,49 +0,0 @@
New Docker Data Center Admin Suite Should Bring Order to Containerization
===============================================================================
![](https://tctechcrunch2011.files.wordpress.com/2016/02/shutterstock_119411227.jpg?w=738)
[Docker][1] announced a new container control center today that it's calling the Docker Datacenter (DDC), an integrated administrative console designed to give large and small businesses control over creating, managing and shipping containers.
The DDC is a new tool made up of various commercial pieces including Docker Universal Control Plane (which also happens to be generally available today) and Docker Trusted Registry. It also includes open source pieces such as Docker Engine. The idea is to give companies the ability to manage the entire lifecycle of Dockerized applications from one central administrative interface.
Customers actually were the driving force behind this new tool. While companies liked the agility that Docker containers give them, they also wanted management control over administration, security and governance of the containers they were creating and shipping, Scott Johnston, SVP of product management, told TechCrunch.
The company has called this Containers as a Service (CaaS), mostly because when customers came to them asking for this type of administrative control, that's how they described it, Johnston said.
![](https://tctechcrunch2011.files.wordpress.com/2016/02/screen-shot-2016-02-23-at-7-56-54-am.png?w=680&h=401)
>Image courtesy of Docker
Like many open source projects, Docker gained a strong following among developers first, but as it grew in popularity, the companies these developers were working for wanted a straight-forward way to track and manage them.
Thats exactly what DDC is designed to do. It gives developers the agility they need to create containerized applications, while providing operations with the tools they need to bring order to the process.
In practice this means that developers can create a set of containerized components, have them approved for deployment by operations and then have access to a library of fully certified images. This lets developers pull the pieces they need across a range of applications without having to reinvent the wheel every time. That should speed up application development and deployment (and add to the agility that containers should in theory be providing in the first place).
This aspect appealed to Beta customer ADP. The payroll services giant particularly liked having this central repository of images available to developers.
“As part of our initiative to modernize our business-critical applications to microservices, ADP has been investigating solutions that would enable our developers to leverage a central library of IT-vetted and secured core services that they could rapidly iterate on,” said Keith Fulton, Chief Technology Officer at ADP, in a statement.
Docker was launched in 2010 by founder Solomon Hykes as dotCloud. He pivoted the company to Docker in 2013 and [sold dotCloud in August 2014][2] to focus completely on Docker.
The company came out of the gate like gangbusters a couple of years ago raising $180 million ($168 million since becoming Docker) over five rounds, according to CrunchBase. What caught the attention of investors was that Docker offered a way to deliver applications for the modern age called containers, a way of building, managing and shipping distributed applications.
Containerization enables developers to create these distributed applications made up of small discrete pieces that run across multiple servers, as opposed to the large monolithic applications companies used to create running on a single server.
Pricing for Docker Datacenter starts at $150 per node per month.
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/calico-virtual-private-networking-docker/
作者:[ Ron Miller][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://techcrunch.com/author/ron-miller/
[1]: https://www.docker.com/
[2]: http://techcrunch.com/2014/08/04/docker-sells-dotcloud-to-cloudcontrol-to-focus-on-core-container-business/

View File

@ -1,91 +0,0 @@
How to add open source experience to your resume
==================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/lightning-test.png?itok=aMceg0Vg)
In this article, I'll share my technique for leveraging open source contributions to stand out as a great candidate for a job in the technology field.
No goal can be accomplished without first being set. Before jumping into a new commitment or spending the evening overhauling your resume, it pays to clearly define the traits of the job you're seeking. Your resume is a piece of persuasive writing, so you have to know your audience for it to reach its full potential. Your resume's audience is anyone with the need for your skills and the budget to hire you. When editing, read your resume while imagining what it's like to be in their position. Do you look like a candidate that you would hire?
I personally find it helpful to make a list of the key traits that the ideal candidate for my target job displays. I gather this list from a combination of personal experience, reading job postings, and asking colleagues in similar roles. LinkedIn and conferences are great places to find people happy to offer this sort of advice. Many people enjoy talking about themselves, and inviting them to tell part of their own story to help you expand your knowledge makes everyone feel good. As you talk to others about their career paths, you'll gain insights not only into how to land the jobs you want, but also into which traits or behaviors correspond to ending up in situations you'd rather avoid.
For example, the list of key traits for a junior role might look like this:
### Technical:
- Experience with CI, Jenkins preferred
- Strong scripting background in Python and Ruby
- Familiarity with Eclipse IDE
- Basic Git and Bash
### Personal:
- Self-directed learner
- Clear communication and documentation skills
- Experience working on a multi-person development team ("team player")
- Familiarity with issue tracker workflow
### Apply anyway
Remember, you don't have to meet every single criterion listed in a job description to get an interview.
The job description describes whoever left the role, and if you start out knowing everything you've likely signed yourself up for a few years that don't challenge or expand your skill set. If you're nervous about missing a particular technology on the list, do some research into it to see whether comparable skills from another experience would apply. For example, someone who's never used [Jenkins][1] might still understand the principles of continuous integration testing from working on a project that uses [Buildbot][2] or [Travis CI][3].
If you're applying at a larger company, they probably have an entire department and comprehensive screening process to make sure they don't hire any candidate unable to succeed in a role. That means it's your job to apply and their job to decide whether to reject you. Don't prematurely reject yourself from the job by refusing to apply.
Now you have an idea of what job you want and what skills you'll need to impress your interviewers. The next steps to take will vary based on how much experience you've already got.
### Tailoring existing involvement
Start by making a list of all the projects you've been involved with in the past few years. One way to get a quick list of things you've worked on lately is to navigate to the **Repositories** tab of your GitHub profile and filter the list by clicking on **Forks**. Additionally, look down your [Organizations][4] list for places you might have been engaging in leadership roles. If you already have a resume, make sure you've included everything from these lists under experience.
Consider any IRC channel where you have special permissions as a potential leadership experience. Check your Meetup and Eventbrite accounts and add any events that you organize or volunteer at to your list. Skim your calendar for the past year and note any volunteering, mentoring, or public speaking engagements.
Now for the hard part: Map the list of required skills onto the list of experiences. I like to assign a letter or number to each trait needed for the job, then mark the same symbol next to every piece of experience or involvement where you demonstrated the trait. When in doubt, claim it anyway—your problem is more likely a reluctance to brag than actual incompetence.
This is the point in the process at which resume writers are often fettered by reluctance to risk overselling their own skills. It often helps to re-frame the question as: "Did someone who organized a meetup show leadership and planning skills?" Rather than: "Did I personally show these skills when I organized that meetup?".
If you've been sufficiently thorough at figuring out where your free time has gone for the past year or two and you code a lot, you might now be facing a surprising problem: Too many items to fit on a single-page resume! If anything on your list of experiences didn't demonstrate any of the skills you're trying to showcase, cross it off. If an item demonstrates few skills and you don't have any stories that you enjoy telling about it, cross it off. If this abridged list of things you've done still won't fit in the format of a resume, prioritize the experiences from which you gained a relevant story or extensive experience with a desired technology.
At this point, it should be obvious if you need a better piece of experience to hone a particular skill. Consider using an issue aggregator like OpenHatch to find an open source project where you build and practice your skills with the tool or technology that you're missing.
### Make your resume beautiful
A resume's beauty comes from conciseness, clarity, and layout. Each piece of experience should be accompanied by enough information for a reader to immediately know why you included it, but no more. Each type of information should be formatted consistently throughout the document—it's distracting to have some dates italicized or right-aligned and others not.
Typeset your resume using a tool that makes these goals easy to achieve. I enjoy using [LaTeX][5], since its macro system makes visual consistency easy and most interviewers recognize it immediately. Your tool of choice might be [LibreOffice][6] or HTML, depending on your skills and how you want to distribute your resume.
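For instance, a single macro can pin down the layout of every experience entry at once; the following is a minimal LaTeX sketch with invented field names, not a prescribed template:

    % One macro controls the layout of every entry, keeping them consistent.
    \newcommand{\entry}[4]{%
      \textbf{#1} \hfill #2 \\       % role and dates
      \textit{#3} \hfill #4 \\[4pt]  % organization and location
    }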
Remember that a digitally submitted resume might be scanned for keywords, so it can help to use the same acronyms as the job posting when describing your experiences. To make your resume easy for your interviewer to use, place the most important information first.
Coders often struggle to quantify balance and layout when typesetting a document. My favorite technique for stepping back and assessing whether my document's whitespace is in the right place is to fullscreen the PDF or print it out, then look at it in a mirror. If you're using LibreOffice Writer, save a copy of your resume then change the font to that of a language you can't read. Both of these techniques forcibly pull you out of reading the content, and allow you to see the overall layout of the document in a new light. They take you from a "That sentence is poorly worded!" critique to noticing things like "It looks funny to have only a single word on that line."
Finally, double check that your resume displays correctly in the media where it will be seen. If you're distributing it as a web page, test it at different screen widths in multiple browsers. If it's a PDF, open it on your phone or a friend's computer to make sure all the fonts it needs are available.
### Next steps
Finally, don't let the content that you worked so hard on for your resume go to waste! Mirror it to your LinkedIn account—complete with the buzzwords from the job posting—and don't be surprised if recruiters start reaching out to you. Even if the jobs they're describing aren't a good fit right now, you can leverage their time and interest to get feedback on what's working well about your resume and what isn't.
--------------------------------------------------------------------------------
via: https://opensource.com/business/16/2/add-open-source-to-your-resume
作者:[edunham][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/edunham
[1]: https://jenkins-ci.org/
[2]: http://buildbot.net/
[3]: https://travis-ci.org/
[4]: https://github.com/settings/organizations
[5]: https://www.latex-project.org/
[6]: https://www.libreoffice.org/download/libreoffice-fresh/

View File

@ -1,217 +0,0 @@
Vic020
How to use Python to hack your Eclipse IDE
==============================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/lightbulb_computer_person_general_.png?itok=ZY3UuQQa)
The Eclipse Advanced Scripting Environment ([EASE][1]) project is a new but powerful set of plugins that enables you to quickly hack your Eclipse IDE.
Eclipse is a powerful framework that can be extended in many different ways by using its built-in plugin mechanism. However, writing and deploying a new plugin can be cumbersome if all you want is a bit of additional functionality. Now, using EASE, there's a better way to do that, without having to write a single line of Java code. EASE provides a way to easily automate workbench functionality using scripting languages such as Python or JavaScript.
In this article, based on my [talk][2] at EclipseCon North America this year, I'll cover the basics of how to set up your Eclipse environment with Python and EASE and look at a few ideas to supercharge your IDE with the power of Python.
### Setup and run "Hello World"
The examples in this article are based on Jython, the Java implementation of Python. You can install EASE directly into your existing Eclipse IDE. In this example we use Eclipse [Mars][3] and install EASE itself, its modules, and the Jython engine.
From within the Eclipse Install Dialog (`Help>Install New Software`...), install EASE: [http://download.eclipse.org/ease/update/nightly][4]
And, select the following components:
- EASE Core feature
- EASE core UI feature
- EASE Python Developer Resources
- EASE modules (Incubation)
This will give you EASE and its modules. The main one we are interested in is the Resource module that gives you access to the Eclipse workspace, projects, and files API.
![](https://opensource.com/sites/default/files/1_installease_nightly.png)
After those have been successfully installed, next install the EASE Jython engine: [https://dl.bintray.com/pontesegger/ease-jython/][5]. Once the plugins are installed, test EASE out. Create a new project and add in a new file called hello.py with this content:
```
print "hello world"
```
Select the file, right click, and select 'Run as -> EASE script'. You should see "hello world" appear in the console.
Now you can start writing Python scripts that can access the workspace and projects. This power can be used for all sorts of hacks, below are just a few ideas.
### Improve your code quality
Maintaining good code quality can be a tiresome job, especially when dealing with a large codebase or when lots of developers are involved. Some of this pain can be eased with a script, such as batch-formatting a set of files, or fixing certain files to [remove unix line endings][6] for easy comparison in source control like git. Another nice thing to do is use a script to generate Eclipse markers to highlight code that could do with improving. Here's an example script that you could use to add task markers for all "printStackTrace" methods it detects in Java files. See the source code: [markers.py][7]
To run, copy the file to your workspace, then right click and select 'Run as -> EASE script'.
```
loadModule('/System/Resources')

from org.eclipse.core.resources import IMarker

for ifile in findFiles("*.java"):
    file_name = str(ifile.getLocation())
    print "Processing " + file_name
    with open(file_name) as f:
        for line_no, line in enumerate(f, start=1):
            if "printStackTrace" in line:
                marker = ifile.createMarker(IMarker.TASK)
                marker.setAttribute(IMarker.TRANSIENT, True)
                marker.setAttribute(IMarker.LINE_NUMBER, line_no)
                marker.setAttribute(IMarker.MESSAGE, "Fix in Sprint 2: " + line.strip())
```
If you have any java files with printStackTraces you will be able to see the newly created markers in the Tasks view and in the editor margin.
![](https://opensource.com/sites/default/files/2_codequality.png)
### Automate tedious tasks
When you are working with several projects you may want to automate some tedious, repetitive tasks. Perhaps you need to add in a copyright header to the beginning of each source file, or update source files when adopting a new framework. For instance, when we first switched to using Tycho and Maven, we had to add a pom.xml to each project. This is easily done using a few lines of Python. Then when Tycho provided support for pom-less builds, we wanted to remove unnecessary pom files. Again, a few lines of Python script enabled this. As an example, here is a script which adds a README.md file to every open project in your workspace, noting if they are Java or Python projects. See the source code: [add_readme.py][8].
To run, copy the file to your workspace, then right click and select 'Run as -> EASE script'.
```
loadModule('/System/Resources')

for iproject in getWorkspace().getProjects():
    if not iproject.isOpen():
        continue

    ifile = iproject.getFile("README.md")
    if not ifile.exists():
        contents = "# " + iproject.getName() + "\n\n"
        if iproject.hasNature("org.eclipse.jdt.core.javanature"):
            contents += "A Java Project\n"
        elif iproject.hasNature("org.python.pydev.pythonNature"):
            contents += "A Python Project\n"
        writeFile(ifile, contents)
```
The result should be that every open project will have a README.md file, with Java and Python projects having an additional descriptive line.
![](https://opensource.com/sites/default/files/3_tedioustask.png)
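For the copyright-header chore mentioned at the start of this section, a minimal sketch could look like this. The header text is a made-up placeholder, and the script uses plain file I/O just like markers.py above:

```
loadModule('/System/Resources')

HEADER = "/* Copyright (c) 2016 Example Corp. */\n"  # placeholder header text

for ifile in findFiles("*.java"):
    file_name = str(ifile.getLocation())
    with open(file_name) as f:
        contents = f.read()
    if not contents.startswith(HEADER):  # skip files that already have the header
        with open(file_name, "w") as f:
            f.write(HEADER + contents)
        print "Added header to " + file_name
```

Since the script writes through plain file I/O rather than the workspace API, you may need to refresh the affected projects in Eclipse afterwards.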
### Prototype new features
You can also use a Python script to hack together a quick fix for some much-wanted functionality, or as a prototype to help demonstrate to your team or users how you envision a feature. For instance, one feature the Eclipse IDE doesn't currently support is auto-save for the file you are working on. Although this feature is in the works for future releases, you can have a quick and dirty version that autosaves every 30 seconds or when the editor is deactivated. Below is a snippet of the main method. See the full source: [autosave.py][9]
```
def save_dirty_editors():
workbench = getService(org.eclipse.ui.IWorkbench)
for window in workbench.getWorkbenchWindows():
for page in window.getPages():
for editor_ref in page.getEditorReferences():
part = editor_ref.getPart(False)
if part and part.isDirty():
print "Auto-Saving", part.getTitle()
part.doSave(None)
```
Before running this script, you will need to turn on the 'Allow Scripts to run code in UI thread' setting by checking the box under Window > Preferences > Scripting. Then you can add the file to your workspace, right click on it, and select 'Run As > EASE Script'. A save message is printed in the Console view every time an editor is saved. To turn off the autosave, just stop the script by pressing the red square 'Terminate' button in the Console view.
![](https://opensource.com/sites/default/files/4_prototype.png)
### Quickly extend the user interface with custom buttons, menus, etc
One of the best things about EASE is that it allows you to take your scripts and quickly hook them into UI elements of the IDE, for example, as a new button or new menu item. No need to write Java or build a new plugin: just add a couple of lines to your script header—it's that simple.
Here's an example of a simple script that creates three new projects.
```
# name : Create fruit projects
# toolbar : Project Explorer
# description : Create fruit projects
loadModule("/System/Resources")
for name in ["banana", "pineapple", "mango"]:
createProject(name)
```
The comment lines tell EASE to add a button to the Project Explorer toolbar. Here's another script that adds a button to the same toolbar to delete those three projects. See the source files: [createProjects.py][10] and [deleteProjects.py][11]
```
# name : Delete fruit projects
# toolbar : Project Explorer
# description : Get rid of the fruit projects
loadModule("/System/Resources")
for name in ["banana", "pineapple", "mango"]:
project = getProject(name)
project.delete(0, None)
```
To get the buttons to appear, add the two script files to a new project—let's call it 'ScriptsProject'. Then go to Window > Preferences > Scripting > Script Locations. Click on the 'Add Workspace' button and select the ScriptsProject. This project now becomes a default location for script files. You should see the buttons show up in the Project Explorer without needing to restart your IDE, and you should be able to quickly create and delete the projects using your newly added buttons.
![](https://opensource.com/sites/default/files/5_buttons.png)
### Integrate with third-party tools
Every now and then you may need to use a tool outside the Eclipse ecosystem (sad but true, it has a lot but it does not do everything). For those occasions it can be quite handy to wrap the call to that tool in a script. Here's an example that lets you integrate with explorer.exe and adds it to the context menu so you can instantly open a file browser using the current selection. See the source code: [explorer.py][12]
```
# name : Explore from here
# popup : enableFor(org.eclipse.core.resources.IResource)
# description : Start a file browser using current selection
loadModule("/System/Platform")
loadModule('/System/UI')
selection = getSelection()
if isinstance(selection, org.eclipse.jface.viewers.IStructuredSelection):
selection = selection.getFirstElement()
if not isinstance(selection, org.eclipse.core.resources.IResource):
selection = adapt(selection, org.eclipse.core.resources.IResource)
if isinstance(selection, org.eclipse.core.resources.IFile):
selection = selection.getParent()
if isinstance(selection, org.eclipse.core.resources.IContainer):
runProcess("explorer.exe", [selection.getLocation().toFile().toString()])
```
To get the menu item to appear, add the script to a new project—let's call it 'ScriptsProject'. Then go to Window > Preferences > Scripting > Script Locations. Click on the 'Add Workspace' button and select the ScriptsProject. You should see the new menu item show up in the context menu when you right-click on a file. Select the action to bring up a file browser. (Note that this functionality already exists in Eclipse, but this example is one you could adapt to other third-party tools.)
![](https://opensource.com/sites/default/files/6_explorer.png)
The Eclipse Advanced Scripting Environment provides a great way to get more out of your Eclipse IDE by leveraging the power of Python. It is a project in its infancy, so there is much more to come. Learn more [about the project][13] and get involved by signing up for the [forum][14].
I'll be talking more about EASE at [Eclipsecon North America][15] 2016. My talk, [Scripting Eclipse with Python][16], will go into how you can use not just Jython but also CPython, and how this functionality can be extended for scientific use cases.
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/2/how-use-python-hack-your-ide
作者:[Tracy Miranda][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/tracymiranda
[1]: https://eclipse.org/ease/
[2]: https://www.eclipsecon.org/na2016/session/scripting-eclipse-python
[3]: https://www.eclipse.org/downloads/packages/eclipse-ide-eclipse-committers-451/mars1
[4]: http://download.eclipse.org/ease/update/nightly
[5]: https://dl.bintray.com/pontesegger/ease-jython/
[6]: http://code.activestate.com/recipes/66434-change-line-endings/
[7]: https://gist.github.com/tracymiranda/6556482e278c9afc421d
[8]: https://gist.github.com/tracymiranda/f20f233b40f1f79b1df2
[9]: https://gist.github.com/tracymiranda/e9588d0976c46a987463
[10]: https://gist.github.com/tracymiranda/55995daaea9a4db584dc
[11]: https://gist.github.com/tracymiranda/baa218fc2c1a8e898194
[12]: https://gist.github.com/tracymiranda/8aa3f0fc4bf44f4a5cd3
[13]: https://eclipse.org/ease/
[14]: https://dev.eclipse.org/mailman/listinfo/ease-dev
[15]: https://www.eclipsecon.org/na2016
[16]: https://www.eclipsecon.org/na2016/session/scripting-eclipse-python

View File

@ -1,4 +1,4 @@
Translating by ![zky001]
Translating by ping
Top 5 open source command shells for Linux
===============================================

View File

@ -1,81 +0,0 @@
Image processing at NASA with open source tools
=======================================================
keyword: NASA , Image Process , Node.js , OpenCV
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/nasa_spitzer_space_pink_spiral.jpg?itok=3XEUstkl)
This past summer, I was an intern at the [GVIS][1] Lab at [NASA][2] Glenn, where I brought my passion for open source into the lab. My task was to improve our lab's contributions to an open source fluid flow dynamics [simulation][3] developed by Dan Schroeder. The original simulation presents obstacles that users can draw in with their mouse to model computational fluid dynamics. My team contributed by adding image processing code that analyzes each frame of a live video feed to show how a physical object interacts with a fluid. But, there was more for us to do.
We wanted to make the image processing more robust, so I worked on improving the image processing library.
With the new library, the simulation would be able to detect contours, perform coordinate transformations in place, and find the center of mass of an object. The image processing doesn't directly relate to the physics of the fluid flow dynamics simulation. It detects the object with a camera and creates a barrier for the fluid flow simulation by getting the outline of the object. Then, the fluid flow simulation runs, and the output is projected down onto the actual object.
My goal was to improve the simulation in three ways:
1. to find accurate contours of an object
2. to find the center of mass of an object
3. to be able to perform accurate transformations about the center of an object
My mentor recommended that I install [Node.js][4], [OpenCV][5], and the [Node.js bindings for OpenCV][6]. While I was waiting for those to install, I looked at the example code on the OpenCV bindings on their [GitHub page][7]. I discovered that the example code was in JavaScript, so because I didn't know JavaScript, I started a short course from Codecademy. Two days later, I was sick of JavaScript but ready to start my project... which involved yet more JavaScript.
The example contour-finding code worked well. In fact, it allowed me to accomplish my first goal in a matter of hours! Here's what getting the contours of an image looked like:
![](https://opensource.com/sites/default/files/resize/image_processing_nasa_1-520x293.jpg)
>The original image with all of the contours.
The example contour-finding code worked a bit too well. Instead of the contour of the object being detected, all of the contours in the image were detected. This would have resulted in the simulation interacting with all of the unwanted contours. This is a problem because it would return incorrect data. To keep the simulation from interacting with the unwanted contours, I added an area constraint. If the contour was in a certain area range, then it would be drawn. The area constraint resulted in a much cleaner contour.
![](https://opensource.com/sites/default/files/resize/image_processing_nasa_2-520x293.jpg)
>The filtered contour with the shadow contour.
Though the extraneous contours weren't detected, there was still a problem with the image. There was only one contour in the image, but it doubled back on itself and wasn't complete. Area couldn't be a deciding factor here, so it was time to try something else.
This time around, instead of immediately finding the contours, I first converted the image into a binary image. A binary image is an image where each pixel is either black or white. To get a binary image I first converted the color image to grayscale. Once the image was in grayscale, I called the threshold method on the image. The threshold method went through the image pixel by pixel and if the color value of the pixel was less than 30, the pixel color would be changed to black. Otherwise, the pixel value would be turned to white. After the original image was converted to a binary image, the resulting image looked like this:
![](https://opensource.com/sites/default/files/resize/image_processing_nasa_3-520x293.jpg)
>The binary image.
Then I got the contours from the binary image, which resulted in a much cleaner contour, without the shadow contour.
![](https://opensource.com/sites/default/files/image_processing_nasa_4.jpg)
>The final clean contour.
At this point, I was able to get clean contours and detect the center of mass. Unfortunately, I didn't have enough time to be able to complete transformations about the center of mass. Since I only had a few days left in my internship, I started to think about other things I could do within a limited time span. One of those things was the bounding rectangle. The bounding rectangle is a quadrilateral with the smallest area that contains the entire contour of an image. The bounding rectangle is important because it is key in scaling the contour on the page. Unfortunately I didn't have time to do much with the bounding rectangle, but I still wanted to learn about it because it's a useful tool.
Finally, after all of that, I was able to finish processing the image!
![](https://opensource.com/sites/default/files/resize/image_processing_nasa_5-521x293.jpg)
>The final image with bounding rectangle and center of mass in red.
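For readers who want to experiment with the same pipeline, here is a rough sketch in Python with OpenCV. The lab's actual code used the Node.js OpenCV bindings, so this is only an approximation; the file names, the threshold of 30, and the area range are illustrative values:

```
# Grayscale -> binary threshold -> contours -> area filter ->
# bounding rectangle and center of mass, as described above.
import cv2

img = cv2.imread("frame.jpg")                         # a captured video frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # color to grayscale
_, binary = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)  # dark pixels -> black

# findContours returns tuples of different lengths across OpenCV versions;
# index [-2] is the contour list in 2.x, 3.x, and 4.x alike.
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]

# Keep only contours in a plausible area range to drop noise and shadows.
contours = [c for c in contours if 1000 < cv2.contourArea(c) < 100000]

for c in contours:
    x, y, w, h = cv2.boundingRect(c)                  # smallest enclosing rectangle
    m = cv2.moments(c)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # center of mass
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.circle(img, (cx, cy), 4, (0, 0, 255), -1)

cv2.imwrite("frame_out.jpg", img)
```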
Once the image processing code was complete, I replaced the old image processing code in the simulation with my code. To my surprise, it worked!
Well, mostly.
The program had a memory leak in it, which leaked 100MB every 1/10 of a second. I was glad that it wasn't because of my code. The bad thing was that fixing it was out of my control. The good thing was that there was a workaround that I could use. It was less than ideal, but it checked the amount of memory the simulation was using and when it used more than 1 GiB, the simulation restarted.
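The article does not show the watchdog's code, but the idea is simple enough to sketch in Python with the third-party psutil library. The command line and the restart policy below are our own guesses, not the lab's actual code:

```
# Restart a leaky child process whenever it crosses 1 GiB of resident memory.
import subprocess
import time

import psutil

LIMIT = 1024 ** 3                       # 1 GiB
CMD = ["node", "simulation.js"]         # hypothetical simulation command

while True:
    proc = subprocess.Popen(CMD)
    watched = psutil.Process(proc.pid)
    while proc.poll() is None:          # child is still running
        try:
            if watched.memory_info().rss > LIMIT:
                proc.kill()             # over the limit: kill it and restart
        except psutil.NoSuchProcess:
            break
        time.sleep(0.5)
    if proc.wait() == 0:                # clean exit: stop restarting
        break
```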
At the NASA lab, we use a lot of open source software, and my work there wouldn't have been possible without it.
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/3/image-processing-nasa
作者:[Lauren Egts][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/laurenegts
[1]: https://ocio.grc.nasa.gov/gvis/
[2]: http://www.nasa.gov/centers/glenn/home/index.html
[3]: http://physics.weber.edu/schroeder/fluids/
[4]: http://nodejs.org/
[5]: http://opencv.org/
[6]: https://github.com/peterbraden/node-opencv
[7]: https://github.com/peterbraden/node-opencv

View File

@ -1,3 +1,4 @@
Translating by yuba0604
Healthy Open Source
============================

View File

@ -0,0 +1,87 @@
A newcomer's guide to navigating OpenStack Infrastructure
===========================================================
New contributors to OpenStack are welcome, but having a road map for navigating within this maturing, fast-paced open source community doesn't hurt. At OpenStack Summit in Austin, [Paul Belanger][1] (Red Hat, Inc.), [Elizabeth K. Joseph][2] (HPE), and [Christopher Aedo][3] (IBM) will lead a session on [OpenStack Infrastructure for Beginners][4]. In this interview, they offer tips and resources to help onboard new OpenStack contributors.
![](https://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png)
**Your talk description says you'll be "diving into the heart of infrastructure and explain everything you need to know about the systems that keep OpenStack working." That's a tall order for a 40-minute time slot. What are the top things beginners should know about OpenStack infrastructure?**
**Elizabeth K. Joseph (EKJ)**: We don't use GitHub for OpenStack patches. This is something that trips up a lot of new contributors because we do maintain mirrors of all our repositories on GitHub for historical reasons. Instead we use a fully open source code review and continuous integration (CI) system maintained by the OpenStack Infrastructure team. Relatedly, since we run a CI system, every change proposed to OpenStack is tested before merging.
**Paul Belanger (PB)**: There are a lot of passionate people in the project, so don't get discouraged if your patch gets a -1.
**Christopher Aedo (CA)**: The community wants to help you succeed, don't be afraid to ask questions or ask for pointers to more information to improve your understanding.
### Which online resources would you recommend for beginners to fill in the holes for what you can't cover in your talk?
**PB**: Definitely our [OpenStack Project Infrastructure documentation][5]. A lot of effort has been made to keep it as up to date as possible. Every system used in running OpenStack as a project has a dedicated page, even the OpenStack cloud the Infrastructure team is bringing online.
**EKJ**: I'll echo what Paul said about the Infrastructure documentation, and add that we love seeing patches from folks who are learning. We often don't realize what we're missing in terms of documentation until someone asks. So read, learn, and then help us fill in the gaps. You can ask questions on the [openstack-infra mailing list][6] or in our IRC channel at #openstack-infra on Freenode.
**CA**: I love [this detailed post][7] about building images, by Ian Wienand.
### Which "gotchas" should new OpenStack contributors look out for?
**EKJ**: Contributing is not just about submitting new code and new features; the OpenStack community places a very high value on doing code reviews. If you want people to look at a patch you submitted, consider reviewing some of the work of others and providing clear and constructive feedback. The more your fellow contributors know about your work and see you doing reviews, the more likely you'll get your code reviewed in a timely manner.
**CA**: I see a lot of newcomers getting tripped up with [Gerrit][8]. Read through the [developer workflow][9] in the Developers Guide, and then maybe read through it one more time. If you're not used to Gerrit, it can seem confusing and overwhelming at first, but walking through a few code reviews usually makes it all come together. Also, I'm a big fan of IRC. It can be a great place to get help, but it's best if you can maintain a persistent presence so people can answer your questions even if you're not "there" at that particular moment. (Read [IRC, the secret to success in open source][10].) You don't need to be "always on," but the ability to easily scroll back in a channel and catch up on a conversation can be invaluable.
**PB**: I agree with both Elizabeth and Chris—Gerrit is what to look out for. It is going to be the hub of your development effort. Not only will you be submitting code for people to review, but you'll also be reviewing other contributors' code. Watch out for the Gerrit UI; it can be confusing at times. I'd recommend trying out [Gertty][11], which is a console-based interface to the Gerrit Code Review system, which happens to be a project driven by OpenStack Infrastructure.
### What resources do you recommend for beginners to help them network with other OpenStack contributors?
**PB**: For me, it was using IRC and joining the #openstack-infra channel on Freenode ([IRC logs][12]). There is a lot of fantastic information and people in that channel. You get to see the day-to-day operations of the OpenStack project, and once you know how the project works, you'll have a better understanding on how to contribute to its future.
**CA**: I want to second that note for IRC; staying on IRC throughout the day made a huge difference for me in terms of feeling informed and connected. It's also such a great way to get help when you're stuck with someone on one of the projects—the ones with active IRC channels always have someone around willing to get your issues sorted out.
**EKJ**: The [openstack-dev mailing list][13] is quite important for staying up to date with news about projects you're working on inside of OpenStack, so I recommend subscribing to that. The mailing list uses subject tags to separate projects, so you can instruct your email client to use those and focus on threads that impact projects you care about. Beyond online resources, many OpenStack groups have popped up all over the world that serve the needs of both users and contributors to OpenStack, and many of them routinely have talks and events with key OpenStack contributors. You can search on Meetup.com in your area, or search on [groups.openstack.org][14] to see if there is an OpenStack group in your area. Finally, there are the [OpenStack Summits][15], which happen every six months, and where we'll be giving our Infrastructure talk. In their current format, the summits consist of both a user conference and a developer conference in one space to talk about everything related to OpenStack, past, present, and future.
### In which areas does OpenStack need to improve to become more beginner-friendly?
**PB**: I think our [account-setup][16] process could be made easier for new contributors, especially the number of steps needed to submit your first patch. There is a large cost to enrolling in the OpenStack development model, which may be too much for some contributors; however, once enrolled, the model works fantastically well for developers.
**CA**: We have a very pro-developer community, but the focus is on developing OpenStack itself, with less consideration given to the users of OpenStack clouds. We need to bring in application developers and encourage more people to develop things that run beautifully on OpenStack clouds, and encourage them to share those apps in the [Community App Catalog][17]. We can do this by continuing to improve our API standards and by ensuring different libraries (like libcloud, phpopencloud, and others) continue to work reliably for developers. Oh, also by sponsoring more OpenStack hackathons! All these things can ease entry for newcomers, which will lead to them sticking around.
**EKJ**: I've worked on open source software for many years, but for a large number of OpenStack developers, this is the first open source project they've ever worked on. I've found that their proprietary software background doesn't prepare them for the open source ideals, methodologies, and collaboration techniques used in an open source project. I'd love to see us do a better job of welcoming people who have this proprietary software background and working with them so they can truly understand the value of what they're working on in the open source software community.
### I think 2016 is shaping up to be the Year of the Open Source Haiku. Explain OpenStack to beginners via Haiku.
**PB**: OpenStack runs clouds / If you enjoy free software / Submit your first patch
**CA**: In the near future / OpenStack will rule the world / Help make it happen!
**EKJ**: OpenStack is free / Deploy on your own servers / And run your own cloud!
*Paul, Elizabeth, and Christopher* will be [speaking at OpenStack Summit][18] in Austin on Monday, April 25, starting at 11:15am.
------------------------------------------------------------------------------
via: https://opensource.com/business/16/4/interview-openstack-infrastructure-beginners
作者:[linux.com][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://rikkiendsley.com/
[1]: https://twitter.com/pabelanger
[2]: https://twitter.com/pleia2
[3]: https://twitter.com/docaedo
[4]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337
[5]: http://docs.openstack.org/infra/system-config/
[6]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
[7]: https://www.technovelty.org/openstack/image-building-in-openstack-ci.html
[8]: https://code.google.com/p/gerrit/
[9]: http://docs.openstack.org/infra/manual/developers.html#development-workflow
[10]: https://developer.ibm.com/opentech/2015/12/20/irc-the-secret-to-success-in-open-source/
[11]: https://pypi.python.org/pypi/gertty
[12]: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/
[13]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[14]: https://groups.openstack.org/
[15]: https://www.openstack.org/summit/
[16]: http://docs.openstack.org/infra/manual/developers.html#account-setup
[17]: https://apps.openstack.org/
[18]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337

View File

@ -0,0 +1,206 @@
Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands
============================================================================================
Because of the changes in the LFCS exam requirements effective Feb. 2, 2016, we are adding the necessary topics to the [LFCS series][1] published here. To prepare for this exam, you are highly encouraged to use the [LFCE series][2] as well.
![](http://www.tecmint.com/wp-content/uploads/2016/03/Manage-LVM-and-Create-LVM-Partition-in-Linux.png)
>LFCS: Manage LVM and Create LVM Partition Part 11
One of the most important decisions while installing a Linux system is the amount of storage space to be allocated for system files, home directories, and others. If you make a mistake at that point, growing a partition that has run out of space can be burdensome and somewhat risky.
**Logical Volume Management** (also known as **LVM**), which has become a default for the installation of most (if not all) Linux distributions, has numerous advantages over traditional partitioning management. Perhaps the most distinguishing feature of LVM is that it allows logical divisions to be resized (reduced or increased) at will, without much hassle.
The structure of the LVM consists of:
* One or more entire hard disks or partitions are configured as physical volumes (PVs).
* A volume group (**VG**) is created using one or more physical volumes. You can think of a volume group as a single storage unit.
* Multiple logical volumes can then be created in a volume group. Each logical volume is somewhat equivalent to a traditional partition with the advantage that it can be resized at will as we mentioned earlier.
In this article we will use three disks of **8 GB** each (**/dev/sdb**, **/dev/sdc**, and **/dev/sdd**) to create three physical volumes. You can either create the PVs directly on top of each device, or partition them first.
Although we have chosen to go with the first method, if you decide to go with the second (as explained in [Part 4 Create Partitions and File Systems in Linux][3] of this series) make sure to configure each partition as type `8e`.
### Creating Physical Volumes, Volume Groups, and Logical Volumes
To create physical volumes on top of **/dev/sdb**, **/dev/sdc**, and **/dev/sdd**, do:
```
# pvcreate /dev/sdb /dev/sdc /dev/sdd
```
You can list the newly created PVs with:
```
# pvs
```
and get detailed information about each PV with:
```
# pvdisplay /dev/sdX
```
(where **X** is b, c, or d)
If you omit `/dev/sdX` as parameter, you will get information about all the PVs.
To create a volume group named `vg00` using `/dev/sdb` and `/dev/sdc` (we will save `/dev/sdd` for later to illustrate the possibility of adding other devices to expand storage capacity when needed):
```
# vgcreate vg00 /dev/sdb /dev/sdc
```
As was the case with physical volumes, you can also view information about this volume group by issuing:
```
# vgdisplay vg00
```
Since `vg00` is formed with two **8 GB** disks, it will appear as a single **16 GB** drive:
![](http://www.tecmint.com/wp-content/uploads/2016/03/List-LVM-Volume-Groups.png)
>List LVM Volume Groups
When it comes to creating logical volumes, the distribution of space must take into consideration both current and future needs. It is considered good practice to name each logical volume according to its intended use.
For example, let's create two LVs named `vol_projects` (**10 GB**) and `vol_backups` (remaining space), which we can use later to store project documentation and system backups, respectively.
The `-n` option is used to indicate a name for the LV, whereas `-L` sets a fixed size and `-l` (lowercase L) is used to indicate a percentage of the remaining space in the container VG.
```
# lvcreate -n vol_projects -L 10G vg00
# lvcreate -n vol_backups -l 100%FREE vg00
```
As before, you can view the list of LVs and basic information with:
```
# lvs
```
and detailed information with
```
# lvdisplay
```
To view information about a single **LV**, use **lvdisplay** with the **VG** and **LV** as parameters, as follows:
```
# lvdisplay vg00/vol_projects
```
![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Logical-Volume.png)
>List Logical Volume
In the image above we can see that the LVs were created as storage devices (refer to the LV Path line). Before each logical volume can be used, we need to create a filesystem on top of it.
We'll use ext4 as an example here since it allows us both to increase and reduce the size of each LV (as opposed to XFS, which only allows the size to be increased):
```
# mkfs.ext4 /dev/vg00/vol_projects
# mkfs.ext4 /dev/vg00/vol_backups
```
In the next section we will explain how to resize logical volumes and add extra physical storage space when the need arises to do so.
### Resizing Logical Volumes and Extending Volume Groups
Now picture the following scenario. You are starting to run out of space in `vol_backups`, while you have plenty of space available in `vol_projects`. Due to the nature of LVM, we can easily reduce the size of the latter (by, say, **2.5 GB**) and allocate it to the former, resizing each filesystem at the same time.
Fortunately, this is as easy as doing:
```
# lvreduce -L -2.5G -r /dev/vg00/vol_projects
# lvextend -l +100%FREE -r /dev/vg00/vol_backups
```
![](http://www.tecmint.com/wp-content/uploads/2016/03/Resize-Reduce-Logical-Volume-and-Volume-Group.png)
>Resize Reduce Logical Volume and Volume Group
It is important to include the minus `(-)` or plus `(+)` signs while resizing a logical volume. Otherwise, you're setting a fixed size for the LV instead of resizing it.
It can happen that you arrive at a point when resizing logical volumes cannot solve your storage needs anymore and you need to buy an extra storage device. Keeping it simple, you will need another disk. We are going to simulate this situation by adding the remaining PV from our initial setup (`/dev/sdd`).
To add `/dev/sdd` to `vg00`, do
```
# vgextend vg00 /dev/sdd
```
If you run `vgdisplay vg00` before and after the previous command, you will see the increase in the size of the VG:
```
# vgdisplay vg00
```
![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Volume-Group-Size.png)
>Check Volume Group Disk Size
Now you can use the newly added space to resize the existing LVs according to your needs, or to create additional ones as needed.
### Mounting Logical Volumes on Boot and on Demand
Of course there would be no point in creating logical volumes if we are not going to actually use them! To better identify a logical volume, we will need to find out its `UUID` (a non-changing attribute that uniquely identifies a formatted storage device).
To do that, use blkid followed by the path to each device:
```
# blkid /dev/vg00/vol_projects
# blkid /dev/vg00/vol_backups
```
![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-Logical-Volume-UUID.png)
>Find Logical Volume UUID
Create mount points for each LV:
```
# mkdir /home/projects
# mkdir /home/backups
```
and insert the corresponding entries in `/etc/fstab` (make sure to use the UUIDs obtained before):
```
UUID=b85df913-580f-461c-844f-546d8cde4646 /home/projects ext4 defaults 0 0
UUID=e1929239-5087-44b1-9396-53e09db6eb9e /home/backups ext4 defaults 0 0
```
Then save the changes and mount the LVs:
```
# mount -a
# mount | grep home
```
When it comes to actually using the LVs, you will need to assign proper `ugo+rwx` permissions as explained in [Part 8 Manage Users and Groups in Linux][4] of this series.
### Summary
In this article we have introduced [Logical Volume Management][5], a versatile tool to manage storage devices that provides scalability. When combined with RAID (which we explained in [Part 6 Create and Manage RAID in Linux][6] of this series), you can enjoy not only scalability (provided by LVM) but also redundancy (offered by RAID).
In this type of setup, you will typically find `LVM` on top of `RAID`, that is, configure RAID first and then configure LVM on top of it.
If you have questions about this article, or suggestions to improve it, feel free to reach us using the comment form below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/gacanepa/
[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/
[3]: http://www.tecmint.com/create-partitions-and-filesystems-in-linux/
[4]: http://www.tecmint.com/manage-users-and-groups-in-linux/
[5]: http://www.tecmint.com/create-lvm-storage-in-linux/
[6]: http://www.tecmint.com/creating-and-managing-raid-backups-in-linux/

View File

@ -0,0 +1,273 @@
Part 14 - Monitor Linux Processes Resource Usage and Set Process Limits on a Per-User Basis
=============================================================================================
Because of the changes in the LFCS exam requirements effective Feb. 2, 2016, we are adding the necessary topics to the [LFCS series][1] published here. To prepare for this exam, you are highly encouraged to use the [LFCE series][2] as well.
![](http://www.tecmint.com/wp-content/uploads/2016/03/Linux-Process-Monitoring-Set-Process-Limits-Per-User.png)
>Monitor Linux Processes and Set Process Limits Per User Part 14
Every Linux system administrator needs to know how to verify the integrity and availability of hardware, resources, and key processes. In addition, setting resource limits on a per-user basis must also be part of his or her skill set.
In this article we will explore a few ways to ensure that the system, both hardware and software, is behaving correctly, in order to avoid potential issues that may cause unexpected production downtime and loss of money.
### Reporting Processor Statistics in Linux
With **mpstat** you can view the activities for each processor individually or the system as a whole, both as a one-time snapshot or dynamically.
In order to use this tool, you will need to install **sysstat**:
```
# yum update && yum install sysstat [On CentOS based systems]
# aptitude update && aptitude install sysstat [On Ubuntu based systems]
# zypper update && zypper install sysstat [On openSUSE systems]
```
Read more about sysstat and its utilities at [Learn Sysstat and Its Utilities mpstat, pidstat, iostat and sar in Linux][3]
Once you have installed **mpstat**, use it to generate reports of processors statistics.
To display **3** global reports of CPU utilization (`-u`) for all CPUs (as indicated by `-P` ALL) at a 2-second interval, do:
```
# mpstat -P ALL -u 2 3
```
### Sample Output
```
Linux 3.19.0-32-generic (tecmint.com) Wednesday 30 March 2016 _x86_64_ (4 CPU)
11:41:07 IST CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
11:41:09 IST all 5.85 0.00 1.12 0.12 0.00 0.00 0.00 0.00 0.00 92.91
11:41:09 IST 0 4.48 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 94.53
11:41:09 IST 1 2.50 0.00 0.50 0.00 0.00 0.00 0.00 0.00 0.00 97.00
11:41:09 IST 2 6.44 0.00 0.99 0.00 0.00 0.00 0.00 0.00 0.00 92.57
11:41:09 IST 3 10.45 0.00 1.99 0.00 0.00 0.00 0.00 0.00 0.00 87.56
11:41:09 IST CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
11:41:11 IST all 11.60 0.12 1.12 0.50 0.00 0.00 0.00 0.00 0.00 86.66
11:41:11 IST 0 10.50 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 88.50
11:41:11 IST 1 14.36 0.00 1.49 2.48 0.00 0.00 0.00 0.00 0.00 81.68
11:41:11 IST 2 2.00 0.50 1.00 0.00 0.00 0.00 0.00 0.00 0.00 96.50
11:41:11 IST 3 19.40 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 79.60
11:41:11 IST CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
11:41:13 IST all 5.69 0.00 1.24 0.00 0.00 0.00 0.00 0.00 0.00 93.07
11:41:13 IST 0 2.97 0.00 1.49 0.00 0.00 0.00 0.00 0.00 0.00 95.54
11:41:13 IST 1 10.78 0.00 1.47 0.00 0.00 0.00 0.00 0.00 0.00 87.75
11:41:13 IST 2 2.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 97.00
11:41:13 IST 3 6.93 0.00 0.50 0.00 0.00 0.00 0.00 0.00 0.00 92.57
Average: CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
Average: all 7.71 0.04 1.16 0.21 0.00 0.00 0.00 0.00 0.00 90.89
Average: 0 5.97 0.00 1.16 0.00 0.00 0.00 0.00 0.00 0.00 92.87
Average: 1 9.24 0.00 1.16 0.83 0.00 0.00 0.00 0.00 0.00 88.78
Average: 2 3.49 0.17 1.00 0.00 0.00 0.00 0.00 0.00 0.00 95.35
Average: 3 12.25 0.00 1.16 0.00 0.00 0.00 0.00 0.00 0.00 86.59
```
To view the same statistics for a specific **CPU** (**CPU 0** in the following example), use:
```
# mpstat -P 0 -u 2 3
```
### Sample Output
```
Linux 3.19.0-32-generic (tecmint.com) Wednesday 30 March 2016 _x86_64_ (4 CPU)
11:42:08 IST CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
11:42:10 IST 0 3.00 0.00 0.50 0.00 0.00 0.00 0.00 0.00 0.00 96.50
11:42:12 IST 0 4.08 0.00 0.00 2.55 0.00 0.00 0.00 0.00 0.00 93.37
11:42:14 IST 0 9.74 0.00 0.51 0.00 0.00 0.00 0.00 0.00 0.00 89.74
Average: 0 5.58 0.00 0.34 0.85 0.00 0.00 0.00 0.00 0.00 93.23
```
The output of the above commands shows these columns:
* `CPU`: Processor number as an integer, or the word all as an average for all processors.
* `%usr`: Percentage of CPU utilization while running user level applications.
* `%nice`: Same as `%usr`, but with nice priority.
* `%sys`: Percentage of CPU utilization that occurred while executing kernel applications. This does not include time spent dealing with interrupts or handling hardware.
* `%iowait`: Percentage of time when the given CPU (or all) was idle, during which there was a resource-intensive I/O operation scheduled on that CPU. A more detailed explanation (with examples) can be found [here][4].
* `%irq`: Percentage of time spent servicing hardware interrupts.
* `%soft`: Same as `%irq`, but with software interrupts.
* `%steal`: Percentage of time spent in involuntary wait (steal or stolen time) when a virtual machine, as guest, is “winning” the hypervisor's attention while competing for the CPU(s). This value should be kept as small as possible. A high value in this field means the virtual machine is stalling or soon will be.
* `%guest`: Percentage of time spent running a virtual processor.
* `%idle`: Percentage of time when the CPU(s) were not executing any tasks. If you observe a low value in this column, that is an indication that the system is under a heavy load. In that case, you will need to take a closer look at the process list, as we will discuss in a minute, to determine what is causing it.
To place the processor under a somewhat high load, run the following commands and then execute mpstat (as indicated) in a separate terminal:
```
# dd if=/dev/zero of=test.iso bs=1G count=1
# mpstat -u -P 0 2 3
# ping -f localhost # Interrupt with Ctrl + C after mpstat below completes
# mpstat -u -P 0 2 3
```
Finally, compare the results to the output of **mpstat** under “normal” circumstances:
![](http://www.tecmint.com/wp-content/uploads/2016/03/Report-Processors-Related-Statistics.png)
>Report Linux Processors Related Statistics
As you can see in the image above, **CPU 0** was under a heavy load during the first two examples, as indicated by the `%idle` column.
In the next section we will discuss how to identify these resource-hungry processes, how to obtain more information about them, and how to take appropriate action.
### Reporting Linux Processes
To list processes sorting them by CPU usage, we will use the well known `ps` command with the `-eo` (to select all processes with user-defined format) and `--sort` (to specify a custom sorting order) options, like so:
```
# ps -eo pid,ppid,cmd,%cpu,%mem --sort=-%cpu
```
The above command will only show `PID`, `PPID`, the command associated with the process, and the percentages of CPU and RAM usage, sorted by CPU usage in descending order. When executed during the creation of the .iso file, here are the first few lines of the output:
![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-Linux-Processes-By-CPU-Usage.png)
>Find Linux Processes By CPU Usage
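If you ever want the same report from a script rather than from the shell, here is a minimal sketch using Python and the third-party psutil library; it is our own illustration and not part of the LFCS toolset:

```
# Top 10 processes by CPU usage, similar to the ps report above.
import time

import psutil

# Prime the per-process CPU counters: the first cpu_percent() call
# always returns 0.0, so sample once, wait, then read again.
for p in psutil.process_iter():
    try:
        p.cpu_percent(None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

time.sleep(1)                           # measurement window

procs = []
for p in psutil.process_iter():
    try:
        procs.append((p.cpu_percent(None), p.memory_percent(), p.pid, p.name()))
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

print("  %CPU  %MEM    PID COMMAND")
for cpu, mem, pid, name in sorted(procs, reverse=True)[:10]:
    print("%6.1f %5.1f %6d %s" % (cpu, mem, pid, name))
```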
Once we have identified a process of interest (such as the one with `PID=2822`), we can navigate to `/proc/PID` (`/proc/2822` in this case) and do a directory listing.
This directory is where several files and subdirectories with detailed information about this particular process are kept while it is running.
#### For example:
* `/proc/2822/io` contains IO statistics for the process (number of characters and bytes read and written, among others, during IO operations).
* `/proc/2822/attr/current` shows the current SELinux security attributes of the process.
* `/proc/2822/cgroup` describes the control groups (cgroups for short) to which the process belongs if the CONFIG_CGROUPS kernel configuration option is enabled, which you can verify with:
```
# cat /boot/config-$(uname -r) | grep -i cgroups
```
If the option is enabled, you should see:
```
CONFIG_CGROUPS=y
```
Using `cgroups` you can manage the amount of allowed resource usage on a per-process basis as explained in Chapters 1 through 4 of the [Red Hat Enterprise Linux 7 Resource Management guide][5], in Chapter 9 of the [openSUSE System Analysis and Tuning guide][6], and in the [Control Groups section of the Ubuntu 14.04 Server documentation][7].
`/proc/2822/fd` is a directory that contains one symbolic link for each file descriptor the process has opened. The following image shows this information for the process that was started in tty1 (the first terminal) to create the **.iso** image:
![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-Linux-Process-Information.png)
>Find Linux Process Information
The above image shows that **stdin** (file descriptor **0**), **stdout** (file descriptor **1**), and **stderr** (file descriptor **2**) are mapped to **/dev/zero**, **/root/test.iso**, and **/dev/tty1**, respectively.
More information about `/proc` can be found in “The `/proc` filesystem” document kept and maintained by Kernel.org, and in the [Linux Programmer's Manual][8].
### Setting Resource Limits on a Per-User Basis in Linux
If you are not careful and allow any user to run an unlimited number of processes, you may eventually experience an unexpected system shutdown or get locked out as the system enters an unusable state. To prevent this from happening, you should place a limit on the number of processes users can start.
To do this, edit **/etc/security/limits.conf** and add the following line at the bottom of the file to set the limit:
```
* hard nproc 10
```
The first field can be used to indicate either a user, a group, or all of them `(*)`, whereas the second field enforces a hard limit on the number of processes (`nproc`) at **10**. To apply the changes, logging out and back in is enough.
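After logging back in, you can confirm that the limit is in effect with `ulimit -u`, or programmatically; below is a minimal Python check using the standard resource module:

```
# Print this session's limit on the number of user processes (nproc).
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("nproc limits -> soft: %s, hard: %s" % (soft, hard))  # expect 10 and 10
```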
Thus, let's see what happens if a certain user other than root (either a legitimate one or not) attempts to start a shell fork bomb. Had we not implemented limits, this would initially launch two instances of a function and then duplicate each of them in a never-ending loop, eventually bringing your system to a crawl.
However, with the above restriction in place, the fork bomb does not succeed, but the user will still get locked out until the system administrator kills the processes associated with it:
![](http://www.tecmint.com/wp-content/uploads/2016/03/Shell-Fork-Bomb.png)
>Run Shell Fork Bomb
**TIP**: Other restrictions made possible by **ulimit** are documented in the `limits.conf` file.
### Other Linux Process Management Tools
In addition to the tools discussed previously, a system administrator may also need to:
**a)** Modify the execution priority (use of system resources) of a process using **renice**. This means that the kernel will allocate more or less system resources to the process based on the assigned priority (a number commonly known as “**niceness**” in a range from `-20` to `19`).
The lower the value, the greater the execution priority. Regular users (other than root) can only modify the niceness of processes they own to a higher value (meaning a lower execution priority), whereas root can modify this value for any process, and may increase or decrease it.
The basic syntax of renice is as follows:
```
# renice [-n] <new priority> <UID, GID, PGID, or empty> identifier
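# renice -n 5 2822 # e.g. set the niceness of PID 2822 (the process examined earlier) to 5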
```
If no flag is given after the new priority value, the identifier is treated as a PID by default. In that case, the niceness of the process with **PID=identifier** is set to `<new priority>`.
**b)** Interrupt the normal execution of a process when needed. This is commonly known as [“killing” the process][9]. Under the hood, this means sending the process a signal to finish its execution properly and release any used resources in an orderly manner.
To [kill a process][10], use the **kill** command as follows:
```
# kill PID
```
Alternatively, you can use [pkill to terminate all processes][11] of a given owner `(-u)`, or a group owner `(-G)`, or even those processes which have a PPID in common `(-P)`. These options may be followed by the numeric representation or the actual name as identifier:
```
# pkill [options] identifier
```
For example,
```
# pkill -G 1000
```
will kill all processes owned by group with `GID=1000`.
And,
```
# pkill -P 4993
```
will kill all processes whose PPID is `4993`.
Before running `pkill`, it is a good idea to test the results with `pgrep` first, perhaps using the `-l` option as well to list the processes' names. It takes the same options, but only returns the PIDs of the processes (without taking any further action) that would be killed if `pkill` were used.
```
# pgrep -l -u gacanepa
```
This is illustrated in the next image:
![](http://www.tecmint.com/wp-content/uploads/2016/03/List-User-Running-Processes.png)
>Find User Running Processes in Linux
### Summary
In this article we have explored a few ways to monitor resource usage in order to verify the integrity and availability of critical hardware and software components in a Linux system.
We have also learned how to take appropriate action (either by adjusting the execution priority of a given process or by terminating it) under unusual circumstances.
We hope the concepts explained in this tutorial have been helpful. If you have any questions or comments, feel free to reach us using the contact form below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/gacanepa/
[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/
[3]: http://www.tecmint.com/sysstat-commands-to-monitor-linux/
[4]: http://veithen.github.io/2013/11/18/iowait-linux.html
[5]: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Resource_Management_Guide/index.html
[6]: https://doc.opensuse.org/documentation/leap/tuning/html/book.sle.tuning/cha.tuning.cgroups.html
[7]: https://help.ubuntu.com/lts/serverguide/cgroups.html
[8]: http://man7.org/linux/man-pages/man5/proc.5.html
[9]: http://www.tecmint.com/kill-processes-unresponsive-programs-in-ubuntu/
[10]: http://www.tecmint.com/find-and-kill-running-processes-pid-in-linux/
[11]: http://www.tecmint.com/how-to-kill-a-process-in-linux/

View File

@ -1,4 +1,3 @@
Translating by WingCuengRay
Part 7 - LFCS: Managing System Startup Process and Services (SysVinit, Systemd and Upstart)
================================================================================
A couple of months ago, the Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, an exciting new program whose aim is allowing individuals from all ends of the world to get certified in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-hand problem-finding and analysis, plus the ability to decide when to raise issues to engineering teams.

View File

@ -0,0 +1,31 @@
Linux Kernel 3.12 Support Extended Until 2017, Thanks to SUSE Linux Enterprise 12
==================================================================================
>Linux kernel developer Jiri Slaby has announced the 58th long-term support maintenance release of the Linux 3.12 kernel series, along with some changes to its end-of-life status.
Linux kernel 3.12.58 LTS (long-term support) is now available, and GNU/Linux operating systems running a kernel from this series should upgrade as soon as possible. The good news is that support for the Linux 3.12 branch has been extended by a year, until 2017, because SUSE Linux Enterprise (SLE) 12 Service Pack 1 is based on it.
"Since SLE12-SP1 is based on 3.12 and its lifetime lasts until 2017, the end of life of 3.12 has been changed to 2017 as well," Jiri Slaby said in a [patch announcement][1] posted to the Linux kernel mailing list, where he also published the end-of-life (EOL) dates for other kernel branches: August 2016 for Linux 3.14, January 2017 for Linux 3.18, and September 2017 for Linux 4.1.
### Changes in Linux kernel 3.12.58 LTS
If you are wondering what changed in Linux kernel 3.12.58 LTS, we can tell you that, according to the [attached shortlog][2], this is a healthy update that touches 114 files, with 835 insertions and 298 deletions. Among the changes, we noticed mostly sound and networking stack improvements, along with many updated drivers.
The update also brings multiple improvements for the x86 hardware architecture, as well as some small fixes for Xtensa and s390. Filesystems such as NFS, OCFS2, Btrfs, JBD2, and XFS received various fixes, and among the updated drivers we can mention USB, Xen, MD, MTD, media, SCSI, TTY, networking, ATA, Bluetooth, hwmon, EDAC, and CPUFreq.
As usual, you can [download the sources of Linux kernel 3.12.58 LTS][3] right now from our website (linux.com) or directly from kernel.org. If your GNU/Linux operating system runs a kernel from the Linux 3.12 series, please upgrade as soon as possible.
------------------------------------------------------------------------------
via: http://www.linux.com/news/featured-blogs/191-linux-training/834644-7-steps-to-start-your-linux-sysadmin-career
作者:[Marius Nestor][a]
译者:[alim0x](https://github.com/alim0x)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/marius-nestor
[1]: http://lkml.iu.edu/hypermail/linux/kernel/1604.1/00792.html
[2]: http://www.spinics.net/lists/stable/msg128634.html
[3]: http://linux.softpedia.com/get/System/Operating-Systems/Kernels/Linux-Kernel-Stable-1960.shtml

View File

@ -1,77 +0,0 @@
Microsoft and Linux: True Romance or Toxic Love?
================================================================================
Every now and then you read a news story that makes you choke on your coffee or spray hot latte all over your monitor. Microsoft's recently declared love for Linux is exactly that kind of story.
By conventional wisdom, Microsoft and the free and open source software (FOSS) movement are eternal enemies. In the eyes of many, Microsoft embodies the kind of excessive greed that the FOSS movement rejects. On top of that, Microsoft has in the past labeled the FOSS community "a band of robbers."
It is easy to understand why Microsoft has always been afraid of a free operating system. A free OS, combined with open source applications that challenge Microsoft's core product line, threatens Microsoft's control of the desktop and laptop markets.
Despite Microsoft's worries about its desktop dominance, it is the web server market where Linux has the greatest clout. Today, most servers run Linux, including the servers of the world's busiest sites. To Microsoft, the sight of so much unclaimed licensing revenue must be quite painful.
Handheld devices are another area where Microsoft has lost to free software. At one time, Microsoft's Windows CE and Pocket PC operating systems were at the forefront of mobile computing, and Windows PDA devices were the shiniest and most luxurious products around. But that all ended when Apple released the iPhone. Since then Android has moved into the public eye, while Windows mobile products have been ignored and forgotten. And the Android platform is built on free and open source components.
Thanks to the openness of the platform, Android's market share has expanded rapidly. Unlike iOS, any phone manufacturer can release an Android handset, and unlike Windows Phone, there are no licensing fees. That is good for consumers, and it has led to the emergence of many powerful yet inexpensive phone makers around the world. It is a very clear demonstration of the value of FOSS.
Losing the server and mobile computing races has been a painful defeat for Microsoft. Considering the combined size of those two markets, the desktop market looks like a stagnant backwater. Nobody likes losing, especially where money is involved, and Microsoft does indeed have a lot to lose. You might expect Microsoft to be bitter about it. And in the past, it was.
Microsoft has fought back against Linux and FOSS with every weapon at its disposal, from propaganda to patent threats. Although those attacks have certainly slowed the adoption of Linux, they have never stopped it.
So forgive us for looking shocked when Microsoft shows up at open source conferences and major events with "Microsoft Loves Linux" T-shirts and badges. Can it be true? Does Microsoft really love Linux?
Of course, PR slogans and free T-shirts do not equal truth. Actions speak louder than words. And when you consider Microsoft's actions, its position becomes somewhat ambiguous.
On the one hand, Microsoft has hired several hundred Linux developers and system administrators. It has released the .NET Core framework as an open source project with cross-platform support (so .NET can run on OS X and Linux). And it has partnered with Linux companies to offer the most popular distributions on its Azure platform. In fact, Microsoft has gone so far as to develop its own Linux distribution for its Azure data centers.
On the other hand, Microsoft continues to attack open source projects directly, through legal action or via proxy companies. Clearly, Microsoft has had no heartfelt moral change in its battle over software ownership with free software. So why declare its love for Linux publicly?
One fact is obvious: Microsoft is a business. It is an investment vehicle for its shareholders and a source of income for its employees. Everything Microsoft does serves a single ultimate goal: profit. Microsoft does not act out of love or hate (although the latter is one of the most common accusations).
So the question should not be "does Microsoft really love Linux?" Instead, we should ask how Microsoft profits from all of this.
Take the open source release of the .NET Core framework as an example. This move makes it easy to bring the .NET runtime to any platform, extending the reach of Microsoft's .NET framework far beyond Windows.
Opening up the .NET Core package ultimately makes it possible for .NET developers to build cross-platform apps for OS X, Linux, and even Android, all from the same core code base.
From a developer's point of view, this makes the .NET framework more attractive than ever before. Being able to reach multiple platforms from a single code base dramatically expands the potential target market for any app developed with the framework.
In addition, a strong open source community provides developers with code they can reuse in their own projects. So the availability of open source projects will also work in the .NET framework's favor.
Going further, opening up the .NET core code reduces the fragmentation that comes with spanning different platforms, which means a wider choice of apps for consumers, both open source and proprietary.
From Microsoft's point of view, it gains an army of developers. Microsoft can profit by selling training, certification, technical support, and developer tools, including Visual Studio and application extensions.
What we should ask ourselves is whether this is good or bad for the free software community.
Widespread adoption of the .NET framework could mean the death of competing open source projects, forcing all of us to march to Microsoft's tune.
.NET aside, Microsoft is putting a great deal of effort into Linux support on its Azure cloud computing platform. Remember, Azure was originally Windows Azure, and Windows Server was the only operating system it supported. Today, Azure offers support for a number of Linux distributions as well.
There is one reason for that: paying customers who need or want Linux services. If Microsoft did not offer Linux virtual machines, those customers would do business with someone else.
It looks like Microsoft has woken up to the reality that "Linux is here to stay." Microsoft cannot truly eliminate it, so it has to embrace it.
That brings us back to the question of why there is so much buzz about Microsoft and Linux. We are talking about it because Microsoft wants us to. After all, all of this talk traces back to Microsoft, whether in press releases, blog posts, or public statements at conferences. Microsoft is working hard to draw attention to its Linux expertise.
What other motive could lie behind chief architect Kamala Subramaniam's blog post announcing Azure Cloud Switch? ACS is a custom Linux distribution that Microsoft uses to automate the configuration of the switch hardware in its Azure data centers.
ACS is not publicly available. It is meant for internal Azure use, and it is unlikely that anyone would find another use for the distribution anyway. In fact, Subramaniam made the same point in her post.
So Microsoft will not profit from selling ACS, nor will it gain a user base by giving it away. Instead, by putting effort into Linux and Azure, Microsoft strengthens its position as a Linux cloud computing platform.
Is Microsoft's recent crush on Linux good news for the community?
We should not be quick to forget Microsoft's old "embrace, extend, extinguish" curse. Right now Microsoft is in the early stages of embracing Linux. Will Microsoft splinter the community with custom extensions and proprietary standards?
Leave a comment and let us know what you think.
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/microsoft-and-linux-true-romance-or-toxic-love-0
作者:[James Darvell][a]
译者:[sonofelice](https://github.com/sonofelice)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxjournal.com/users/james-darvell

View File

@ -0,0 +1,57 @@
The History of Android
================================================================================
![The Play Store, redesigned yet again! This version comes very close to the current design, and the card structure makes layout changes a breeze.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/get-em-Kirill.jpg)
The Play Store, redesigned yet again! This version comes very close to the current design, and the card structure makes layout changes a breeze.
Photo by Ron Amadeo
### Out-of-cycle updates: who needs a new OS? ###
Between Android 4.2 and Android 4.3, Google ran an out-of-cycle update that showed just how much of Android could be improved without a laborious OTA update. Thanks to the [Google Play Store and Play Services][1], these updates could be delivered without updating any core system components.
In April 2013, Google released a major redesign of the Google Play Store. Like most of the redesigns that followed it, the new Play Store fully embraced the Google Now aesthetic: white cards on a gray background. The action bar changed color depending on the current page's content, and since the first screen was dominated by the store's sections, the action bar color was a neutral gray. The buttons for navigating to the content sections pointed to things like top paid apps, and below them there was usually a block of promotional content or a group of recommended apps.
![The individual content sections got lovely colors.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/content-rainbow.jpg)
The individual content sections got lovely colors.
Photo by Ron Amadeo
The new Play Store showed off the real power of Google's card design language: responsive layouts on every screen size. One large card could be combined with several smaller cards, larger devices could show more cards, and instead of stretching to fit landscape mode, more cards could simply be shown in a row. The Play Store's content editors were also free to play with the card layout; a big release that deserved attention could get a bigger card. This design would eventually trickle down to the other Google Play content apps, finally resulting in a unified design.
![Hangouts replaced Google Talk and is still developed by the Google+ team today.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/talkvhangouts2.jpg)
Hangouts replaced Google Talk and is still developed by the Google+ team today.
Photo by Ron Amadeo
Google I/O, Google's annual developer conference, usually sees the announcement of a new Android version. But at the 2013 edition, Google only announced a set of improvements, with no system update.
One of the big things Google did announce was an update to Google Talk, Google's instant messaging platform. For a long time, Google had shipped four text messaging apps with Android: Google Talk, Google+ Messenger, Messaging (the SMS app), and Google Voice. Having four apps for the same task of sending someone a text message was confusing for users. At I/O, Google killed off Google Talk and started its messaging efforts over from scratch with the brand-new [Google Hangouts][2]. Although it initially only replaced Google Talk, the plan for Hangouts was to unify all of Google's disparate messaging apps under a single interface.
The layout of the Hangouts UI really wasn't that different from Google Talk. The main page held your chat conversations, and tapping one opened the chat page. The design was updated: the chat page now used a card style for each paragraph, and the chat list was a "drawer"-style interface, meaning you could open it with a horizontal swipe. Hangouts brought read receipts and typing status indicators, and group chat was now a major feature.
Google+ was the center of Hangouts, so much so that the product's full name was actually "Google+ Hangouts." Hangouts was fully integrated into the Google+ desktop site. Identities and avatars were pulled straight from Google+, and tapping an avatar opened that user's Google+ profile. Much as had happened when the browser was handed over to Google Chrome, a core Android function was handed to a separate team, the Google+ team, instead of leaving it as a side product of busy Android engineers. With the Google+ team taking over, Android's main instant messaging client became an app under continuous development. It was put in the Play Store and received updates at a steady pace.
![The new navigation drawer interface.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/navigation_drawer_overview1.png)
The new navigation drawer interface.
Picture from [developer.android.com][3]
Google also introduced a new design element for the action bar: the navigation drawer. The drawer appeared as three horizontal lines next to the app icon in the top-left corner. Tapping it, or swiping right from the left edge of the screen, brought up a side menu. As the name implies, this was used for in-app navigation, showing several top-level locations within the app. It allowed the first screen of the app to be used for content, and it gave users a consistent, easily accessible navigation element. The navigation drawer was basically a giant menu that could scroll and was docked to the left side of the screen.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/23/
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://arstechnica.com/gadgets/2013/09/balky-carriers-and-slow-oems-step-aside-google-is-defragging-android/
[2]:http://arstechnica.com/information-technology/2013/05/hands-on-with-hangouts-googles-new-text-and-video-chat-architecture/
[3]:https://developer.android.com/design/patterns/navigation-drawer.html
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -0,0 +1,83 @@
安卓编年史
================================================================================
![漂亮的新 Google Play Music 应用,从电子风格转向完美契合 Play 商店的风格。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/Goooogleplaymusic.jpg)
漂亮的新 Google Play Music 应用,从电子风格转向完美契合 Play 商店的风格。
Ron Amadeo 供图
在 I/O 大会推出的另一个应用更新是 Google Music 应用。音乐应用经过了完全的重新设计最终摆脱了蜂巢中引入的蓝底蓝色调的设计。Play Music 的设计和几个月前发布的 Play 商店一致有着响应式的白色卡片布局。Music 同时还是最早采用新抽屉导航样式的主要应用之一。谷歌还随新应用发布了 Google Play Music All Access每月 10 美元的包月音乐订阅服务。Google Music 现在拥有订阅计划音乐购买以及云端音乐存储空间。这个版本还引入了“Instant Mix”谷歌会在云端给相似的歌曲计算出一份歌单。
![一个展示对 Google Play Games 支持的游戏。上面是 Play 商店游戏特性描述登陆游戏触发的权限对话框Play Games 通知,以及成就界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/gooooogleplaygames.jpg)
一个展示对 Google Play Games 支持的游戏。上面是 Play 商店游戏特性描述登陆游戏触发的权限对话框Play Games 通知,以及成就界面。
Ron Amadeo 供图
谷歌还引入了“Google Play Games”一个后端服务开发者可以将其附加到游戏中。这项服务简单说就是安卓版的 Xbox Live 或苹果的 Game Center。开发者可以给游戏添加 Play Games 支持,这样就能通过使用谷歌的后端服务,更简单地集成成就,多人游戏,游戏匹配,用户账户以及云端存档到游戏中。
Play Games 是谷歌在游戏方面推进的开始。就像单独的 GPS 设备、翻盖手机以及 MP3 播放器那样,智能手机的生产者希望专用游戏设备也能变成智能手机的一个功能点。当你有部智能手机的时候,你为什么还要买个任天堂 DS 或 PS Vita 呢?一个易于使用的多人游戏服务是这项计划的重要部分,这个决定最后的成果我们还有待观察。在今天,坊间都在传言谷歌和苹果有关于客厅游戏设备的计划。
![Google Keep谷歌自 Google Notebook 以来第一个笔记服务。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/goooglekeep.jpg)
Google Keep谷歌自 Google Notebook 以来第一个笔记服务。
Ron Amadeo 供图
毫无疑问,一些产品是为了赶上 Google I/O 大会的发布而准时开发完成的,[但长达三个半小时的主题演讲][1]内容已经够多了,一些产品没能在大会上发布。Google I/O 大会结束三天后,谷歌带来了 Google Keep一个用于安卓和在线的笔记应用。Keep 看起来很简单,就是一个用上了响应式 Google-Now 风格设计的笔记应用。用户可以改变卡片的尺寸,从多栏布局改为单列视图。笔记可以由文本,清单,自动转文本的语音或者图片组成。笔记卡片可以拖动并在主界面重新组织,你甚至可以给笔记换个颜色。
![Gmail 4.5,换上了新的导航抽屉设计,去掉了几个按钮并将操作栏合并到了抽屉里。](http://cdn.arstechnica.net/wp-content/uploads/2014/05/gmail.png)
Gmail 4.5,换上了新的导航抽屉设计,去掉了几个按钮并将操作栏合并到了抽屉里。
Ron Amadeo 供图
I/O 大会之后,谷歌的周期外更新并没有停止。2013 年 6 月,谷歌发布了新版设计的 Gmail。最显眼的变化就是一个月前 Google I/O 大会引入的新导航抽屉界面。另一个吸引眼球的变化是用上了 Google+ 资料图片来取代复选框。虽然复选框看起来被去掉了,它们其实还在那,点击邮件左边的图片就是了。
![新谷歌地图,换上了全白的 Google-Now 风格主题。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/newmaps11.png)
新谷歌地图,换上了全白的 Google-Now 风格主题。
Ron Amadeo 供图
一个月后,谷歌在 Play 商店发布了全新的谷歌地图。这是谷歌地图自冰淇淋三明治以来第一个经过细致地重新设计的版本。新版本完全适配了 Google Now 白色卡片审美,还大大减少了屏幕上显示的元素。新版谷歌地图似乎设计时有意使地图总是显示在屏幕上,你很难找到除了设置页面之外还能完全覆盖地图显示的选项。
这个版本的谷歌地图看起来活在它自己的小小设计世界中。白色的搜索栏“浮动”在地图之上,地图显示部分在它旁边和上面都有。这和传统的操作栏设计有所不同。一般在应用左上角的导航抽屉,在这里是在左下角。这里的主界面没有向上按钮,应用图标,也没有浮动按钮。
![新谷歌地图轻量化了许多,在一屏内能够显示更多的信息。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/newmaps21.png)
新谷歌地图轻量化了许多,在一屏内能够显示更多的信息。
Ron Amadeo 供图
左边的图片显示的是点击了搜索栏后的效果(带键盘,这里关闭了)。过去谷歌在空搜索栏下面显示一个空页面,但在地图中,谷歌利用这些空间链接到新的“本地”页面。搜索结果页显示一般信息的结果,比如餐馆,加油站,以及景点。在结果页的底部是个列表,显示你的搜索历史和手动缓存部分地图的选项。
右侧图片显示的是地点页面。上面地图 7.0 的截图里显示的地图不是缩略图,它是完整的地图视图。在新版的谷歌地图中,地点作为卡片浮动显示在主地图之上,地图重新居中显示该地点。向上滑动可以让卡片覆盖地图,向下滑动可以显示带有底部一小条结果的完整地图。如果该地点是搜索结果列表中的一个,左右滑动可以在结果之间切换。
地点页面重新设计以显示更有用的信息概览。在第一页,新版添加了重要信息,比如地图上的位置,点评得分,以及点评数目。因为这是个手机,所以软件内可以直接拨打电话,电话号码的显示被认为是毫无意义的,被去掉了。旧版地点显示到那里的距离,新版谷歌地图显示到那里的时间,基于交通状况和偏好的交通方式——一个更加实用的衡量方式。新版还在中间放了个分享按钮,这使得通过即时通讯或短信协调的时候更加方便。
### Android 4.3,果冻豆——早早支持可穿戴设备 ###
如果谷歌没有在安卓 4.3 和安卓 4.2 之间通过 Play 商店发布更新的话,安卓 4.3 会是个不可思议的更新。如果新版 Play 商店Gmail地图书籍音乐Hangouts环聊以及 Play Games 打包作为新版安卓的一部分,它将会作为有史以来最大的发布受到欢呼。不过谷歌没有必要为了新版系统而扣着新功能不发。有了 Play 服务框架只剩很少的部分需要系统更新2013 年 7 月底谷歌发布了看似无关紧要的“安卓 4.3”。
![安卓 4.3 通知访问权限界面的可穿戴设备选项。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-28-12.231.jpg)
安卓 4.3 通知访问权限界面的可穿戴设备选项。
Ron Amadeo 供图
谷歌也毫无疑问地认为 4.3 的重要性不高,将新版也叫做“果冻豆”(第三个叫果冻豆的版本了)。安卓 4.3 的新功能列表像是谷歌无法通过 Play 商店或谷歌 Play 服务更新的部分的细目清单,大部分包含了为开发者作出的底层架构改动。
但许多新增功能似乎只为了一个目的——安卓 4.3 是谷歌对可穿戴计算支持的特洛伊木马。4.3 加入了低功耗蓝牙支持,使用很少的能耗将安卓和其它设备连接到一起并传输数据——这是可穿戴设备的必要特性。安卓 4.3 还添加了“通知访问权限”API允许应用完全复制和控制通知面板。应用可以显示通知文本以及和用户操作一样地和通知交互——也就是点击操作按钮和消除通知。当你面前就有个通知面板时从本机应用做这个操作没什么意义但是在一个独立于你手机的设备上复制通知面板的消息就显得很有用了。最早使用这个 API 的少数应用之一是“Android Wear Preview”安卓可穿戴预览它使用通知 API 驱动了大部分的 Android Wear 界面。
“4.3 是给可穿戴设备准备的”这个理论解释了 4.3 相对较少的新特性:它的推出是为了给 OEM 厂商时间去升级设备,为 [Android Wear][2] 的发布做准备。这个计划看起来起作用了。Android Wear 要求 安卓 4.3 及以上版本,安卓 4.3 已经发布很长时间了,大部分主要的旗舰设备都已经升级了。
安卓 4.3 并没有那么激动人心,但从现在起,安卓的新版本也不需要那么激动人心了。一切都变得那么模块化了,谷歌可以通过 Google Play 在它们完成时随时推送更新,不用再等一个大系统更新来一次性更新这么多组件。
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron是Ars Technica的评论编缉专注于安卓系统和谷歌产品。他总是在追寻新鲜事物还喜欢拆解事物看看它们到底是怎么运作的。
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/24/
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://live.arstechnica.com/liveblog-google-io-2013-keynote/
[2]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -0,0 +1,71 @@
安卓编年史
================================================================================
![LG 制造的 Nexus 5奇巧KitKat的首发设备。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/nexus56.jpg)
LG 制造的 Nexus 5奇巧KitKat的首发设备。
### Android 4.4,奇巧——更完美;更少的内存占用 ###
谷歌安卓 4.4 的发布确实很讨巧。谷歌和[雀巢公司合作][1]新版系统的代号是“奇巧KitKat并且它是在 2013 年 10 月 31 日发布的,也就是万圣节。雀巢公司推出了限量版带安卓机器人的奇巧巧克力,它的包装也帮助新版系统推广,消费者有机会赢取一台 Nexus 7。
一部新的 Nexus 设备也随奇巧一同发布,就是 Nexus 5。新旗舰拥有迄今最大的显示屏一块五英寸1920x1080 分辨率的 LCD 显示屏。除了更大尺寸的屏幕LG——Nexus 5 的制造商——还将 Nexus 5 的机器大小控制得和 Galaxy Nexus 或 Nexus 4 差不多。
Nexus 5 相对同时期的高端手机配置算是标准了,拥有 2.3Ghz 骁龙 800 处理器和 2GB 内存。手机再次在 Play 商店销售无锁版,相同配置的大多数手机价格都在 600 到 700 美元之间,但 Nexus 5 的售价仅为 350 美元。
奇巧最重要的改进之一你并不能看到显著减少的内存占用。对奇巧而言谷歌开始了齐心协力降低系统和预装应用内存占用的努力称作“Project Svelte”。经过了无数的优化工作和通过一个“低内存模式”禁用图形开销大的特效安卓现在可以在 340MB 内存下运行。低内存需求是件了不起的事,因为在发展中国家的设备——智能手机增长最快的市场——许多设备的内存仅有 512MB。冰淇淋三明治更高级的 UI 显著提高了对安卓设备的系统配置要求,这使得很多低端设备——甚至是新发布的低端设备——的安卓版本停留在姜饼。奇巧更低的配置需求意味着这些廉价设备能够跟上脚步。有了奇巧,谷歌希望完全消灭姜饼(写下本文时姜饼的市场占有率还在 20% 左右)。如果更低的系统需求还不够,甚至有报道称谷歌将[不再授权][2]谷歌应用给姜饼设备。
除了给低端设备带来更现代版本的系统Project Svelte 更低的内存需求同样对可穿戴设备也是个好消息。Google Glass [宣布][3]它会切换到这个更精简的系统,[Android Wear][4] 同样也运行在奇巧之上。安卓 4.4 带来的更低的内存需求以及 4.3 中的通知消息 API 和低功耗蓝牙支持给了可穿戴计算漂亮的支持。
奇巧的亮点还有无数精心打磨过的核心系统界面,它们无法通过 Play 商店升级。系统界面,拨号盘,时钟还有设置都能看到升级。
![奇巧在 Google Now 启动器下的透明系统栏。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/1homescreenz.png)
奇巧在 Google Now 启动器下的透明系统栏。
Ron Amadeo 供图
奇巧不仅去掉了讨人厌的锁屏左右屏的线框——它还默认完全禁用了锁屏小部件!谷歌明显感觉到了多屏锁屏和锁屏主屏对新用户来说有点复杂,所以锁屏小部件现在需要从设置里启用。锁屏和时钟里不平衡的时间字体换成了一个对称的字重,看起来好看多了。
在奇巧中,应用拥有将系统栏和状态栏透明的能力,显著地改变了系统的外观。系统栏和状态栏现在混合到壁纸和启用透明栏的应用中去了。这些栏还能通过新功能“沉浸”模式完全被应用隐藏。
奇巧是“电子”科幻风格棺材上的最后一颗钉子,几乎完全移除了系统的蓝色痕迹。状态栏图标由蓝色变成中性的白色。主屏的状态栏和系统栏并不是完全透明的;它们有深色的渐变,这样在使用浅色壁纸的时候白色的图标还能轻易地识别出来。
![Google Now 和文件夹的调整。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/nowfolders.png)
Google Now 和文件夹的调整。
Ron Amadeo 供图
在 Nexus 5 上随奇巧到来的主屏实际上由 Nexus 5 独占了几个月,但现在任何 Nexus 设备都能拥有它了。新的主屏叫做“Google Now Launcher”它实际上是[谷歌搜索应用][5]。是的谷歌搜索从一个简单的搜索框成长到了整个主屏幕并且在奇巧中它涉及了壁纸图标应用抽屉小部件主屏设置Google Now当然还有搜索框。由于搜索现在运行在整个主屏幕任何时候只要打开了主屏并且屏幕是点亮的就可以通过说“OK Google”激活语音命令。在搜索栏有引导用户说出“OK Google”的文本在几次使用后这个介绍会隐去。
Google Now 的集成度现在更高了。除了通常的系统栏上滑激活Google Now 还占据了最左侧的主屏。新版还引入了一些设计上的调整。谷歌的 logo 移到了搜索栏内,整个顶部区域更紧凑了。显示更多卡片的设计被去除了,新添加的一组底部按钮指向备忘录,自定义选项,以及一个更多操作按钮,里面有设置,反馈,以及帮助。因为 Google Now 是主屏幕的一部分,所以它也拥有透明的系统栏和状态栏。
透明以及让系统的特定部分“更明亮”是奇巧的设计主题。黑色调通过透明化从状态栏和系统栏移除了,文件夹的黑色背景也换为了白色。
![新的,更加干净的应用列表,以及完整的应用阵容。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/apps.png)
新的,更加清爽的应用列表,以及完整的应用阵容。
Ron Amadeo 供图
奇巧的图标阵容相对 4.3 有显著的变化。更戏剧化地说,这是一场大屠杀,谷歌从 4.3 的配置中移除了七个图标。谷歌 Hangouts 现在能够处理短信所以信息应用被去除了。Hangouts 同时还接手了 Google Messenger 的职责所以它的图标也不见了。Google Currents 不再作为默认应用预装,因为它不久后就会被终结——和它一起的还有 Google Play MagazinesPlay 杂志),取代它们的是 Google Play NewsstandPlay 报刊亭)。谷歌地图被打回一个图标,这意味着本地和导航的快捷方式被去掉了。难以理解的 Movie Studio 也被去除了——谷歌肯定已经意识到了没人想在手机上剪辑电影。有了主屏的“OK Google”关键词检测语音搜索图标的呈现就显得多余了因而将其移除。令人沮丧的是没人用的新闻和天气应用还在。
有个新应用“Photos相片”——实际上是 Google+ 的一部分——接手了图片管理的工作。在 Nexus 5 上,相册和 Google+ 相片十分相似,但在 Google Play 版设备上更新版的奇巧中,相册已经完全被 Google+ 相片所取代。Play Games 是谷歌的后端多用户游戏服务——谷歌版的 Xbox Live 或苹果的 Game Center。Google Drive已经在 Play 商店存在数年的应用,终于成为了内置应用。谷歌 2012 年 6 月收购的 Quickoffice 也进入了内置应用阵容。Drive 可以打开 Google 文档Quickoffice 可以打开微软 Office 文档。如果细细追究起来,在大多数奇巧中包含了两个文档编辑应用和两个相片编辑应用。
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron是Ars Technica的评论编缉专注于安卓系统和谷歌产品。他总是在追寻新鲜事物还喜欢拆解事物看看它们到底是怎么运作的。
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/25/
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://arstechnica.com/gadgets/2013/09/official-the-next-edition-of-android-is-kitkat-version-4-4/
[2]:http://www.androidpolice.com/2014/02/10/rumor-google-to-begin-forcing-oems-to-certify-android-devices-with-a-recent-os-version-if-they-want-google-apps/
[3]:http://www.androidpolice.com/2014/03/01/glass-xe14-delayed-until-its-ready-promises-big-changes-and-a-move-to-kitkat/
[4]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
[5]:http://arstechnica.com/gadgets/2013/11/google-just-pulled-a-facebook-home-kitkats-primary-interface-is-google-search/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -0,0 +1,87 @@
安卓编年史
================================================================================
![新的“添加到主屏幕”界面无疑受到了蜂巢的启发。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/homesetupthrowback.png)
新的“添加到主屏幕”界面无疑受到了蜂巢的启发。
Ron Amadeo 供图
奇巧的主屏幕配置界面漂亮地对蜂巢进行了复古。在有巨大的 10 英寸屏幕的蜂巢平板上(上方右侧图片),长按主屏背景会向你展现一个所有主屏幕的缩放视图。可以从下面的小部件抽屉里将它们拖放到任意主屏上——这很方便。在将蜂巢的界面带到手机上时,从安卓 4.0 直到 4.3,谷歌都跳过了这个设计,把它留给了大屏幕设备,在手机上长按后只显示一个选项列表(中间的图片)。
但在奇巧上谷歌最终给出了解决方案。在长按后4.4 呈现一个略微缩放的视图——你可以看到当前主屏以及它左右侧的屏幕。点击“小部件”按钮会打开一个小部件略缩图的完整列表,但是长按一个小部件后,你会回到缩放视图,并且你可以在主屏页面之间滚动,将图标放在你想要的位置。将图标或者小部件拖动过最右侧的主屏页面,你可以创建一个新的主屏页面。
![联系人和去掉所有蓝色痕迹的键盘。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/RIP33B5E5.png)
联系人和去掉所有蓝色痕迹的键盘。
Ron Amadeo 供图
奇巧是电子风格设计的完结。在系统的大多数部分,剩下的蓝色高亮都被换成了灰色。在联系人应用中,头部和联系人列表字母分割线的蓝色都移除掉了。图片的位置换了一侧,底栏变成了浅灰色以和顶部相称。几乎将蓝色渗透进每个应用的键盘,现在是灰底灰色灰高亮。这可不是件坏事。应用应该允许有它们自己的配色方案——在键盘上强迫存在潜在的颜色冲突可不是个好设计。
![前三张是奇巧的拨号盘,最后一张是 4.3 的。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/phone.png)
前三张是奇巧的拨号盘,最后一张是 4.3 的。
Ron Amadeo 供图
谷歌完全重制了奇巧中的拨号,创造了一个疯狂的设计,改变了用户对手机的思考方式。实际上新版拨号中的数字都被尽可能地隐藏了——在首屏上甚至没有拨号盘。打电话的主要界面现在是个搜索栏!如果你想给你的联系人打电话,只要在搜索栏输入他的名字;如果你想给一个公司打电话,只要输入公司的名字,拨号会通过谷歌地图庞大的数据库找到号码。它工作得令人难以置信的好,这是只有谷歌才能完成的事情。
如果搜索不是你的菜的话,应用还会智能地显示通话记录列表,最常联系人,还有指向所有联系人的链接。底部的链接指向你的通话记录,传统的拨号盘,以及常规的更多操作按钮,包含一个设置页面。
![Office 相关:新的内置应用 Google Drive以及打印支持。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/googledrive-and-printing.png)
Office 相关:新的内置应用 Google Drive以及打印支持。
Ron Amadeo 供图
在奇巧中 Google Drive 终于作为内置应用包含了进来令人惊奇的是这居然用了这么长时间。Drive 允许用户创建和编辑 Google Docs 表格和文档,用相机扫描文档并作为 PDF 上传或者查看不能编辑演示文稿。Drive 的设计十分现代,侧面拥有滑出式导航抽屉,并且是 Google Now 风格卡片式设计。
为了有更多的移动办公乐趣,奇巧包含了系统级打印框架。在设置的底部有“打印”设置界面,任何打印机 OEM 厂商都可以为它写个插件。谷歌云打印自然是首批支持者之一。只要你的打印机和云打印相连接,无论是本地或通过一台装有 Chrome 浏览器的电脑,你都可以借助网络进行打印。应用同样也需要支持打印框架。点击 Google Drive 里的“i”按钮会显示文档信息并且给你打印的选项。就像桌面系统那样会弹出一个设置对话框有打印份数纸张尺寸以及页面选择等选项。
![Google+ 应用的“相片”部分,它取代了相册。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/that-is-one-dead-gallery.png)
Google+ 应用的“相片”部分,它取代了相册。
Ron Amadeo 供图
Google+ 相片和相册最初都在 Nexus 5 上随附,但在 Google Play 设备稍晚版本的奇巧上相册被砍掉了Google+ 完全接手了相片管理。新应用的主题从深色变成了浅色Google+ 相片还带来了现代的导航抽屉设计。
安卓一直以来都有即时上传功能,它会自动备份所有图片到谷歌的云存储,开始是 Picasa 后来是 Google+。G+ 相片相比相册最大的好处是它可以管理那些云端存储的图片。图片右下角的云图标指示备份状态它会从右到左地填满来指示正在上传。G+ 相片带来了它自己的照片编辑器,还有许多其它的 Google+ 图片功能,比如高亮,自动美化,当然,还有分享到 Google+。
![时钟应用的调整,添加了一个闹钟页面并修改了时间输入框。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/clocks.png)
时钟应用的调整,添加了一个闹钟页面并修改了时间输入框。
Ron Amadeo 供图
谷歌将 4.2 引入的优秀时间选择器换成了一个奇怪的时钟界面,操作起来比旧界面更慢了也更不精确了。首先是个可以选择小时的单指针时钟,然后显示的是另一个选择分钟的单指针时钟。选择的时候要转动分针或点击数字,这让用户很难选择不是整五分钟的时间增量。不像之前的时间选择器需要明确选择 AM 或 PM新版默认是 AM这样设置的时候容易不小心偏差 12 小时)。
### 今日安卓无处不在 ###
![](http://cdn.arstechnica.net/wp-content/uploads/2014/05/android-everywhere2.png)
图片来自 Google/Sony/Motorola/Ron Amadeo
安卓一开始只是一家搜索引擎公司古怪的黑莓仿制品,如今已一步一步成长为世界上最流行的操作系统,出自科技界巨头之一的谷歌之手。安卓已经成为谷歌的实际消费者操作系统它驱动着手机平板Google GlassGoogle TV甚至更多。[它的一部分][1]甚至还用到了 Chromecast 中。在未来,谷歌还会将 [Android Wear][2] 带到手表和可穿戴设备上,[开放汽车联盟][3] 要将安卓带到汽车上。不久后谷歌还会再次兑现对客厅的承诺,带上 [Android TV][4]。这个系统对谷歌是如此重要的支柱,原本应该覆盖全公司产品的大会活动,比如 Google I/O俨然成为了安卓发布派对。
![上排:谷歌 Play 内容商店。下排:谷歌 Play 应用。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-30-03.08.jpg)
上排:谷歌 Play 内容商店。下排:谷歌 Play 应用。
Ron Amadeo 供图
移动产业曾经的丑小鸭脱胎换骨,它的用户界面还[赢得了设计奖项][5]。像 Google Now 一样的设计风格影响了整个公司的产品甚至连像搜索Google+Youtube以及地图这样的桌面站点都加入了卡片式设计中。设计也在不断地演进。谷歌下一步[统一设计][6]的计划不仅是面对安卓,也包括了所有的产品。谷歌的目标是让你不管在安卓,还是桌面浏览器,或是一个手表上,使用像 Gmail 这样的服务时都能有一样的体验。
谷歌将很多安卓的组件转移到 Play 商店,这样版本发布就越来越不重要了。谷歌认定,解决运营商和 OEM 厂商更新问题的最佳途径,就是完全绕开这些绊脚石。从这里开始,在一个安卓更新里除了核心底层变动外就没什么内容了——更多的 API 反而被加入了谷歌 Play 服务。如果你只看版本更新的话,相对安卓高峰期 2.5 个月的发布周期来说开发已经放缓了。但实际情况是谷歌现在可以持续将改进推送到 Play 商店,从周期性的发布变成了一股永无止境、细水长流的更新流。
每天 150 万台设备激活,安卓除了增长还是增长。在未来,安卓会是从手机和平板到汽车和手表的领军者,奇巧更低的系统配置要求也会让发展中国家的手机价格更低。结果呢?越来越多的人会来到线上。对那里的大多数人来说,安卓不止是他们的手机,也是他们首要的计算设备。随着安卓为谷歌在众多领域开疆拓土,这个从一个小收购成长起来的系统已经成为了谷歌最重要的产品。
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron是Ars Technica的评论编缉专注于安卓系统和谷歌产品。他总是在追寻新鲜事物还喜欢拆解事物看看它们到底是怎么运作的。
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/26/
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://blog.gtvhacker.com/2013/chromecast-exploiting-the-newest-device-by-google/
[2]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
[3]:http://arstechnica.com/information-technology/2014/01/open-automotive-alliance-aims-to-bring-android-inside-the-car/
[4]:http://arstechnica.com/gadgets/2014/04/documents-point-to-android-tv-googles-latest-bid-for-the-living-room/
[5]:http://userexperienceawards.com/uxa2012/
[6]:http://arstechnica.com/gadgets/2014/04/googles-next-design-challenge-unify-app-design-across-platforms/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -1,279 +0,0 @@
10 Tips for 10x Application Performance
将程序性能提高十倍的10条建议
================================================================================
提高web 应用的性能从来没有比现在更关键过。网络经济的比重一直在增长全球经济超过5% 的价值是在因特网上产生的数据参见下面的资料。我们的永远在线、超级连接的世界意味着用户的期望值也处于历史上的最高点。如果你的网站不能及时的响应或者你的app 不能无延时的工作,用户会很快的投奔到你的竞争对手那里。
举一个例子一份亚马逊十年前做过的研究可以证明甚至在那个时候网页加载时间每减少100毫秒收入就会增加1%。另一个最近的研究特别强调一个事实,即超过一半的网站拥有者在调查中说他们会因为应用程序性能的问题流失用户。
网站到底需要多快呢对于页面加载每增加1秒钟就有4%的用户放弃使用。顶级的电子商务站点的页面在第一次交互时可以做到1秒到3秒加载时间而这是提供最高舒适度的速度。很明显这种利害关系对于web 应用来说很高,而且在不断的增加。
想要提高效率很简单但是看到实际结果很难。为了帮助你这篇blog 会给你提供10条建议最高可以将网站性能提升10倍。这是系列介绍提高应用程序性能的第一篇文章包括测试充分的优化技术和一点NGINX 的帮助。这个系列也给出了潜在的提高安全性的帮助。
### Tip #1: 通过反向代理来提高性能和增加安全性 ###
如果你的web 应用运行在单个机器上那么这个办法会明显的提升性能只需要添加一个更快的机器更好的处理器更多的内存更快的磁盘阵列等等。然后新机器就可以更快的运行你的WordPress 服务器, Node.js 程序, Java 程序,以及其它程序。(如果你的程序要访问数据库服务器,那么这个办法还是很简单:添加两个更快的机器,以及在两台电脑之间使用一个更快的链路。)
问题是机器速度可能并不是问题。web 程序运行慢经常是因为计算机一直在不同的任务之间切换:和用户的成千上万的连接,从磁盘访问文件,运行代码,等等。应用服务器可能会抖动——内存不足、将内存数据写回磁盘以及多个请求等待一个任务完成如磁盘I/O。
你可以采取一个完全不同的方案来替代升级硬件:添加一个反向代理服务器来分担部分任务。[反向代理服务器][1] 位于运行应用的机器的前端,是用来处理网络流量的。只有反向代理服务器是直接连接到互联网的;和程序的通讯都是通过一个快速的内部网络完成的。
使用反向代理服务器可以将应用服务器从等待用户与web 程序交互中解放出来,这样应用服务器就可以专注于为反向代理服务器构建网页,让其能够传输到互联网上。而应用服务器不需要等待客户端的响应,可以运行在接近优化后的性能水平。
添加反向代理服务器还可以给你的web 服务器安装带来灵活性。比如,某种类型的服务器已经超载了,那么就可以轻松的添加另一个相同的服务器;如果某个机器宕机了,也可以很容易的被替代。
因为反向代理带来的灵活性,所以反向代理也是一些性能加速功能的必要前提,比如:
- **负载均衡** (参见 [Tip #2][2]) 负载均衡运行在反向代理服务器上,用来将流量均衡分配给一批应用。有了合适的负载均衡,你就可以在不改变程序的前提下添加应用服务器。
- **缓存静态文件** (参见 [Tip #3][3]) 直接读取的文件,比如图像或者代码,可以保存在反向代理服务器,然后直接发给客户端,这样就可以提高速度、分担应用服务器的负载,可以让应用运行的更快。
- **网站安全** 反向代理服务器可以提高网站安全性,以及快速的发现和响应攻击,保证应用服务器处于被保护状态。
NGINX 软件是一个专门设计的反向代理服务器也包含了上述的多种功能。NGINX 使用事件驱动的方式处理请求这会比传统的服务器更加有效率。NGINX Plus 添加了更多高级的反向代理特性,比如应用[健康度检查][4],专门用来处理请求路由、高级缓冲和相关支持。
![NGINX Worker Process helps increase application performance](https://www.nginx.com/wp-content/uploads/2015/10/Graph-11.png)
### Tip #2: 添加负载平衡 ###
添加一个[负载均衡服务器][5] 是一个相当简单的用来提高性能和网站安全性的方法。使用负载均衡将流量分配到多个服务器以此替代只使用一个巨大且高性能的web 服务器的方案。即使程序写的不好,或者在扩容方面有困难,只使用负载均衡服务器就可以很好的提高用户体验。
负载均衡服务器首先是一个反向代理服务器(参见[Tip #1][6])——它接收来自互联网的流量,然后转发请求给另一个服务器。其诀窍在于负载均衡服务器支持两个或多个应用服务器,使用[分配算法][7]将请求转发给不同服务器。最简单的负载均衡方法是轮转法只需要将新的请求发给列表里的下一个服务器。其它的方法包括将请求发给活动连接数最少的服务器。NGINX Plus 拥有将特定用户的会话分配给同一个服务器的[能力][8]。
负载均衡可以很好的提高性能是因为它可以避免某个服务器过载而另一些服务器却没有流量来处理。它也可以简单的扩展服务器规模,因为你可以添加多个价格相对便宜的服务器并且保证它们被充分利用了。
可以进行负载均衡的协议包括HTTP、HTTPS、SPDY、HTTP/2、WebSocket、[FastCGI][9]、SCGI、uwsgi、memcached以及几种其它的应用类型包括采用TCP 第4层协议的程序。分析你的web 应用来决定你要使用哪些协议,以及哪些地方性能不足。
相同的服务器或服务器群可以被用来进行负载均衡也可以用来处理其它的任务比如SSL 终结、支持客户端的HTTP/1.x 和 HTTP/2 请求,以及缓存静态文件。
NGINX 经常被用来进行负载均衡;要想了解更多的情况可以访问我们的[概览博文][10]、[配置博文][11]、[电子书][12],以及相关的[网络研讨会][13]和[文档][14]。我们的商业版本 [NGINX Plus][15] 支持更多优化了的负载均衡特性如基于服务器响应时间的加载路由和Microsoft NTLM 协议上的负载均衡。
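下面是一个最小的负载均衡配置示意其中的上游服务器地址、端口以及组名app_servers均为假设的示例并非原文给出的配置它使用“活动连接最少”算法把请求分发给两台应用服务器
```
http {
    upstream app_servers {
        least_conn;               # 将新请求发给活动连接最少的服务器
        server 10.0.0.1:8080;     # 假设的应用服务器
        server 10.0.0.2:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;   # 反向代理并进行负载均衡
        }
    }
}
```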
### Tip #3: 缓存静态和动态的内容 ###
缓存通过加速内容的传输速度来提高web 应用的性能。它可以采用以下几种策略:当需要的时候预处理要传输的内容,保存数据到速度更快的设备,把数据存储在距离客户端更近的位置,或者把这几种方法结合起来使用。
下面要考虑两种不同类型数据的缓冲:
- **静态内容缓存**。不经常变化的文件,比如图像(JPEG,PNG) 和代码(CSS,JavaScript),可以保存在边缘服务器,这样就可以快速的从内存和磁盘上提取。
- **动态内容缓存**。很多web 应用会针对每个网页请求生成不同的HTML 页面。在短时间内简单的缓存生成的HTML 内容,就可以很好的减少要生成的内容的数量,而且这些页面足够新,可以满足你的需要。
举个例子如果一个页面每秒会被浏览10次你将它缓存1 秒90%请求的页面都会直接从缓存提取。如果你分开缓存静态内容,甚至新生成的页面可能都是由这些缓存构成的。
下面是web 应用使用的三种主要的缓存技术:
- **缩短数据与用户的距离**。把一份内容的拷贝放到离用户更近的节点,来减少传输时间。
- **提高内容服务器的速度**。内容可以保存在一个更快的服务器上来减少提取文件的时间。
- **从过载服务器上移走数据**。机器经常因为要完成某些其它的任务而造成某个任务的执行速度比测试结果要差。将数据缓存在不同的机器上可以提高缓存资源和非缓存资源的性能,而这只是因为主机没有被过度使用。
对web 应用的缓存机制可以在web 应用服务器内部实现。第一,缓存动态内容是用来减少应用服务器加载动态内容的时间。第二,缓存静态内容(包括动态内容的临时拷贝)是为了更进一步的分担应用服务器的负载。而且缓存之后会从应用服务器转移到对用户而言更快、更近的机器,从而减少应用服务器的压力,减少提取数据和传输数据的时间。
改进过的缓存方案可以极大的提高应用的速度。对于大多数网页来说静态数据比如大图像文件构成了超过一半的内容。如果没有缓存那么这可能会花费几秒的时间来提取和传输这类数据但是采用了缓存之后不到1秒就可以完成。
举一个在实际中缓存是如何使用的例子NGINX 和NGINX Plus 使用了两条指令来[设置缓存机制][16]proxy_cache_path 和 proxy_cache。你可以指定缓存的位置和大小、文件在缓存中保存的最长时间和其他一些参数。使用第三条而且是相当受欢迎的一条指令proxy_cache_use_stale如果提供新鲜内容的服务器正忙或者挂掉了你甚至可以让缓存提供旧的内容这样客户端就不会一无所得。从用户的角度来看这可以很好的提高你的网站或者应用的可用时间。
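下面是一个体现这三条指令用法的最小配置示意其中的路径、缓存区名称app_cache、各项数值以及上游名称均为假设的示例并非原文给出的配置
```
# proxy_cache_path 定义在 http 配置块中:缓存位置、大小等参数
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    location / {
        proxy_cache app_cache;                        # 启用上面定义的缓存区
        proxy_cache_valid 200 1s;                     # 动态内容只缓存 1 秒
        proxy_cache_use_stale error timeout updating; # 后端出错或超时时提供旧内容
        proxy_pass http://app_servers;                # 指向前面示例里假设的上游组
    }
}
```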
NGINX plus 拥有[高级缓存特性][17],包括对[缓存清除][18]的支持和在[仪表盘][19]上显示缓存状态信息。
要想获得更多关于NGINX 的缓存机制的信息可以浏览NGINX Plus 管理员指南中的 [reference documentation][20] 和 [NGINX Content Caching][21] 。
**注意**:缓存机制分布于应用开发者、投资决策者以及实际的系统运维人员之间。本文提到的一些复杂的缓存机制从[DevOps 的角度][22]来看很有价值。也就是说,对于身兼应用开发者、架构师以及运维人员多种职能的工程师来说,这些手段可以同时满足他们对站点功能性、响应时间、安全性和商业结果(如完成的交易数)的需求。
### Tip #4: 压缩数据 ###
压缩是一个具有很大潜力的提高性能的加速方法。现在已经有一些针对照片JPEG 和PNG、视频MPEG-4和音乐MP3等各类文件精心设计和高压缩率的标准。每一个标准都或多或少的减少了文件的大小。
文本数据 —— 包括HTML包含了纯文本和HTML 标签CSS和代码比如Javascript —— 经常是未经压缩就传输的。压缩这类数据对应用程序性能的感受会产生不成比例的影响,对处于慢速或受限的移动网络的客户端来说尤其如此。
这是因为文本数据经常是用户与网页交互的有效数据而多媒体数据可能更多的是起提供支持或者装饰的作用。聪明的内容压缩可以减少HTMLJavascriptCSS和其他文本内容对带宽的要求通常可以减少30% 甚至更多的带宽和相应的页面加载时间。
如果你使用SSL压缩可以减少需要进行SSL 编码的数据量而这些编码操作会占用一些CPU 时间,从而抵消了压缩数据节省的时间。
压缩文本数据的方法很多。举个例子,[下一节][27]将会介绍的HTTP/2 就定义了一种新颖的、专门针对头部数据的文本压缩模式。另一个例子是可以在NGINX 里打开GZIP 来压缩文本。你在你的服务里[预压缩文本数据][25]之后就可以直接使用gzip_static 指令来处理压缩过的.gz 版本。
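下面是一个在 NGINX 中启用文本压缩的最小配置示意(类型列表与数值仅为示例;gzip_static 需要编译时带有 ngx_http_gzip_static_module 模块):
```
gzip on;
gzip_types text/css text/plain application/javascript application/json;  # text/html 默认已包含
gzip_min_length 1000;     # 太小的响应压缩收益不大

# 若已预先生成好 .gz 文件,可以直接提供它们:
gzip_static on;
```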
### Tip #5: 优化 SSL/TLS ###
安全套接字([SSL][26])协议和它的继任者——传输层安全TLS协议——正在被越来越多的网站采用。SSL/TLS 对从原始服务器发往用户的数据进行加密提高了网站的安全性。推动这个趋势的部分原因是Google 将使用SSL/TLS 作为搜索引擎排名的一个正面影响因素。
尽管SSL/TLS 越来越流行但是使用加密对速度的影响也让很多网站望而却步。SSL/TLS 之所以让网站变的更慢,原因有二:
1. 任何一个连接第一次连接时的握手过程都需要传递密钥。而采用HTTP/1.x 协议的浏览器在建立多个连接时会对每个连接重复上述操作。
2. 数据在传输过程中需要不断的在服务器加密、在客户端解密。
为了鼓励使用SSL/TLSHTTP/2 和SPDY在[下一节][27]会介绍的设计者设计了新的协议让浏览器对一个会话只使用一个连接。这会大大减少第一个原因所浪费的时间。然而现在可以用来提高应用程序使用SSL/TLS 传输数据的性能的方法不止这些。
web 服务器有对应的机制优化SSL/TLS 传输。举个例子NGINX 使用[OpenSSL][28]运行在普通的硬件上提供接近专用硬件的传输性能。NGINX [SSL 性能][29] 有详细的文档而且把对SSL/TLS 数据进行加解密的时间和CPU 占用率降低了很多。
更进一步,在这篇[blog][30]里详细说明了如何提高SSL/TLS 性能,可以总结为以下几点(列表之后附有一个对应的配置示意):
- **会话缓存**。使用指令[ssl_session_cache][31]可以缓存每个新的SSL/TLS 连接使用的参数。
- **会话票据或者ID**。把SSL/TLS 的信息保存在一个票据ticket或者ID 里,这样连接可以被流畅的复用,而不需要重新握手。
- **OCSP 装订stapling**。通过缓存SSL/TLS 证书的状态信息来减少握手时间。
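下面是一个把这几点落实到 NGINX 配置的最小示意证书路径、解析器地址与数值均为假设的示例ssl_stapling 要求证书链配置正确):
```
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;   # 假设的证书路径
    ssl_certificate_key /etc/nginx/ssl/example.key;

    # 会话缓存:在所有工作进程间共享1MB 大约可以存放 4000 个会话
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    # OCSP 装订:由服务器缓存并随握手提供证书状态信息
    ssl_stapling        on;
    ssl_stapling_verify on;
    resolver            8.8.8.8;                      # 用于查询 OCSP 响应者(示例地址)
}
```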
NGINX 和NGINX Plus 可以被用作SSL/TLS 终结——处理客户端流量的加密和解密而同时和其他服务器进行明文通信。可以使用[这几步][32]来设置NGINX 和NGINX Plus 处理SSL/TLS 终结同时这里还有一些NGINX Plus 和接收TCP 连接的服务器一起使用时的[特有的步骤][33]。
### Tip #6: 使用 HTTP/2 或 SPDY ###
对于已经使用了SSL/TLS 的站点HTTP/2 和SPDY 可以很好的提高性能因为每个连接只需要一次握手。而对于没有使用SSL/TLS 的站点来说切换到HTTP/2 和SPDY 反而可能会降低响应速度。
Google 在2012年开始把SPDY 作为一个比HTTP/1.x 更快速的协议来推荐。HTTP/2 是目前的IETF 标准它基于SPDY。SPDY 已经被广泛的支持了但是很快就会被HTTP/2 替代。
SPDY 和HTTP/2 的关键是用单连接来替代多路连接。单个连接是被复用的,所以它可以同时携带多个请求和响应的分片。
通过使用单一连接这些协议可以避免像浏览器实现HTTP/1.x 那样建立和管理多个连接的开销。单连接对SSL 特别有效这是因为它可以最小化SSL/TLS 建立安全连接时的握手时间。
SPDY 协议需要使用SSL/TLS 而HTTP/2 官方并不强制要求但是目前所有支持HTTP/2 的浏览器只有在启用了SSL/TLS 的情况下才会使用它。这就意味着支持HTTP/2 的浏览器只有在网站使用了SSL 并且服务器接收HTTP/2 流量的情况下才会启用HTTP/2。否则的话浏览器就会使用HTTP/1.x 协议。
当你实现SPDY 或者HTTP/2 时你不再需要那些常规的HTTP 性能优化方案比如域名分片domain sharding、资源合并以及图像拼合image spriting。这些改变可以让你的代码和部署变得更简单和更易于管理。要了解HTTP/2 带来的这些变化可以浏览我们的[白皮书][34]。
![NGINX Supports SPDY and HTTP/2 for increased web application performance](https://www.nginx.com/wp-content/uploads/2015/10/http2-27.png)
作为支持这些协议的一个样例NGINX 已经从一开始就支持了SPDY而且[大部分使用SPDY 协议的网站][35]都运行的是NGINX。NGINX 同时也[很早][36]就对HTTP/2 提供了支持从2015年9月开始开源NGINX 和NGINX Plus 就[支持][37]它了。
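在支持HTTP/2 的NGINX开源版1.9.5 及以上)里,启用它只需要在 listen 指令上做一点改动;下面是一个最小示意(证书路径为假设的示例):
```
server {
    listen 443 ssl http2;     # 在更旧的版本上启用 SPDY 则使用 spdy 参数
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
}
```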
经过一段时间我们NGINX 希望更多的站点完全启用SSL 并且向HTTP/2 迁移。这将会提高安全性,同时新的优化手段也会被发现和实现,更简单的代码也会表现的更加优异。
### Tip #7: 升级软件版本 ###
一个提高应用性能的简单办法是根据软件的稳定性和性能的评价来选择你的软件栈。进一步说因为高性能组件的开发者更愿意追求更高的性能和解决bug所以值得使用最新版本的软件。新版本往往更受开发者和用户社区的关注。更新的版本往往会利用到新的编译器优化包括对新硬件的调优。
稳定的新版本通常比旧版本具有更好的兼容性和更高的性能。保持软件更新可以让你非常简单地享受最新的优化、bug 修复以及安全性的提高。
一直使用旧版软件也会阻止你利用新的特性。比如上面说到的HTTP/2目前要求OpenSSL 1.0.1从2016年中期开始将会要求OpenSSL 1.0.2而它是在2015年1月才发布的。
NGINX 用户可以迁移到[NGINX 最新的开源软件][38]或者[NGINX Plus][39]它们都包含了最新的功能如套接字分割socket sharding和线程池见下文而且这些功能都已经为性能优化过了。然后好好看看你的软件栈把它们升级到你能升级到的最新版本吧。
### Tip #8: linux 系统性能调优 ###
linux 是大多数web 服务器使用的操作系统而且作为你的架构的基础Linux 中明显有不少可以提高性能的机会。默认情况下很多linux 系统都被设置为使用很少的资源以匹配典型的桌面应用负载。这就意味着web 应用需要至少某种程度的调优才能达到最大效能。
Linux 的优化是针对具体web 服务器的。以NGINX 为例这里有一些在加速linux 时需要强调的变化(列表之后附有一个设置示例):
- **缓冲队列**。如果你有挂起的连接那么你应该考虑增加net.core.somaxconn 的值,它代表了可以缓存的连接的最大数量。如果连接数值太小,那么你将会看到错误信息,而你可以逐渐的增加这个参数直到错误信息停止出现。
- **文件描述符**。NGINX 对一个连接最多使用2个文件描述符。如果你的系统有很多连接你可能就需要提高sys.fs.file_max增加系统对文件描述符数量整体的限制这样才能支持不断增加的负载需求。
- **临时端口**。当用作代理时NGINX 会为每个上游服务器创建临时端口。你可以设置net.ipv4.ip_local_port_range 来扩大这些端口的范围增加可用的端口数。你也可以通过net.ipv4.tcp_fin_timeout 来减少判断端口非活动的超时时间,以便更快地重复使用端口。
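下面是一个对应的 /etc/sysctl.conf 设置示意(具体数值取决于你的负载,这里的数值仅为示例;另外,文中写作 sys.fs.file_max 的内核参数,在多数 Linux 系统上实际的键名是 fs.file-max
```
# /etc/sysctl.conf 片段,修改后执行 `sysctl -p` 使其生效
net.core.somaxconn = 4096                    # 增大监听缓冲队列
fs.file-max = 2097152                        # 提高系统级文件描述符上限
net.ipv4.ip_local_port_range = 1024 65000    # 扩大临时端口范围
net.ipv4.tcp_fin_timeout = 15                # 更快地回收等待关闭的端口
```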
对于NGINX 来说可以查阅[NGINX 性能调优指南][40]来学习如何优化你的Linux 系统,这样它就可以很好的适应大规模网络流量而不会超过工作极限。
### Tip #9: web 服务器性能调优 ###
无论你用的是哪种web 服务器你都需要对它进行优化来提高性能。下面的推荐手段可以用于任何web 服务器但是一些设置是针对NGINX 的。关键的优化手段包括(列表之后附有一个配置示例):
- **访问日志**。不要把每个请求的日志都直接写回磁盘你可以在内存里将日志缓存起来然后批量写回磁盘。对于NGINX 来说,给指令*access_log* 添加参数 *buffer=size* 可以让系统在缓存满了的情况下才把日志写到磁盘。如果你添加了参数*flush=time*,那么缓存内容也会每隔一段时间写回磁盘。
- **缓冲**。缓冲会把响应的一部分保存在内存中直到缓冲区被填满这可以让与客户端的通信更加高效。放不进内存缓冲的响应会被写回磁盘而这会降低性能。当NGINX [启用][42]了缓冲机制后,你可以使用指令*proxy_buffer_size* 和 *proxy_buffers* 来管理它。
- **客户端保活**。保活连接可以减少开销特别是使用SSL/TLS 时。对于NGINX 来说,你可以从默认值100 开始增加*keepalive_requests* 的值,让一个客户端可以在一个连接上发送更多请求;你也可以通过增加*keepalive_timeout* 的值来允许保活连接存活更长时间,从而让后续的请求处理的更快速。
- **上游保活**。上游的连接——即连接到应用服务器、数据库服务器等机器的连接——同样也会受益于连接保活。对于上游连接来说,你可以增加*keepalive*,即每个工人进程保持打开的空闲保活连接的个数。这可以提高连接的复用次数,减少需要重新打开全新连接的次数。更多关于保活连接的信息可以参见这篇[blog][41]。
- **限制**。限制客户端使用的资源可以提高性能和安全性。对于NGINX 来说,指令*limit_conn* 和 *limit_conn_zone* 限制了每个源的连接数量,而*limit_rate* 限制了带宽。这些限制都可以阻止合法用户*攫取*资源,同时也避免了攻击。指令*limit_req* 和 *limit_req_zone* 限制了客户端请求的速率。对于上游服务器来说,可以在上游服务器的配置块里使用*max_conns* 参数来限制连接到上游服务器的连接数,以避免服务器过载。相关的*queue* 指令会创建一个队列,在连接数抵达*max_conns* 限制时,把指定数量的请求保存指定长度的时间。
- **工人进程**。工人进程负责处理请求。NGINX 采用事件驱动模型和依赖操作系统的机制来有效的将请求分发给不同的工人进程。这条建议推荐将*worker_processes* 设置为每个CPU 一个。如果需要的话,工人连接的最大数(默认512可以在大部分系统上安全地增加试着找到最适合你的系统的值。
- **套接字分割**。通常一个套接字监听器会把新连接分配给所有工人进程。套接字分割则为每个工人进程创建一个套接字监听器,这样一来由内核将连接分配给某个套接字监听器就成为可能了。这可以减少锁竞争,并且提高多核系统的性能。要启用[套接字分割][43],需要在listen 指令里面加上reuseport 参数。
- **线程池**。任何计算机进程都可能被一个缓慢的操作拖住。对于web 服务器软件来说磁盘访问会拖累很多更快的操作比如计算或者在内存中拷贝。使用了线程池之后慢操作可以被分配到单独的任务集而主处理循环可以一直运行快速操作。当磁盘操作完成后结果会返回给主处理循环。在NGINX 里有两个操作——read() 系统调用和sendfile()——被分配到了[线程池][44]。
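下面是一个把其中几条落到NGINX 配置上的最小示意(各项数值与上游地址均为假设的示例,应按实际负载调整):
```
worker_processes auto;                   # 每个 CPU 一个工人进程

http {
    access_log /var/log/nginx/access.log combined buffer=32k flush=5s;

    keepalive_timeout  75;
    keepalive_requests 1000;             # 客户端保活

    upstream app_servers {
        server 10.0.0.1:8080;
        keepalive 32;                    # 每个工人进程的空闲上游保活连接数
    }

    server {
        listen 80 reuseport;             # 套接字分割NGINX 1.9.1+
        location / {
            proxy_pass http://app_servers;
            proxy_http_version 1.1;      # 上游保活需要 HTTP/1.1
            proxy_set_header Connection "";
        }
    }
}
```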
![Thread pools help increase application performance by assigning a slow operation to a separate set of tasks](https://www.nginx.com/wp-content/uploads/2015/10/Graph-17.png)
**技巧**。当改变任何操作系统或支持服务的设置时,一次只改变一个参数然后测试性能。如果修改引起问题了,或者不能让你的系统更快,那么就改回去。
在[blog][45]可以看到更详细的NGINX 调优方法。
### Tip #10: 监视系统活动来解决问题和瓶颈 ###
在应用开发中,以及应用进入生产环境之后,让系统性能出众的关键是监视你的系统在现实世界中运行的表现。你必须能够监控特定设备上以及你的web 基础设施中的程序活动。
监视活动大多是被动的——它只会告诉你发生了什么,把发现问题和最终解决问题留给你自己。
监视可以发现几种不同的问题。它们包括:
- 服务器宕机。
- 服务器出问题一直在丢失连接。
- 服务器出现大量的缓存未命中。
- 服务器没有发送正确的内容。
应用的总体性能监控工具比如New Relic 和Dynatrace可以帮助你监控从远端加载网页的时间而NGINX 可以帮助你监控到应用交付这一侧的时间。当你需要考虑为基础设施添加容量以满足流量需求时,应用性能数据可以告诉你你的优化措施是否确实起作用了。
为了帮助开发者快速的发现、解决问题NGINX Plus 增加了[应用感知健康度检查][46]——对重复出现的常规事件进行综合分析并在问题出现时向你发出警告。NGINX Plus 同时提供[会话耗尽session draining][47]功能,这可以在已有任务完成之前阻止给服务器分配新的连接;另一个功能是慢启动,允许一个从错误中恢复过来的服务器逐步追赶上负载均衡服务器群的速度。使用得当时,健康度检查可以让你在问题变得严重到影响用户体验之前就发现它,而会话耗尽和慢启动可以让你替换服务器,并且保证这个过程不会对性能和正常运行时间产生负面影响。下面的截图展示了NGINX Plus 内建的web 基础设施[监视活动][48]仪表盘包括了服务器群、TCP 连接和缓存等信息。
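如果你用的是开源版NGINX也可以先用 stub_status 模块获得一组基本的活动指标作为起点(下面是一个最小示意,端口与访问控制为假设的示例,需要编译时带有 ngx_http_stub_status_module 模块):
```
server {
    listen 8080;
    location /status {
        stub_status on;       # 输出活动连接数、已处理请求数等基本计数
        allow 127.0.0.1;      # 只允许本机抓取
        deny  all;
    }
}
```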
![Use real-time application performance monitoring tools to identify and resolve issues quickly](https://www.nginx.com/wp-content/uploads/2015/10/Screen-Shot-2015-10-05-at-4.16.32-PM.png)
### 总结: 看看10倍性能提升的效果 ###
这些性能提升方案对任何一个web 应用都可用并且效果都很好,而实际效果取决于你的预算、你能投入的时间、以及目前实现方案的缺陷。所以你该如何对你自己的应用实现10倍性能提升
为了指导你了解每种优化手段的潜在影响,这里是上面详述的每个优化方法的关键点,虽然你的实际效果肯定会有所不同:
- **反向代理服务器和负载均衡**。没有负载均衡或者负载均衡很差都会造成间断的极低性能。增加一个反向代理比如NGINX可以避免web应用程序在内存和磁盘之间抖动。负载均衡可以将过载服务器的任务转移到空闲的服务器还可以轻松的进行扩容。这些改变都可以产生巨大的性能提升很容易就可以比你现在的实现方案的最差性能提高10倍对于总体性能来说可能提高的不多但是也是有实质性的提升。
- **缓存动态和静态数据**。如果你有一个负担过重的web 服务器而它同时也是你的应用服务器那么仅通过缓存动态数据就可以在峰值时间提高10倍的性能。缓存静态文件也可以带来几倍的性能提升。
- **压缩数据**。使用媒体文件压缩格式比如图像格式JPEG、图形格式PNG、视频格式MPEG-4、音乐文件格式MP3可以极大的提高性能。在这些都用上了之后压缩文本数据代码和HTML还可以将初始页面加载速度提高两倍。
- **优化SSL/TLS**。安全握手会对性能产生巨大的影响对握手的优化可能会让初始响应产生2倍的提升对重文本的站点尤其如此。优化SSL/TLS 下的媒体文件传输只会产生很小的性能提升。
- **使用HTTP/2 和SPDY**。当你使用了SSL/TLS这些协议就可以提高整个站点的性能。
- **对linux 和web 服务器软件进行调优**。比如优化缓冲机制,使用保活连接,分配时间敏感型任务到不同的线程池,可以明显的提高性能;举个例子,线程池可以将对磁盘敏感的任务加速[近一个数量级][49]。
我们希望你亲自尝试这些技术,并用它们获得实实在在的性能提升。请在下面评论栏分享你的结果,或者在标签 #NGINX 和 #webperf 下tweet 你的故事。
### 网上资源 ###
[Statista.com Share of the internet economy in the gross domestic product in G-20 countries in 2016][50]
[Load Impact How Bad Performance Impacts Ecommerce Sales][51]
[Kissmetrics How Loading Time Affects Your Bottom Line (infographic)][52]
[Econsultancy Site speed: case studies, tips and tools for improving your conversion rate][53]
--------------------------------------------------------------------------------
via: https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io
作者:[Floyd Smith][a]
译者:[Ezio]](https://github.com/oska874)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.nginx.com/blog/author/floyd/
[1]:https://www.nginx.com/resources/glossary/reverse-proxy-server
[2]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip2
[3]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip3
[4]:https://www.nginx.com/products/application-health-checks/
[5]:https://www.nginx.com/solutions/load-balancing/
[6]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip1
[7]:https://www.nginx.com/resources/admin-guide/load-balancer/
[8]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
[9]:https://www.digitalocean.com/community/tutorials/understanding-and-implementing-fastcgi-proxying-in-nginx
[10]:https://www.nginx.com/blog/five-reasons-use-software-load-balancer/
[11]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
[12]:https://www.nginx.com/resources/ebook/five-reasons-choose-software-load-balancer/
[13]:https://www.nginx.com/resources/webinars/choose-software-based-load-balancer-45-min/
[14]:https://www.nginx.com/resources/admin-guide/load-balancer/
[15]:https://www.nginx.com/products/
[16]:https://www.nginx.com/blog/nginx-caching-guide/
[17]:https://www.nginx.com/products/content-caching-nginx-plus/
[18]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&_ga=1.95342300.1348073562.1438712874#proxy_cache_purge
[19]:https://www.nginx.com/products/live-activity-monitoring/
[20]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&&&_ga=1.61156076.1348073562.1438712874#proxy_cache
[21]:https://www.nginx.com/resources/admin-guide/content-caching
[22]:https://www.nginx.com/blog/network-vs-devops-how-to-manage-your-control-issues/
[23]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6
[24]:https://www.nginx.com/resources/admin-guide/compression-and-decompression/
[25]:http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html
[26]:https://www.digicert.com/ssl.htm
[27]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6
[28]:http://openssl.org/
[29]:https://www.nginx.com/blog/nginx-ssl-performance/
[30]:https://www.nginx.com/blog/improve-seo-https-nginx/
[31]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache
[32]:https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/
[33]:https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-termination/
[34]:https://www.nginx.com/resources/datasheet/datasheet-nginx-http2-whitepaper/
[35]:http://w3techs.com/blog/entry/25_percent_of_the_web_runs_nginx_including_46_6_percent_of_the_top_10000_sites
[36]:https://www.nginx.com/blog/how-nginx-plans-to-support-http2/
[37]:https://www.nginx.com/blog/nginx-plus-r7-released/
[38]:http://nginx.org/en/download.html
[39]:https://www.nginx.com/products/
[40]:https://www.nginx.com/blog/tuning-nginx/
[41]:https://www.nginx.com/blog/http-keepalives-and-web-performance/
[42]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
[43]:https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/
[44]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
[45]:https://www.nginx.com/blog/tuning-nginx/
[46]:https://www.nginx.com/products/application-health-checks/
[47]:https://www.nginx.com/products/session-persistence/#session-draining
[48]:https://www.nginx.com/products/live-activity-monitoring/
[49]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
[50]:http://www.statista.com/statistics/250703/forecast-of-internet-economy-as-percentage-of-gdp-in-g-20-countries/
[51]:http://blog.loadimpact.com/blog/how-bad-performance-impacts-ecommerce-sales-part-i/
[52]:https://blog.kissmetrics.com/loading-time/?wide=1
[53]:https://econsultancy.com/blog/10936-site-speed-case-studies-tips-and-tools-for-improving-your-conversion-rate/

View File

@ -1,317 +0,0 @@
如何在linux 上配置持续集成服务 - Drone
==============================================================
如果你对一次又一次的克隆、构建、测试和部署代码感到厌倦了可以考虑一下持续集成。持续集成也就是CI是一种对代码库中频繁提交的代码不断进行构建、测试和部署的软件工程实践。CI 帮助我们快速的集成新代码到已有的代码基线。如果这个过程是自动化进行的,那么就会提高开发的速度,因为这可以减少开发人员手工构建和测试的时间。[Drone][1] 是一个免费的开源项目用来提供一个非常棒的持续集成服务的环境采用了Apache 2.0 协议。它已经集成了很多代码库提供商比如Github、Bitbucket 以及Google Code并且它可以从代码库提取代码让我们可以对包括PHP、Node、Ruby、Go、Dart、Python、C/C++、JAVA 等等在内的多种语言进行编译构建。它是如此强大的一个平台是因为它每次构建都使用了容器和docker 技术,这让用户可以在保证隔离的条件下完全控制他们自己的构建环境。
### 1. 安装 Docker ###
首先我们要安装docker因为这是Drone 的工作流的最关键的元素。Drone 合理的利用了docker 来构建和测试应用。容器技术提高了应用部署的效率。要安装docker 我们需要在不同的linux 发行版本运行下面对应的命令我们这里会说明Ubuntu 14.04 和CentOS 7 两个版本。
#### Ubuntu ####
要在Ubuntu 上安装Docker ,我们只需要运行下面的命令。
# apt-get update
# apt-get install docker.io
安装之后我们需要使用`service` 命令重启docker 引擎。
# service docker restart
然后我们让docker 在系统启动时自动启动。
# update-rc.d docker defaults
Adding system startup for /etc/init.d/docker ...
/etc/rc0.d/K20docker -> ../init.d/docker
/etc/rc1.d/K20docker -> ../init.d/docker
/etc/rc6.d/K20docker -> ../init.d/docker
/etc/rc2.d/S20docker -> ../init.d/docker
/etc/rc3.d/S20docker -> ../init.d/docker
/etc/rc4.d/S20docker -> ../init.d/docker
/etc/rc5.d/S20docker -> ../init.d/docker
#### CentOS ####
第一,我们要更新机器上已经安装的软件包。我们可以使用下面的命令。
# sudo yum update
要在centos 上安装docker我们可以简单的运行下面的命令。
# curl -sSL https://get.docker.com/ | sh
安装好docker 引擎之后我们只需要简单使用下面的`systemd` 命令启动docker因为centos 7 的默认init 系统是systemd。
# systemctl start docker
然后我们要让docker 在系统启动时自动启动。
# systemctl enable docker
ln -s '/usr/lib/systemd/system/docker.service' '/etc/systemd/system/multi-user.target.wants/docker.service'
### 2. 安装 SQlite 驱动 ###
Drone 默认使用SQLite3 数据库服务器来保存数据和信息。它会在/var/lib/drone/ 自动创建名为drone.sqlite 的数据库来处理数据库模式的创建和迁移。要安装SQLite3 我们要完成以下几步。
#### Ubuntu 14.04 ####
因为SQLite3 存在于Ubuntu 14.04 的默认软件库我们只需要简单的使用apt 命令安装它。
# apt-get install libsqlite3-dev
#### CentOS 7 ####
要在CentOS 7 上安装需要使用下面的yum 命令。
# yum install sqlite-devel
### 3. 安装 Drone ###
安装好依赖的软件之后我们现在更进一步的接近安装Drone。在这一步里我们只需简单的从官方链接下载对应的二进制软件包然后使用默认软件包管理器安装Drone。
#### Ubuntu ####
我们将使用wget 从官方的[Debian 文件下载链接][2]下载drone 的debian 软件包。下面就是下载命令。
# wget downloads.drone.io/master/drone.deb
Resolving downloads.drone.io (downloads.drone.io)... 54.231.48.98
Connecting to downloads.drone.io (downloads.drone.io)|54.231.48.98|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7722384 (7.4M) [application/x-debian-package]
Saving to: 'drone.deb'
100%[======================================>] 7,722,384 1.38MB/s in 17s
2015-11-06 14:09:28 (456 KB/s) - 'drone.deb' saved [7722384/7722384]
下载好之后我们将使用dpkg 软件包管理器安装它。
# dpkg -i drone.deb
Selecting previously unselected package drone.
(Reading database ... 28077 files and directories currently installed.)
Preparing to unpack drone.deb ...
Unpacking drone (0.3.0-alpha-1442513246) ...
Setting up drone (0.3.0-alpha-1442513246) ...
Your system ubuntu 14: using upstart to control Drone
drone start/running, process 9512
#### CentOS ####
在CentOS 机器上我们要使用wget 命令从[下载链接][3]下载RPM 包。
# wget downloads.drone.io/master/drone.rpm
--2015-11-06 11:06:45-- http://downloads.drone.io/master/drone.rpm
Resolving downloads.drone.io (downloads.drone.io)... 54.231.114.18
Connecting to downloads.drone.io (downloads.drone.io)|54.231.114.18|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7763311 (7.4M) [application/x-redhat-package-manager]
Saving to: drone.rpm
100%[======================================>] 7,763,311 1.18MB/s in 20s
2015-11-06 11:07:06 (374 KB/s) - drone.rpm saved [7763311/7763311]
然后我们使用yum 安装rpm 包。
# yum localinstall drone.rpm
### 4. 配置端口 ###
安装完成之后我们要先进行配置才能让它工作起来。drone 的配置文件在**/etc/drone/drone.toml**。要让drone 的web 界面运行在http 的默认端口80 上我们可以如下所示修改配置文件里server 块对应的值。
[server]
port=":80"
### 5. 集成 Github ###
为了运行Drone我们必须设置最少一个和GitHub、GitHub 企业版、Gitlab、Gogs、Bitbucket 关联的集成点。在本文里我们只集成了github但是如果我们要集成其他的服务可以在配置文件里做相应修改。为了集成github我们需要在[github settings][4]里创建一个新的应用。
![Registering App Github](http://blog.linoxide.com/wp-content/uploads/2015/11/registering-app-github.png)
要创建一个应用,我们需要在`New Application` 页面点击`Register`,然后如下所示填表。
![Registering OAuth app github](http://blog.linoxide.com/wp-content/uploads/2015/11/registering-OAuth-app-github.png)
我们应该保证在应用的配置项里设置了**授权了的回调链接**,链接看起来像`http://drone.linoxide.com/api/auth/github.com`。然后我们点击注册应用。所有都做好之后我们会看到我们需要在我们的Drone 配置文件里配置的客户端ID 和客户端密钥。
![Client ID and Secret Token](http://blog.linoxide.com/wp-content/uploads/2015/11/client-id-secret-token.png)
在这些都完成之后我们需要使用文本编辑器编辑drone 配置文件,比如使用下面的命令。
# nano /etc/drone/drone.toml
然后我们会在drone 的配置文件里面找到`[github]` 部分,紧接着的是下面所示的配置内容
[github]
client="3dd44b969709c518603c"
secret="4ee261abdb431bdc5e96b19cc3c498403853632a"
# orgs=[]
# open=false
![Configuring Github Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-github-drone-e1446835124465.png)
### 6. 配置 SMTP 服务器 ###
如果我们想让drone 使用email 发送通知那么我们需要在SMTP 配置里面设置我们的SMTP 服务器。如果我们已经有了一个SMTP 服务那就只需要简单的使用它的配置文件就行了但是因为我们没有一个SMTP 服务器我们需要安装一个MTA 比如Postfix然后在drone 配置文件里配置好SMTP。
#### Ubuntu ####
在ubuntu 里使用下面的apt 命令安装postfix。
# apt-get install postfix
#### CentOS ####
在CentOS 里使用下面的yum 命令安装postfix。
# yum install postfix
安装好之后我们需要编辑我们的postfix 配置文件。
# nano /etc/postfix/main.cf
然后我们要把myhostname 的值替换为我们自己的FQDN比如drone.linoxide.com。
myhostname = drone.linoxide.com
现在开始配置drone 配置文件里的SMTP 部分。
# nano /etc/drone/drone.toml
找到`[smtp]` 部分补充上下面的内容。
[smtp]
host = "drone.linoxide.com"
port = "587"
from = "root@drone.linoxide.com"
user = "root"
pass = "password"
![Configuring SMTP Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-smtp-drone.png)
注意:强烈推荐把这里的**user** 和 **pass** 参数改成一个实际用户的配置。
### 7. 配置 Worker ###
如我们所知的drone 利用了docker 完成构建、测试任务我们需要把docker 配置为drone 的worker。要完成这些需要修改drone 配置文件里的`[worker]` 部分。
# nano /etc/drone/drone.toml
然后取消底下几行的注释并且补充上下面的内容。
[worker]
nodes=[
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock"
]
这里我们只设置了两个节点这意味着上面的配置文件只能同时执行2 个构建操作。要提高并发性可以增大节点的值。
[worker]
nodes=[
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock"
]
使用上面的配置文件drone 被配置为使用本地的docker 守护程序可以同时构建4个任务。
### 8. 重启 Drone ###
最后当所有的安装和配置都准备好之后我们现在要在本地的linux 机器上启动drone 服务器。
#### Ubuntu ####
因为ubuntu 14.04 使用了sysvinit 作为默认的init 系统所以只需要简单执行下面的service 命令就可以启动drone 了。
# service drone restart
要让drone 在系统启动时也自动运行,需要运行下面的命令。
# update-rc.d drone defaults
#### CentOS ####
因为CentOS 7使用systemd 作为init 系统所以只需要运行下面的systemd 命令就可以重启drone。
# systemctl restart drone
要让drone 自动运行只需要运行下面的命令。
# systemctl enable drone
### 9. 添加防火墙例外 ###
众所周知drone 默认使用了80 端口而我们又没有修改它所以我们需要配置防火墙程序开放80 端口http并允许其他机器可以通过网络连接。
#### Ubuntu 14.04 ####
iptables 是最流行的防火墙程序并且ubuntu 默认安装了它。我们需要修改iptables 开放80 端口这样我们才能让drone 的web 界面在网络上被大家访问。
# iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
# /etc/init.d/iptables save
#### CentOS 7 ####
因为CentOS 7 默认安装了systemd它使用firewalld 作为防火墙程序。为了在firewalld 上打开80端口http 服务),我们需要执行下面的命令。
# firewall-cmd --permanent --add-service=http
success
# firewall-cmd --reload
success
### 10. 访问web 界面 ###
现在我们将在我们最喜欢的浏览器上通过web 界面打开drone。要完成这些我们要把浏览器指向运行drone 的服务器。因为drone 默认使用80 端口而我们又没有修改过,所以我们只需要在浏览器里根据我们的配置输入`http://ip-address/` 或 `http://drone.linoxide.com` 就行了。在我们正确的完成了上述操作后,我们就可以看到登陆界面了。
![Login Github Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/login-github-drone-e1446834688394.png)
因为在上面的步骤里配置了Github我们现在只需要简单的选择github然后进入应用授权步骤这些完成后我们就可以进入工作台了。
![Drone Dashboard](http://blog.linoxide.com/wp-content/uploads/2015/11/drone-dashboard.png)
这里它会同步我们在github 上的代码库然后询问我们要在drone 上构建哪个代码库。
![Activate Repository](http://blog.linoxide.com/wp-content/uploads/2015/11/activate-repository-e1446835574595.png)
这一步完成后,它会要求我们在代码库里添加一个名为`.drone.yml` 的新文件并且在这个文件里定义构建的过程和配置项比如使用哪个docker 镜像,执行哪些命令和脚本来编译,等等。
我们按照下面的内容来配置我们的`.drone.yml`。
image: python
script:
- python helloworld.py
- echo "Build has been completed."
这一步完成后我们就可以使用drone 应用里的YAML 格式的配置文件来构建我们的应用了。所有对代码库的提交和改变此时都会同步到这个仓库。一旦提交完成了drone 就会自动开始构建。
![Building Application Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/building-application-drone.png)
所有操作都完成后,我们就能在终端看到构建的结果了。
![Build Success Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/build-success-drone.png)
### 总结 ###
在本文中我们学习了如何安装一个可以工作的使用drone 的持续集成平台。如果我们愿意我们甚至可以从drone.io 官方提供的服务开始工作。我们可以根据自己的需求从免费的服务或者收费服务开始。它通过漂亮的web界面和强大的功能改变了持续集成的世界。它可以集成很多第三方应用和部署平台。如果你有任何问题、建议可以直接反馈给我们谢谢。
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/setup-drone-continuous-integration-linux/
作者:[Arun Pyasi][a]
译者:[ezio](https://github.com/oska874)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://drone.io/
[2]:http://downloads.drone.io/master/drone.deb
[3]:http://downloads.drone.io/master/drone.rpm
[4]:https://github.com/settings/developers

View File

@ -0,0 +1,108 @@
装好 openSUSE Leap 42.1 之后要做的 8 件事
================================================================================
![Credit: Metropolitan Transportation/Flickr](http://images.techhive.com/images/article/2015/11/things-to-do-100626947-primary.idge.jpg)
致谢:[Metropolitan Transportation/Flickr][1]
> 你已经在你的电脑上安装了 openSUSE。这是你接下来要做的。
[openSUSE Leap 确实是个巨大的飞跃][2],它允许用户运行一个和 SUSE Linux 企业版拥有一样基因的发行版。和其它系统一样,在使用它之前需要做些优化设置。
下面是一些我在安装 openSUSE 到我的电脑上之后做的一些事情(不适用于服务器)。这里面没有强制性要求的设置,基本安装对来说你也可能足够了。但如果你想获得更好的 openSUSE Leap 体验,那就跟着我往下看吧。
### 1. 添加 Packman 仓库 ###
由于专利和授权等原因openSUSE 和许多 Linux 发行版一样不通过官方仓库repos提供一些软件解码器以及驱动等。取而代之的是通过第三方或社区仓库来提供。第一个也是最重要的仓库是“Packman”。因为这些仓库不是默认启用的我们需要添加它们。你可以通过 YaST openSUSE 的特色之一)或者命令行完成(如下方介绍)。
![o42 yast repo](http://images.techhive.com/images/article/2015/11/o42-yast-repo-100626952-large970.idge.png)
添加 Packman 仓库。
使用 YsST打开软件源部分。点击“添加”按钮并选择“社区仓库Community Repositories”。点击“下一步”。一旦仓库列表加载出来了选择 Packman 仓库。点击“确认”,然后点击“信任”导入信任的 GnuPG 密钥。
或者在终端里使用以下命令添加并启用 Packman 仓库:
zypper ar -f -n packman http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_42.1/ packman
仓库添加之后,你就能接触到更多的包了。想安装任意软件或包,打开 YaST 软件管理器,搜索并安装即可。
### 2. 安装 VLC ###
VLC 是媒体播放器里的瑞士军刀,几乎可以播放任何媒体文件。你可以从 YaST 软件管理器 或 software.opensuse.org 安装 VLC。你需要安装两个包vlc 和 vlc-codecs。
如果你用终端,运行以下命令:
sudo zypper install vlc vlc-codecs
### 3. 安装 Handbrake ###
如果你需要转码或转换视频文件格式,[Handbrake 是你的不二之选][3]。Handbrake 就在我们启用的仓库中,所以只要在 YaST 中搜索并安装它。
如果你用终端,运行以下命令:
sudo zypper install handbrake-cli handbrake-gtk
提示VLC 也能转码音频和视频文件。)
### 4. 安装 Chrome ###
openSUSE 的默认浏览器是 Firefox。但是因为 Firefox 不能胜任播放专有媒体,比如 Netflix我推荐安装 Chrome。这需要额外的工作。首先你需要从谷歌导入信任密钥。打开终端执行“wget”命令下载密钥
wget https://dl.google.com/linux/linux_signing_key.pub
然后导入密钥:
sudo rpm --import linux_signing_key.pub
现在到 [Google Chrome 网站][4] 去,下载 64 位 .rpm 文件。下载完成后执行以下命令安装浏览器:
sudo zypper install /PATH_OF_GOOGLE_CHROME.rpm
### 5. 安装 Nvidia 驱动 ###
即便你有 Nvidia 或 ATI 显卡openSUSE Leap 也能够开箱即用。但是,如果你需要专有驱动来游戏或其它目的,你可以安装这些驱动,但需要一点额外的工作。
首先你需要添加 Nvidia 源;它的步骤和使用 YaST 添加 Packman 仓库是一样的。唯一的不同是你需要在社区仓库部分选择 Nvidia。添加好了之后到 **软件管理 > 附加** 去并选择“附加/安装所有匹配的推荐包”。
![o42 nvidia](http://images.techhive.com/images/article/2015/11/o42-nvidia-100626950-large.idge.png)
它会打开一个对话框,显示所有将要安装的包,点击确认后按介绍操作。添加了 Nvidia 源之后你也可以通过命令安装需要的 Nvidia 驱动:
sudo zypper inr
(注:我没使用过 AMD/ATI 显卡,所以这方面我没有经验。)
### 6. 安装媒体解码器 ###
你安装 VLC 之后就不需要安装媒体解码器了,但如果你要使用其它软件来播放媒体的话就需要安装了。一些开发者写了脚本/工具来简化这个过程。打开[这个页面][5]并点击合适的按钮安装完整的包。它会打开 YaST 并自动安装包(当然通常你还需要提供 root 权限密码并信任 GnuPG 密钥)。
### 7. 安装你偏好的电子邮件客户端 ###
openSUSE 自带 Kmail 或 Evolution这取决于你安装的桌面环境。我用的是 Plasma自带 Kmail这个邮件客户端还有许多地方有待改进。我建议可以试试 Thunderbird 或 Evolution。所有主要的邮件客户端都能在官方仓库找到。你还可以看看我的[精心挑选的 Linux 最佳邮件客户端][6]。
### 8. 在防火墙允许 Samba 服务 ###
相比于其它发行版openSUSE 默认提供了更加安全的系统。但对新用户来说它也需要一点设置。如果你正在使用 Samba 协议分享文件到本地网络的话,你需要在防火墙允许该服务。
![o42 firewall](http://images.techhive.com/images/article/2015/11/o42-firewall-100626948-large970.idge.png)
在防火墙设置里允许 Samba 客户端和服务端
打开 YaST 并搜索 Firewall。在防火墙设置里到“允许的服务”那里你会在“要允许的服务”下面看到一个下拉列表。选择“Samba 客户端”然后点击“添加”。对“Samba 服务端”也一样地添加。都添加了之后,点击“下一步”,然后点击“完成”,现在你就可以通过本地网络从你的 openSUSE 分享文件以及访问其它机器了。
这差不多就是我以我喜欢的方式对我的新 openSUSE 系统做的所有设置了。如果你有任何问题,欢迎在评论区提问。
--------------------------------------------------------------------------------
via: http://www.itworld.com/article/3003865/open-source-tools/8-things-to-do-after-installing-opensuse-leap-421.html
作者:[Swapnil Bhartiya][a]
译者:[alim0x](https://github.com/alim0x)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.itworld.com/author/Swapnil-Bhartiya/
[1]:https://www.flickr.com/photos/mtaphotos/11200079265/
[2]:https://www.linux.com/news/software/applications/865760-opensuse-leap-421-review-the-most-mature-linux-distribution
[3]:https://www.linux.com/learn/tutorials/857788-how-to-convert-videos-in-linux-using-the-command-line
[4]:https://www.google.com/intl/en/chrome/browser/desktop/index.html#brand=CHMB&utm_campaign=en&utm_source=en-ha-na-us-sk&utm_medium=ha
[5]:http://opensuse-community.org/
[6]:http://www.itworld.com/article/2875981/the-5-best-open-source-email-clients-for-linux.html

View File

@ -1,4 +1,3 @@
【Translating by cposture 2016-03-01】
* * *
# GCC 内联汇编 HOWTO
@ -422,11 +421,11 @@ C 表达式用作 "asm" 内的汇编指令操作数。作为第一双引号内
* * *
## 7. Some Useful Recipes.
## 7. 一些实用的诀窍
Now we have covered the basic theory about GCC inline assembly, now we shall concentrate on some simple examples. It is always handy to write inline asm functions as MACROs. We can see many asm functions in the kernel code. (/usr/src/linux/include/asm/*.h).
现在我们已经介绍了关于 GCC 内联汇编的基础理论现在我们将专注于一些简单的例子。将内联汇编函数写成宏的形式总是非常方便的。我们可以在内核代码里看到许多汇编函数。usr/src/linux/include/asm/*.h
1. First we start with a simple example. Well write a program to add two numbers.
1. 首先我们从一个简单的例子入手。我们将写一个两个数相加的程序。
> `
>
@ -448,7 +447,7 @@ Now we have covered the basic theory about GCC inline assembly, now we shall con
>
> `
Here we insist GCC to store foo in %eax, bar in %ebx and we also want the result in %eax. The = sign shows that it is an output register. Now we can add an integer to a variable in some other way.
这里我们要求 GCC 将 foo 存放于 %eax将 bar 存放于 %ebx同时我们也想要在 %eax 中存放结果。'=' 符号表示它是一个输出寄存器。现在我们可以以其他方式将一个整数加到一个变量。
> `
>
@ -459,7 +458,7 @@ Now we have covered the basic theory about GCC inline assembly, now we shall con
> " addl %1,%0 ;\n"
> : "=m" (my_var)
> : "ir" (my_int), "m" (my_var)
> : /* no clobber-list */
> : /* 无修饰寄存器列表 */
> );
> </pre>
>
@ -467,9 +466,9 @@ Now we have covered the basic theory about GCC inline assembly, now we shall con
>
> `
This is an atomic addition. We can remove the instruction lock to remove the atomicity. In the output field, "=m" says that my_var is an output and it is in memory. Similarly, "ir" says that, my_int is an integer and should reside in some register (recall the table we saw above). No registers are in the clobber list.
这是一个原子加法。为了移除原子性,我们可以移除指令 'lock'。在输出域中,"=m" 表明 my_var 是一个输出且位于内存。类似地,"ir" 表明 my_int 是一个整型,并应该存在于其他寄存器(回想我们上面看到的表格)。没有寄存器位于修饰寄存器列表中。
2. Now well perform some action on some registers/variables and compare the value.
2. 现在我们将在一些寄存器/变量上展示一些操作,并比较值。
> `
>
@ -486,13 +485,13 @@ Now we have covered the basic theory about GCC inline assembly, now we shall con
>
> `
Here, the value of my_var is decremented by one and if the resulting value is `0` then, the variable cond is set. We can add atomicity by adding an instruction "lock;\n\t" as the first instruction in assembler template.
这里my_var 的值减 1 ,并且如果结果的值为 0则变量 cond 置 1。我们可以通过添加指令 "lock;\n\t" 作为汇编模板的第一条指令来添加原子性。
In a similar way we can use "incl %0" instead of "decl %0", so as to increment my_var.
以类似的方式,为了增加 my_var我们可以使用 "incl %0" 而不是 "decl %0"。
Points to note here are that (i) my_var is a variable residing in memory. (ii) cond is in any of the registers eax, ebx, ecx and edx. The constraint "=q" guarantees it. (iii) And we can see that memory is there in the clobber list. ie, the code is changing the contents of memory.
这里需要注意的点为imy_var 是一个存储于内存的变量。iicond 位于任何一个寄存器 eax、ebx、ecx、edx。约束 "=q" 保证这一点。iii同时我们可以看到 memory 位于修饰寄存器列表中。也就是说,代码将改变内存中的内容。
3. How to set/clear a bit in a register? As next recipe, we are going to see it.
3. 如何将寄存器中的一个比特位置 1 或清 0下一个诀窍我们就来看看它。
> `
>
@ -509,9 +508,9 @@ Now we have covered the basic theory about GCC inline assembly, now we shall con
>
> `
Here, the bit at the position pos of variable at ADDR ( a memory variable ) is set to `1` We can use btrl for btsl to clear the bit. The constraint "Ir" of pos says that, pos is in a register, and its value ranges from 0-31 (x86 dependant constraint). ie, we can set/clear any bit from 0th to 31st of the variable at ADDR. As the condition codes will be changed, we are adding "cc" to clobberlist.
这里ADDR 变量(一个内存变量)的 'pos' 位置上的比特被设置为 1。我们可以使用 'btrl' 来清除由 'btsl' 设置的比特位。pos 的约束 "Ir" 表明 pos 位于寄存器并且它的值为 0-31x86 相关约束)。也就是说,我们可以设置/清除 ADDR 变量上第 0 到 31 位的任一比特位。因为条件码会被改变,所以我们将 "cc" 添加进修饰寄存器列表。
4. Now we look at some more complicated but useful function. String copy.
4. 现在我们看看一些更为复杂而有用的函数。字符串拷贝。
> `
>
@ -535,9 +534,9 @@ Now we have covered the basic theory about GCC inline assembly, now we shall con
>
> `
The source address is stored in esi, destination in edi, and then starts the copy, when we reach at **0**, copying is complete. Constraints "&S", "&D", "&a" say that the registers esi, edi and eax are early clobber registers, ie, their contents will change before the completion of the function. Here also its clear that why memory is in clobberlist.
源地址存放于 esi目标地址存放于 edi然后开始拷贝当我们到达 **0** 时,拷贝完成。约束 "&S"、"&D"、"&a" 表明寄存器 esi、edi 和 eax 是早期修饰early clobber寄存器也就是说它们的内容在函数完成前会被改变。这里也很明显可以知道为什么 "memory" 会放在修饰寄存器列表。
We can see a similar function which moves a block of double words. Notice that the function is declared as a macro.
我们可以看到一个类似的函数,它能移动双字块数据。注意函数被声明为一个宏。
> `
>
@ -558,9 +557,9 @@ Now we have covered the basic theory about GCC inline assembly, now we shall con
>
> `
Here we have no outputs, so the changes that happen to the contents of the registers ecx, esi and edi are side effects of the block movement. So we have to add them to the clobber list.
这里我们没有输出,所以寄存器 ecx、esi和 edi 的内容发生改变,这是块移动的副作用。因此我们必须将它们添加进修饰寄存器列表。
5. In Linux, system calls are implemented using GCC inline assembly. Let us look how a system call is implemented. All the system calls are written as macros (linux/unistd.h). For example, a system call with three arguments is defined as a macro as shown below.
5. 在 Linux 中,系统调用使用 GCC 内联汇编实现。让我们看看如何实现一个系统调用。所有的系统调用被写成宏linux/unistd.h。例如带有三个参数的系统调用被定义为如下所示的宏。
> `
>
@ -581,10 +580,10 @@ Now we have covered the basic theory about GCC inline assembly, now we shall con
>
> `
Whenever a system call with three arguments is made, the macro shown above is used to make the call. The syscall number is placed in eax, then each parameters in ebx, ecx, edx. And finally "int 0x80" is the instruction which makes the system call work. The return value can be collected from eax.
Every system calls are implemented in a similar way. Exit is a single parameter syscall and lets see how its code will look like. It is as shown below.
无论何时调用带有三个参数的系统调用,以上展示的宏用于执行调用。系统调用号位于 eax 中,每个参数位于 ebx、ecx、edx 中。最后 "int 0x80" 是一条用于执行系统调用的指令。返回值被存储于 eax 中。
每个系统调用都以类似的方式实现。Exit 是一个单一参数的系统调用,让我们看看它的代码看起来会是怎样。它如下所示。
> `
>
> * * *
@ -601,23 +600,23 @@ Now we have covered the basic theory about GCC inline assembly, now we shall con
>
> `
The number of exit is "1" and here, its parameter is 0\. So we arrange eax to contain 1 and ebx to contain 0 and by `int $0x80`, the `exit(0)` is executed. This is how exit works.
Exit 的系统调用号是 1 同时它的参数是 0。因此我们分配 eax 包含 1ebx 包含 0同时通过 `int $0x80` 执行 `exit(0)`。这就是 exit 的工作原理。
* * *
## 8. Concluding Remarks.
## 8. 结束语
This document has gone through the basics of GCC Inline Assembly. Once you have understood the basic concept it is not difficult to take steps by your own. We saw some examples which are helpful in understanding the frequently used features of GCC Inline Assembly.
这篇文档已经将 GCC 内联汇编过了一遍。一旦你理解了基本概念,你便不难采取自己的行动。我们看了许多例子,它们有助于理解 GCC 内联汇编的常用特性。
GCC Inlining is a vast subject and this article is by no means complete. More details about the syntaxs we discussed about is available in the official documentation for GNU Assembler. Similarly, for a complete list of the constraints refer to the official documentation of GCC.
GCC 内联是一个极大的主题,这篇文章是不完整的。更多关于我们讨论过的语法细节可以在 GNU 汇编器的官方文档上获取。类似地,对于一个完整的约束列表,可以参考 GCC 的官方文档。
And of-course, the Linux kernel use GCC Inline in a large scale. So we can find many examples of various kinds in the kernel sources. They can help us a lot.
当然Linux 内核 大规模地使用 GCC 内联。因此我们可以在内核源码中发现许多各种各样的例子。它们可以帮助我们很多。
If you have found any glaring typos, or outdated info in this document, please let us know.
如果你发现任何的错别字,或者本文中的信息已经过时,请告诉我们。
* * *
## 9. References.
## 9. 参考
1. [Brennans Guide to Inline Assembly](http://www.delorie.com/djgpp/doc/brennan/brennan_att_inline_djgpp.html)
2. [Using Assembly Language in Linux](http://linuxassembly.org/articles/linasm.html)
@ -628,6 +627,6 @@ If you have found any glaring typos, or outdated info in this document, please l
* * *
via: http://www.ibiblio.org/gferg/ldp/GCC-Inline-Assembly-HOWTO.html
作者:[Sandeep.S](mailto:busybox@sancharnet.in) 译者:[](https://github.com/) 校对:[]()
作者:[Sandeep.S](mailto:busybox@sancharnet.in) 译者:[cposture](https://github.com/cposture) 校对:[]()
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,55 @@
一个Linux驱动的微波炉
================================================================================
[linux.conf.au](http://linux.conf.au/)里的人们都有一种想到什么就动手去实现的劲头。随着硬件开源运动不断地发展壮大这种劲头越来越多地与现实世界联系在一起而不仅仅停留在数字世界中。David Tulloh用他制作的[Linux驱动的微波炉 [WebM]](http://mirror.linux.org.au/linux.conf.au/2016/04_Thursday/D4.303_Costa_Theatre/Linux_driven_microwave.webm)来展示一个差劲的微波炉会多么难用,以及说明他的项目可以改造这些微波炉,使得它们不那么讨人厌。
Tulloh的故事要从他买到了一个公认很便宜的微波炉开始说起它的用户界面比其它微波炉默认的还要糟糕。设定时间时必须使劲按按钮以至于把微波炉都向后推了一段距离——而事实上必须要用力拉仓门把手才能把微波炉拖回原来的位置这形成了一个“优雅”的平衡。当然这只是极端情况。Tulloh很郁闷因为这个微波炉近十年来都没有一丁点明显的改善。他可能买到了一个又小又便宜的微波炉而且特点是大部分人不研究使用手册就不会使用它——和智能手机的对比更加明显智能手机只需知道一点点的操作指南并且被广泛使用。
改造微波炉这件事并非没有前途。“让微波炉重获新生”——这个想法催生了一个原型如果Tulloh能在想做的功能和实际需求之间找到平衡他希望把它变成一个众筹项目一个Linux驱动的微波炉。
![](https://static.lwn.net/images/2016/lca-oven-sm.jpg)
## 加一点新奇的小玩意
如果你把“Linux”和“微波炉”联系在一起的话就可能想到给微波炉加上一个智能手机式的触摸屏和网络链接然后再通过社区做一款微波炉的“革命性”的手机应用想到这些就像做菜想到分享食谱一样显而易见。但Tulloh的目标和他的原型远远超过这些他做了两个新奇的功能——热感相机和称量物体质量的称重装置。
这个热感相机提供一幅八乘八像素的温度图像精度大约为2度这足够发现一杯牛奶是否加热到沸腾或者牛排是否已经解冻到可以烹饪。不论发生哪种情况功率都可以减小或者关掉而且在必要的时候会发出警报。这可能不是第一个可以检测温度的微波炉——GE在十年前就开始卖带温度探针的微波炉了——但是一个一直工作的内置传感器比一个手工探针有用多了尤其是有一个可用的API支持的时候。
第二个新发明是一个嵌入的称重装置,它可以在加热之前称量食物(和容器)。很多食谱根据质量给出指导的烹饪时间很多微波炉支持你手动输入质量以便它帮你计算。利用内置的称重装置这一过程可以变成自动化的。在许多微波炉的转盘下面稳固地放置一个称重装置是一个机械方面的挑战不过Tulloh觉得这个问题不难处理。反而他对微波炉的设计是基于“平板”或者“平板挂车”的风格——在四角各放置一个传感器这不仅在机械实现上很简单而且很好的达到了要求。
[用户界面]
一旦你有了这些额外添加的并与逻辑引擎相连的质量温度传感器你可以去尝试更多好玩的可能。一杯刚从冰箱里拿出来的冰牛奶的质量温度分布可能会有适度误差。Tulloh发现这种情况可以被检测到而且提供一些有关的像“煮沸”或者“加热”的选项也是容易做到的下面有一个模拟的界面可点击操作的版本请点击右边链接 [here](http://mwgui.tulloh.id.au/)
![](https://static.lwn.net/images/2016/lca-ovengui-sm.png)
## 改造陈旧的东西
除了才开发出来的新功能Tulloh还想要提升微波炉原本就有的功能。可能不是所有微波炉的门把手都像Tulloh那个廉价微波炉的一样僵硬但是很少有微波炉的把手设计得让残疾人也能轻松使用这些缺陷都是可以改进的。监管规定尤其是在美国要求微波炉在仓门关闭的时候给出可靠的确认。这种确认必须经得起糊弄以防伪劣做法——比如往仓门闭合时固定的槽位里插入一根短杆——误使微波炉在仓门开着的时候工作。事实上必须要有两个相互关联的开关如果它们给出的结果不一致
保险丝就必须断开,以便强制报修。Tulloh认为提供一个磁力门闩有更大的灵活性可以由软件简单控制并且类似[磁性钥匙锁](https://en.wikipedia.org/wiki/Magnetic_keyed_lock)的磁控方式可以让磁力门闩确认微波炉门是否确实关闭。
微波炉的另一个痛点是它会发出令人厌烦的声音。Tulloh去掉了蜂鸣器并且使用香蕉派类似于树莓派的单片机开发板控制他的微波炉。这样就可以通过一个文本转语音系统用令人愉悦而且可配置的提示音来引导使用者。显然下一步就是装上一个用于语音控制的麦克风。
许多微波炉除了定时和设置功率档位之外还可以做更多的事情——它们为烹饪、加热、化冻等提供一系列的功率组合。加上一个精确的温度传感器这份菜单看起来还可以大大扩展。Andrew Tridgell对一个问题很好奇给巧克力调温——一个需要非常精确的温度控制的过程——是否可行。Tulloh没有这方面的经验他不敢保证这一定可以但是这个实验结果的确值得期待。即使没做成这件事它也显出了潜在价值——社区接下来可以更进一步去做这件事。
## 实用性怎么样?
Tulloh十分乐意向全世界分享这个linux驱动的微波炉他希望看到因为这件事形成一个社区并且想看到它接下来的走势。买一个现成的微波炉并且替换掉里面的电子元件看起来不是一个可行的点子。最后的结果可能会很糟而买一个小巧智能的微波炉必然要花掉比自己改造更多的钱但是潜在的顾客不想在他们的厨房里看到乱七八糟又不协调的东西。
许多零件都是现成可以买到的磁控管、处理器板、热传感器等等像USB接口的热传感器而且都很容易安装。软件原型当然也开源在[GitHub](https://github.com/lod?tab=repositories)上。外壳和仓门则有不小的挑战性很可能需要定制。Tulloh想要通过提供左侧开仓门的微波炉和颜色多样化的选项来转逆境为机遇。
一个对读者的快速调查很少有人会贸然承诺他会为了一个全新的升级过的烤箱付出1000澳大利亚元。当然很难知道是否会有充足的时间和足够多的读者来完成这个调查。这整个项目看起来很有趣。所以Tulloh的[博客](http://david.tulloh.id.au/category/microwave/) (点击这里)也很值得一看。
------------------------------------------------------------------------------
via: https://lwn.net/Articles/674877/
作者Neil Brown
译者yuba0604(https://github.com/yuba0604)
译者水平有限,敬请指正。(lizhengyu@gmail.com)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,135 @@
替代 Copy 的 Linux 最佳云服务
===============================================
![](http://itsfoss.com/wp-content/uploads/2016/02/Linux-cloud-services.jpg)
云存储服务 Copy 即将关闭,我们 Linux 用户是时候该寻找其他优秀的 **Copy 之外的 Linux 云存储服务**。
全部文件将会在 2016年5月1号 被删除。如果你是 Copy 的用户,你应该保存你的文件并将它们移至其他地方。
在过去的两年里Copy 已经成为了我最喜爱的云存储。它为我提供了大量的免费空间并且带有桌面平台的原生应用程序,包括 Linux 和移动平台如 iOS 和 Android。
对我来说它是一个非常棒的云存储我在它那里获得了大量的免费存储空间380 GB并且享受着桌面系统和移动系统之间的无缝体验。但是这些“慷慨的免费存储空间”注册送 15 GB每次推荐再送 5 GB使我想到如果 Copy 没有拿下商业用户,他们将会马上停业。如此巨大的免费存储空间意味着他们并不像 Dropbox 那样只针对个人用户。
当我从 Copy.com 看到它即将关闭的消息我的担忧成真了。事实上Copy 并不孤独。它的母公司 [Barracuda Networks](https://www.barracuda.com/) 正经历一段困难时期,并且已经[雇佣 Morgan Stanley 寻找合适的买家](http://www.bloomberg.com/news/articles/2016-02-01/barracuda-networks-said-to-work-with-morgan-stanley-to-seek-sale)。
无论什么理由,我们所知道的是 Copy 将会成为历史,我们需要寻找相似的**优秀的 Linux 云服务**。我之所以强调 Linux是因为其他流行的云存储服务如[微软的 OneDrive](https://onedrive.live.com/about/en-us/) 和 [Google Drive](https://www.google.com/drive/),都没有提供原生的 Linux 客户端。微软这么做并不意外,但是谷歌对 Linux 的冷漠令人震惊。
## Linux 下 Copy 的最佳替代者
现在,作为一个 Linux 用户,你需要云存储服务具备什么?让我们猜猜:
- 大量的免费空间。毕竟,个人用户无法每月支付巨额款项。
- 原生的 Linux 客户端。因此你能够使用提供的服务,方便地同步文件,而不用做一些特殊的调整或者定时执行脚本。
- 其他桌面系统的客户端,比如 Windows 和 OS X。便携性是必要的并且同步设备间的文件是一种很好的缓解。
- Android 和 iOS 的移动应用程序。在今天的现代世界里,你需要连接所有设备。
我不将自托管的云服务计算在内,比如 OwnCloud 或 [Seafile](https://www.seafile.com/en/home/) ,因为它们需要自己建立和运行一个服务器。这不适合所有想要类似 Copy 的云服务的家庭用户。
让我们看看可以用于替代 Linux 下 Copy.com 的服务有什么。
## Mega
![](http://itsfoss.com/wp-content/uploads/2016/02/Mega-Linux.jpg)
如果你是一个 Its FOSS 的普通读者,你可能已经看过我之前的一篇有关[Mega on Linux](http://itsfoss.com/install-mega-cloud-storage-linux/)的文章。这个云服务由因 [Megaupload 丑闻](https://en.wikipedia.org/wiki/Megaupload)而臭名昭著的 [Kim Dotcom](https://en.wikipedia.org/wiki/Kim_Dotcom) 提供。这也使一些用户怀疑它,因为 Kim Dotcom 已经很长一段时间成为美国当局的目标。
Mega 拥有方便免费云服务下你所期望的一切。它给每个个人用户提供 50 GB 的免费存储空间。提供Linux 和其他平台下的原生客户端,并带有端到端的加密。原生的 Linux 客户端运行良好,可以无缝地跨平台同步。你也能在浏览器上查看操作你的文件。
### 优点:
- 50 GB 的免费存储空间
- 端到端的加密
- Linux 和其他平台下的原生客户端,例如 WindowsMac OS XAndroidiOS
### 缺点:
- Mega 拥有者见不得人的过去
[Mega](https://mega.nz/)
## Hubic
![](http://itsfoss.com/wp-content/uploads/2016/02/hubic.jpeg)
Hubic 是一个来自法国公司的云服务。Hubic 在注册时也提供了 25 GB 免费存储空间。你可以通过推荐Hubic给朋友将空间扩大至 50 GB (对免费用户来说)。
Hubic 的官方 Linux 客户端至今已经处于 beta 阶段两年了,并且只有命令行界面。我没有去测试移动版本。
Hubic 拥有一些不错的功能。除了简单的用户界面、文件共享等等,它还有备份的功能,你可以定期地归档你的重要文件。
### 优点:
- 25 GB 免费存储空间,可扩大至 50 GB
- 支持多个平台
- 备份功能
### 缺点:
- beta 版本的 Linux 客户端,只支持命令行
[Hubic](https://hubic.com/)
## pCloud
![](http://itsfoss.com/wp-content/uploads/2016/02/pCloud-Linux.jpeg)
pCloud 是另一款来自欧洲的产品不过这次来自法国的邻国瑞士。pCloud 专注于加密和安全,为每位注册用户提供 10 GB 的免费存储空间;你可以通过邀请好友、在社交媒体上分享链接等方式把空间增加到 20 GB。
它拥有云服务的所有标准特性例如文件共享、同步、选择性同步等等。pCloud 也有跨平台原生客户端,当然包括 Linux。
Linux 客户端容易使用,并在 Linux Mint 17.3 下的有限测试中表现良好。
### 优点:
- 10 GB 免费存储空间,可扩大至 20 GB
- 运行良好的带有 GUI 的 Linux 客户端
### 缺点:
- 加密是一个付费功能
[pCloud](https://www.pcloud.com/)
## Yandex Disk
![](http://itsfoss.com/wp-content/uploads/2016/02/Yandex.jpg)
俄罗斯互联网巨头 Yandex 拥有 Google 所有的同类产品:搜索引擎、分析服务、网站管理工具、邮箱、网页浏览器和云存储服务。
Yandex Disk 在注册时提供了 10 GB 的免费云存储空间。它有多平台的原生客户端,包括 Linux。然而官方的 Linux 客户端只是命令行而已。你可以获取[非官方的 GUI 版本的 Yandex Disk 客户端](https://mintguide.org/tools/265-yd-tools-gui-indicator-for-yandexdisk-free-cloud-storage-in-linux-mint.html)。Yandex Disk 支持文件共享链接,同时带有其他标准的云存储特性。
### 优点:
- 10 GB 的免费存储空间,可通过推荐的方式扩大至 20 GB
### 缺点:
- 只有命令行客户端
[Yandex Disk](https://disk.yandex.com/)
## 慎重考虑后的省略
我从列表中省略了 [Dropbox](https://www.dropbox.com/) 和 [SpiderOak](https://spideroak.com/)。Dropbox 对 Linux 来说非常优秀,但它的免费存储空间只有 2 GB。在过去几年里我设法将其扩大到超过 21 GB但那是另一个故事了。
SpiderOak 也仅提供了 2 GB 的免费存储空间,你无法在网页浏览器上操作文件。
OwnCloud 需要你自己搭建和维护服务器,因此并非人人适用,而且严格来说它也不符合典型云服务的定义。
## 结论
如果你问我用什么来替代 Copy我的答案是 Mega它拥有大量的免费云存储空间和优秀的 Linux 客户端。在这份 **Linux 下最佳云存储服务** 的列表中,你的选择是什么?你更喜欢哪一个呢?
------------------------------------------------------------------------------
via: http://itsfoss.com/cloud-services-linux/
作者:[ABHISHEK][a]
译者:[cposture](https://github.com/cposture)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/

View File

@ -1,58 +0,0 @@
让sudo在用户输错密码时侮辱用户
===========================================================
在 Linux 终端中你可以找到很多乐趣。不过我今天要讲的不是[在终端中跑火车](http://itsfoss.com/ubuntu-terminal-train/)。
我今天要讲的技巧可以让你放松心情。前面一篇文章中,你学习了[如何在命令行中增加 sudo 的密码超时时间](http://itsfoss.com/change-sudo-password-timeout-ubuntu/);今天的文章中,我会向你展示如何让 sudo 在输错密码的时候侮辱你(或者其他人)。
对我说的感到疑惑?先看看这张 gif了解一下 sudo 是如何在你输错密码之后侮辱你的。
![](http://itsfoss.com/wp-content/uploads/2016/02/sudo-insults-Linux.gif)
那么,你为什么要这么做?毕竟,被侮辱可不会让你一天过得开心,不是么?
对我来说,这样的小技巧很有趣,比干巴巴的“密码错误”提示有意思多了。另外,我还可以借此向朋友展示自由开源软件的趣味。我相信,你也会有自己使用这个技巧的理由。
## 在sudo中启用侮辱
你可以在`sudo`配置中增加下面的行来启用侮辱功能:
```
Defaults insults
```
让我们看看该如何做。打开终端并使用下面的命令:
```
sudo visudo
```
这会在 [nano](http://www.nano-editor.org/) 中打开配置文件。据我所知,传统的 visudo 应该在 vi 中打开 `/etc/sudoers` 文件,但 Ubuntu 及基于它的发行版会使用 nano 打开。既然说到了 vi这里有一份 [vi 速查表](http://itsfoss.com/download-vi-cheat-sheet),可以在你决定使用 vi 的时候派上用场。
回到编辑 sudoers 文件这件事上你需要找到包含“Defaults”的行。简单起见只需在文件开头加上“Defaults insults”就像这样
![](http://itsfoss.com/wp-content/uploads/2016/02/sudo-insults-Linux-Mint.png)
如果你正在使用nano使用`Ctrl+X`来退出编辑器。在退出的时候它会询问你是否保存更改。要保存更改按下“Y”。
保存 sudoers 文件之后,打开终端,在任意命令中使用 sudo故意输错密码并享受辱骂吧
sudo 有时还挺毒舌的。看见没?它甚至在我再次输错之后威胁我,哈哈!
![](http://itsfoss.com/wp-content/uploads/2016/02/sudo-insults-Linux-Mint-1.jpeg)
如果你喜欢这个终端技巧,你也可以查看[其他终端技巧的文章](http://itsfoss.com/category/terminal-tricks/)。如果你有其他有趣的技巧,在评论中分享。
------------------------------------------------------------------------------
via: http://itsfoss.com/sudo-insult-linux/
作者:[ABHISHEK][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/

View File

@ -0,0 +1,39 @@
NXP 发布了一款超小型物联网 64 位 ARM 处理器
=========================================================================
**标签**:[ARM][1], [物联网][2], [NXP][3], [NXP 半导体][4]
![](http://1u88jj3r4db2x4txp44yqfj1.wpengine.netdna-cdn.com/wp-content/uploads/2016/02/nxp-930x556.jpg)
[NXP 半导体][5]发布了一款号称世界上最小的、面向物联网IoT的低功耗 64 位 ARM 处理器。
这款小型的 QorIQ LS1012A 为电池供电、尺寸受限的应用(包括运行物联网应用的设备,以及各种智能互联设备)提供了网络级的安全和性能加速。如果物联网要在 2020 年达到 1.7 万亿美元的市场潜力IDC 研究人员的估算),就需要像 NXP 这款处理器一样的芯片。该处理器在德国纽伦堡的 Embedded World 2016 上正式亮相。
该芯片采用 64 位 ARMv8 架构,带有网络包加速和内置安全功能,占用 9.6 平方毫米的空间,功耗大约 1 瓦。潜在应用包括下一代物联网网关、便携式娱乐平台、高性能便携存储、移动硬盘、相机的移动存储、平板以及其他可充电设备。
除此之外LS1012A 还是第一款为新兴的基于对象的存储方案设计的处理器。这类方案基于直接连接到数据中心以太网的智能硬盘,要求处理器必须足够小,才能直接集成到硬盘的电路板上。
NXP 高级副总裁兼数字网络部总经理 Tareq Bustami 说“NXP LS1012 处理器突破性地结合了低功耗、超小封装和网络级性能,是消费级和物联网相关应用的理想选择。这种独一无二的能力组合解放了物联网设计者和开发者,让他们可以在这个高速增长的市场中构想并创造更多创新产品。”
NXP 表示,这是唯一一款在单芯片中集成了全面高速外设的 1 瓦级 64 位处理器,这意味着更低的系统功耗。得益于创新的封装,该处理器还可以用在低成本的电路板上。
NXP 的 LS1012A 将于 2016 年 4 月开始发货现在即可订购。NXP 在全球 35 个国家拥有超过 45,000 名员工。
--------------------------------------------------------------------------------
via: http://venturebeat.com/2016/02/21/nxp-unveils-a-small-and-tiny-64-bit-arm-processor-for-the-internet-of-things/
作者:[DEAN TAKAHASHI][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://venturebeat.com/author/dean-takahashi/
[1]:http://venturebeat.com/tag/arm/
[2]:http://venturebeat.com/tag/internet-of-things/
[3]:http://venturebeat.com/tag/nxp/
[4]:http://venturebeat.com/tag/nxp-semiconductors/
[5]:http://www.nxp.com/

View File

@ -0,0 +1,50 @@
新的Docker数据中心管理套件使容器化变得更加井然有序
===============================================================================
![](https://tctechcrunch2011.files.wordpress.com/2016/02/shutterstock_119411227.jpg?w=738)
[Docker][1] 今天宣布了一个新的容器控制中心称为“Docker 数据中心”DDC。这是一个集成管理控制台让大大小小的企业都能够创建、管理和分发容器。
DDC 由多个商业组件组成,包括 Docker Universal Control Plane也是今天发布的和 Docker Trusted Registry也包括 Docker Engine 这样的开源组件。其目的是让公司能够在一个集中的管理界面中,管理 Docker 化应用的整个生命周期。
产品高级副总裁 Scott Johnston 告诉 TechCrunch“是客户促使了这个新工具的产生。公司不仅喜欢 Docker 带来的敏捷性,也希望在创建和分发容器的过程中进行管治、安全与管理。”
Johnston 说“公司把这称为容器即服务CaaS主要是因为当客户来咨询这类管理方案时他们就是这样描述的。”
![](https://tctechcrunch2011.files.wordpress.com/2016/02/screen-shot-2016-02-23-at-7-56-54-am.png?w=680&h=401)
> 图片来源Docker
像许多开源项目那样Docker 首先赢得了大批开发者的追随,但它也很快在企业中流行起来,这些企业希望对其开发者的使用进行跟踪和管理。
这就是DDC设计的目的。它给开发者创建容器化应用的敏捷性也让运维变得井井有条。
实践中,这意味着开发者可以创建一系列容器化的组件,获准部署后,就能得到完全认证的镜像。之后,开发者就可以从这一系列认证镜像中拉取所需的组件,而不必每次都重新发明轮子。这可以加速应用的开发和部署(理论上进一步提升了容器本身带来的灵活性)。
这一点吸引了 Beta 客户 ADP。这家工资服务业巨头尤其喜欢这个可以提供给开发人员的中央认证镜像仓库。
ADP 的 CTO Keith Fulton 在声明中称:“作为我们将关键业务微服务化倡议的一部分ADP 正在研究能让开发人员利用经 IT 审核的中央库和安全的核心服务进行快速迭代的方案。”
Solomon Hykes 于 2010 年创办了 dotCloud并在 2013 年把公司的重心转移到 Docker 上,随后于 [2014 年 8 月卖掉了 dotCloud][2],完全聚焦于 Docker。
根据 CrunchBase 的消息,这家公司几年来经过 5 轮融资,势如破竹般地获得了 1 亿 8000 万美元融资(其中 1 亿 6 千 8 百万美元是转型为 Docker 之后获得的。吸引投资者的是Docker 提供了一种称为容器的现代应用分发方式,可以用来构建、管理和分发分布式应用。
容器化让开发者可以创建由许多分布在不同服务器上的小模块组成的分布式应用,而不是运行在单台服务器上的单体应用。
DDC每月每节点150美金起。
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/calico-virtual-private-networking-docker/
作者:[ Ron Miller][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://techcrunch.com/author/ron-miller/
[1]: https://www.docker.com/
[2]: http://techcrunch.com/2014/08/04/docker-sells-dotcloud-to-cloudcontrol-to-focus-on-core-container-business/

View File

@ -0,0 +1,56 @@
怎样将开源经历添加到你的简历中去
==================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/lightning-test.png?itok=aMceg0Vg)
在这篇文章中,我将分享一些方法,帮助你利用曾为开源做出贡献的经历,在技术领域的求职中脱颖而出。
凡事预则立,不预则废。在你贸然投身一个新领域,或者准备花一整晚彻底修改简历之前,值得先明确你要找的工作有什么特征。你的简历是一份有说服力的文书,因此你必须了解你的读者,才能让它发挥全部潜力。看你简历的,是任何需要你的技能并且恰好有预算聘用你的人。编辑简历时,请从这些读者的角度来阅读它:你看起来像一个你自己愿意聘用的候选人吗?
我发现,针对目标职位列出一张理想候选人的特质清单,往往很有帮助。我会结合自己的经历、阅读职位描述,以及向担任类似职位的同事请教,来整理这张清单。人际网络和技术会议上很容易找到乐意提供这类建议的人;很多人喜欢谈论自己,邀请他们讲讲自己的故事,既能拓展你的知识面,也能让双方都感到愉快。当你和别人谈论他们的职业路线时,你不仅会明白怎样得到你想要的工作,还能了解哪些情况和行为应该避免,以免与工作机会失之交臂。
例如,针对某个职位整理出的关键特质清单,可能看起来像下面这样:
### 技术:
- 有持续集成方面的经验,熟悉 [Jenkins][1] 者优先
- 深厚的脚本编写背景如Python和Ruby
- 精通eclipse的使用
- 基本的git和bash知识
### 个人特质:
- 自我学习者
- 良好的交流和文档技巧
- 在团队开发方面富有经验
- 熟悉问题跟踪issue tracking工作流
### 不完全符合要求也要申请
记住,你不必满足职位描述中列出的每一条标准才能得到这份工作。职位描述刻画的往往是刚刚离开这个岗位的人:如果你在入职时就已经掌握了描述中要求的一切,那这份要做上几年的工作反而不会给你带来挑战,也不会拓展你的技能。如果清单上有些技能你不具备而让你紧张,不妨检查一下自己是否有来自其他经历、可以与之媲美的技能。
例如,即使有些人从未使用过 Jenkins他也可能从之前使用 [Buildbot][2] 或 [Travis CI][3] 的项目经验中,理解持续集成测试的原理。
如果你申请的是一家大公司,他们通常有专门的部门和完整的筛选流程,来确保不会聘用无法胜任的候选人。也就是说,你要做的只是提交申请,判断是否合适是他们的工作。不要替他们提前把自己拒之门外。
现在你已经知道了你的目标是什么,也知道这次求职需要哪些技能。下一步要做的,取决于你已经积累的经验。
### 把你的经历和职位要求关联起来
列出一张你过去几年参与过的所有项目的清单。快速得到这张清单的一个方法是:打开你 GitHub 个人主页的仓库标签页,并过滤掉从别处 fork 来的项目。此外,还可以看看你[所属的 GitHub 组织][4]中是否有你曾参与甚至主导过的项目。如果你已经有一份简历,请确保上面列出了你所有的经历。
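如果你更喜欢用脚本来生成这张清单,下面是一段极简的示意代码(使用公开的 GitHub API其中用户名 your-username 是需要替换的占位符),列出你名下所有非 fork 的仓库:
```
import json
import urllib.request

# 占位符:把 your-username 换成你自己的 GitHub 用户名
url = "https://api.github.com/users/your-username/repos?per_page=100"

with urllib.request.urlopen(url) as response:
    repos = json.load(response)

# 过滤掉 fork 来的项目,只保留你自己创建的仓库
for repo in repos:
    if not repo["fork"]:
        print(repo["full_name"])
```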
再想一想你在哪些 IRC 频道拥有管理员权限,这可以算作潜在的领导经历。回顾你参加过的聚会和会议,把你组织过或以志愿者身份参与过的活动加到清单上。最后翻一翻前几年的日程,标注出所有志愿活动、担任导师的经历以及公开演讲。
现在进入比较艰难的环节了:把职位要求的技能清单和个人经历清单一一对照。我喜欢给职位要求的每个特质标上一个字母或数字,然后在每一段体现出该特质的经历旁标注相同的符号。拿不准的时候,宁可把它加上:这看起来像是在自夸,但总好过显得无能。
我们写简历时,常常因为害怕夸大自己的技能而束手束脚。这时换个角度思考会有帮助:不要问“我组织那次聚会时是否展示了领导才能”,而是问“一个组织过聚会的人,算不算表现出了领导才能和规划能力?”
如果你很清楚自己过去几年的业余时间花在了哪里,或者写了很多代码,你现在可能面临一个幸福的烦恼:经验太多,一页纸的简历根本装不下。那么,凡是不能证明你想要展示的技能的经历,就果断删掉。如果删减后的内容仍然超过一页,就按优先级排序,比如优先保留那些有故事可讲、或与职位所需技术最相关的经验。
做这件事时,你可能会明显发现自己在某项技能上还缺少过硬的经历。这时可以考虑使用类似 OpenHatch 的问题聚合器,找一个使用你从未用过的工具和技术的开源项目,借此锻炼这项技能。
### 让你的简历更加漂亮
一份简历是否美观,取决于它的简洁、清晰和布局。每一段经历都应该提供恰到好处的信息,让读者立刻明白你为什么把它写进来。每一类信息都应该使用一致的排版格式:哪怕只有一处斜体、右对齐或其他与整体风格不协调的地方,都会让读者觉得突兀。
使用排版工具可以让上面这些目标更容易实现。我喜欢使用 [LaTeX][5],因为它的宏系统能让视觉上的一致性更容易保持,而且很多面试官一眼就能认出它。你也可以选择 [LibreOffice][6] 或 HTML这取决于你的技能以及你希望以何种方式发布简历。
记住,以数字方式提交的简历是可以按关键字检索的。因此,描述工作经历时,使用与招聘启事相同的术语缩写会对求职大有帮助。为了方便面试官阅读,请把最重要的信息放在最前面。
程序员在为文档排版时,往往难以把握留白和布局的平衡。我最喜欢的技巧是退后一步、审视文档的留白是否得当:把 PDF 全屏显示或者打印出来,然后对着镜子看它。如果你用的是 LibreOffice Writer还可以保存一份简历副本再把字体换成一种你看不懂的文字。这两种技巧都能强迫你跳出具体内容以全新的方式审视文档的整体布局让你的注意力从“那句话措辞不当”这类批评转向“这一行只有一个词看起来很奇怪”这类问题。
最后,再次检查你的简历在它将要展示的媒介上是否显示正常。如果以网页形式发布,就在不同屏幕尺寸的浏览器中测试效果;如果是 PDF 文档,就在手机或朋友的电脑上打开它,确保所需的字体都可用。
### 接下来的步骤
最后,别让你的辛苦付诸东流:把简历同步到你的 LinkedIn 账号上,招聘人员找上门来时也不要惊讶。即使他们介绍的职位并不适合你,你也可以借助他们的时间和兴趣,了解你的简历哪些地方写得好、哪些地方还需要改进。
--------------------------------------------------------------------------------
via: https://opensource.com/business/16/2/add-open-source-to-your-resume
作者:[edunham][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/edunham
[1]: https://jenkins-ci.org/
[2]: http://buildbot.net/
[3]: https://travis-ci.org/
[4]: https://github.com/settings/organizations
[5]: https://www.latex-project.org/
[6]: https://www.libreoffice.org/download/libreoffice-fresh/

View File

@ -0,0 +1,215 @@
用Python打造你的Eclipse
==============================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/lightbulb_computer_person_general_.png?itok=ZY3UuQQa)
Eclipse 高级脚本环境([EASE][1])项目虽然还在开发中,但不得不承认它非常强大,让我们可以快速打造自己的 Eclipse 开发环境。
得益于 Eclipse 强大的框架,可以通过其内建的插件系统全方位地扩展 Eclipse。然而编写和部署一个新插件还是十分笨重即使只是需要一个额外的小功能。现在依托 EASE你可以方便地使用 Python 或者 JavaScript 脚本语言来完成扩展。
本文基于我在今年 EclipseCon North America 大会上的[演讲][2],将介绍如何安装 EASE 的 Python 环境,以及如何利用 Python 的强大能力为你的 IDE 增压。
### 安装并运行 "Hello World"
本文中的例子使用基于 Java 实现的 Python 解释器 Jython。EASE 可以直接安装到 Eclipse 中。本例使用 Eclipse [Mars][3],并安装 EASE 环境包和 Jython 引擎。
使用 Eclipse 安装对话框(`Help > Install New Software...`),从更新站点 [http://download.eclipse.org/ease/update/nightly][4] 安装 EASE
选择下列组件:
- EASE Core feature
- EASE core UI feature
- EASE Python Developer Resources
- EASE modules (Incubation)
这会安装 EASE 及其模块。我们尤其关心 Resources 模块,它提供了访问 Eclipse 工作空间、项目和文件的 API。
![](https://opensource.com/sites/default/files/1_installease_nightly.png)
安装成功后,接下来从 [https://dl.bintray.com/pontesegger/ease-jython/][5] 安装 Jython 引擎。完成后测试一下:新建一个项目,在其中创建 hello.py 文件,输入:
```
print "hello world"
```
选中这个文件右击选择“Run as -> EASE Script”。这样就可以在控制台看到“hello world”的输出。
配置完成,现在就可以轻松地使用 Python 来控制工作空间和项目了。
### 提升你的代码质量
维持良好的代码质量本身就是一件麻烦事,当你面对庞大的代码库、许多工程师共同参与时尤其如此。脚本可以减轻其中一部分痛苦,比如批量格式化代码,或者[统一文件的行结束符][6]以便于比较。脚本还能做一些更棒的事情,例如用 Eclipse 标记marker高亮代码。下面的例子会在 Java 文件中探测“printStackTrace”的调用并为其添加任务标记。请看[源码][7]。
运行方法把文件拷贝到工作空间右击并选择“Run as -> EASE Script”
```
loadModule('/System/Resources')

from org.eclipse.core.resources import IMarker

for ifile in findFiles("*.java"):
    file_name = str(ifile.getLocation())
    print "Processing " + file_name
    with open(file_name) as f:
        for line_no, line in enumerate(f, start=1):
            if "printStackTrace" in line:
                marker = ifile.createMarker(IMarker.TASK)
                marker.setAttribute(IMarker.TRANSIENT, True)
                marker.setAttribute(IMarker.LINE_NUMBER, line_no)
                marker.setAttribute(IMarker.MESSAGE, "Fix in Sprint 2: " + line.strip())
```
如果你的 Java 文件中调用了 printStackTrace你就能在编辑器侧边栏看到自动加上的标记。
![](https://opensource.com/sites/default/files/2_codequality.png)
### 自动化繁琐任务
同时参与多个项目时,你肯定要完成许多繁杂、重复的任务,比如在所有源文件头部加上版权声明,或者在采用新框架时批量更新文件。例如,当从 Tycho 迁移到 Maven 时,我们必须给每个项目添加 pom.xml 文件,用 Python 可以轻松完成;而自从 Tycho 支持无 pom 构建后,我们又需要移除那些不再需要的 pom 文件,同样只需几行代码。又比如,下面这个脚本可以给工作空间中每个打开的项目加上 README.md请看源代码 [add_readme.py][8]。
拷贝文件到工作空间,右击并选择"Run as -> EASE script"
```
loadModule('/System/Resources')

for iproject in getWorkspace().getProjects():
    if not iproject.isOpen():
        continue

    ifile = iproject.getFile("README.md")
    if not ifile.exists():
        contents = "# " + iproject.getName() + "\n\n"
        if iproject.hasNature("org.eclipse.jdt.core.javanature"):
            contents += "A Java Project\n"
        elif iproject.hasNature("org.python.pydev.pythonNature"):
            contents += "A Python Project\n"
        writeFile(ifile, contents)
```
脚本运行后会给每个打开的项目加入 README.mdJava 和 Python 项目还会自动加上一行相应的描述。
![](https://opensource.com/sites/default/files/3_tedioustask.png)
### 构建新功能
用 Python 脚本可以快速实现你需要的附加功能,或者给团队和用户快速搭建演示。例如,自动保存正在编辑的文件,就是 Eclipse 目前还不支持的一个功能;即使这个功能不久后就会提供,你现在就可以马上拥有一个每 30 秒自动保存一次的编辑器。下面是主方法的片段,完整代码请看 [autosave.py][9]。
```
def save_dirty_editors():
    workbench = getService(org.eclipse.ui.IWorkbench)
    for window in workbench.getWorkbenchWindows():
        for page in window.getPages():
            for editor_ref in page.getEditorReferences():
                part = editor_ref.getPart(False)
                if part and part.isDirty():
                    print "Auto-Saving", part.getTitle()
                    part.doSave(None)
```
在运行脚本之前你需要勾选“Allow Scripts to run code in UI thread”设定它位于 Window > Preferences > Scripting 中。然后把脚本添加到工作空间右击并选择“Run as > EASE Script”。每 10 秒,自动保存的信息就会在控制台输出一次。要关掉自动保存脚本,只需点击控制台的红色停止按钮。
![](https://opensource.com/sites/default/files/4_prototype.png)
### 快速扩展用户界面
EASE 最棒的一点是可以通过脚本与 UI 元素挂钩,来调整你的 IDE例如在菜单中新建一个按钮。不需要编写 Java 代码或新的插件,只需要增加几行代码。
下面是一个简单的示例脚本,用来创建三个新项目:
```
# name        : Create fruit projects
# toolbar     : Project Explorer
# description : Create fruit projects

loadModule("/System/Resources")

for name in ["banana", "pineapple", "mango"]:
    createProject(name)
```
脚本开头那些特殊的注释会让 EASE 在 Project Explorer 工具栏上增加一个按钮。下面这个脚本则用来删除这三个项目。源码请看 [createProjects.py][10] 和 [deleteProjects.py][11]。
```
# name        : Delete fruit projects
# toolbar     : Project Explorer
# description : Get rid of the fruit projects

loadModule("/System/Resources")

for name in ["banana", "pineapple", "mango"]:
    project = getProject(name)
    project.delete(0, None)
```
要让脚本的按钮生效把脚本放进一个名为“ScriptsProject”的项目然后在 Window > Preferences > Scripting > Script Locations 中点击“Add Workspace”按钮选择 ScriptsProject 项目。该项目会在启动时默认加载,你会发现 Project Explorer 工具栏上出现了这两个按钮,用它们就可以快速创建和删除项目。
![](https://opensource.com/sites/default/files/5_buttons.png)
### 整合第三方工具
你总会需要 Eclipse 生态系统之外的工具,这时把它们包装进脚本来调用会非常方便。下面这个简单的例子整合了 explorer.exe并把它加入右键菜单这样点击一下就可以在文件浏览器中打开当前文件所在的目录。请看源码 [explorer.py][12]。
```
# name        : Explore from here
# popup       : enableFor(org.eclipse.core.resources.IResource)
# description : Start a file browser using current selection

loadModule("/System/Platform")
loadModule('/System/UI')

selection = getSelection()

if isinstance(selection, org.eclipse.jface.viewers.IStructuredSelection):
    selection = selection.getFirstElement()

if not isinstance(selection, org.eclipse.core.resources.IResource):
    selection = adapt(selection, org.eclipse.core.resources.IResource)

if isinstance(selection, org.eclipse.core.resources.IFile):
    selection = selection.getParent()

if isinstance(selection, org.eclipse.core.resources.IContainer):
    runProcess("explorer.exe", [selection.getLocation().toFile().toString()])
```
要让菜单项出现像之前一样把脚本加入“ScriptsProject”项目。在文件上右击看看弹出菜单中是不是出现了“Explore from here”选项。
![](https://opensource.com/sites/default/files/6_explorer.png)
Eclipse 高级脚本环境EASE提供了一套很棒的扩展能力让 Eclipse IDE 可以用 Python 轻松扩展。这个项目虽然还处于初期,但[更多更棒的功能][13]正在加紧开发中;如果你想做出贡献,请到[论坛][14]参与讨论。
2016 年的 [EclipseCon North America][15] 会议上还会发布更多 EASE 的细节。我的演讲 [Scripting Eclipse with Python][16] 不仅会介绍 Jython还会涉及 C-Python以及用这些功能进行扩展的实战例子。
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/2/how-use-python-hack-your-ide
作者:[Tracy Miranda][a]
译者:[VicYu/Vic020](http://vicyu.net)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/tracymiranda
[1]: https://eclipse.org/ease/
[2]: https://www.eclipsecon.org/na2016/session/scripting-eclipse-python
[3]: https://www.eclipse.org/downloads/packages/eclipse-ide-eclipse-committers-451/mars1
[4]: http://download.eclipse.org/ease/update/nightly
[5]: https://dl.bintray.com/pontesegger/ease-jython/
[6]: http://code.activestate.com/recipes/66434-change-line-endings/
[7]: https://gist.github.com/tracymiranda/6556482e278c9afc421d
[8]: https://gist.github.com/tracymiranda/f20f233b40f1f79b1df2
[9]: https://gist.github.com/tracymiranda/e9588d0976c46a987463
[10]: https://gist.github.com/tracymiranda/55995daaea9a4db584dc
[11]: https://gist.github.com/tracymiranda/baa218fc2c1a8e898194
[12]: https://gist.github.com/tracymiranda/8aa3f0fc4bf44f4a5cd3
[13]: https://eclipse.org/ease/
[14]: https://dev.eclipse.org/mailman/listinfo/ease-dev
[15]: https://www.eclipsecon.org/na2016
[16]: https://www.eclipsecon.org/na2016/session/scripting-eclipse-python

View File

@ -0,0 +1,71 @@
# 在NASA中使用开源工具进行图像处理
关键词NASA图像处理Node.jsOpenCV
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/nasa_spitzer_space_pink_spiral.jpg?itok=3XEUstkl)
去年夏天,我在位于格伦研究中心的 [NASA](http://www.nasa.gov/centers/glenn/home/index.html) [GVIS](https://ocio.grc.nasa.gov/gvis/) 实验室实习,把我对开源的热情也带到了那里。我的任务是改进我们实验室对 Dan Schroeder 开发的一个开源流体动力学模拟器的贡献。模拟器原本允许用户用鼠标绘制障碍物,再据此计算流体动力学模型;我们团队的贡献是加入了图像处理代码,分析实况视频的每一帧,以显示真实物体如何与流体相互作用。而且,我们还要做更多的事情。
我们想要让图像处理部分更加健全,所以我致力于改善图像处理库。
基于新的库,模拟器可以检测轮廓、进行空间坐标变换以及找到物体的质心。图像处理代码并不直接参与流体动力学模拟器的物理计算:它用摄像头检测物体,获取物体轮廓,为流体模拟器创建障碍物;然后流体模拟器运行,其输出结果会被投影到真实物体上。
我的目标是通过以下三种方式改进模拟器:
1. 找寻物体的轮廓
2. 找寻物体的质心
3. 能相对物体的质心进行精确的坐标变换
我的导师建议我安装 [Node.js](http://nodejs.org/)、[OpenCV](http://opencv.org/) 和 [Node.js bindings for OpenCV](https://github.com/peterbraden/node-opencv)。在等待软件安装的过程中,我查看了 OpenCV 绑定的 [GitHub 主页](https://github.com/peterbraden/node-opencv)上的示例源码。我发现示例源码是用 JavaScript 写的,而我还不懂 JavaScript于是我在 Codecademy 上学了一些课程。两天后,我对 JavaScript 依旧生疏,不过我还是开始了我的项目……它包含了更多的 JavaScript。
示例的轮廓检测代码工作得很好。事实上,它让我只用几个小时就完成了第一个目标!下面是对一幅图片进行轮廓检测的结果:
![](https://opensource.com/sites/default/files/resize/image_processing_nasa_1-520x293.jpg)
> 包含所有轮廓的原始图。
示例的轮廓检测代码工作得有点好过头了:不仅物体的轮廓被检测到了,整张图片中的其他轮廓也都被检测到了,这会导致模拟器要和那些没用的轮廓打交道。这是一个严重的问题,因为它会返回错误的数据。为了避免模拟器接触到不想要的轮廓,我加了一个面积约束:只有面积落在一定范围内的轮廓才会被画出来。加上面积约束之后,轮廓干净多了。
![](https://opensource.com/sites/default/files/resize/image_processing_nasa_2-520x293.jpg)
> 过滤后的轮廓,包含了阴影轮廓
虽然无关的轮廓没有了,但是图像还有一个问题:图中本该只有一个完整的轮廓,它却来回绕了自己两次,没有完整地圈起来。面积在这里派不上用场,所以必须试试其他方法。
这一次,我不再直接寻找轮廓,而是先把图片转换成二值图。二值图是只有黑白两色像素的图片。为了得到二值图,我先把彩色图转成灰度图,再用阈值函数处理。阈值函数遍历图片每个像素的值,小于 30 就把该像素改成黑色,否则改成白色。原始图片转换成二值图之后,结果是这样:
![](https://opensource.com/sites/default/files/resize/image_processing_nasa_3-520x293.jpg)
> 二值图。
然后我获取了二值图的轮廓,结果是一个更干净的轮廓,没有了阴影轮廓。
![](https://opensource.com/sites/default/files/image_processing_nasa_4.jpg)
> 最后的干净轮廓。
这时,我已经可以获取干净的轮廓并计算质心了。可惜的是,我没有足够的时间完成以质心为基准的坐标变换。由于实习时间所剩不多,我开始考虑在这段有限的时间里还能做些什么,其中之一就是边界矩形。边界矩形是包含图片轮廓的最小四边形,它很重要,因为它是在页面上缩放轮廓的关键。虽然很遗憾没有时间用边界矩形做更多事情,但我仍然想多了解它,因为这是个很有用的工具。
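原文的实现用的是 Node.js 的 OpenCV 绑定;这里按文中描述的流程,用 Python 的 OpenCV 接口cv2给出一个重建的草图灰度化、以 30 为阈值二值化、找轮廓、按面积过滤,再计算质心和边界矩形。这只是示意代码,并非 GVIS 实验室的原始实现(文件名和面积阈值都是示例值):
```
import cv2

image = cv2.imread("frame.jpg")          # 读入一帧图像(文件名为示例)

# 1. 转灰度,再以 30 为阈值得到二值图
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

# 2. 在二值图上找轮廓OpenCV 4.x 返回两个值),并按面积过滤无关轮廓
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = [c for c in contours if cv2.contourArea(c) > 1000]

for contour in contours:
    # 3. 用图像矩计算质心
    m = cv2.moments(contour)
    if m["m00"] != 0:
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        print("centroid:", cx, cy)
    # 4. 计算边界矩形(在页面上缩放轮廓的关键)
    x, y, w, h = cv2.boundingRect(contour)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("result.jpg", image)
```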
最后,经过以上的努力,我完成了对图像的处理!
![](https://opensource.com/sites/default/files/resize/image_processing_nasa_5-521x293.jpg)
> 最后图像,红色的边界矩形和质心。
图像处理代码写完之后,我用它替换了模拟器中的旧代码。出乎意料的是,它真的可以工作。
嗯,基本可以。
程序有内存泄露,每 1/10 秒泄露 100MB。让我庆幸的是原因不在我的代码坏消息是修复它并不在我的控制范围之内。好消息是我可以使用一种并不算理想的变通办法不断检查模拟器的内存占用一旦超过 1 GB 就重新启动模拟器。
在 NASA 实验室,我们使用很多的开源软件,没有这些开源软件的帮助,我不可能完成这些工作。
* * *
via: [https://opensource.com/life/16/3/image-processing-nasa](https://opensource.com/life/16/3/image-processing-nasa)
作者:[Lauren Egts](https://opensource.com/users/laurenegts)
译者:[willowyoung](https://github.com/willowyoung)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,50 @@
Docker 1.11 采用了开放容器项目OCP组件
=======================================================
![](http://images.techhive.com/images/article/2015/01/docker-620x465-100559026-primary.idge.jpg)
>Docker参与的开源项目完成了一个闭环最新构建的Docker采用了Docker贡献给OCP的组件。
[Docker 1.11][1]最大的新闻并不是它的功能而是它使用了在OCP支持下的标准化的组件版本。
去年Docker 把它的 [runC][2] 核心贡献给了 OCP作为构建容器工具的基础同时贡献的还有 [containerd][3],它作为守护进程或者服务端,用于控制 runC 的实例。Docker 1.11 现在使用的正是这些捐赠出去并公开开发的版本。
Docker 此举回应了“它的容器生态仍[主要由 Docker 自身决定][6]”的说法:它把容器规范和运行时细节贡献给 OCP 并不是作秀,而是希望项目将来的开发越开放、越广泛越好。
![](http://images.techhive.com/images/article/2016/04/docker-runc-100656060-large.idge.png)
>Docker 1.11已经用贡献给OCP的runC和containerd进行了重构。runC如果需要可以被交换出去并被替换。
runC 的[两位主要提交者][7]来自 Docker但来自 Virtuozzo以 Parallels 闻名、OpenShift、Project Atomic、华为、GE Healthcare、SUSE Linux 的开发者也都是提交的常客。
Docker 1.11 中一个更深远的变化是运行时可以更换了。此前Docker 自己的运行时是唯一可用的选择,批评者认为这会限制用户。现在 runC 运行时是可更换的:虽然 Docker 发布时仍将 runC 作为默认引擎,但任何兼容的引擎都可以换进来。Docker 还希望将来更换引擎时无需杀死并重启正在运行的容器,不过这被列为今后的改进计划。
Docker 也把围绕 OCP 的开发流程当作在内部更好地打造产品的方式。它在 1.11 版的[官方博客中称][8]“将 Docker 切分成独立的工具,意味着更专注的维护者,并最终带来更高质量的软件。”
除了修复长期存在的问题、确保 Docker 的 runC/containerd 跟上步伐Docker 还在 1.11 中加入了一些改进Docker Engine 现在支持 VLAN 和 IPv6 服务发现,并且会自动在多个具有相同别名的容器之间执行 DNS 轮询负载均衡。
------------------------------------------------------------------------------
via: http://www.infoworld.com/article/3055966/open-source-tools/docker-111-adopts-open-container-project-components.html
作者:[Serdar Yegulalp][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.infoworld.com/author/Serdar-Yegulalp/
[1]: https://blog.docker.com/2016/04/docker-engine-1-11-runc/
[2]: http://runc.io/
[3]: https://containerd.tools/
[4]: http://www.infoworld.com/resources/16373/application-virtualization/the-beginners-guide-to-docker#tk.ifw-infsb
[5]: http://www.infoworld.com/newsletters/signup.html#tk.ifw-infsb
[6]: http://www.infoworld.com/article/2876801/application-virtualization/docker-reorganization-grows-up.html
[7]: https://github.com/opencontainers/runc/graphs/contributors
[8]: https://blog.docker.com/2016/04/docker-engine-1-11-runc/

View File

@ -1,284 +0,0 @@
LFCS 系列第六讲组装分区为RAID设备——创建和管理系统备份
=========================================================
Linux 基金会发起了全新的 LFCSLinux Foundation Certified SysadminLinux 基金会认证系统管理员)认证,旨在让世界各地的人都有机会参加 LFCS 考试,证明自己有能力在 Linux 系统上执行中级系统管理任务。该认证考察的能力包括:维护正在运行的系统和服务、全面的监控和分析,以及决定何时向上游团队请求支持。
![Linux Foundation Certified Sysadmin Part 6](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-6.png)
LFCS 系列第六讲
以下视频介绍了 Linux 基金会认证程序。
youtube 视频
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
本讲是本系列(共十讲)教程的第六讲,在这一讲里,我们将会解释如何将分区组装为 RAID 设备,以及如何创建和管理系统备份。这些都是 LFCS 认证中的必备知识。
### 了解RAID ###
独立磁盘冗余阵列RAID是一种将多个硬盘组合成一个逻辑单元的存储解决方案它提供数据冗余并改善硬盘的读写性能。
然而,实际的容错能力和磁盘 I/O 性能取决于如何将多个硬盘组装成磁盘阵列。根据可用的设备和容错/性能的需求RAID 被分为不同的级别,你可以在 Tecmint.com 上参考 RAID 系列文章,获得每个 RAID 级别更详细的解释。
- RAID Guide: [What is RAID, Concepts of RAID and RAID Levels Explained][1]
我们选择用于创建、组装、管理、监视软件RAID的工具叫做mdadm(multiple disk admin的简写)。
```
---------------- Debian and Derivatives ----------------
# aptitude update && aptitude install mdadm
```
```
---------------- Red Hat and CentOS based Systems ----------------
# yum update && yum install mdadm
```
```
---------------- On openSUSE ----------------
# zypper refresh && zypper install mdadm
```
#### 组装分区作为RAID设备 ####
组装已有分区作为RAID设备的过程由以下步骤组成。
**1. 使用mdadm创建阵列**
如果先前其中一个分区被格式化或者作为了另一个RAID阵列的一部分你会被提示以确认创建一个新的阵列。假设你已经采取了必要的预防措施以避免丢失重要数据那么可以安全地输入Y并且按下回车。
```
# mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1
```
![Creating RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Creating-RAID-Array.png)
创建RAID阵列
**2. 检查阵列的创建状态**
在创建了 RAID 阵列之后,你可以使用以下命令检查阵列的状态:
```
# cat /proc/mdstat
or
# mdadm --detail /dev/md0 [More detailed summary]
```
![Check RAID Array Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Array-Status.png)
检查RAID阵列的状态
**3. 格式化RAID设备**
如本系列[Part 4][2]所介绍的,按照你的需求/要求采用某种文件系统格式化你的设备。
**4. 监控RAID阵列服务**
让监控服务时刻监视你的 RAID 阵列,并把 `mdadm --detail --scan` 命令的输出结果添加到 /etc/mdadm/mdadm.confDebian 及其衍生版)或者 /etc/mdadm.confCentOS/openSUSE如下。
```
# mdadm --detail --scan
```
![Monitor RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Monitor-RAID-Array.png)
监控RAID阵列
```
# mdadm --assemble --scan [Assemble the array]
```
为了确保服务能够开机启动需要以root权限运行以下命令。
**Debian 及其衍生版**
Debian 及其衍生版能够通过下面的步骤使服务默认开机启动:
```
# update-rc.d mdadm defaults
```
在/etc/default/mdadm文件中添加下面这一行
```
AUTOSTART=true
```
**CentOS 和 openSUSE基于 systemd**
```
# systemctl start mdmonitor
# systemctl enable mdmonitor
```
**CentOS 和 openSUSE基于 SysVinit**
```
# service mdmonitor start
# chkconfig mdmonitor on
```
**5. 检查RAID磁盘故障**
在支持冗余的 RAID 级别中,故障的驱动器在需要时会被替换。当磁盘阵列中的设备出现故障时,只有在创建阵列时预留了备用设备,阵列才会自动启动重建。
![Check RAID Faulty Disk](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Faulty-Disk.png)
检查RAID故障磁盘
否则,我们需要手动连接一个额外的物理驱动器到系统上,并且运行:
```
# mdadm /dev/md0 --add /dev/sdX1
```
其中 /dev/md0 是出现问题的阵列,而 /dev/sdX1 是新添加的设备。
**6. 分解一个工作阵列**
如果你需要使用工作阵列的设备创建一个新的阵列,你可能不得不去分解已有工作阵列——(可选步骤)
```
# mdadm --stop /dev/md0 # Stop the array
# mdadm --remove /dev/md0 # Remove the RAID device
# mdadm --zero-superblock /dev/sdX1 # Overwrite the existing md superblock with zeroes
```
**7. 设置邮件通知**
你可以配置一个用于发送通知的有效邮件地址或者系统账号(确保在mdadm.conf文件中有下面这一行)。——(可选步骤)
```
MAILADDR root
```
在这种情况下来自RAID后台监控程序所有的通知将会发送到你的本地root账号的邮件箱中。其中一个类似的通知如下。
说明此次通知事件和第5步中的例子相关。一个设备被标志为错误并且一个空闲的设备自动地被mdadm加入到阵列。我们用完了所有"健康的"空闲设备,因此我们得到了通知。
![RAID Monitoring Alerts](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Monitoring-Alerts.png)
RAID监控通知
#### 了解RAID级别 ####
**RAID 0**
阵列总大小是最小分区大小的n倍n是阵列中独立磁盘的个数(你至少需要两个驱动器/磁盘)。运行下面命令,使用/dev/sdb1和/dev/sdc1分区组装一个RAID 0 阵列。
```
# mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1
```
常见用途:用于支持性能比容错更重要的实时应用程序的设置
**RAID 1 (又名镜像/Mirroring)**
阵列总大小等于最小分区大小(你至少需要两个驱动器/磁盘)。运行下面命令,使用/dev/sdb1和/dev/sdc1分区组装一个RAID 1 阵列。
```
# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
```
常见用途:操作系统的安装或者重要的子文件夹,例如 /home
**RAID 5 (又名奇偶校验码盘/drives with Parity)**
阵列总大小是最小分区大小的 (n-1) 倍,余下一个分区大小的空间用于奇偶校验(冗余)计算(你至少需要 3 个驱动器/磁盘)。
说明:你可以指定一个空闲设备(/dev/sde1在出现故障时自动替换故障的分区。运行下面的命令使用 /dev/sdb1、/dev/sdc1、/dev/sdd1 和 /dev/sde1 组装一个 RAID 5 阵列,其中 /dev/sde1 作为空闲分区。
```
# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1
```
常见用途Web和文件服务
**RAID 6 (又名双重奇偶校验码盘/drives with double Parity)**
阵列总大小为 (n*s)-2*s其中 n 为阵列中独立磁盘的个数s 为最小磁盘大小。例如4 块 1 TB 的磁盘组成的 RAID 6 阵列,可用容量为 (4×1)-2×1 = 2 TB。
说明:你可以指定一个空闲分区(在这个例子为/dev/sdf1)替换问题出现时的故障部分(分区)。
运行下面命令,使用/dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1和/dev/sdf1组装RAID 6阵列其中/dev/sdf1作为空闲分区。
```
# mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --spare-devices=1 /dev/sdf1
```
常见用途:大容量、高可用性要求的文件服务器和备份服务器。
**RAID 1+0 (又名镜像条带/stripe of mirrors)**
因为RAID 1+0是RAID 0 和 RAID 1的组合所以阵列总大小是基于两者的公式计算的。首先计算每一个镜像的大小然后再计算条带的大小。
```
# mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 --spare-devices=1 /dev/sdf1
```
常见用途需要快速IO操作的数据库和应用服务器
#### 创建和管理系统备份 ####
记住RAID 固然好处多多,但它不能替代备份!需要的话就在黑板上写上 1000 遍,总之请时刻牢记这一点。在开始之前我们要说明,系统备份并没有放之四海而皆准的解决方案,但在规划备份策略时,下面这些问题是需要考虑的。
- 你的系统将用于什么场景?(桌面还是服务器?如果是后者,最重要的服务及其配置是什么?)
- 你每隔多久备份你的系统?
- 你需要备份的数据是什么(比如 文件/文件夹/数据库转储)?你还可以考虑是否需要备份大型文件(比如音频和视频文件)。
- 这些备份将会存储在哪里(物理位置和媒体)
**备份你的数据**
方法 1使用 dd 命令备份整个磁盘。你可以在任意时间点,通过创建一个精确的镜像来备份一整块硬盘或者一个分区。注意当设备离线时(即未被挂载、没有任何进程在对它做 I/O 操作)这种方法效果最好。
这种备份方法的缺点是:镜像将具有和磁盘或分区一样的大小,即使实际数据只占用其中很小的比例。比如,如果你想为一个 20GB 但只使用了 10% 的分区创建镜像,镜像文件仍将是 20GB。换句话说它不仅包含了实际数据还备份了整个分区。如果你需要设备的完整备份可以考虑这个方法。
**从现有的设备创建一个镜像文件**
```
# dd if=/dev/sda of=/system_images/sda.img
```
或者
```
--------------------- 可选地,你可以压缩镜像文件 ---------------------
# dd if=/dev/sda | gzip -c > /system_images/sda.img.gz
```
**从镜像文件恢复备份**
```
# dd if=/system_images/sda.img of=/dev/sda
```
或者
```
--------------------- 根据你创建镜像文件时的选择(译者注:比如压缩) ---------------------
# gzip -dc /system_images/sda.img.gz | dd of=/dev/sda
```
方法 2使用 tar 命令备份指定的文件/文件夹——本系列的 [Part 3][3] 已经讲过。如果你只想备份指定的文件/文件夹(配置文件、用户主目录等),可以使用这种方法。
方法3使用rsync命令同步文件。rsync是一种多功能远程和本地文件复制工具。如果你想要从网络设备备份或同步文件rsync是一种选择。
无论你是在同步两个本地文件夹,还是同步本地文件夹和挂载到本地文件系统的远程文件夹,基本语法都是一样的:
```
# rsync -av source_directory destination_directory
```
在这里,-a 递归遍历子目录(如果它们存在的话),维持符号链接、时间戳、权限以及原本的属主/属组,-v 显示详细过程。
![rsync Synchronizing Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronizing-Files.png)
rsync 同步文件
除此之外如果你想增加在网络上传输数据的安全性你可以通过rsync使用ssh协议。
**通过ssh同步本地 → 远程文件夹**
```
# rsync -avzhe ssh backups root@remote_host:/remote_directory/
```
在这个示例中,本地主机上的 backups 文件夹将与远程主机上的 /root/remote_directory 的内容同步。在这里,-h 选项以人类可读的格式显示文件大小,-e 选项用于指定通过 ssh 连接。
![rsync Synchronize Remote Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronize-Remote-Files.png)
rsync 同步远程文件
**通过ssh同步远程 → 本地 文件夹**
在这种情况下,把前面示例中的源文件夹和目标文件夹对调即可:
```
# rsync -avzhe ssh root@remote_host:/remote_directory/ backups
```
请注意,这只是 rsync 的三个使用示例(也是你最可能遇到的情形)。更多有关 rsync 命令的示例和用法,可以查看下面的文章。
- 另请阅读:[10 rsync Commands to Sync Files in Linux][4]
### 总结 ###
作为系统管理员,你需要确保系统尽可能良好地运行。只要做好充分准备,用 RAID 这样的存储技术和日常系统备份来保障数据完整性,你就是安全的。
如果你有关于完善这篇文章的问题、评论或者进一步的想法,欢迎在下面畅所欲言。此外,也请考虑通过你的社交网络账号分享这个系列的文章。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/creating-and-managing-raid-backups-in-linux/
作者:[Gabriel Cánepa][a]
译者:[cpsoture](https://github.com/cposture)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
[2]:http://www.tecmint.com/create-partitions-and-filesystems-in-linux/
[3]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/
[4]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/