pull the newest lctt master 2017.05.31

This commit is contained in:
zhousiyu325 2017-05-31 12:12:41 +08:00
commit 2c1b422fb0
5 changed files with 161 additions and 163 deletions


@@ -1,54 +0,0 @@
alim0x translating
Faster machine learning is coming to the Linux kernel
============================================================
### The addition of heterogeneous memory management to the Linux kernel will unlock new ways to speed up GPUs, and potentially other kinds of machine learning hardware
![Faster machine learning is coming to a Linux kernel near you](http://images.techhive.com/images/article/2015/12/machine_learning-100633721-primary.idge.jpg)
>Credit: Thinkstock
It's been a long time in the works, but a memory management feature intended to give machine learning or other GPU-powered applications a major performance boost is close to making it into one of the next revisions of the kernel.
Heterogeneous memory management (HMM) allows a device's driver to mirror the address space of a process under its own memory management. As Red Hat developer Jérôme Glisse [explains][10], this makes it easier for hardware devices like GPUs to directly access the memory of a process without the extra overhead of copying anything. It also doesn't violate the memory protection features afforded by modern OSes.
One class of application that stands to benefit most from HMM is GPU-based machine learning. Libraries like OpenCL and CUDA would be able to get a speed boost from HMM. HMM does this in much the same way as [speedups being done to GPU-based machine learning][11], namely by leaving data in place near the GPU, operating directly on it there, and moving it around as little as possible.
These kinds of speed-ups for CUDA, Nvidia's library for GPU-based processing, would only benefit operations on Nvidia GPUs, but those GPUs currently constitute the vast majority of the hardware used to accelerate number crunching. However, OpenCL was devised to write code that could target multiple kinds of hardware (CPUs, GPUs, FPGAs, and so on), so HMM could provide much broader benefits as that hardware matures.
There are a few obstacles to getting HMM into a usable state in Linux. First is kernel support, which has been in the works for quite some time. HMM was first proposed as a Linux kernel patchset [back in 2014][12], with Red Hat and Nvidia both involved as key developers. The amount of work involved wasn't trivial, but the developers believe code could be submitted for potential inclusion within the next couple of kernel releases.
The second obstacle is video driver support, which Nvidia has been working on separately. According to Glisse's notes, AMD GPUs are likely to support HMM as well, so this particular optimization won't be limited to Nvidia GPUs. AMD has been trying to ramp up its presence in the GPU market, potentially by [merging GPU and CPU processing][13] on the same die. However, the software ecosystem still plainly favors Nvidia; there would need to be a few more vendor-neutral projects like HMM, and OpenCL performance on a par with what CUDA can provide, to make real choice possible.
The third obstacle is hardware support, since HMM requires a hardware feature called replayable page faults to work. Only Nvidia's Pascal line of high-end GPUs supports this feature. In a way that's good news, since it means Nvidia will only need to provide driver support for one piece of hardware (requiring less work on its part) to get HMM up and running.
Once HMM is in place, there will be pressure on public cloud providers with GPU instances to [support the latest-and-greatest generation of GPU][14]. That means not just swapping old-school Nvidia Kepler cards for bleeding-edge Pascal GPUs; as each succeeding generation of GPU pulls further away from the pack, supporting optimizations like HMM will provide a strategic advantage.
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/3196884/linux/faster-machine-learning-is-coming-to-the-linux-kernel.html
Author: [Serdar Yegulalp][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:http://www.infoworld.com/author/Serdar-Yegulalp/
[1]:https://twitter.com/intent/tweet?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html&via=infoworld&text=Faster+machine+learning+is+coming+to+the+Linux+kernel
[2]:https://www.facebook.com/sharer/sharer.php?u=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html
[3]:http://www.linkedin.com/shareArticle?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html&title=Faster+machine+learning+is+coming+to+the+Linux+kernel
[4]:https://plus.google.com/share?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html
[5]:http://reddit.com/submit?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html&title=Faster+machine+learning+is+coming+to+the+Linux+kernel
[6]:http://www.stumbleupon.com/submit?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html
[7]:http://www.infoworld.com/article/3196884/linux/faster-machine-learning-is-coming-to-the-linux-kernel.html#email
[8]:http://www.infoworld.com/article/3152565/linux/5-rock-solid-linux-distros-for-developers.html#tk.ifw-infsb
[9]:http://www.infoworld.com/newsletters/signup.html#tk.ifw-infsb
[10]:https://lkml.org/lkml/2017/4/21/872
[11]:http://www.infoworld.com/article/3195437/machine-learning-analytics-get-a-boost-from-gpu-data-frame-project.html
[12]:https://lwn.net/Articles/597289/
[13]:http://www.infoworld.com/article/3099204/hardware/amd-mulls-a-cpugpu-super-chip-in-a-server-reboot.html
[14]:http://www.infoworld.com/article/3126076/artificial-intelligence/aws-machine-learning-vms-go-faster-but-not-forward.html


@@ -1,108 +0,0 @@
Top 4 CDN services for hosting open source libraries
============================================================
### Content delivery networks accelerate your website's images, CSS files, JS files, and other static content.
![Top 4 CDN services for hosting open source libraries](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/file_system.jpg?itok=s2b60oIB "Top 4 CDN services for hosting open source libraries")
>Image credit: [Open Clip Art Library][3], which released it explicitly into the **[public domain][1]** ([see here][4]). Modified by Jen Wike Huger.
A CDN, or content delivery network, is a network of strategically placed servers around the world that delivers files to users faster. A traditional CDN will accelerate your website's images, CSS files, JS files, and any other piece of static content. This lets website owners speed up all of their own content, and it also provides them with additional features and configuration options. These premium services typically require payment based on the amount of bandwidth a project uses.
However, if your project doesn't justify the cost of implementing a traditional CDN, the use of an open source CDN may be more suitable. Typically, these types of CDNs allow you to link to popular web-based libraries (CSS/JS frameworks, for example), which are then delivered to your web visitors from the free CDN's servers. Although CDN services for open source libraries do not allow you to upload your own content to their servers, they can help you accelerate libraries globally and improve your website's redundancy.
CDNs host projects on a vast network of servers, so website maintainers need to modify the asset links in their site's HTML to point at the open source CDN's URL followed by the path to the resource. Depending on whether you're linking to a JavaScript or CSS library, the link will live in either a `<script>` or a `<link>` tag.
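For instance, here is a minimal sketch of that swap, using two of the providers covered below (the local file paths and the jQuery and Bootstrap versions are purely illustrative):

```html
<!-- Before: static assets served from your own origin server -->
<link rel="stylesheet" href="/css/bootstrap.min.css">
<script src="/js/jquery.min.js"></script>

<!-- After: the same libraries referenced from open source CDNs -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
```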
Let's explore four popular CDN services for open source libraries.
### JsDelivr
[JsDelivr][5] is an open source CDN provider that uses the networks of premium CDN providers (KeyCDN, Stackpath, and Cloudflare) to deliver open source project assets. A few highlights of jsDelivr include:
* Search from over 2,100 libraries
* 110 POP locations
* CDN is accessible in Asia and China
* API support
* No traffic limits
* Full HTTPS support
All snippets start off with the custom jsDelivr URL [https://cdn.jsdelivr.net/][6], followed by the name of the project, the version number, and so on. You can also configure jsDelivr to generate the URL with the script tag and to enable SRI (Subresource Integrity) for added security.
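As a sketch of that pattern, a jsDelivr reference and its SRI-enabled variant might look like the following; the library path is illustrative, and the `integrity` value is a placeholder for the hash jsDelivr generates for you:

```html
<!-- jsDelivr URL: CDN prefix, then project name, version, and file -->
<script src="https://cdn.jsdelivr.net/jquery/3.2.1/jquery.min.js"></script>

<!-- The same reference with Subresource Integrity (SRI) enabled -->
<script src="https://cdn.jsdelivr.net/jquery/3.2.1/jquery.min.js"
        integrity="sha384-REPLACE_WITH_HASH_FROM_JSDELIVR"
        crossorigin="anonymous"></script>
```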
### Cdnjs
[Cdnjs][7] is another popular open source CDN provider that's similar to jsDelivr. This service also offers an array of popular JavaScript and CSS libraries that you can choose from to link within your web project. This service is sponsored by CDN providers Cloudflare and [KeyCDN][8]. A few highlights of cdnjs include:
* Search from over 2,900 libraries
* Used by over 1 million websites
* Supports HTTP/2
* Supports HTTPS
Similar to jsDelivr, with cdnjs you also have the option to simply copy the asset URL with or without the script tag and SRI.
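A typical cdnjs snippet follows the `/ajax/libs/<library>/<version>/<file>` pattern, for example (the same jQuery file as above, with the version chosen only for illustration):

```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
```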
### Google Hosted Libraries
The [Google Hosted Libraries][9] site allows you to link to popular JavaScript libraries that are hosted on Google's powerful open source CDN network. This open source CDN solution doesn't offer as many libraries or features as jsDelivr or cdnjs; however, a high level of reliability and trust can be expected when linking to Google's Hosted Libraries. A few highlights of Google's open source CDN include:
* HTTPS support
* Files are served with CORS and Timing-Allow headers
* Offers the latest version of each library
All Google Hosted Libraries files start with the URL [https://ajax.googleapis.com/][10], followed by the project's name, version number, and file name.
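Putting that together, a Google Hosted Libraries reference looks like this (the version is illustrative):

```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
```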
### Microsoft Ajax CDN
The [Microsoft Ajax Content Delivery Network][11] is quite similar to Google Hosted Libraries in that it only hosts popular libraries. However, two major differences set the Microsoft Ajax CDN apart from Google Hosted Libraries: Microsoft provides both CSS and JS libraries, and it also offers multiple versions of each library. A few highlights of the Microsoft Ajax CDN include:
* HTTPS support
* Previous versions of each library are often available
All Microsoft Ajax files begin with the URL [http://ajax.aspnetcdn.com/ajax/][12], and like the others, are followed by the library's name, version number, etc.
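Following that pattern, a Microsoft Ajax CDN reference might look like this (the version is illustrative; HTTPS is also supported):

```html
<script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-3.2.1.min.js"></script>
```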
If your project or website isn't ready to take advantage of a premium CDN service, but you would still like to accelerate vital aspects of your site, then using an open source CDN can be a great solution. These services accelerate the delivery of third-party libraries that would otherwise be served from your origin server, causing unnecessary load and slower speeds for distant users.
_Which open source CDN provider do you prefer to use and why?_
--------------------------------------------------------------------------------
About the author:
Cody Arsenault - Cody is passionate about all things web performance, SEO and entrepreneurship. He is a web performance advocate at KeyCDN and works towards making the web faster.
------------
via: https://opensource.com/article/17/4/top-cdn-services
Author: [Cody Arsenault][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://opensource.com/users/codya
[1]:https://en.wikipedia.org/wiki/public_domain
[2]:https://opensource.com/article/17/4/top-cdn-services?rate=lgZwEmWt7QXtuMhB-lnHWQ-jxknQ0Kh4YOfqdFGer5w
[3]:https://en.wikipedia.org/wiki/Open_Clip_Art_Library
[4]:https://openclipart.org/share
[5]:http://www.jsdelivr.com/
[6]:https://cdn.jsdelivr.net/
[7]:https://cdnjs.com/
[8]:https://www.keycdn.com/
[9]:https://developers.google.com/speed/libraries/
[10]:https://ajax.googleapis.com/
[11]:https://www.asp.net/ajax/cdn
[12]:http://ajax.aspnetcdn.com/ajax/
[13]:https://opensource.com/user/128076/feed
[14]:https://opensource.com/users/codya


@@ -1,3 +1,4 @@
Translating by chenxinlong
An introduction to Linux's EXT4 filesystem
============================================================
@@ -260,7 +261,7 @@ David Both - David Both is a Linux and Open Source advocate who resides in Ralei
via: https://opensource.com/article/17/5/introduction-ext4-filesystem
Author: [David Both][a]
Translator: [译者ID](https://github.com/译者ID)
Translator: [译者ID](https://github.com/chenxinlong)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).


@@ -0,0 +1,52 @@
Faster machine learning is coming to the Linux kernel
============================================================
### The addition of heterogeneous memory management to the Linux kernel will unlock new ways to speed up GPUs, and tap the potential of other machine learning hardware
![Faster machine learning is coming to a Linux kernel near you](http://images.techhive.com/images/article/2015/12/machine_learning-100633721-primary.idge.jpg)
>Credit: Thinkstock
A memory management feature that has been in development for a long time will give machine learning and other GPU-powered applications a major performance boost, and it is set to arrive in one of the next few Linux kernel releases.
Heterogeneous memory management (HMM) allows a device's driver to mirror the address space of a process under its own memory management. As Red Hat developer Jérôme Glisse [explains][10], this lets hardware devices such as GPUs access a process's memory directly, without the extra overhead of copying. It also doesn't violate the memory protection features provided by modern operating systems.
One class of application that stands to benefit most from HMM is GPU-based machine learning. Libraries such as OpenCL and CUDA would get a speed boost from HMM. HMM does this in much the same way as the [speedups being applied to GPU-based machine learning][11]: by leaving data in place near the GPU, operating on it directly there, and moving it around as little as possible.
Speedups like this for CUDA, Nvidia's GPU-based processing library, would only benefit operations on Nvidia GPUs, which also happen to be the main hardware currently used to accelerate data processing. However, OpenCL was designed for writing code that can target multiple kinds of hardware (CPUs, GPUs, FPGAs, and so on), so HMM could deliver much broader benefits as that hardware matures.
There are still a few obstacles to getting HMM into a usable state in Linux. The first is kernel support, which was unclear for quite some time. HMM was first proposed as a Linux kernel patchset [back in 2014][12], with Red Hat and Nvidia both involved as key developers. The amount of work involved isn't trivial, but the developers believe the code can be submitted, perhaps to be included within the next few kernel releases.
The second obstacle is video driver support, which Nvidia has been working on separately. According to Glisse's notes, AMD GPUs are likely to support HMM as well, so this particular optimization won't be limited to Nvidia GPUs. AMD has been trying to grow its share of the GPU market, possibly by [merging GPU and CPU processing][13] onto the same die. However, the software ecosystem still clearly favors Nvidia; for real choice to become possible, there would need to be more vendor-neutral projects like HMM, and OpenCL performance on a par with what CUDA offers.
The third obstacle is hardware support, because HMM needs a hardware feature called replayable page faults to work. Only Nvidia's high-end Pascal line of GPUs supports this feature. In a sense that's good news, since it means Nvidia only needs to provide driver support for a single piece of hardware to get HMM working, which is less work on its part.
Once HMM is in place, public cloud providers offering GPU instances will face pressure to [support the latest and greatest generation of GPUs][14]. That's not just a matter of swapping old-school Kepler cards for bleeding-edge Pascal GPUs; as each succeeding generation of GPUs pulls further ahead of the pack, supporting optimizations like HMM will provide a strategic advantage.
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/3196884/linux/faster-machine-learning-is-coming-to-the-linux-kernel.html
Author: [Serdar Yegulalp][a]
Translator: [alim0x](https://github.com/alim0x)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:http://www.infoworld.com/author/Serdar-Yegulalp/
[1]:https://twitter.com/intent/tweet?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html&via=infoworld&text=Faster+machine+learning+is+coming+to+the+Linux+kernel
[2]:https://www.facebook.com/sharer/sharer.php?u=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html
[3]:http://www.linkedin.com/shareArticle?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html&title=Faster+machine+learning+is+coming+to+the+Linux+kernel
[4]:https://plus.google.com/share?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html
[5]:http://reddit.com/submit?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html&title=Faster+machine+learning+is+coming+to+the+Linux+kernel
[6]:http://www.stumbleupon.com/submit?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html
[7]:http://www.infoworld.com/article/3196884/linux/faster-machine-learning-is-coming-to-the-linux-kernel.html#email
[8]:http://www.infoworld.com/article/3152565/linux/5-rock-solid-linux-distros-for-developers.html#tk.ifw-infsb
[9]:http://www.infoworld.com/newsletters/signup.html#tk.ifw-infsb
[10]:https://lkml.org/lkml/2017/4/21/872
[11]:http://www.infoworld.com/article/3195437/machine-learning-analytics-get-a-boost-from-gpu-data-frame-project.html
[12]:https://lwn.net/Articles/597289/
[13]:http://www.infoworld.com/article/3099204/hardware/amd-mulls-a-cpugpu-super-chip-in-a-server-reboot.html
[14]:http://www.infoworld.com/article/3126076/artificial-intelligence/aws-machine-learning-vms-go-faster-but-not-forward.html


@@ -0,0 +1,107 @@
Top 4 CDN services for hosting open source libraries
============================================================
### Content delivery networks accelerate your website's images, CSS, JS, and other static content.
![Top 4 CDN services for hosting open source libraries](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/file_system.jpg?itok=s2b60oIB "Top 4 CDN services for hosting open source libraries")
>Image credit: [Open Clip Art Library][3], which released it explicitly into the **[public domain][1]** ([see here][4]). Modified by Jen Wike Huger.
A CDN, or content delivery network, is a network of strategically placed servers around the world that delivers files to users faster. A traditional CDN can accelerate your website's images, CSS, JS, and any other static content. It lets site owners speed up all of their own content and also provides them with additional features and configuration options. These premium services typically charge based on the amount of bandwidth a project uses.
However, if your project can't justify the cost of a traditional CDN, an open source CDN may be a better fit. Typically these kinds of CDNs let you link to popular web libraries (CSS/JS frameworks, for example), which are then delivered to your visitors from the free CDN's servers. Although CDN services for open source libraries don't let you upload your own content to their servers, they can help you accelerate those libraries globally and improve your website's redundancy.
CDNs host projects on a vast network of servers, so site maintainers need to modify the asset links in their site's HTML to point at the open source CDN's URL followed by the path to the resource. Depending on whether you're linking to a JavaScript or a CSS library, the link will live in either a `<script>` or a `<link>` tag.
Let's look at four popular CDN services for open source libraries.
### JsDelivr
[JsDelivr][5] is an open source CDN provider that uses the networks of premium CDN providers (KeyCDN, Stackpath, and Cloudflare) to deliver open source project assets. A few highlights of jsDelivr include:
* Search across more than 2,100 libraries
* 110 POP locations
* The CDN is accessible in Asia and China
* API support
* No traffic limits
* Full HTTPS support
All snippets start with the custom jsDelivr URL [https://cdn.jsdelivr.net/][6], followed by the project name, version number, and so on. You can also have jsDelivr generate the URL with the script tag and enable SRI (Subresource Integrity) for added security.
### Cdnjs
[Cdnjs][7] is another popular open source CDN provider, similar to jsDelivr. The service also offers an array of popular JavaScript and CSS libraries that you can link to in your web projects. It is sponsored by the CDN providers Cloudflare and [KeyCDN][8]. A few highlights of cdnjs include:
* Search across more than 2,900 libraries
* Used by over 1 million websites
* HTTP/2 support
* HTTPS support
As with jsDelivr, cdnjs also lets you simply copy the asset URL with or without the script tag and SRI.
### Google Hosted Libraries
The [Google Hosted Libraries][9] site lets you link to popular JavaScript libraries hosted on Google's powerful open source CDN network. This open source CDN solution doesn't offer as many libraries or features as jsDelivr or cdnjs, but you can expect a high level of reliability and trust when linking to Google's hosted libraries. A few highlights of Google's open source CDN include:
* HTTPS support
* Files are served with CORS and Timing-Allow headers
* Offers the latest version of each library
All Google Hosted Libraries files start with the URL [https://ajax.googleapis.com/][10], followed by the project's name, version number, and file name.
### Microsoft Ajax CDN
The [Microsoft Ajax Content Delivery Network][11] is quite similar to Google Hosted Libraries in that it only hosts popular libraries. The two major differences that set the Microsoft Ajax CDN apart from Google Hosted Libraries are that Microsoft provides both CSS and JS libraries, and that it offers multiple versions of each library. A few highlights of the Microsoft Ajax CDN include:
* HTTPS support
* Previous versions of each library are often available
All Microsoft Ajax files start with the URL [http://ajax.aspnetcdn.com/ajax/][12] and, like the others, are followed by the library's name, version number, and so on.
If your project or website isn't ready to take advantage of a premium CDN service but you still want to speed up vital parts of your site, using an open source CDN is a great solution. It accelerates the delivery of third-party libraries that would otherwise be served from your origin server, causing unnecessary load and slower speeds for distant users.
_Which open source CDN provider do you prefer to use, and why?_
--------------------------------------------------------------------------------
About the author:
Cody Arsenault - Cody is passionate about all things web performance, SEO, and entrepreneurship. He is a web performance advocate at KeyCDN and works toward making the web faster.
------------
via: https://opensource.com/article/17/4/top-cdn-services
Author: [Cody Arsenault][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://opensource.com/users/codya
[1]:https://en.wikipedia.org/wiki/public_domain
[2]:https://opensource.com/article/17/4/top-cdn-services?rate=lgZwEmWt7QXtuMhB-lnHWQ-jxknQ0Kh4YOfqdFGer5w
[3]:https://en.wikipedia.org/wiki/Open_Clip_Art_Library
[4]:https://openclipart.org/share
[5]:http://www.jsdelivr.com/
[6]:https://cdn.jsdelivr.net/
[7]:https://cdnjs.com/
[8]:https://www.keycdn.com/
[9]:https://developers.google.com/speed/libraries/
[10]:https://ajax.googleapis.com/
[11]:https://www.asp.net/ajax/cdn
[12]:http://ajax.aspnetcdn.com/ajax/
[13]:https://opensource.com/user/128076/feed
[14]:https://opensource.com/users/codya