Merge pull request #3 from LCTT/master

更新
This commit is contained in:
Lv Feng 2016-11-17 22:23:52 +08:00 committed by GitHub
commit 7819d066d3
35 changed files with 2962 additions and 437 deletions

View File

@ -1,11 +1,9 @@
已校对。
为满足当今和未来 IT 需求,培训员工还是雇佣新人?
================================================================
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/cio_talent_4.png?itok=QLhyS_Xf)
在数字化时代由于IT工具不断更新技术公司紧随其后对 IT 技能的需求也不断变化。对于企业来说,寻找和雇佣那些拥有令人垂涎能力的创新人才,是非常不容易的。同时,培训内部员工来使他们接受新的技能和挑战,需要一定的时间,而时间要求常常是紧迫的。
在数字化时代,由于 IT 工具不断更新,技术公司紧随其后,对 IT 技能的需求也不断变化。对于企业来说,寻找和雇佣那些拥有令人垂涎能力的创新人才,是非常不容易的。同时,培训内部员工来使他们接受新的技能和挑战,需要一定的时间,而时间要求常常是紧迫的。
[Sandy Hill][1] 对 IT 涉及到的多项技术都很熟悉。她作为 [Pegasystems][2] 项目的 IT 总监,负责多个 IT 团队,工作涉及从应用的部署到数据中心的运营。更重要的是Pegasystems 开发的应用能够帮助销售、市场、服务以及运营团队简化操作流程、联络客户。这意味着她需要掌握使用 IT 内部资源的最佳方法,以应对公司客户遇到的 IT 挑战。
@ -19,7 +17,7 @@
**TEP说说培训方法吧怎样帮助你的员工发展他们的技能**
**Hill**:我要求每一位员工制定一个技术性的和非技术性的训练目标。这作为他们绩效评估的一部分。他们的技术性目标需要与他们的工作职能相符,非技术目标则随意,比如着重发展一项软技能,或是学一些专业领域之外的东西。我每年对职员进行一次评估,看看差距和不足之处,以使团队保持全面发展。
**TEP你的训练计划能够在多大程度上减轻招聘工作量保持职员的稳定性**

View File

@ -1,9 +1,11 @@
小模块的开销
JavaScript 小模块的开销
====
大约一年之前,我在将一个大型 JavaScript 代码库重构为小模块时发现了 Browserify 和 Webpack 中一个令人沮丧的事实:
更新2016/10/30我写完这篇文章之后在[这个基准测试中发现了一个错误](https://github.com/nolanlawson/cost-of-small-modules/pull/8),这个错误会让 Rollup 的结果看起来比实际更好一些。不过修正之后整体结果并没有明显的不同Rollup 仍然击败了 Browserify 和 Webpack虽然没有 Closure 那么好),所以我只是更新了图表。该基准测试现在还包括了 [RequireJS 和 RequireJS Almond 打包器](https://github.com/nolanlawson/cost-of-small-modules/pull/5),所以文章中也加入了它们。要看原始帖子,可以查看[历史版本](https://web.archive.org/web/20160822181421/https://nolanlawson.com/2016/08/15/the-cost-of-small-modules/)。
> “代码越模块化,代码体积就越大。”- Nolan Lawson
大约一年之前,我在将一个大型 JavaScript 代码库重构为更小的模块时发现了 Browserify 和 Webpack 中一个令人沮丧的事实:
> “代码越模块化,代码体积就越大。:< ”- Nolan Lawson
过了一段时间Sam Saccone 发布了一些关于 [Tumblr][1] 和 [Imgur][2] 页面加载性能的出色的研究。其中指出:
@ -15,9 +17,9 @@
一个页面中包含的 JavaScript 脚本越多,页面加载也将越慢。庞大的 JavaScript 包会导致浏览器花费更多的时间去下载、解析和执行,这些都将加长载入时间。
即使当你使用如 Webpack [code splitting][3]、Browserify [factor bundles][4] 等工具将代码分解为多个包,时间的花费也仅仅是被延迟到页面生命周期的晚些时候。JavaScript 迟早都将有一笔开销。
即使当你使用如 Webpack [code splitting][3]、Browserify [factor bundles][4] 等工具将代码分解为多个包,该开销也仅仅是被延迟到页面生命周期的晚些时候。JavaScript 迟早都将有一笔开销。
此外,由于 JavaScript 是一门动态语言,同时流行的 [CommonJS][5] 模块也是动态的,所以这就使得在最终分发给用户的代码中剔除无用的代码变得异常困难。譬如你可能只使用到 jQuery 中的 $.ajax但是通过载入 jQuery 包,你将以整个包为代价。
此外,由于 JavaScript 是一门动态语言,同时流行的 [CommonJS][5] 模块也是动态的,所以这就使得在最终分发给用户的代码中剔除无用的代码变得异常困难。譬如你可能只使用到 jQuery 中的 $.ajax但是通过载入 jQuery 包,你将付出整个包的代价。
JavaScript 社区对这个问题提出的解决办法是提倡 [小模块][6] 的使用。小模块不仅有许多 [美好且实用的好处][7],如易于维护、易于理解、易于集成等,而且还可以通过鼓励包含小巧的功能而不是庞大的库来解决之前提到的 jQuery 的问题。
@ -66,7 +68,7 @@ $ browserify node_modules/qs | browserify-count-modules
顺带一提,我写过的最大的开源站点 [Pokedex.org][21] 包含了 4 个包,共 311 个模块。
让我们先暂时忽略这些 JavaScript 包的实际大小,我认为去探索一下一定数量的模块本身的开销会是一件有意思的事。虽然 Sam Saccone 的文章 [“2016 年 ES2015 转译的开销”][22] 已经广为流传,但是我认为他的结论还未到达足够的深度,所以让我们挖掘得稍微再深一点吧!
### 测试环节!
@ -86,13 +88,13 @@ console.log(total)
module.exports = 1
```
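为了更直观,下面给出一个基于上述片段的最简还原示意(文件名与组织方式均为假设,并非原始基准测试代码):每个模块只导出数字 1主文件把它们全部 `require` 进来并累加。

```
// module_1.js、module_2.js …… 每个模块只导出数字 1
module.exports = 1

// main.js把所有模块 require 进来并累加,
// 如果有 100 个模块total 应为 100
var total = 0
total += require('./module_1')
total += require('./module_2')
// …… 以此类推,直到最后一个模块
console.log(total)
```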
我测试了五种打包方法Browserify, 带 [bundle-collapser][24] 插件的 Browserify, Webpack, Rollup 和 Closure Compiler。对于 Rollup 和 Closure Compiler 我使用了 ES6 模块,而对于 Browserify 和 Webpack 则用的 CommonJS目的是为了不涉及其各自缺点而导致测试的不公平由于它们可能需要做一些转译工作如 Babel 一样,而这些工作将会增加其自身的运行时间)。
我测试了五种打包方法Browserify、带 [bundle-collapser][24] 插件的 Browserify、Webpack、Rollup 和 Closure Compiler。对于 Rollup 和 Closure Compiler 我使用了 ES6 模块,而对于 Browserify 和 Webpack 则用的 CommonJS目的是为了不涉及其各自缺点而导致测试的不公平由于它们可能需要做一些转译工作如 Babel 一样,而这些工作将会增加其自身的运行时间)。
为了更好地模拟一个生产环境,我将带 -mangle 和 -compress 参数的 Uglify 用于所有的包,并且使用 gzip 压缩后通过 GitHub Pages 用 HTTPS 协议进行传输。对于每个包,我一共下载并执行 15 次,然后取其平均值,并使用 performance.now() 函数来记录载入时间(未使用缓存)与执行时间。
为了更好地模拟一个生产环境,我对所有的包采用带 `-mangle``-compress` 参数的 `Uglify`,并且使用 gzip 压缩后通过 GitHub Pages 用 HTTPS 协议进行传输。对于每个包,我一共下载并执行 15 次,然后取其平均值,并使用 `performance.now()` 函数来记录载入时间(未使用缓存)与执行时间。
### 包大小
在我们查看测试结果之前,我们有必要先来看一眼我们要测试的包文件。下面是每个包最小化处理后(但并未使用 gzip 压缩时的体积大小单位Byte
| | 100 个模块 | 1000 个模块 | 5000 个模块 |
| --- | --- | --- | --- |
@ -110,7 +112,7 @@ module.exports = 1
| rollup | 300 | 2145 | 11510 |
| closure | 302 | 2140 | 11789 |
Browserify 和 Webpack 的工作方式是隔离各个模块到各自的函数空间,然后声明一个全局载入器,并在每次 require() 函数调用时定位到正确的模块处。下面是我们的 Browserify 包的样子:
Browserify 和 Webpack 的工作方式是隔离各个模块到各自的函数空间,然后声明一个全局载入器,并在每次 `require()` 函数调用时定位到正确的模块处。下面是我们的 Browserify 包的样子:
```
(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o
@ -144,7 +146,7 @@ Browserify 和 Webpack 的工作方式是隔离各个模块到各自的函数空
在 100 个模块时,各包的差异是微不足道的,但是一旦模块数量达到 1000 个甚至 5000 个时差异将会变得非常巨大。iPod Touch 在不同包上的差异并不明显,而对于具有一定年代的 Nexus 5 来说Browserify 和 Webpack 明显耗时更多。
与此同时,我发现有意思的是 Rollup 和 Closure 的运行开销对于 iPod 而言几乎可以忽略,并且与模块的数量关系也不大。而对于 Nexus 5 来说,运行的开销并非完全可以忽略,但它们仍比 Browserify 或 Webpack 低很多。后者若未在几百毫秒内完成加载则将会占用主线程的好几帧的时间,这就意味着用户界面将冻结并且等待直到模块载入完成。
与此同时,我发现有意思的是 Rollup 和 Closure 的运行开销对于 iPod 而言几乎可以忽略,并且与模块的数量关系也不大。而对于 Nexus 5 来说,运行的开销并非完全可以忽略,但 Rollup/Closure 仍比 Browserify/Webpack 低很多。后者若未在几百毫秒内完成加载则将会占用主线程的好几帧的时间,这就意味着用户界面将冻结并且等待直到模块载入完成。
值得注意的是,前面这些测试都是在千兆网速下进行的,所以从网络情况来看,这只是一个最理想的状况。借助 Chrome 开发者工具,我们可以人为地将 Nexus 5 的网速限制到 3G 水平,然后来看一眼这对测试产生的影响([查看表格][30]
@ -152,13 +154,13 @@ Browserify 和 Webpack 的工作方式是隔离各个模块到各自的函数空
一旦我们将网速考虑进来Browserify/Webpack 和 Rollup/Closure 的差异将变得更为显著。在 1000 个模块规模(接近于 Reddit 1050 个模块的规模Browserify 花费的时间比 Rollup 长大约 400 毫秒。而 400 毫秒已经不是一个小数目了,正如 Google 和 Bing 指出的,亚秒级的延迟都会 [对用户的参与产生明显的影响][32] 。
还有一件事需要指出,那就是这个测试并非测量 100 个、1000 个或者 5000 个模块的每个模块的精确运行时间。因为这还与你对 require() 函数的使用有关。在这些包中,我采用的是对每个模块调用一次 require() 函数。但如果你每个模块调用了多次 require() 函数(这在代码库中非常常见)或者你多次动态调用 require() 函数(例如在子函数中调用 require() 函数),那么你将发现明显的性能退化。
还有一件事需要指出,那就是这个测试并非测量 100 个、1000 个或者 5000 个模块的每个模块的精确运行时间。因为这还与你对 `require()` 函数的使用有关。在这些包中,我采用的是对每个模块调用一次 `require()` 函数。但如果你每个模块调用了多次 `require()` 函数(这在代码库中非常常见)或者你多次动态调用 `require()` 函数(例如在子函数中调用 `require()` 函数),那么你将发现明显的性能退化。
Reddit 的移动站点就是一个很好的例子。虽然该站点有 1050 个模块,但是我测量了它们使用 Browserify 的实际执行时间后发现比“1000 个模块”的测试结果差好多。当使用那台运行 Chrome 的 Nexus 5 时,我测出 Reddit 的 Browserify require() 函数耗时 2.14 秒。而那个“1000 个模块”脚本中的等效函数只需要 197 毫秒(在搭载 i7 处理器的 Surface Book 上的桌面版 Chrome我测出的结果分别为 559 毫秒与 37 毫秒,虽然给出桌面平台的结果有些令人惊讶)。
这结果提示我们有必要对每个模块使用多个 require() 函数的情况再进行一次测试。不过,我并不认为这对 Browserify 和 Webpack 会是一个公平的测试,因为 Rollup 和 Closure 都会将重复的 ES6 库导入处理为一个的顶级变量声明同时也阻止了顶层空间以外的其他区域的导入。所以根本上来说Rollup 和 Closure 中一个导入和多个导入的开销是相同的,而对于 Browserify 和 Webpack运行开销随 require() 函数的数量线性增长。
这结果提示我们有必要对每个模块使用多个 `require()` 函数的情况再进行一次测试。不过,我并不认为这对 Browserify 和 Webpack 会是一个公平的测试,因为 Rollup 和 Closure 都会将重复的 ES6 库导入处理为一个顶级变量声明,同时也不允许在顶层空间之外的其他区域进行导入。所以根本上来说Rollup 和 Closure 中一个导入和多个导入的开销是相同的,而对于 Browserify 和 Webpack运行开销会随 `require()` 函数的数量线性增长。
为了我们这个分析的目的我认为最好假设模块的数量是性能的短板。而事实上“5000 个模块”也是一个比“5000 个 require() 函数调用”更好的度量标准。
为了我们这个分析的目的我认为最好假设模块的数量是性能的短板。而事实上“5000 个模块”也是一个比“5000 个 `require()` 函数调用”更好的度量标准。
### 结论
@ -168,11 +170,11 @@ Reddit 的移动站点就是一个很好的例子。虽然该站点有 1050 个
给出这些结果之后,我对 Closure Compiler 和 Rollup 在 JavaScript 社区并没有得到太多关注而感到惊讶。我猜测或许是因为(前者)需要依赖 Java后者仍然相当不成熟并且未能做到开箱即用详见 [Calvin Metcalf 的评论][37] 中所作的不错的总结)。
即使没有足够数量的 JavaScript 开发者加入到 Rollup 或 Closure 的队伍中,我认为 npm 包作者们也已准备好了去帮助解决这些问题。如果你使用 npm 安装 lodash你将会发其现主要的导入是一个巨大的 JavaScript 模块,而不是你期望的 Lodash 的超模块hyper-modular特性require('lodash/uniq')require('lodash.uniq') 等等)。对于 PouchDB我们做了一个类似的声明以 [使用 Rollup 作为预发布步骤][38],这将产生对于用户而言尽可能小的包。
即使没有足够数量的 JavaScript 开发者加入到 Rollup 或 Closure 的队伍中,我认为 npm 包作者们也已准备好了去帮助解决这些问题。如果你使用 npm 安装 lodash你将会发现其主要的导入是一个巨大的 JavaScript 模块,而不是你期望的 Lodash 的超模块hyper-modular特性`require('lodash/uniq')``require('lodash.uniq')` 等等)。对于 PouchDB我们做了一个类似的决定以 [使用 Rollup 作为预发布步骤][38],这将产生对于用户而言尽可能小的包。
同时,我创建了 [rollupify][39] 来尝试将这个过程变得更为简单一些,只需将其放入dropin已有的 Browserify 工程中即可。其基本思想是在你自己的项目中使用导入import和导出export可以使用 [cjs-to-es6][40] 来帮助迁移),然后使用 `require()` 函数来载入第三方包。这样一来,你依旧可以在你自己的代码库中享受所有模块化的优点,同时能导出一个适当大小的大模块来发布给你的用户。不幸的是,你依旧得为第三方库付出一些代价,但是我发现这是对于当前 npm 生态系统的一个很好的折中方案。
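下面是一个极简的示意(文件名和代码均为假设,仅用来说明这种混合风格):自己的模块之间使用 ES6 的 `import`/`export`,第三方包仍然通过 `require()` 载入。

```
// utils.js自己的代码使用 ES6 导出
export function uniq (arr) {
  return Array.from(new Set(arr))
}

// app.js自己的模块用 import第三方包用 require()
import { uniq } from './utils'
var $ = require('jquery')

$.ajax('/list').then(function (list) {
  console.log(uniq(list))
})
```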
所以结论如下:一个大的 JavaScript 包比一百个小 JavaScript 模块要快。尽管这是事实,我依旧希望我们社区能最终发现我们所处的困境————提倡小模块的原则对开发者有利,但是对用户不利。同时希望能优化我们的工具,使得我们可以对两方面都有利。
所以结论如下:**一个大的 JavaScript 包比一百个小 JavaScript 模块要快**。尽管这是事实,我依旧希望我们社区能最终发现我们所处的困境——提倡小模块的原则对开发者有利,但是对用户不利。同时希望能优化我们的工具,使得我们可以对两方面都有利。
### 福利时间!三款桌面浏览器
@ -205,15 +207,15 @@ Firefox 48 ([查看表格][45])
[![Nexus 5 (3G) RequireJS 结果][53]](https://nolanwlawson.files.wordpress.com/2016/08/2016-08-20-14_45_29-small_modules3-xlsx-excel.png)
更新 3我写了一个 [optimize-js](http://github.com/nolanlawson/optimize-js) 工具,它可以减少这种函数嵌套函数的解析成本。
--------------------------------------------------------------------------------
via: https://nolanlawson.com/2016/08/15/the-cost-of-small-modules/?utm_source=javascriptweekly&utm_medium=email
via: https://nolanlawson.com/2016/08/15/the-cost-of-small-modules/
作者:[Nolan][a]
译者:[Yinr](https://github.com/Yinr)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,57 @@
宽松开源许可证的崛起意味着什么
====
为什么像 GNU GPL 这样的限制性许可证越来越不受青睐。
“如果你用了任何开源软件, 那么你软件的其他部分也必须开源。” 这是微软前 CEO 巴尔默 2001 年说的, 尽管他说的不对, 还是引发了人们对自由软件的 FUD (恐惧, 不确定和怀疑fear, uncertainty and doubt。 大概这才是他的意图。
对开源软件的这些 FUD 主要与开源许可有关。 现在有许多不同的许可证, 当中有些限制比其他的更严格(也有人称“更具保护性”)。 诸如 GNU 通用公共许可证 GPL 这样的限制性许可证使用了 copyleft 的概念。 copyleft 赋予人们自由发布软件副本和修改版的权利, 只要衍生作品保留同样的权利。 bash 和 GIMP 等开源项目就是使用了 GPL(v3)。 还有一个 AGPL Affero GPL 许可证, 它为网络上的软件(如 web service提供了 copyleft 许可。
这意味着, 如果你使用了这种许可的代码, 然后加入了你自己的专有代码, 那么在一些情况下, 整个代码, 包括你的代码也就遵从这种限制性开源许可证。 巴尔默说的大概就是这类的许可证。
但宽松许可证不同。 比如, 只要保留版权声明和许可声明且不要求开发者承担责任, MIT 许可证允许任何人任意使用开源代码, 包括修改和出售。 另一个比较流行的宽松开源许可证是 Apache 许可证 2.0, 它还包含了贡献者向用户提供专利授权相关的条款。 使用 MIT 许可证的有 jQuery、.NET Core 和 Rails, 使用 Apache 许可证 2.0 的软件包括安卓、Apache 和 Swift。
两种许可证类型最终都是为了让软件更有用。 限制性许可证促进了参与和分享的开源理念, 使每一个人都能从软件中得到最大化的利益。 而宽松许可证通过允许人们任意使用软件来确保人们能从软件中得到最多的利益, 即使这意味着他们可以使用代码, 修改它, 据为己有,甚至以专有软件出售,而不做任何回报。
开源许可证管理公司黑鸭子软件的数据显示, 去年使用最多的开源许可证是限制性许可证 GPL 2.0, 份额大约 25%。 宽松许可证 MIT 和 Apache 2.0 次之, 份额分别为 18% 和 16%, 再后面是 GPL 3.0, 份额大约 10%。 这样来看, 限制性许可证占 35%, 宽松许可证占 34%, 几乎是平手。
但这份当下的数据没有揭示发展趋势。黑鸭子软件的数据显示, 从 2009 年到 2015 年的六年间, MIT 许可证的份额上升了 15.7%, Apache 的份额上升了 12.4%。 在这段时期, GPL v2 和 v3 的份额惊人地下降了 21.4%。 换言之, 在这段时期里, 大量软件从限制性许可证转到宽松许可证。
这个趋势还在继续。 黑鸭子软件的[最新数据][1]显示, MIT 现在的份额为 26%, GPL v2 为 21%, Apache 2 为 16%, GPL v3 为 9%。 即 30% 的限制性许可证和 42% 的宽松许可证——与前一年的 35% 的限制性许可证和 34% 的宽松许可证相比, 发生了重大的转变。 对 GitHub 上使用许可证的[调查研究][2]证实了这种转变。 它显示 MIT 以压倒性的 45% 占有率成为最流行的许可证, 与之相比, GPL v2 只有 13%, Apache 为 11%。
![](http://images.techhive.com/images/article/2016/09/open-source-licenses.jpg-100682571-large.idge.jpeg)
### 引领趋势
从限制性许可证到宽松许可证,这么大的转变背后是什么呢? 是公司害怕如果使用了限制性许可证的软件,他们就会像巴尔默说的那样,失去自己私有软件的控制权了吗? 事实上, 可能就是如此。 比如, Google 就[禁用了 Affero GPL 软件][3]。
[Instructional Media + Magic][4] 的主席 Jim Farmer 是一位教育领域开源技术的开发者。 他认为很多公司为避免法律问题而不使用限制性许可证。 “问题就在于复杂性。 许可证的复杂性越高, 被他人因为某行为而告上法庭的可能性越高。 高复杂性更可能带来诉讼”, 他说。
他补充说, 这种对限制性许可证的恐惧正被律师们驱动着, 许多律师建议自己的客户使用 MIT 或 Apache 2.0 许可证的软件, 并明确反对使用 Affero 许可证的软件。
他说, 这会对软件开发者产生影响, 因为如果公司都避开限制性许可证软件的使用, 那么开发者想要自己的软件被使用, 就更倾向于给新软件使用宽松许可证。
但 SalesAgility开源 SuiteCRM 背后的公司)的 CEO Greg Soper 认为这种到宽松许可证的转变也是由一些开发者驱动的。 “看看像 Rocket.Chat 这样的应用。 开发者本可以选择 GPL 2.0 或 Affero 许可证, 但他们选择了宽松许可证,” 他说。 “这样可以给这个应用最大的机会, 因为专有软件厂商可以使用它, 不会伤害到他们的产品, 且不需要把他们的产品也使用开源许可证。 这样如果开发者想要让第三方应用使用他的应用的话, 他有理由选择宽松许可证。”
Soper 指出, 限制性许可证致力于帮助开源项目获得成功, 方式是阻止开发者拿了别人的代码、做了修改, 却不把结果回报给社区。 “Affero 许可证对我们的产品健康发展很重要, 因为如果有人利用了我们的代码开发, 做得比我们好, 却又不把代码回报回来, 就会扼杀掉我们的产品,” 他说。 “对 Rocket.Chat 则不同, 因为如果它使用 Affero 许可证, 就会污染公司的知识产权, 所以公司不会使用它。 不同的许可证有不同的使用场景。”
曾在 Gnome、OpenOffice 工作过,现在是 LibreOffice 的开源开发者的 Michael Meeks 同意 Jim Farmer 的观点,认为许多公司确实出于对法律的担心,而选择使用宽松许可证的软件。 “copyleft 许可证有风险, 但同样也有巨大的益处。 遗憾的是人们都听从律师的, 而律师只是讲风险, 却从不告诉你有些事是安全的。”
巴尔默发表他的错误言论已经过去 15 年了, 但它产生的 FUD 还是有影响——即使从限制性许可证到宽松许可证的转变并不是他的目的。
--------------------------------------------------------------------------------
via: http://www.cio.com/article/3120235/open-source-tools/what-the-rise-of-permissive-open-source-licenses-means.html
作者:[Paul Rubens][a]
译者:[willcoderwang](https://github.com/willcoderwang)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.cio.com/author/Paul-Rubens/
[1]: https://www.blackducksoftware.com/top-open-source-licenses
[2]: https://github.com/blog/1964-open-source-license-usage-on-github-com
[3]: http://www.theregister.co.uk/2011/03/31/google_on_open_source_licenses/
[4]: http://immagic.com/

View File

@ -1,7 +1,7 @@
在 Linux 上检测硬盘坏道和坏块
在 Linux 上检测硬盘上的坏道和坏块
===
让我们从定义坏道和坏块开始说起,它们是一块磁盘或闪存上不再能够被读写的部分,一般是由于磁盘表面特定的[物理损坏][7]或闪存晶体管失效导致的。
让我们从坏道和坏块的定义开始说起,它们是一块磁盘或闪存上不再能够被读写的部分,一般是由于磁盘表面特定的[物理损坏][7]或闪存晶体管失效导致的。
随着坏道的继续积累,它们会对你的磁盘或闪存容量产生令人不快或破坏性的影响,甚至可能会导致硬件失效。
@ -13,7 +13,7 @@
### 在 Linux 上使用坏块工具检查坏道
坏块工具可以让用户扫描设备检查坏道或坏块。设备可以是一个磁盘或外置磁盘,由一个如 /dev/sdc 这样的文件代表。
坏块工具可以让用户扫描设备检查坏道或坏块。设备可以是一个磁盘或外置磁盘,由一个如 `/dev/sdc` 这样的文件代表。
首先,通过超级用户权限执行 [fdisk 命令][5]来显示你的所有磁盘或闪存的信息以及它们的分区信息:
@ -24,9 +24,9 @@ $ sudo fdisk -l
[![列出 Linux 文件系统分区](http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Filesystem-Partitions.png)][4]
列出 Linux 文件系统分区
*列出 Linux 文件系统分区*
然后用这个命令检查你的 Linux 硬盘上的坏道/坏块:
然后用如下命令检查你的 Linux 硬盘上的坏道/坏块:
```
$ sudo badblocks -v /dev/sda10 > badsectors.txt
@ -35,15 +35,15 @@ $ sudo badblocks -v /dev/sda10 > badsectors.txt
[![在 Linux 上扫描硬盘坏道](http://www.tecmint.com/wp-content/uploads/2016/10/Scan-Hard-Disk-Bad-Sectors-in-Linux.png)][3]
在 Linux 上扫描硬盘坏道
*在 Linux 上扫描硬盘坏道*
上面的命令中badblocks 扫描设备 /dev/sda10记得指定你的实际设备-v 选项让它显示操作的详情。另外,这里使用了输出重定向将操作结果重定向到了文件 badsectors.txt。
上面的命令中badblocks 扫描设备 `/dev/sda10`(记得指定你的实际设备),`-v` 选项让它显示操作的详情。另外,这里使用了输出重定向将操作结果重定向到了文件 `badsectors.txt`
如果你在你的磁盘上发现任何坏道,卸载磁盘并像下面这样让系统不要将数据写入报告出来的坏扇区中。
你需要执行 e2fsck针对 ext2/ext3/ext4 文件系统)或 fsck 命令,命令中还需要用到 badsectors.txt 文件和设备文件。
你需要执行 `e2fsck`(针对 ext2/ext3/ext4 文件系统)或 `fsck` 命令,命令中还需要用到 `badsectors.txt` 文件和设备文件。
`-l` 选项告诉命令将指定文件名文件badsectors.txt中列出的扇区号码加入坏块列表。
`-l` 选项告诉命令将在指定的文件 `badsectors.txt` 中列出的扇区号码加入坏块列表。
```
------------ 针对 ext2/ext3/ext4 文件系统 ------------
@ -60,7 +60,7 @@ $ sudo fsck -l badsectors.txt /dev/sda10
这个方法对带有 S.M.A.R.T.自我监控、分析和报告技术Self-Monitoring, Analysis and Reporting Technology系统的现代磁盘ATA/SATA 和 SCSI/SAS 硬盘以及固态硬盘更加可靠和高效。S.M.A.R.T. 系统能够帮助检测、报告并可能记录磁盘的健康状况,这样你就可以找出任何可能出现的硬件失效。
你可以使用以下命令安装 smartmontools
你可以使用以下命令安装 `smartmontools`
```
------------ 在基于 Debian/Ubuntu 的系统上 ------------
@ -71,7 +71,7 @@ $ sudo yum install smartmontools
```
安装完成之后,使用 smartctl 控制磁盘集成的 S.M.A.R.T 系统。你可以这样查看它的手册或帮助:
安装完成之后,使用 `smartctl` 控制磁盘集成的 S.M.A.R.T 系统。你可以这样查看它的手册或帮助:
```
$ man smartctl
@ -79,7 +79,7 @@ $ smartctl -h
```
然后执行 smartctrl 命令并在命令中指定你的设备作为参数,以下命令包含了参数 `-H``--health` 以显示 SMART 整体健康自我评估测试结果。
然后执行 `smartctl` 命令并在命令中指定你的设备作为参数,以下命令包含了参数 `-H``--health` 以显示 SMART 整体健康自我评估测试结果。
```
$ sudo smartctl -H /dev/sda10
@ -88,7 +88,7 @@ $ sudo smartctl -H /dev/sda10
[![检查 Linux 硬盘健康](http://www.tecmint.com/wp-content/uploads/2016/10/Check-Linux-Hard-Disk-Health.png)][2]
检查 Linux 硬盘健康
*检查 Linux 硬盘健康*
上面的结果指出你的硬盘很健康,近期内不大可能发生硬件失效。
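如果你想进一步确认磁盘状况,还可以让磁盘执行一次完整的自检(以下为补充示例,原文并未涉及;请把设备名换成你的实际设备):

```
$ sudo smartctl -t long /dev/sda      # 启动长时间自检
$ sudo smartctl -l selftest /dev/sda  # 自检完成后查看结果
```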
@ -102,10 +102,8 @@ via: http://www.tecmint.com/check-linux-hard-disk-bad-sectors-bad-blocks/
作者:[Aaron Kili][a]
译者:[alim0x](https://github.com/alim0x)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,12 +1,13 @@
# 如何在 Linux 中将文件编码转换为 UTF-8
如何在 Linux 中将文件编码转换为 UTF-8
===============
在这篇教程中,我们将解释字符编码的含义,然后给出一些使用命令行工具将使用某种字符编码的文件转化为另一种编码的例子。最后,我们将一起看一看如何在 Linux 下将使用各种字符编码的文件转化为 UTF-8 编码。
你可能已经知道,计算机是不会理解和存储字符、数字或者任何人类能够理解的东西的,除了二进制数据。一个二进制位只有两种可能的值,也就是 `0``1``真`或`假``对`或`错`。其它的任何事物,比如字符、数据和图片,必须要以二进制的形式来表现,以供计算机处理。
你可能已经知道,计算机除了二进制数据,是不会理解和存储字符、数字或者任何人类能够理解的东西的。一个二进制位只有两种可能的值,也就是 `0``1``真`或`假``是`或`否`。其它的任何事物,比如字符、数据和图片,必须要以二进制的形式来表现,以供计算机处理。
简单来说,字符编码是一种可以指示电脑来将原始的 0 和 1 解释成实际字符的方式,在这些字符编码中,字符都可以用数字串来表示。
简单来说,字符编码是一种可以指示电脑来将原始的 0 和 1 解释成实际字符的方式,在这些字符编码中,字符都以一串数字来表示。
字符编码方案有很多种,比如 ASCII, ANCI, Unicode 等等。下面是 ASCII 编码的一个例子。
字符编码方案有很多种,比如 ASCII、ANSI、Unicode 等等。下面是 ASCII 编码的一个例子。
```
字符 二进制
@ -22,11 +23,9 @@ B 01000010
$ file -i Car.java
$ file -i CarDriver.java
```
[
![在 Linux 中查看文件的编码](http://www.tecmint.com/wp-content/uploads/2016/10/Check-File-Encoding-in-Linux.png)
][3]
![在 Linux 中查看文件的编码](http://www.tecmint.com/wp-content/uploads/2016/10/Check-File-Encoding-in-Linux.png)
在 Linux 中查看文件的编码
*在 Linux 中查看文件的编码*
iconv 工具的使用方法如下:
@ -34,25 +33,21 @@ iconv 工具的使用方法如下:
$ iconv option
$ iconv options -f from-encoding -t to-encoding inputfile(s) -o outputfile
```
在这里,`-f` 或 `--from-code` 标明了输入编码,而 `-t``--to-encoding` 指定了输出编码。
在这里,`-f` 或 `--from-code` 表明了输入编码,而 `-t``--to-encoding` 指定了输出编码。
为了列出所有已有编码的字符集,你可以使用以下命令:
```
$ iconv -l
```
[
![列出所有已有编码字符集](http://www.tecmint.com/wp-content/uploads/2016/10/List-Coded-Charsets-in-Linux.png)
][2]
![列出所有已有编码字符集](http://www.tecmint.com/wp-content/uploads/2016/10/List-Coded-Charsets-in-Linux.png)
列出所有已有编码字符集
*列出所有已有编码字符集*
### 将文件从 ISO-8859-1 编码转换为 UTF-8 编码
下面,我们将学习如何将一种编码方案转换为另一种编码方案。下面的命令将会将 ISO-8859-1 编码转换为 UTF-8 编码。
Consider a file named `input.file` which contains the characters:
考虑如下文件 `input.file`,其中包含这几个字符:
```
@ -70,17 +65,15 @@ $ iconv -f ISO-8859-1 -t UTF-8//TRANSLIT input.file -o out.file
$ cat out.file
$ file -i out.file
```
[
![在 Linux 中将 ISO-8859-1 转化为 UTF-8](http://www.tecmint.com/wp-content/uploads/2016/10/Converts-UTF8-to-ASCII-in-Linux.png)
][1]
![在 Linux 中将 ISO-8859-1 转化为 UTF-8](http://www.tecmint.com/wp-content/uploads/2016/10/Converts-UTF8-to-ASCII-in-Linux.png)
在 Linux 中将 ISO-8859-1 转化为 UTF-8
*在 Linux 中将 ISO-8859-1 转化为 UTF-8*
注意:如果输出编码后面添加了 `//IGNORE` 字符串,那些不能被转换的字符将被丢弃,并且在转换后,程序会显示一条错误信息。
好,如果字符串 `//TRANSLIT` 被添加到了上面例子中的输出编码之后 (UTF-8//TRANSLIT),待转换的字符会尽量采用形译原则。也就是说,如果某个字符在输出编码方案中不能被表示的话,它将会被替换为一个形状比较相似的字符。
好,如果字符串 `//TRANSLIT` 被添加到了上面例子中的输出编码之后 (`UTF-8//TRANSLIT`),待转换的字符会尽量采用形译原则。也就是说,如果某个字符在输出编码方案中不能被表示的话,它将会被替换为一个形状比较相似的字符。
而且,如果一个字符不在输出编码中,而且不能被形译,它将会在输出文件中被一个问号标记 `(?)` 代替。
而且,如果一个字符不在输出编码中,而且不能被形译,它将会在输出文件中被一个问号标记 `?` 代替。
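下面用一个简单的例子演示形译的效果(示例字符串是随意挑选的;在常见的 glibc 环境下,重音字符会被替换为形近的 ASCII 字符,输出大致如下):

```
$ echo "résumé café" | iconv -f UTF-8 -t ASCII//TRANSLIT
resume cafe
```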
### 将多个文件转换为 UTF-8 编码
@ -88,13 +81,13 @@ $ file -i out.file
```
#!/bin/bash
# 将 values_here 替换为输入编码
### 将 values_here 替换为输入编码
FROM_ENCODING="value_here"
# 输出编码 (UTF-8)
### 输出编码 (UTF-8)
TO_ENCODING="UTF-8"
# 转换命令
### 转换命令
CONVERT=" iconv -f $FROM_ENCODING -t $TO_ENCODING"
# 使用循环转换多个文件
### 使用循环转换多个文件
for file in *.txt; do
$CONVERT "$file" -o "${file%.txt}.utf8.converted"
done
@ -122,13 +115,11 @@ $ man iconv
--------------------------------------------------------------------------------
via: http://www.tecmint.com/convert-files-to-utf-8-encoding-in-linux/#
via: http://www.tecmint.com/convert-files-to-utf-8-encoding-in-linux/
作者:[Aaron Kili][a]
译者:[StdioA](https://github.com/StdioA)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,116 @@
如何在 Linux 中压缩及解压缩 .bz2 文件
============================================================
对文件进行压缩,可以通过使用较少的字节对文件中的数据进行编码来显著地减小文件的大小,这在跨网络进行[文件备份和传送][1]时很有用。另一方面,解压文件意味着将文件中的数据恢复到初始状态。
Linux 中有几个[文件压缩和解压缩工具][2]比如gzip、7-zip、Lrzip、[PeaZip][3] 等等。
本篇教程中,我们将介绍如何在 Linux 中使用 bzip2 工具压缩及解压缩`.bz2`文件。
bzip2 是一个非常有名的压缩工具,并且在大多数主流 Linux 发行版上都有,你可以在你的发行版上用合适的命令来安装它。
```
$ sudo apt install bzip2 [On Debian/Ubuntu]
$ sudo yum install bzip2 [On CentOS/RHEL]
$ sudo dnf install bzip2 [On Fedora 22+]
```
使用 bzip2 的常规语法是:
```
$ bzip2 option(s) filenames
```
### 如何在 Linux 中使用“bzip2”压缩文件
你可以如下压缩一个文件,使用`-z`标志启用压缩:
```
$ bzip2 filename
或者
$ bzip2 -z filename
```
要压缩一个`.tar`文件,使用的命令为:
```
$ bzip2 -z backup.tar
```
重要bzip2 默认会在压缩及解压缩文件时删除输入文件(原文件),要保留输入文件,使用`-k`或者`--keep`选项。
此外,`-f`或者`--force`标志会强制让 bzip2 覆盖已有的输出文件。
```
------ 要保留输入文件 ------
$ bzip2 -zk filename
$ bzip2 -zk backup.tar
```
你也可以设置块的大小,从 100k 到 900k分别使用`-1`或者`--fast`到`-9`或者`--best`
```
$ bzip2 -k1 Etcher-linux-x64.AppImage
$ ls -lh Etcher-linux-x64.AppImage.bz2
$ bzip2 -k9 Etcher-linux-x64.AppImage
$ bzip2 -kf9 Etcher-linux-x64.AppImage
$ ls -lh Etcher-linux-x64.AppImage.bz2
```
下面的截屏展示了如何使用选项来保留输入文件,强制 bzip2 覆盖输出文件,并且在压缩中设置块的大小。
![Compress Files Using bzip2 in Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Compress-Files-Using-bzip2-in-Linux.png)
*在 Linux 中使用 bzip2 压缩文件*
### 如何在 Linux 中使用“bzip2”解压缩文件
要解压缩`.bz2`文件,确保使用`-d`或者`--decompress`选项:
```
$ bzip2 -d filename.bz2
```
注意:这个文件必须带有`.bz2`扩展名,上面的命令才能起作用。
```
$ bzip2 -vd Etcher-linux-x64.AppImage.bz2
$ bzip2 -vfd Etcher-linux-x64.AppImage.bz2
$ ls -l Etcher-linux-x64.AppImage
```
![Decompress bzip2 File in Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Decompression-bzip2-File-in-Linux.png)
*在 Linux 中解压 bzip2 文件*
要浏览 bzip2 的帮助及 man 页面,输入下面的命令:
```
$ bzip2 -h
$ man bzip2
```
最后,通过上面简单的阐述,我相信你现在已经可以在 Linux 中压缩及解压缩`.bz2`文件了。如果有任何问题和反馈,可以在评论区中留言。
重要的是,你可能想在 Linux 中查看一些重要的 [tar 命令示例][6],以便学习使用 tar 命令来[创建压缩归档文件][7]。
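作为补充示例目录名为假设tar 可以通过 `-j` 选项直接调用 bzip2这也是创建 `.tar.bz2` 归档最常见的方式:

```
$ tar -cjvf backup.tar.bz2 mydir/ # 创建 bzip2 压缩的归档
$ tar -xjvf backup.tar.bz2        # 解开归档
```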
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-compress-decompress-bz2-files-using-bzip2
作者:[Aaron Kili][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
[2]:http://www.tecmint.com/command-line-archive-tools-for-linux/
[3]:http://www.tecmint.com/peazip-linux-file-manager-and-file-archive-tool/
[4]:http://www.tecmint.com/wp-content/uploads/2016/11/Compress-Files-Using-bzip2-in-Linux.png
[5]:http://www.tecmint.com/wp-content/uploads/2016/11/Decompression-bzip2-File-in-Linux.png
[6]:http://www.tecmint.com/18-tar-command-examples-in-linux/
[7]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/

View File

@ -0,0 +1,241 @@
在 Kali Linux 下实战 Nmap网络安全扫描器
========
在这第二篇 Kali Linux 文章中,我们将讨论名为 [nmap][30] 的网络工具。虽然 nmap 不是 Kali 下唯一的工具,但它是最[有用的网络映射工具][29]之一。
- [第一部分 - 为初学者准备的 Kali Linux 安装指南][4]
Nmap 是 Network Mapper 的缩写,由 Gordon Lyon 维护(更多关于 Mr. Lyon 的信息在这里: [http://insecure.org/fyodor/][28]) ,并被世界各地许多的安全专业人员使用。
这个工具在 Linux 和 Windows 下都能使用,并且是通过命令行驱动的。如果你觉得命令行令人生畏nmap 还有一个不错的图形化前端,叫做 zenmap。
强烈建议个人去学习 nmap 的命令行版本,因为与图形化版本 zenmap 相比,它提供了更多的灵活性。
对服务器进行 nmap 扫描的目的是什么这是个很好的问题。Nmap 允许管理员快速而彻底地了解网络上的系统,因此,它的名字叫 Network MAPper 或者 nmap。
Nmap 能够快速找到活动的主机和与该主机相关联的服务。Nmap 的功能还可以通过结合 Nmap 脚本引擎(通常缩写为 NSE进一步被扩展。
这个脚本引擎允许管理员快速创建可用于确定其网络上是否存在新发现的漏洞的脚本。已经有许多脚本被开发出来并且包含在大多数的 nmap 安装中。
提醒一句 - 使用 nmap 的人既可能是善意的,也可能是恶意的。应该非常小心,确保你不要使用 nmap 对没有明确得到书面许可的系统进行扫描。请在使用 nmap 工具的时候注意!
#### 系统要求
1. [Kali Linux][3] (nmap 可以用于其他操作系统,并且功能也和这个指南里面讲的类似)。
2. 另一台计算机,并且装有 nmap 的计算机有权限扫描它 - 这通常很容易通过软件来实现,例如通过 [VirtualBox][2] 创建虚拟机。
1. 想要有一个好的机器来练习一下,可以了解一下 Metasploitable 2。
2. 下载 MS2 [Metasploitable2][1]。
3. 一个可以工作的网络连接,或者是使用虚拟机就可以为这两台计算机建立有效的内部网络连接。
### Kali Linux 使用 Nmap
使用 nmap 的第一步是登录 Kali Linux如果需要就启动一个图形会话本系列的第一篇文章安装了 [Kali Linux 的 Enlightenment 桌面环境][27])。
在安装过程中安装程序将提示用户输入用来登录的“root”用户和密码。 一旦登录到 Kali Linux 机器,使用命令`startx`就可以启动 Enlightenment 桌面环境 - 值得注意的是 nmap 不需要运行桌面环境。
```
# startx
```
![Start Desktop Environment in Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Start-Desktop-Environment-in-Kali-Linux.png)
*在 Kali Linux 中启动桌面环境*
一旦登录到 Enlightenment将需要打开终端窗口。通过点击桌面背景将会出现一个菜单。导航到终端可以进行如下操作应用程序 -> 系统 -> 'Xterm' 或 'UXterm' 或 '根终端'。
作者是名为 '[Terminator][25]' 的 shell 程序的粉丝,但是这可能不会出现在 Kali Linux 的默认安装中。这里列出的所有 shell 程序都可以用来运行 nmap。
![Launch Terminal in Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Launch-Terminal-in-Kali-Linux.png)
*在 Kali Linux 下启动终端*
一旦终端启动nmap 的乐趣就开始了。对于这个特定的教程,将会创建一个 Kali 机器和 Metasploitable 机器之间的私有网络。
这会使事情变得更容易和更安全,因为私有的网络范围将确保扫描保持在安全的机器上,防止易受攻击的 Metasploitable 机器被其他人攻击。
### 怎样在我的网络上找到活动主机
在此示例中,这两台计算机都位于专用的 192.168.56.0/24 网络上。 Kali 机器的 IP 地址为 192.168.56.101,要扫描的 Metasploitable 机器的 IP 地址为 192.168.56.102。
假如我们不知道 IP 地址信息,但是可以通过快速 nmap 扫描来帮助确定在特定网络上哪些是活动主机。这种扫描称为 “简单列表” 扫描,将 `-sL`参数传递给 nmap 命令。
```
# nmap -sL 192.168.56.0/24
```
![Nmap - Scan Network for Live Hosts](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network.png)
*Nmap 扫描网络上的活动主机*
遗憾的是,这个初始扫描没有返回任何活动主机。有时,这与某些操作系统处理[端口扫描网络流量][22]的方式有关。
### 在我的网络中找到并 ping 所有活动主机
不用担心,在这里有一些技巧可以使 nmap 尝试找到这些机器。 下一个技巧会告诉 nmap 尝试去 ping 192.168.56.0/24 网络中的所有地址。
```
# nmap -sn 192.168.56.0/24
```
![Nmap - Ping All Connected Live Network Hosts](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Ping-All-Network-Live-Hosts.png)
*Nmap Ping 所有已连接的活动网络主机*
这次 nmap 会返回一些潜在的主机来进行扫描!在此命令中,`-sn` 禁用了 nmap 默认的端口扫描行为,只是让 nmap 尝试 ping 主机。
### 找到主机上的开放端口
让我们尝试让 nmap 端口扫描这些特定的主机,看看会出现什么。
```
# nmap 192.168.56.1,100-102
```
![Nmap - Network Ports Scan on Host](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-for-Ports-on-Hosts.png)
*Nmap 在主机上扫描网络端口*
哇! 这一次 nmap 挖到了一个金矿。 这个特定的主机有相当多的[开放网络端口][19]。
这些端口全都代表着在此特定机器上的某种监听服务。 我们前面说过192.168.56.102 的 IP 地址会分配给一台易受攻击的机器,这就是为什么在这个主机上会有这么多[开放端口][18]。
在大多数机器上打开这么多端口是非常不正常的,所以赶快调查这台机器是个明智的想法。管理员可以检查下网络上的物理机器,并在本地查看这些机器,但这不会很有趣,特别是当 nmap 可以为我们更快地做到时!
### 找到主机上监听端口的服务
下一个扫描是服务扫描,通常用于尝试确定机器上什么[服务监听在特定的端口][17]。
Nmap 将探测所有打开的端口,并尝试从每个端口上运行的服务中获取信息。
```
# nmap -sV 192.168.56.102
```
![Nmap - Scan Network Services Listening of Ports](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Services-Ports.png)
*Nmap 扫描网络服务监听端口*
请注意这次 nmap 提供了一些关于特定端口上可能在运行什么的猜测(在白框中突出显示),而且 nmap 也试图确认这台机器上[运行的操作系统的信息][15]和它的主机名(非常成功!)。
查看这个输出,应该引起网络管理员相当多的关注。第一行声称 VSftpd 版本 2.3.4 正在这台机器上运行!这是一个非常老的版本的 VSftpd。
通过查找 ExploitDB对于这个版本早在 2011 年就发现了一个非常严重的漏洞ExploitDB ID17491。
### 发现主机上的匿名 FTP 登录
让我们使用 nmap 更加清楚地查看这个端口,并且看看可以确认什么。
```
# nmap -sC 192.168.56.102 -p 21
```
![Nmap - Scan Particular Post on Machine](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Particular-Port-on-Host.png)
*Nmap 扫描机器上的特定端口*
使用此命令,让 nmap 在主机上的 FTP 端口(`-p 21`)上运行其默认脚本(`-sC`)。 虽然它可能是、也可能不是一个问题,但是 nmap 确实发现在这个特定的服务器[是允许匿名 FTP 登录的][13]。
### 检查主机上的漏洞
这与我们早先了解到的 VSftpd 存在旧漏洞的情况相吻合,应该引起一些关注。让我们看看 nmap 有没有脚本来尝试检查 VSftpd 漏洞。
```
# locate .nse | grep ftp
```
![Nmap - Scan VSftpd Vulnerability](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Service-Vulnerability.png)
*Nmap 扫描 VSftpd 漏洞*
注意 nmap 已经有一个用来处理 VSftpd 后门问题的 NSE 脚本!让我们尝试对这个主机运行这个脚本,看看会发生什么,但首先了解一下如何使用这个脚本可能是很重要的。
```
# nmap --script-help=ftp-vsftpd-backdoor.nse
```
![Learn Nmap NSE Script Usage](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Learn-NSE-Script.png)
*了解 Nmap NSE 脚本使用*
通过这个描述,很明显,这个脚本可以用来试图查看这个特定的机器是否容易受到先前识别的 ExploitDB 问题的影响。
让我们运行这个脚本,看看会发生什么。
```
# nmap --script=ftp-vsftpd-backdoor.nse 192.168.56.102 -p 21
```
![Nmap - Scan Host for Vulnerable](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Host-for-Vulnerable.png)
*Nmap 扫描易受攻击的主机*
Nmap 的脚本返回了一些危险的消息。这台机器可能面临风险,之后可以进行更加详细的调查。虽然这并不意味着机器已经被攻破或被用来做一些可怕/糟糕的事情,但它应该引起网络/安全团队的一些关注。
Nmap 可以做到极具选择性、极为低调。到目前为止所做的大多数扫描中nmap 的网络流量都保持得比较小,然而以这种方式扫描个人拥有的网络可能会非常耗时。
Nmap 有能力做更激进的扫描,往往一条命令就能得到之前多条命令所获得的信息。让我们来看看激进扫描的输出(注意:激进的扫描会触发[入侵检测/预防系统][9]!)。
```
# nmap -A 192.168.56.102
```
![Nmap - Complete Network Scan on Host](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Host.png)
*Nmap 在主机上完成网络扫描*
注意这一次使用一个命令nmap 返回了很多关于在这台特定机器上运行的开放端口、服务和配置的信息。 这些信息中的大部分可用于帮助确定[如何保护本机][7]以及评估网络上可能运行的软件。
这只是 nmap 可用于在主机或网段上找到的许多有用信息的一小部分。强烈建议大家在自己拥有的网络上继续[用 nmap][6] 进行实验(不要通过扫描其他人的主机来练习!)。
有一个关于 Nmap 网络扫描的官方指南,作者 Gordon Lyon可从[亚马逊](http://amzn.to/2eFNYrD)上获得。
方便的话可以留下你的评论和问题(或者使用 nmap 扫描器的技巧)。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/nmap-network-security-scanner-in-kali-linux/
作者:[Rob Turner][a]
译者:[DockerChen](https://github.com/DockerChen)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/robturner/
[1]:https://sourceforge.net/projects/metasploitable/files/Metasploitable2/
[2]:http://www.tecmint.com/install-virtualbox-on-redhat-centos-fedora/
[3]:http://www.tecmint.com/kali-linux-installation-guide
[4]:http://www.tecmint.com/kali-linux-installation-guide
[5]:http://amzn.to/2eFNYrD
[6]:http://www.tecmint.com/nmap-command-examples/
[7]:http://www.tecmint.com/security-and-hardening-centos-7-guide/
[8]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Host.png
[9]:http://www.tecmint.com/protect-apache-using-mod_security-and-mod_evasive-on-rhel-centos-fedora/
[10]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Host-for-Vulnerable.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Learn-NSE-Script.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Service-Vulnerability.png
[13]:http://www.tecmint.com/setup-ftp-anonymous-logins-in-linux/
[14]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Particular-Port-on-Host.png
[15]:http://www.tecmint.com/commands-to-collect-system-and-hardware-information-in-linux/
[16]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Services-Ports.png
[17]:http://www.tecmint.com/find-linux-processes-memory-ram-cpu-usage/
[18]:http://www.tecmint.com/find-open-ports-in-linux/
[19]:http://www.tecmint.com/find-open-ports-in-linux/
[20]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-for-Ports-on-Hosts.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Ping-All-Network-Live-Hosts.png
[22]:http://www.tecmint.com/audit-network-performance-security-and-troubleshooting-in-linux/
[23]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network.png
[24]:http://www.tecmint.com/wp-content/uploads/2016/11/Launch-Terminal-in-Kali-Linux.png
[25]:http://www.tecmint.com/terminator-a-linux-terminal-emulator-to-manage-multiple-terminal-windows/
[26]:http://www.tecmint.com/wp-content/uploads/2016/11/Start-Desktop-Environment-in-Kali-Linux.png
[27]:http://www.tecmint.com/kali-linux-installation-guide
[28]:http://insecure.org/fyodor/
[29]:http://www.tecmint.com/bcc-best-linux-performance-monitoring-tools/
[30]:http://www.tecmint.com/nmap-command-examples/

View File

@ -0,0 +1,87 @@
LINUX NOW RUNS ON 99.6% OF TOP 500 SUPERCOMPUTERS
============================================================
[
![Linux rules the world of supercomputers](https://itsfoss.com/wp-content/uploads/2016/11/Linux-King-Supercomputer-world-min.jpg)
][12]
_Brief: Linux may have just a 2% share of the desktop market, but when it comes to supercomputers, Linux is simply ruling it with over 99% of the share._
Linux running on more than 99% of the top 500 fastest supercomputers in the world is no surprise. If you followed our previous reports, in the year 2015, [Linux was running on more than 97% of the top 500 supercomputers][13]. This year, it just got better.
This information is collected by an independent organization [Top500][14] that publishes the details about the top 500 fastest supercomputers known to them, twice a year. You can [go to the website and filter out the list][15] based on country, OS type used, vendors etc. Don't worry, I'll do it for you to present some of the most interesting facts from this year's list.
### LINUX GOT 498 OUT OF 500
If I have to break it down in numbers, 498 out of the top 500 supercomputers run Linux. The remaining two supercomputers run a Unix-based OS. Windows, which was running on one supercomputer until last year, is nowhere in the list this year. Perhaps none of the supercomputers can run Windows 10 (pun intended).
To summarize the list of top 500 supercomputers based on OS this year:
* Linux: 498
* Unix: 2
* Windows: 0
To give you a year-wise summary of Linux shares on the top 500 supercomputers:
* In 2012: 94%
* In [2013][6]: 95%
* In [2014][7]: 97%
* In [2015][8]: 97.2%
* In 2016: 99.6%
* In 2017: ???
In addition to that, the first 380 fastest supercomputers run Linux, including of course the fastest supercomputer, based in China. Unix is used by the 386th and 387th ranked supercomputers, also based in China.
### SOME OTHER INTERESTING STATS ABOUT FASTEST SUPERCOMPUTERS
[
![List of top 10 fastest supercomputers in the world in 2016](https://itsfoss.com/wp-content/uploads/2016/11/fastest-supercomputers.png)
][16]
Moving Linux aside, I was looking at the list and thought of sharing some other interesting stats with you.
* World's fastest supercomputer is [Sunway TaihuLight][9]. It is based at the [National Supercomputing Center in Wuxi][10], China. It has a speed of 93 PFLOPS.
* World's second fastest supercomputer is also based in China ([Tianhe-2][11]), while the third spot is taken by US-based Titan.
* Out of the top 10 fastest supercomputers, USA has 5, Japan and China have 2 each, while Switzerland has 1.
* US and China both have 171 supercomputers each in the list of the top 500 supercomputers.
* Japan has 27 and France has 20, while India, Russia and Saudi Arabia have 5 supercomputers each in the list.
Some interesting facts, isn't it? You can filter out your own list [here][18] for further details. For the moment I am happy to brag about Linux running on 99% of the top 500 supercomputers and look forward to a perfect score of 100% next year.
While you are reading it, do share this article on social media. It's an achievement for Linux and we've got to show off :P
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-99-percent-top-500-supercomputers
作者:[Abhishek Prakash ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/abhishek/
[1]:https://twitter.com/share?original_referer=https%3A%2F%2Fitsfoss.com%2F&source=tweetbutton&text=Linux+Now+Runs+On+99.6%25+Of+Top+500+Supercomputers&url=https%3A%2F%2Fitsfoss.com%2Flinux-99-percent-top-500-supercomputers%2F&via=%40itsfoss
[2]:https://www.linkedin.com/cws/share?url=https://itsfoss.com/linux-99-percent-top-500-supercomputers/
[3]:http://pinterest.com/pin/create/button/?url=https://itsfoss.com/linux-99-percent-top-500-supercomputers/&description=Linux+Now+Runs+On+99.6%25+Of+Top+500+Supercomputers&media=https://itsfoss.com/wp-content/uploads/2016/11/Linux-King-Supercomputer-world-min.jpg
[4]:https://twitter.com/share?text=%23Linux+now+runs+on+more+than+99%25+of+top+500+%23supercomputers+in+the+world&via=itsfoss&related=itsfoss&url=https://itsfoss.com/linux-99-percent-top-500-supercomputers/
[5]:https://twitter.com/share?text=%23Linux+now+runs+on+more+than+99%25+of+top+500+%23supercomputers+in+the+world&via=itsfoss&related=itsfoss&url=https://itsfoss.com/linux-99-percent-top-500-supercomputers/
[6]:https://itsfoss.com/95-percent-worlds-top-500-supercomputers-run-linux/
[7]:https://itsfoss.com/97-percent-worlds-top-500-supercomputers-run-linux/
[8]:https://itsfoss.com/linux-runs-97-percent-worlds-top-500-supercomputers/
[9]:https://en.wikipedia.org/wiki/Sunway_TaihuLight
[10]:https://www.top500.org/site/50623
[11]:https://en.wikipedia.org/wiki/Tianhe-2
[12]:https://itsfoss.com/wp-content/uploads/2016/11/Linux-King-Supercomputer-world-min.jpg
[13]:https://itsfoss.com/linux-runs-97-percent-worlds-top-500-supercomputers/
[14]:https://www.top500.org/
[15]:https://www.top500.org/statistics/sublist/
[16]:https://itsfoss.com/wp-content/uploads/2016/11/fastest-supercomputers.png
[17]:https://itsfoss.com/digikam-5-0-released-install-it-in-ubuntu-linux/
[18]:https://www.top500.org/statistics/sublist/

View File

@ -0,0 +1,84 @@
How To Manually Backup Your SMS / MMS Messages On Android?
============================================================
![Android backup sms mms](https://iwf1.com/wordpress/wp-content/uploads/2016/10/Android-backup-sms-mms.jpg)
If you're switching devices or upgrading your system, making a backup of your data might be of crucial importance.
One of the places where our important data may lie is in our SMS / MMS messages; be it of sentimental or practical value, backing them up might prove quite useful.
However, unlike our photos, videos or song files, which can be transferred and backed up with relative ease, backing up our SMS / MMS messages usually proves to be a more complicated task that commonly requires a third-party app or service.
### Why Do It Manually?
Although there currently exist quite a few different apps that might take care of backing up SMS and MMS for you, you may want to consider doing it manually for the following reasons:
1. Apps **may not work** on different devices or different Android versions.
2. Apps may backup your data by uploading it to the Internet cloud therefore requiring you to **jeopardize the safety** of your content.
3. By backing up manually, you have complete control over where your data goes and what it goes through, thus **limiting the risk of spyware** in the process.
4. Doing it manually can be overall **less time consuming, easier and more straightforward** than any other way.
### How To Backup SMS / MMS Manually?
To backup your SMS / MMS messages manually you'll need to have an Android tool called [adb][1] installed on your computer.
Now, the important thing to know regarding SMS / MMS is that Android stores them in a database commonly named **mmssms.db**.
Since the location of that database may differ from one device to another, and because other SMS apps can create databases of their own (such as gommssms.db, created by the GO SMS app), the first thing you'd want to do is to search for these databases.
So, open up your CLI tool (I use Linux Terminal, you may use Windows CMD or PowerShell) and issue the following commands:
Note: below is the series of commands needed for the task; an explanation of what each command does follows.
```
adb root
adb shell
find / -name "*mmssms*"
exit
adb pull /PATH/TO/mmssms.db /PATH/TO/DESTINATION/FOLDER
```
#### Explanation:
We start with the adb root command in order to restart adb in root mode, so that we'll have permission to reach system-protected files as well.
“adb shell” is used to get inside the device shell.
Next, the “find” command is used to search for the databases. (in my case it's found in: /data/data/com.android.providers.telephony/databases/mmssms.db)
* Tip: if your Terminal prints too many irrelevant results, try refining your “find” parameters (google it).
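For instance, since the stock SMS provider usually lives under /data/data, a narrower search like the one below (a sketch; the exact path may vary between devices) tends to cut out most of the noise:

```
find /data/data -name "*mmssms*" 2>/dev/null
```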
[
![Android SMS&MMS databases](http://iwf1.com/wordpress/wp-content/uploads/2016/10/Android-SMSMMS-databases-730x726.jpg)
][2]
Android SMS&MMS databases
Then we use the exit command in order to exit back to our local system directory.
Lastly, adb pull is used to copy the database files into a folder on our computer.
Now, once you're ready to restore your SMS / MMS messages, whether it's on a new device or a new system version, simply search again for the location of mmssms.db on the new system and replace it with the one you've backed up.
Use adb push to replace it, e.g.: `adb push ~/Downloads/mmssms.db /data/data/com.android.providers.telephony/databases/mmssms.db`
--------------------------------------------------------------------------------
via: https://iwf1.com/how-to-manually-backup-your-sms-mms-messages-on-android/
作者:[Liron ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://iwf1.com/tag/android
[1]:http://developer.android.com/tools/help/adb.html
[2]:http://iwf1.com/wordpress/wp-content/uploads/2016/10/Android-SMSMMS-databases.jpg

View File

@ -0,0 +1,100 @@
Livepatch Apply Critical Security Patches to Ubuntu Linux Kernel Without Rebooting
============================================================
If you are a system administrator in charge of maintaining critical systems in enterprise environments, we are sure you know two important things:
1) Finding a downtime window to install security patches in order to handle kernel or operating system vulnerabilities can be difficult. If the company or business you work for does not have security policies in place, operations management may end up favoring uptime over the need to solve vulnerabilities. Additionally, internal bureaucracy can cause delays in granting approvals for a downtime. Been there myself.
2) Sometimes you can't really afford a downtime, and should be prepared to mitigate any potential exposures to malicious attacks some other way.
The good news is that Canonical has recently released (actually, a couple of days ago) its Livepatch service to apply critical kernel patches to Ubuntu 16.04 (64-bit edition / 4.4.x kernel) without the need for a later reboot. Yes, you read that right: with Livepatch, you don't need to restart your Ubuntu 16.04 server in order for the security patches to take effect.
### Signing up for Ubuntu Livepatch
In order to use Canonical Livepatch Service, you need to sign up at [https://auth.livepatch.canonical.com/][1] and indicate if you are a regular Ubuntu user or an Advantage subscriber (paid option). All Ubuntu users can link up to 3 different machines to Livepatch through the use of a token:
[
![Canonical Livepatch Service](http://www.tecmint.com/wp-content/uploads/2016/10/Canonical-Livepatch-Service.png)
][2]
Canonical Livepatch Service
In the next step you will be prompted to enter your Ubuntu One credentials or sign up for a new account. If you choose the latter, you will need to confirm your email address in order to finish your registration:
[
![Ubuntu One Confirmation Mail](http://www.tecmint.com/wp-content/uploads/2016/10/Ubuntu-One-Confirmation-Mail.png)
][3]
Ubuntu One Confirmation Mail
Once you click on the link above to confirm your email address, you'll be ready to go back to [https://auth.livepatch.canonical.com/][4] and get your Livepatch token.
### Getting and Using your Livepatch Token
To begin, copy the unique token assigned to your Ubuntu One account:
[
![Canonical Livepatch Token](http://www.tecmint.com/wp-content/uploads/2016/10/Livepatch-Token.png)
][5]
Canonical Livepatch Token
Then go to a terminal and type:
```
$ sudo snap install canonical-livepatch
```
The above command will install the canonical-livepatch snap, whereas
```
$ sudo canonical-livepatch enable [YOUR TOKEN HERE]
```
will enable it for your system. If this last command indicates it can't find canonical-livepatch, make sure `/snap/bin` has been added to your path. A workaround consists of changing your working directory to `/snap/bin` and doing:
```
$ sudo ./canonical-livepatch enable [YOUR TOKEN HERE]
```
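Alternatively, invoking the binary by its full path sidesteps the PATH issue altogether (a simple workaround on my part, not from Canonical's documentation):

```
$ sudo /snap/bin/canonical-livepatch enable [YOUR TOKEN HERE]
```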
[
![Install Livepatch in Ubuntu](http://www.tecmint.com/wp-content/uploads/2016/10/Install-Livepatch-in-Ubuntu.png)
][6]
Install Livepatch in Ubuntu
Over time, you'll want to check the description and the status of patches applied to your kernel. Fortunately, this is as easy as doing:
```
$ sudo ./canonical-livepatch status --verbose
```
as you can see in the following image:
[
![Check Livepatch Status in Ubuntu](http://www.tecmint.com/wp-content/uploads/2016/10/Check-Livepatch-Status.png)
][7]
Check Livepatch Status in Ubuntu
Having enabled Livepatch on your Ubuntu server, you will be able to reduce planned and unplanned downtimes to a minimum while keeping your system secure. Hopefully Canonical's initiative will earn you a pat on the back from management or, better yet, a raise.
Feel free to let us know if you have any questions about this article. Just drop us a note using the comment form below and we will get back to you as soon as possible.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/livepatch-install-critical-security-patches-to-ubuntu-kernel
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:https://auth.livepatch.canonical.com/
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Canonical-Livepatch-Service.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Ubuntu-One-Confirmation-Mail.png
[4]:https://auth.livepatch.canonical.com/
[5]:http://www.tecmint.com/wp-content/uploads/2016/10/Livepatch-Token.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/10/Install-Livepatch-in-Ubuntu.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-Livepatch-Status.png

View File

@ -1,3 +1,5 @@
GitFuture get translating
DTrace for Linux 2016
===========

View File

@ -0,0 +1,113 @@
翻译中 by zky001
How to Sort Output of ls Command By Last Modified Date and Time
============================================================
One of the most common things a Linux user will always do on the command line is [listing the contents of a directory][1]. As we may already know, [ls][2] and [dir][3] are the two commands available on Linux for listing directory content, with the former being more popular and, in most cases, preferred by users.
When listing directory contents, the results can be sorted based on several criteria such as alphabetical order of filenames, modification time, access time, version and file size. Sorting using each of these file properties can be enabled by using a specific flag.
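As a quick reference (a non-exhaustive summary of standard ls options), each of these sort criteria has its own flag:

```
$ ls -lt    # sort by modification time
$ ls -ltu   # sort by access time
$ ls -lS    # sort by file size
$ ls -lv    # natural sort of version numbers within the names
```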
In this brief [ls command guide][4], we will look at how to [sort the output of ls command][5] by last modification time (date and time).
Let us start by executing some [basic ls commands][6].
### Linux Basic ls Commands
1. Running the ls command without appending any argument will list the current working directory's contents.
```
$ ls
```
[
![List Content of Working Directory](http://www.tecmint.com/wp-content/uploads/2016/10/List-Content-of-Working-Directory.png)
][7]
List Content of Working Directory
2. To list the contents of any directory, for example the /etc directory, use:
```
$ ls /etc
```
[
![List Contents of Directory](http://www.tecmint.com/wp-content/uploads/2016/10/List-Contents-of-Directory.png)
][8]
List Contents of Directory
3. A directory always contains a few hidden files (at least two), therefore, to show all files in a directory, use the `-a` or `--all` flag:
```
$ ls -a
```
[
![List Hidden Files in Directory](http://www.tecmint.com/wp-content/uploads/2016/10/List-Hidden-Files-in-Directory.png)
][9]
List Hidden Files in Directory
4. You can as well print detailed information about each file in the ls output, such as the file permissions, number of links, owner's name and group owner, file size, time of last modification and the file/directory name.
This is activated by the `-l` option, which means a long listing format as in the next screenshot:
```
$ ls -l
```
[
![Long List Directory Contents](http://www.tecmint.com/wp-content/uploads/2016/10/ls-Long-List-Format.png)
][10]
Long List Directory Contents
### Sort Files Based on Time and Date
5. To list files in a directory and [sort them by last modified date and time][11], make use of the `-t` option as in the command below:
```
$ ls -lt
```
[
![Sort ls Output by Date and Time](http://www.tecmint.com/wp-content/uploads/2016/10/Sort-ls-Output-by-Date-and-Time.png)
][12]
Sort ls Output by Date and Time
6. If you want reverse sorting of files based on date and time, you can use the `-r` option like so:
```
$ ls -ltr
```
[
![Sort ls Output Reverse by Date and Time](http://www.tecmint.com/wp-content/uploads/2016/10/Sort-ls-Output-Reverse-by-Date-and-Time.png)
][13]
Sort ls Output Reverse by Date and Time
We will end here for now; however, there is more usage information and options in the [ls command][14], so make it a point to look through it or any other guides offering [ls command tricks every Linux user should know][15] or [use the sort command][16]. Last but not least, you can reach us via the feedback section below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/sort-ls-output-by-last-modified-date-and-time
作者:[Aaron Kili][a]
译者:[zky001](https://github.com/zky001)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/file-and-directory-management-in-linux/
[2]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/
[3]:http://www.tecmint.com/linux-dir-command-usage-with-examples/
[4]:http://www.tecmint.com/tag/linux-ls-command/
[5]:http://www.tecmint.com/sort-command-linux/
[6]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Content-of-Working-Directory.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Contents-of-Directory.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Hidden-Files-in-Directory.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/10/ls-Long-List-Format.png
[11]:http://www.tecmint.com/find-and-sort-files-modification-date-and-time-in-linux/
[12]:http://www.tecmint.com/wp-content/uploads/2016/10/Sort-ls-Output-by-Date-and-Time.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/10/Sort-ls-Output-Reverse-by-Date-and-Time.png
[14]:http://www.tecmint.com/tag/linux-ls-command/
[15]:http://www.tecmint.com/linux-ls-command-tricks/
[16]:http://www.tecmint.com/linux-sort-command-examples/

View File

@ -0,0 +1,115 @@
3 Ways to Extract and Copy Files from ISO Image in Linux
============================================================
Let's say you have a large ISO file on your Linux server and you want to access, extract or copy one single file from it. How do you do it? Well, in Linux there are a couple of ways to do it.
For example, you can use the standard mount command to mount an ISO image in read-only mode using the loop device and then copy the files to another directory.
### Mount or Extract ISO File in Linux
To do so, you must have an ISO file (I used ubuntu-16.10-server-amd64.iso ISO image) and mount point directory to mount or extract ISO files.
First create a mount point directory, where you are going to mount the image, as shown:
```
$ sudo mkdir /mnt/iso
```
Once the directory has been created, you can easily mount the ubuntu-16.10-server-amd64.iso file and verify its content by running the following command.
```
$ sudo mount -o loop ubuntu-16.10-server-amd64.iso /mnt/iso
$ ls /mnt/iso/
```
[
![Mount ISO File in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Mount-ISO-File-in-Linux.png)
][1]
Mount ISO File in Linux
Now you can go inside the mounted directory (/mnt/iso) and access the files or copy the files to the `/tmp` directory using the [cp command][2].
```
$ cd /mnt/iso
$ sudo cp md5sum.txt /tmp/
$ sudo cp -r ubuntu /tmp/
```
[
![Copy Files From ISO File in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Copy-Files-From-ISO-File-in-Linux.png)
][3]
Copy Files From ISO File in Linux
Note: The `-r` option is used to copy directories recursively; if you want, you can also [monitor the progress of the copy command][4].
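When you are done working with the image, you may want to unmount it (a standard cleanup step, not shown in the original article):

```
$ sudo umount /mnt/iso
```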
### Extract ISO Content Using 7zip Command
If you don't want to mount the ISO file, you can simply install 7zip, an open source archive program used to pack or unpack a number of different formats including TAR, XZ, GZIP, ZIP, BZIP2, etc.
```
$ sudo apt-get install p7zip-full p7zip-rar [On Debian/Ubuntu systems]
$ sudo yum install p7zip p7zip-plugins [On CentOS/RHEL systems]
```
Once the 7zip program has been installed, you can use the 7z command to extract the ISO file's contents.
```
$ 7z x ubuntu-16.10-server-amd64.iso
```
[
![7zip - Extract ISO File Content in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Extract-ISO-Content-in-Linux.png)
][5]
7zip Extract ISO File Content in Linux
Note: Compared to the Linux mount command, 7zip seems much faster and is smart enough to pack or unpack almost any archive format.
### Extract ISO Content Using isoinfo Command
The isoinfo command is used for directory listings of iso9660 images, but you can also use this program to extract files.
As I said, the isoinfo program performs directory listings, so first list the content of the ISO file:
```
$ isoinfo -i ubuntu-16.10-server-amd64.iso -l
```
[
![List ISO Content in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/List-ISO-Content-in-Linux.png)
][6]
List ISO Content in Linux
Now you can extract a single file from an ISO image like so:
```
$ isoinfo -i ubuntu-16.10-server-amd64.iso -x MD5SUM.TXT > MD5SUM.TXT
```
Note: The redirection is needed as the `-x` option extracts to stdout.
[
![Extract Single File from ISO in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Extract-Single-File-from-ISO-in-Linux.png)
][7]
Extract Single File from ISO in Linux
Well, there are many ways to do it; if you know of any other useful command or program to extract or copy files from an ISO file, do share with us via the comment section.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/extract-files-from-iso-files-linux
作者:[Ravi Saive][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/admin/
[1]:http://www.tecmint.com/wp-content/uploads/2016/10/Mount-ISO-File-in-Linux.png
[2]:http://www.tecmint.com/advanced-copy-command-shows-progress-bar-while-copying-files/
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Copy-Files-From-ISO-File-in-Linux.png
[4]:http://www.tecmint.com/monitor-copy-backup-tar-progress-in-linux-using-pv-command/
[5]:http://www.tecmint.com/wp-content/uploads/2016/10/Extract-ISO-Content-in-Linux.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/10/List-ISO-Content-in-Linux.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Extract-Single-File-from-ISO-in-Linux.png

View File

@ -0,0 +1,100 @@
4 Useful Way to Know Plugged USB Device Name in Linux
============================================================
As a newbie, one of the many [things you should master in Linux][1] is identification of devices attached to your system. It may be your computer's hard disk, an external hard drive or removable media such as a USB drive or SD memory card.
Using USB drives for file transfer is so common today, and for those (new Linux users) who prefer to use the command line, learning the different ways to identify a USB device name is very important, when you need to format it.
Once you attach a device to your system such as a USB drive, especially on a desktop, it is automatically mounted to a given directory, normally under /media/username/device-label, and you can then access the files in it from that directory. However, this is not the case with a server, where you have to [manually mount a device][2] and specify its mount point.
Linux identifies devices using special device files stored in the `/dev` directory. Some of the files you will find in this directory include `/dev/sda` or `/dev/hda`, which represent your first master drive; each partition will be represented by a number, such as `/dev/sda1` or `/dev/hda1` for the first partition, and so on.
```
$ ls /dev/sda*
```
[
![List All Linux Device Names](http://www.tecmint.com/wp-content/uploads/2016/10/List-All-Linux-Device-Names.png)
][3]
List All Linux Device Names
Now let's find out device names using some different command-line tools as shown:
### Find Out Plugged USB Device Name Using df Command
To view each device attached to your system as well as its mount point, you can use the [df command][4] (which checks Linux disk space utilization) as shown in the image below:
```
$ df -h
```
[
![Find USB Device Name Using df Command](http://www.tecmint.com/wp-content/uploads/2016/10/Find-USB-Device-Name.png)
][5]
Find USB Device Name Using df Command
### Use lsblk Command to Find USB Device Name
You can also use the [lsblk command (list block devices)][6] which lists all block devices attached to your system like so:
```
$ lsblk
```
[
![List Linux Block Devices](http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Block-Devices.png)
][7]
List Linux Block Devices
### Identify USB Device Name with fdisk Utility
[fdisk is a powerful utility][8] which prints out the partition table on all your block devices, a USB drive inclusive; you can run it with root privileges as follows:
```
$ sudo fdisk -l
```
[
![List Partition Table of Block Devices](http://www.tecmint.com/wp-content/uploads/2016/10/List-Partition-Table.png)
][9]
List Partition Table of Block Devices
### Determine USB Device Name with dmesg Command
dmesg is an important command that prints or controls the kernel ring buffer, a data structure which [stores information about the kernel's operations][10].
Run the command below to view kernel operation messages, which will also print information about your USB device:
```
$ dmesg
```
[
![dmesg - Prints USB Device Name](http://www.tecmint.com/wp-content/uploads/2016/10/dmesg-shows-kernel-information.png)
][11]
dmesg Prints USB Device Name
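Because the ring buffer holds messages from the entire boot, it usually helps to filter the output or look only at the most recent lines (a minimal sketch; the exact message wording varies by kernel version):
```
$ dmesg | grep -i usb
$ dmesg | tail -20
```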
That is all for now. In this article, we have covered different approaches to finding out a USB device name from the command line. You can also share with us any other methods for the same purpose, or perhaps offer us your thoughts about the article via the response section below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/find-usb-device-name-in-linux
作者:[Aaron Kili ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/tag/linux-tricks/
[2]:http://www.tecmint.com/mount-filesystem-in-linux/
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/List-All-Linux-Device-Names.png
[4]:http://www.tecmint.com/how-to-check-disk-space-in-linux/
[5]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-USB-Device-Name.png
[6]:http://www.tecmint.com/commands-to-collect-system-and-hardware-information-in-linux/
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Block-Devices.png
[8]:http://www.tecmint.com/fdisk-commands-to-manage-linux-disk-partitions/
[9]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Partition-Table.png
[10]:http://www.tecmint.com/dmesg-commands/
[11]:http://www.tecmint.com/wp-content/uploads/2016/10/dmesg-shows-kernel-information.png

View File

@ -1,3 +1,5 @@
Vic020
# How to design and add your own font on Linux with Glyphr
LibreOffice already offers fonts galore, and users can always download and add more. However, if you want to create your own custom font, you can do it easily by using Glyphr. Glyphr is a new open source vector font designer with an intuitive and easy-to-use graphical interface and a rich set of features that will take care of every aspect of the font design. Although the application is still in early development, it is already pretty good. Here's a quick guide showing how to design your own custom fonts in Glyphr, and how to add them to LibreOffice once you're done.

View File

@ -0,0 +1,109 @@
CLOUD FOCUSED LINUX DISTROS FOR PEOPLE WHO BREATHE ONLINE
============================================================
[
![Best Linux distributions for cloud computing](https://itsfoss.com/wp-content/uploads/2016/11/cloud-centric-Linux-distributions.jpg)
][6]
_Brief: We list some cloud-centric Linux distributions that might be termed as real Linux alternatives to Chrome OS._
The world is moving to cloud-based services, and we all know the kind of love that Chrome OS got. Well, it does deserve respect. It's super fast, light, power-efficient, minimalistic, beautifully designed, and utilizes the full potential of cloud technology available today.
Although [Chrome OS][7] is available exclusively for Google's hardware, there are other means to experience the potential of cloud computing right on your laptop or desktop.
As I have repeatedly said, there is always something for everybody in the Linux domain, be it [Linux distributions that look like Windows][8] or Mac OS. Linux is all about sharing, love, and some really bleeding-edge computing experience. Let's crack this list right away!
### 1. CUB LINUX
![Cub Linux Desktop](https://itsfoss.com/wp-content/uploads/2016/10/cub1.jpg)
It is not Chrome OS. But the above image is featuring the desktop of [Cub Linux][9]. Say what?
Cub Linux is no news for Linux users. But if you did not already know, Cub Linux is a web-focused Linux distro inspired by the mainstream Chrome OS. It is also the open source brother of Chrome OS from mother Linux.
Chrome OS has the Chrome Browser as its primary component. Not so long ago, a project by the name of [Chromixium OS][10] was started to recreate a Chrome OS-like experience by using the Chromium Browser in place of the Chrome Browser. Due to some legal issues, the name was later changed to Cub Linux (Chromium+Ubuntu).
![cub2](https://itsfoss.com/wp-content/uploads/2016/10/cub2.jpg)
Well, history apart, as the name hints, Cub Linux is based on Ubuntu and features the lightweight Openbox desktop environment. The desktop is customized to give a Chrome OS impression and looks really neat.
In the apps department, you can install web applications from the Chrome web store as well as all the Ubuntu software. Yup, along with all the snappy apps of Chrome OS, you'll still get the Ubuntu goodies.
As far as performance is concerned, the operating system is super fast thanks to its Openbox desktop environment. Based on Ubuntu Linux, Cub Linux is unquestionably stable. The desktop itself is a treat to the eyes, with all its smooth animations and beautiful UI.
I suggest Cub Linux to anybody who spends most of their time in a browser and does some home tasks now and then. Well, a browser is all you need, and a browser is exactly what you'll get.
### 2. PEPPERMINT OS
A good number of people look towards Linux because they want a no-BS computing experience. Some people do not really like the hassle of an anti-virus, a defragmenter, a cleaner, etcetera, as they want an operating system and not a baby. And I must say Peppermint OS is really good at being no-BS. The [Peppermint OS][12] developers have put a good amount of effort into understanding the users' requirements and needs.
![pep1](https://itsfoss.com/wp-content/uploads/2016/11/pep1.jpg)
There is only a very small selection of software included by default. The traditional ideology of including a couple of apps from every software category is ditched here for good. The power to customize the computer as per one's needs has been given to the user. By the way, do we really need to install so many applications when we can get web alternatives for almost all of them?
#### Ice
Ice is a useful little tool that converts your favorite and most-used websites into desktop applications that you can launch directly from your desktop or the menu. It's what we call a site-specific browser.
![pep4](https://itsfoss.com/wp-content/uploads/2016/11/pep4.jpg)
Love Facebook? Why not make a Facebook web app on your desktop for quick launch? While there are people complaining about the lack of a decent Google Drive application for Linux, Ice allows you to access Drive with just a click. Not just Drive; the functionality of Ice is limited only by your imagination.
Peppermint OS 7 is based on Ubuntu 16.04. It not only provides smooth, rock-solid performance but also a very swift response. A heavily customized LXDE will be your home screen. And the customization I'm speaking about is driven to achieve both snappy performance and visual appeal.
Peppermint OS hits more of a middle ground among cloud-native operating systems. Although the skeleton of the OS is designed to support speedy cloud apps, native Ubuntu applications play well too. If you are someone like me who wants an OS that is balanced in online-offline capabilities, [Peppermint OS is for you][13].
### 3. APRICITY OS
[Apricity OS][15] stole the show for being one of the most aesthetically pleasing Linux distros out there. It's just gorgeous. It's like the Mona Lisa of the Linux domain. But there's more to it than just the looks.
![ap2](https://itsfoss.com/wp-content/uploads/2016/11/ap2.jpg)
The prime reason [Apricity OS][16] makes this list is its simplicity. While desktop design is getting chaotic and congested with elements (and I'm not talking only about non-Linux operating systems), Apricity de-clutters everything and simplifies the very basic human-desktop interaction. The Gnome desktop environment is customized beautifully here. They made it really simple.
The pre-installed software list is really small. Almost all Linux distros ship with the same pre-installed software, but Apricity OS has a completely new set of software: Chrome instead of Firefox. I was really waiting for this. I mean, why not give us what's rocking out there?
Apricity OS also features the Ice tool that we discussed in the last segment, but instead of Firefox, the Chrome browser is used for website-desktop integration. Apricity OS has Numix Circle icons by default, and every time you add a popular web app, a beautiful icon is placed on your dock.
![](https://itsfoss.com/wp-content/uploads/2016/11/ap1.jpg)
See what I mean?
Apricity OS is based on Arch Linux. (So anybody looking for a quick start with Arch, and a beautiful one at that, can download the Apricity ISO [here][17].) Apricity fully upholds the Arch principle of freedom of choice. Within just 10 minutes with Ice, you'll have all your favorite web apps set up.
Gorgeous backgrounds, a minimalistic desktop, and great functionality: these make Apricity OS a really great choice for setting up an amazing cloud-centric system. It'll take Apricity OS just five minutes to make you fall in love with it. I mean it.
There you have it, people: cloud-centric Linux distros for online dwellers. Do give us your take on the web app vs. native app topic. Don't forget to share.
--------------------------------------------------------------------------------
via: https://itsfoss.com/cloud-focused-linux-distros/
作者:[Aquil Roshan ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/aquil/
[1]:https://itsfoss.com/author/aquil/
[2]:https://itsfoss.com/cloud-focused-linux-distros/#comments
[3]:https://twitter.com/share?original_referer=https%3A%2F%2Fitsfoss.com%2F&source=tweetbutton&text=Cloud+Focused+Linux+Distros+For+People+Who+Breathe+Online&url=https%3A%2F%2Fitsfoss.com%2Fcloud-focused-linux-distros%2F&via=%40itsfoss
[4]:https://www.linkedin.com/cws/share?url=https://itsfoss.com/cloud-focused-linux-distros/
[5]:http://pinterest.com/pin/create/button/?url=https://itsfoss.com/cloud-focused-linux-distros/&description=Cloud+Focused+Linux+Distros+For+People+Who+Breathe+Online&media=https://itsfoss.com/wp-content/uploads/2016/11/cloud-centric-Linux-distributions.jpg
[6]:https://itsfoss.com/wp-content/uploads/2016/11/cloud-centric-Linux-distributions.jpg
[7]:https://en.wikipedia.org/wiki/Chrome_OS
[8]:https://itsfoss.com/windows-like-linux-distributions/
[9]:https://cublinux.com/
[10]:https://itsfoss.com/chromixiumos-released/
[11]:https://itsfoss.com/year-2013-linux-2-linux-distributions-discontinued/
[12]:https://peppermintos.com/
[13]:https://peppermintos.com/
[14]:https://itsfoss.com/pennsylvania-high-school-ubuntu/
[15]:https://apricityos.com/
[16]:https://itsfoss.com/apricity-os/
[17]:https://apricityos.com/

View File

@ -1,3 +1,5 @@
Translating by rusking
# Kali Linux Fresh Installation Guide
Kali Linux is arguably one of the best out of the box [Linux distributions available for security testing][18]. While many of the tools in Kali can be installed in most Linux distributions, the Offensive Security team developing Kali has put countless hours into perfecting their ready to boot security distribution.

View File

@ -0,0 +1,58 @@
OneNewLife translating
When to use NGINX instead of Apache
=====
>They're both popular open-source web servers but, according to NGINX CEO Gus Robertson, they have different use cases. And Microsoft? Its web server has dropped below 10 percent of all active websites for the first time in 20 years.
![web-server-popularity-october-2016.png](http://zdnet1.cbsistatic.com/hub/i/r/2016/11/07/f38d190e-046c-49e6-b451-096ee0776a04/resize/770xauto/b009f53417e9a4af207eff6271b90c43/web-server-popularity-october-2016.png)
Apache is the top web server, but NGINX continues to gain and Microsoft IIS falls below 10 percent for the first time in decades.
[NGINX][4] has risen to become the number two web server. It surpassed [Microsoft Internet Information Services (IIS)][5] long ago and has been creeping up on long-time top web server [Apache][6]. But, NGINX CEO Gus Robertson said in an interview, Apache and NGINX are not after the same audiences.
"I think Apache is a great web server. NGINX is different use case," said Robertson. "We don't see Apache as a rival. Our customers use NGINX to replace hardware load balancers and build micro-services neither of which is Apache."
Indeed, Robertson finds that many customers use both open-source web servers. "Customers will use NGINX in front of Apache for load balancing and applications. Our architecture is quite different and we can do better concurrent web services." He also said NGINX works better in cloud configurations.
He concluded, "We're the only web server still growing, everyone else is still shrinking."
That's not quite true. According to the [October Netcraft web server survey][7], Apache saw the largest increase of active sites this month, gaining 1.8 million, while NGINX gained 400,000, the second-largest growth. These gains, coupled with Microsoft's loss of 1.2 million active sites, led to Microsoft's share of active sites dropping to 9.27 percent, the first time that it has fallen below 10 percent. Apache increased its market share by 0.19 percentage points and continues to dominate, now with 46.30 percent of active sites. Still, it is true that over the years Apache has been slowly declining, while NGINX is now at 19 percent.
NGINX's developers are seeking to make their open-core commercial web server, [NGINX Plus][8], more competitive by continuing to improve it. With the latest release, [NGINX Plus Release 11 (R11)][9], the server is both easier to extend and customize, and supports a broader range of deployments.
The biggest addition is binary compatibility for [dynamic modules][10]. This means that dynamic modules that have been compiled against the [open-source NGINX software][11] can be loaded into NGINX Plus.
This means you can leverage the large number of [third-party NGINX modules][12] to extend and add functionality to NGINX Plus, drawing from a range of open-source and commercially produced modules. Developers can create custom extensions, add-ons, and new products based on the supported NGINX Plus core.
NGINX Plus R11 also added other enhancements:
* [Improved TCP/UDP load balancing][1] -- New features include SSL server name routing, new logging functionality, additional variables, and improved PROXY protocol support. These new features enhance debugging capabilities and enable you to support a broader range of enterprise applications.
* [Better geolocation by IP address][2] -- The third-party GeoIP2 module is now certified and provided to NGINX Plus customers. This new version provides localized and richer location detail than the original GeoIP module.
* [Enhanced nginScript module][3] -- nginScript is the next-generation configuration language for NGINX Plus, based on JavaScript. New features enable you to modify request and response data on the fly in the Stream (TCP/UDP) module.
The end result? NGINX is poised to continue to make the race for the top web server a two-horse race. Microsoft IIS? It continues to slowly fade away.
--------------------------------------------------------------------------------
via: http://www.zdnet.com/article/when-to-use-nginx-instead-of-apache/
作者:[ Steven J. Vaughan-Nichols][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[1]:https://www.nginx.com/blog/nginx-plus-r11-released/?utm_source=nginx-plus-r11-released&utm_medium=blog#r11-tcp-udp-lb
[2]:https://www.nginx.com/blog/nginx-plus-r11-released/?utm_source=nginx-plus-r11-released&utm_medium=blog#r11-geoip2
[3]:https://www.nginx.com/blog/nginx-plus-r11-released/?utm_source=nginx-plus-r11-released&utm_medium=blog#r11-nginScript
[4]:https://www.nginx.com/
[5]:https://www.iis.net/
[6]:https://httpd.apache.org/
[7]:https://news.netcraft.com/archives/2016/10/21/october-2016-web-server-survey.html
[8]:https://www.nginx.com/products/
[9]:https://www.nginx.com/blog/nginx-plus-r11-released/
[10]:https://www.nginx.com/blog/nginx-plus-r11-released/?utm_source=nginx-plus-r11-released&utm_medium=blog#r11-dynamic-modules
[11]:https://www.nginx.com/products/download-oss/
[12]:https://www.nginx.com/resources/wiki/modules/index.html?utm_source=nginx-plus-r11-released&utm_medium=blog

View File

@ -1,249 +0,0 @@
DockerChen translating
# A Practical Guide to Nmap (Network Security Scanner) in Kali Linux
In the second Kali Linux article, the network tool known as [nmap][30] will be discussed. While nmap isn't a Kali-only tool, it is one of the most [useful network mapping tools][29] in Kali.
1. [Kali Linux Installation Guide for Beginners Part 1][4]
Nmap, short for Network Mapper, is maintained by Gordon Lyon (more about Mr. Lyon here: [http://insecure.org/fyodor/][28]) and is used by many security professionals all over the world.
The utility works in both Linux and Windows and is command line (CLI) driven. However, for those a little more timid of the command line, there is a wonderful graphical frontend for nmap called zenmap.
It is strongly recommended that individuals learn the CLI version of nmap, as it provides much more flexibility when compared to the zenmap graphical edition.
What purpose does nmap serve? Great question. Nmap allows an administrator to quickly and thoroughly learn about the systems on a network, hence the name Network MAPper, or nmap.
Nmap has the ability to quickly locate live hosts as well as services associated with those hosts. Nmap's functionality can be extended even further with the Nmap Scripting Engine, often abbreviated as NSE.
This scripting engine allows administrators to quickly create a script that can be used to determine if a newly discovered vulnerability exists on their network. Many scripts have been developed and included with most nmap installs.
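As a quick illustration of NSE, scripts can be invoked individually or by category; the example below runs the relatively harmless `safe` category (a sketch only: the target address is the lab machine used later in this guide, and you should only ever scan hosts you are authorized to scan):
```
# nmap --script safe 192.168.56.102
```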
A word of caution: nmap is commonly used by people with both good and bad intentions. Extreme caution should be taken to ensure that you aren't using nmap against systems for which permission has not been explicitly provided in a written/legal agreement. Please use caution when using the nmap tool.
#### System Requirements
1. [Kali Linux][3] (nmap is available in other operating systems and functions similarly to this guide).
2. Another computer, and permission to scan that computer with nmap. This is often easily done with software such as [VirtualBox][2] and the creation of a virtual machine.
1. For a good machine to practice with, please read about Metasploitable 2
2. Download MS2: [Metasploitable2][1]
3. A valid working connection to a network, or, if using virtual machines, a valid internal network connection for the two machines.
### Kali Linux Working with Nmap
The first step to working with nmap is to log into the Kali Linux machine and, if desired, start a graphical session. (The first article in this series installed [Kali Linux with the Enlightenment Desktop Environment][27].)
During the installation, the installer would have prompted the user for a root user password, which will be needed to log in. Once logged in to the Kali Linux machine, the Enlightenment Desktop Environment can be started using the command startx. It is worth noting that nmap doesn't require a desktop environment to run.
```
# startx
```
[
![Start Desktop Environment in Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Start-Desktop-Environment-in-Kali-Linux.png)
][26]
Start Desktop Environment in Kali Linux
Once logged into Enlightenment, a terminal window will need to be opened. By clicking on the desktop background, a menu will appear. Navigating to a terminal can be done as follows: Applications -> System -> Xterm, UXterm, or Root Terminal.
The author is a fan of the shell program called [Terminator][25], but this may not show up in a default install of Kali Linux. All the shell programs listed will work for the purposes of nmap.
[
![Launch Terminal in Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Launch-Terminal-in-Kali-Linux.png)
][24]
Launch Terminal in Kali Linux
Once a terminal has been launched, the nmap fun can begin. For this particular tutorial, a private network with a Kali machine and a Metasploitable machine was created.
This made things easier and safer, since the private network range would ensure that scans remained on safe machines and prevented the vulnerable Metasploitable machine from being compromised by someone else.
In this example, both of the machines are on a private 192.168.56.0/24 network. The Kali machine has an IP address of 192.168.56.101 and the Metasploitable machine to be scanned has an IP address of 192.168.56.102.
Let's say, though, that the IP address information was unavailable. A quick nmap scan can help to determine what is live on a particular network. This scan is known as a Simple List scan, hence the `-sL` argument passed to the nmap command.
```
# nmap -sL 192.168.56.0/24
```
[
![Nmap - Scan Network for Live Hosts](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network.png)
][23]
Nmap Scan Network for Live Hosts
Sadly, this initial scan didn't return any live hosts. Sometimes this is a factor of the way certain operating systems handle [port scan network traffic][22].
Not to worry though, there are some tricks that nmap has available to try to find these machines. This next trick will tell nmap to simply try to ping all the addresses in the 192.168.56.0/24 network.
```
# nmap -sn 192.168.56.0/24
```
[
![Nmap - Ping All Connected Live Network Hosts](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Ping-All-Network-Live-Hosts.png)
][21]
Nmap Ping All Connected Live Network Hosts
This time nmap returns some prospective hosts for scanning! In this command, the `-sn` option disables nmap's default behavior of attempting to port scan a host and simply has nmap try to ping the host.
Let's try letting nmap port scan these specific hosts and see what turns up.
```
# nmap 192.168.56.1,100-102
```
[
![Nmap - Network Ports Scan on Host](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-for-Ports-on-Hosts.png)
][20]
Nmap Network Ports Scan on Host
Wow! This time nmap hit a gold mine. This particular host has quite a bit of [open network ports][19].
These ports all indicate some sort of listening service on this particular machine. Recalling from earlier, the 192.168.56.102 IP address is assigned to the Metasploitable vulnerable machine, which is why there are so many [open ports on this host][18].
Having this many ports open on most machines is highly abnormal, so it may be a wise idea to investigate this machine a little closer. Administrators could track down the physical machine on the network and look at the machine locally, but that wouldn't be much fun, especially when nmap could do it for us much quicker!
This next scan is a service scan and is often used to try to determine what [service may be listening on a particular port][17] on a machine.
Nmap will probe all of the open ports and attempt to banner grab information from the services running on each port.
```
# nmap -sV 192.168.56.102
```
[
![Nmap - Scan Network Services Listening of Ports](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Services-Ports.png)
][16]
Nmap Scan Network Services Listening of Ports
Notice that this time nmap provided some suggestions on what it thought might be running on these particular ports (highlighted in the white box). Also, nmap tried to [determine information about the operating system][15] running on this machine, as well as its hostname (with great success too!).
Looking through this output should raise quite a few concerns for a network administrator. The very first line claims that VSftpd version 2.3.4 is running on this machine! That's a REALLY old version of VSftpd.
Searching through ExploitDB, a serious vulnerability was found back in 2011 for this particular version (ExploitDB ID 17491).
Let's have nmap take a closer look at this particular port and see what can be determined.
```
# nmap -sC 192.168.56.102 -p 21
```
[
![Nmap - Scan Particular Port on Machine](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Particular-Port-on-Host.png)
][14]
Nmap Scan Particular Port on Machine
With this command, nmap was instructed to run its default script (-sC) on the FTP port (-p 21) on the host. While it may or may not be an issue, nmap did find out that [anonymous FTP login is allowed][13] on this particular server.
This, paired with the earlier knowledge about VSftpd having an old vulnerability, should raise some concern though. Let's see if nmap has any scripts that attempt to check for the VSftpd vulnerability.
```
# locate .nse | grep ftp
```
[
![Nmap - Scan VSftpd Vulnerability](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Service-Vulnerability.png)
][12]
Nmap Scan VSftpd Vulnerability
Notice that nmap has an NSE script already built for the VSftpd backdoor problem! Let's try running this script against this host and see what happens, but first it may be important to know how to use the script.
```
# nmap --script-help=ftp-vsftpd-backdoor.nse
```
[
![Learn Nmap NSE Script Usage](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Learn-NSE-Script.png)
][11]
Learn Nmap NSE Script Usage
Reading through this description, it is clear that this script can be used to attempt to see if this particular machine is vulnerable to the ExploitDB issue identified earlier.
Let's run the script and see what happens.
```
# nmap --script=ftp-vsftpd-backdoor.nse 192.168.56.102 -p 21
```
[
![Nmap - Scan Host for Vulnerability](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Host-for-Vulnerable.png)
][10]
Nmap Scan Host for Vulnerability
Yikes! Nmap's script returned some dangerous news. This machine is likely a good candidate for a serious investigation. This doesn't mean that the machine is compromised and being used for horrible/terrible things, but it should bring some concerns to the network/security teams.
Nmap has the ability to be extremely selective and extremely quiet. Most of what has been done so far has attempted to keep nmap's network traffic moderately quiet; however, scanning a personally owned network in this fashion can be extremely time consuming.
Nmap has the ability to do a much more aggressive scan that will often yield much of the same information, but in one command instead of several. Let's take a look at the output of an aggressive scan (do note that an aggressive scan can set off [intrusion detection/prevention systems][9]!).
```
# nmap -A 192.168.56.102
```
[
![Nmap - Complete Network Scan on Host](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Host.png)
][8]
Nmap Complete Network Scan on Host
Notice this time, with one command, nmap has returned a lot of the information it returned earlier about the open ports, services, and configurations running on this particular machine. Much of this information can be used to help determine [how to protect this machine][7] as well as to evaluate what software may be on a network.
This was just a short, short list of the many useful things that nmap can be used to find out about a host or network segment. It is strongly urged that individuals continue to [experiment with nmap][6] in a controlled manner on a network that is owned by the individual (do not practice by scanning other entities!).
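When experimenting, it can also help to save each scan to a file so that runs can be compared over time (a small sketch; the output file names are placeholders):
```
# nmap -sV -oN metasploitable-services.txt 192.168.56.102
# nmap -A -oA metasploitable-full 192.168.56.102
```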
There is an official guide on Nmap Network Scanning by author Gordon Lyon, available from Amazon.
[
![Nmap Network Scanning Guide](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Network-Security-Scanner-Guide.png)
][5]
--------------------------------------------------------------------------------
via: http://www.tecmint.com/nmap-network-security-scanner-in-kali-linux/
作者:[Rob Turner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/robturner/
[1]:https://sourceforge.net/projects/metasploitable/files/Metasploitable2/
[2]:http://www.tecmint.com/install-virtualbox-on-redhat-centos-fedora/
[3]:http://www.tecmint.com/kali-linux-installation-guide
[4]:http://www.tecmint.com/kali-linux-installation-guide
[5]:http://amzn.to/2eFNYrD
[6]:http://www.tecmint.com/nmap-command-examples/
[7]:http://www.tecmint.com/security-and-hardening-centos-7-guide/
[8]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Host.png
[9]:http://www.tecmint.com/protect-apache-using-mod_security-and-mod_evasive-on-rhel-centos-fedora/
[10]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Host-for-Vulnerable.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Learn-NSE-Script.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Service-Vulnerability.png
[13]:http://www.tecmint.com/setup-ftp-anonymous-logins-in-linux/
[14]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Particular-Port-on-Host.png
[15]:http://www.tecmint.com/commands-to-collect-system-and-hardware-information-in-linux/
[16]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Services-Ports.png
[17]:http://www.tecmint.com/find-linux-processes-memory-ram-cpu-usage/
[18]:http://www.tecmint.com/find-open-ports-in-linux/
[19]:http://www.tecmint.com/find-open-ports-in-linux/
[20]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-for-Ports-on-Hosts.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Ping-All-Network-Live-Hosts.png
[22]:http://www.tecmint.com/audit-network-performance-security-and-troubleshooting-in-linux/
[23]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network.png
[24]:http://www.tecmint.com/wp-content/uploads/2016/11/Launch-Terminal-in-Kali-Linux.png
[25]:http://www.tecmint.com/terminator-a-linux-terminal-emulator-to-manage-multiple-terminal-windows/
[26]:http://www.tecmint.com/wp-content/uploads/2016/11/Start-Desktop-Environment-in-Kali-Linux.png
[27]:http://www.tecmint.com/kali-linux-installation-guide
[28]:http://insecure.org/fyodor/
[29]:http://www.tecmint.com/bcc-best-linux-performance-monitoring-tools/
[30]:http://www.tecmint.com/nmap-command-examples/

View File

@ -0,0 +1,106 @@
How To Update Wifi Network Password From Terminal In Arch Linux
============================================================
After I changed the wifi network password on my router, my Arch Linux test machine lost its Internet connection. So I wanted to update the new password from the Terminal, because my Arch Linux test box doesn't have a graphical desktop environment. Changing the old wifi password to the new one is pretty easy in GUI mode: I would simply open the network manager and update the new password for the wifi in a few seconds. However, I was not aware of how to update the wifi network password from the command line in Arch Linux. So I started to dig into Google and found a perfect solution in the Arch Linux forum. In case you've ever been in the same situation, read on. It's not that difficult.
### Update Wifi Network Password From Terminal
After changing the password in the router, I ran the _wifi-menu_ command to update the new password. But it kept throwing the following error.
```
sudo wifi-menu
```
It displayed the list of available wifi networks.
[
![sksk_001](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_001-1.png)
][2]
My wifi network name is Murugs9376. Then, I selected my network and hit the OK button. Instead of asking for the new password (I thought it was going to ask me if the password had been changed), it showed the following error.
```
Interface 'wlp9s0' is controlled by netctl-auto
WPA association/authentication failed for interface 'wlp9s0'
```
[
![sksk_002](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_002-1.png)
][3]
I don't have much experience with Arch-based distributions, so I went through the Arch Linux forum hoping for a solution. Thankfully, someone had posted the same problem and got a workaround from a fellow Arch user. The following is the solution to update the wifi network password from the Terminal in Arch-based distributions.
The network profiles are stored in the /etc/netctl/ folder. For example, here are the wifi network profile details of my Arch Linux test box.
```
ls /etc/netctl/
Sample Output:
examples ostechnix 'wlp9s0-Chendhan Cell Service' wlp9s0-Pratheesh
hooks wlp9s0 wlp9s0-Murugu9376
interfaces wlp9s0-AndroidAP wlp9s0-none
```
[
![sksk_003](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_003-1.png)
][4]
All I need to do to update the password is to delete my wifi network profile (e.g. wlp9s0-Murugu9376) and re-run the _wifi-menu_ command to set the new password.
So, first let us delete the wifi profile using the command:
```
sudo rm /etc/netctl/wlp9s0-Murugu9376
```
After deleting the profile, run the wifi-menu command to update the new password.
```
sudo wifi-menu
```
Select the wifi network and press ENTER.
[
![sksk_004](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_004-1.png)
][5]
Enter a name for the profile.
[
![sksk_005](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_005-1.png)
][6]
Finally, enter the security key for the network profile and hit the ENTER key.
[
![sksk_006](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_006-1.png)
][7]
That's it. Now we have updated the password of the wifi network. As you can see, updating the password from the Terminal in Arch Linux is no big deal. Anyone could do it in a matter of seconds.
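As a side note, instead of deleting and recreating the profile, it should also be possible to edit the stored passphrase directly, since netctl profiles keep it in a Key= line (a minimal sketch, assuming the profile name from above and that the interface is managed by plain netctl rather than netctl-auto):
```
sudo nano /etc/netctl/wlp9s0-Murugu9376    # update the Key='...' line
sudo netctl restart wlp9s0-Murugu9376
```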
If you find this guide useful, please share it on your social networks and support us.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/update-wifi-network-password-terminal-arch-linux/
作者:[ SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:http://ostechnix.tradepub.com/free/w_pacb38/prgm.cgi?a=1
[2]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_001-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_002-1.png
[4]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_003-1.png
[5]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_004-1.png
[6]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_005-1.png
[7]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_006-1.png

View File

@ -0,0 +1,125 @@
How to check if port is in use on Linux or Unix
============================================================
[
![](https://s0.cyberciti.org/images/category/old/linux-logo.png)
][1]
How do I determine if a port is in use under Linux or a Unix-like system? How can I verify which ports are listening on a Linux server?
It is important that you verify which ports are listening on the server's network interfaces. You need to pay attention to open ports to detect an intrusion. Apart from intrusions, for troubleshooting purposes it may be necessary to check if a port is already in use by a different application on your servers. For example, you may install the Apache and Nginx servers on the same system, so it is necessary to know if Apache or Nginx is using TCP port 80/443. This quick tutorial provides steps to use the netstat, nmap and lsof commands to check the ports in use and view the applications that are utilizing them.
### How to check the listening ports and applications on Linux:
1. Open a terminal application, i.e. a shell prompt.
2. Run any one of the following commands:
```
sudo lsof -i -P -n | grep LISTEN
sudo netstat -tulpn | grep LISTEN
sudo nmap -sTU -O IP-address-Here
```
Let us see the commands and their output in detail.
### Option #1: lsof command
The syntax is:
```
$ sudo lsof -i -P -n
$ sudo lsof -i -P -n | grep LISTEN
$ doas lsof -i -P -n | grep LISTEN ### [OpenBSD] ###
```
Sample outputs:
[
![Fig.01: Check the listening ports and applications with lsof command](https://s0.cyberciti.org/uploads/faq/2016/11/lsof-outputs.png)
][2]
Fig.01: Check the listening ports and applications with lsof command
Consider the last line from the above output:
```
sshd 85379 root 3u IPv4 0xffff80000039e000 0t0 TCP 10.86.128.138:22 (LISTEN)
```
- sshd is the name of the application.
- 10.86.128.138 is the IP address to which the sshd application is bound (LISTEN).
- 22 is the TCP port that is being used (LISTEN).
- 85379 is the process ID of the sshd process.
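If you only care about one specific port, lsof can filter on it directly instead of listing everything (a minimal sketch; port 80 is just an example):
```
$ sudo lsof -nP -i :80
```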
### Option #2: netstat command
You can check the listening ports and applications with netstat as follows.
### Linux netstat syntax
```
$ netstat -tulpn | grep LISTEN
```
### FreeBSD/MacOS X netstat syntax
```
$ netstat -anp tcp | grep LISTEN
$ netstat -anp udp | grep LISTEN
```
### OpenBSD netstat syntax
```
$ netstat -na -f inet | grep LISTEN
$ netstat -nat | grep LISTEN
```
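On modern Linux systems, the same information is available from the ss command, which ships with the iproute2 package and is gradually replacing netstat; its flags mirror the netstat ones shown above:
```
$ sudo ss -tulpn | grep LISTEN
```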
### Option #3: nmap command
The syntax is:
```
$ sudo nmap -sT -O localhost
$ sudo nmap -sU -O 192.168.2.13 ##[ list open UDP ports ]##
$ sudo nmap -sT -O 192.168.2.13 ##[ list open TCP ports ]##
```
Sample outputs:
[
![Fig.02: Determines which ports are listening for TCP connections using nmap](https://s0.cyberciti.org/uploads/faq/2016/11/nmap-outputs.png)
][3]
Fig.02: Determines which ports are listening for TCP connections using nmap
You can combine the TCP/UDP scans into a single command:
`$ sudo nmap -sTU -O 192.168.2.13`
### A note about Windows users
You can check port usage on a Windows operating system using the following commands:
```
netstat -bano | more
netstat -bano | findstr LISTENING
netstat -bano | findstr /R /C:"LISTENING"
```
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/
作者:[ VIVEK GITE][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/
[1]:https://www.cyberciti.biz/faq/category/linux/
[2]:http://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/lsof-outputs/
[3]:http://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/nmap-outputs/

View File

@ -0,0 +1,114 @@
User Editorial: Steam Machines & SteamOS after a year in the wild
====
On this day, last year, [Valve released Steam Machines onto the world][2], after the typical Valve delays. While the state of the Linux desktop regarding gaming has improved, Steam Machines have not taken off as a platform, and SteamOS remains stagnant. What happened with these projects from Valve? Why were they created, why did they fail, and what could have been done to make them succeed?
**Context**
In 2012, when Windows 8 was released, it included an app store, much like iOS and Android. With the new touch-friendly user interface Microsoft debuted, there was a new set of APIs available called “WinRT,” for creating these immersive touch-friendly applications in the UI language called “Metro.” Applications created with this new API, however, could only be distributed via the Windows Store, with Microsoft taking a 30% cut, just like the other stores. To Gabe Newell, CEO of Valve, this was unacceptable, and he saw the risks of Microsoft using its position to push the Windows Store and Metro applications to crush Valve, as they had done to Netscape using Internet Explorer.
To Valve, the strength of the PC running Windows was that it was an open platform, where anyone could run whatever they want without control by the operating system or hardware vendor. The alternative to these proprietary platforms closing in on third-party application stores like Steam was to push a truly open platform that grants everyone the freedom to change it: Linux. Linux is just a kernel, but you can easily create an operating system with it and other software like the GNU core utilities and Gnome, such as Ubuntu. While pushing Ubuntu and other Linux distributions would give Valve a sanctuary platform in case Microsoft or Apple turned hostile, Linux also gave them the possibility of creating a new platform.
**Conception**
Valve seemed to have found an opportunity in the console space, if we can call Steam Machines consoles. To achieve the user interface expectations of a console, used on a large screen television from afar, Big Picture Mode was created. A core principle of the machines was openness; the software could be swapped out for Windows, as an example, and the CAD designs for the controller are available for people's projects.
Originally, Valve had planned to create their own box as a “flagship” machine. However, these only shipped as prototypes to testers in 2013. They would also let other OEMs like Dell create their own Steam Machines, and allow a variety of pricing and specification options. A company called “Xi3” showed their small box, small enough to fit in a palm, as a possible candidate to become a premier Steam Machine, which created more hype around Steam Machines. Ultimately, Valve decided to only go with OEM partners to make and advertise Steam Machines, rather than doing it themselves.
More “out there” ideas were considered. Biometrics, gaze tracking, and motion controllers were considered for the controller. Of them, the released Steam Controller had a gyroscope, and the HTC Vive controllers had various tracking and motion features that may have been originally intended for the original controller concepts. The controller was also originally intended to be more radical in its approach, with a touchscreen in the middle that had customizable, context-sensitive actions. Ultimately, the launch controller was more conservative, but still had features like the dual trackpads and advanced software that gave it flexibility. Valve had also considered making a version of Steam Machines and SteamOS for smaller hardware like laptops. This ultimately never bore any fruit, though the “Smach Z” handheld could be compared to this.
In [September 2013][3], Valve announced Steam Machines and SteamOS, with an expected release in the middle of 2014. The aforementioned 300 prototype machines were released to testers in December, and in January, 2000 more machines were provided to developers. SteamOS was released for testers experienced with Linux to try out. With the feedback given, Valve decided to delay the release until November 2015.
The late launch caused problems with partners; Dell's Steam Machine was launched a year early running Windows as the Alienware Alpha, with extra software to improve usability with a controller.
**Launch**
With the launch, Valve and their OEM partners released their machines, and Valve also released the Steam Controller and the Steam Link. A retail presence was established, with GameStop and other brick and mortar stores providing space. Before release, some OEMs pulled out of the launch: Origin PC and Falcon Northwest, two high-end boutique builders. They claimed that performance issues and limitations had made them decide not to ship SteamOS.
The machines launched to mixed reviews. The Steam Link was praised, and many considered buying one for their existing PC instead of buying a Steam Machine for the living room. Reception of the Steam Controller was muddled, due to its rich feature set but high learning curve. The Steam Machines themselves ultimately launched to the muddiest reception, however. Reviewers like LinusTechTips noticed glaring defects in the SteamOS software, including performance issues. Many of the machines were criticized for their high price point and poor value, especially when compared to the option of building a PC from the perspective of a PC gamer, or to the price of other consoles. The use of SteamOS was criticized over compatibility, bugs, and lower performance than Windows. Of the available options, the Alienware Steam Machine was considered the most interesting, due to its value relative to other machines and its small form factor.
By using Debian Linux as the base, Valve had many “launch titles” for the platform, as they had a library of pre-existing Linux titles. The initial availability of games was seen as favourable compared to other consoles. However, many titles originally announced for the platform never came out, or came out much later. Rocket League and Mad Max only came out recently, a year after the initial announcements, and titles like The Witcher 3 and Batman: Arkham Knight never came to the platform, despite initial promises from Valve or publishers. In the case of The Witcher 3, the developer, CD Projekt Red, denied they ever announced a port, despite their game appearing in a list of titles on sale that had, or were announced to have, Linux and SteamOS support. In addition, many “AAA” titles have not been ported, though this situation continues to improve over time.
**Neglect**
With Steam Machines launched, developers at Valve moved on to other projects. Of the projects being worked on, virtual reality was seen as the most important, with about a third of employees working on it as of June. Valve saw virtual reality as something to develop, and Steam as the prime ecosystem for delivering VR. Using HTC to manufacture, they designed their own virtual reality headset and controllers, and would continue to develop new revisions. However, Linux and Steam Machines fell by the wayside with this focus. SteamVR, until recently, did not support Linux (it's still not public yet, but it was shown off on Linux at SteamDevDays), which put into question Valve's commitment to Steam Machines and to Linux as an open platform with a future.
There has been little development of SteamOS itself. The last major update, SteamOS 2.0, mostly synchronized with upstream Debian and required a reinstallation, and subsequent patches simply continue synchronizing with upstream sources. While Valve has made improvements to projects like Mesa, which have improved performance for many users, it has done little with Steam Machines as a product.
Many features continue to go undeveloped. Steam's built-in functionality like chat and broadcasting continues to be weak, but this affects all platforms that Steam runs on. More pressingly, services like Netflix, Twitch, and Spotify are not integrated into the interface as on most major consoles; accessing them requires using the browser, which can be slow and clunky, if it even achieves what is wanted, or bringing in software from third-party sources, which requires using the terminal, and the software might not be very usable with a controller. This is a poor UX for what's considered to be an appliance.
Valve put little effort into marketing the platform, preferring to leave this to OEMs. However, most OEMs were either boutique builders or barebones builders. Of the OEMs, only Dell was a major player in the PC market, and the only one that pushed Steam Machines with advertisements.
Sales were not strong: 500,000 controllers were sold in the 7 months after launch (as stated in June 2016), including those bundled with a Steam Machine. This puts retail Steam Machines, not counting machines people have installed SteamOS on, in the low hundreds of thousands. Compared to the existing PC and console install bases, this is low.
**Post-mortem thoughts**
So, with the story of what happened told, can we identify why Steam Machines failed, and ways they could succeed in the future?
_Vision and purpose_
Steam Machines did not make clear what they were in the market, nor did any advantages particularly stand out. On the PC flank, building PCs had become popular and was a cheaper option with better upgrade and flexibility options. On the console flank, they were outflanked by consoles with low initial investment, despite a possibly higher TCO due to game prices, and a far simpler user experience.
With PCs, flexibility is seen as a core asset, with users being able to use their machines beyond gaming, for work and other tasks. While Steam Machines were just PCs running SteamOS with no restrictions, the SteamOS software and marketing solidified their image as consoles in the eyes of PC gamers, compounded by the price and, for some options, lower hardware flexibility. In the living room, where these machines could have made sense to PC gamers, the Steam Link offered a way to access content on a PC in another room, and small form factor hardware like NUCs and Mini-ITX motherboards allowed for custom-built PCs that are more socially acceptable in living rooms. The SteamOS software was also available to “convert” these PCs into Steam Machines, but people seeking flexibility and compatibility often opted for a Linux or Windows desktop. Recent strides in Windows and desktop Linux have simplified the maintenance tasks associated with desktop-experience computers, automating most of them.
With consoles, simplicity is a virtue. Even as they have expanded in their roles, with media features often a priority, they are still a plug and play experience where compatibility and experience are guaranteed, with a low barrier of entry. Consoles also have long life cycles, ranging from four to seven years, and the fixed hardware during this life cycle allows developers to target and optimize especially for their specifications and features. New mid-life upgrades like “Scorpio” and the PlayStation 4 Pro may change the unified experience previously shared by users, but manufacturers are requiring games to work on the original model consoles to avoid the most problematic aspects. To keep users attached to the systems, social networks and exclusive games are used. Games also come on discs that can be freely reused and resold, which is a positive for retailers and users. Steam Machines have none of these guarantees; they carry PC complexity and higher initial prices despite a living room friendly exterior.
_Reconciliation_
With this, Steam Machines could be seen as a “worst of both worlds” product, carrying the burdens of both kinds of product without clearly presenting as one or the other, or as some kind of new product category. There are also many deficiencies that neither camp experiences on its own platform, like the lack of AAA titles that appear on consoles and Windows PCs, and the lack of clients for services like Netflix. Despite this, Valve has shown little effort towards improving the product or even trying to resolve seemingly contradictory goals like the mutual distrust of PC and console gaming.
Some things may make it impossible to reconcile the two concepts into one category or the other, though. Things like graphics settings and mods may make it hard to create a foolproof experience, and the complexity of the underlying system appears from time to time.
One of the most complex parts is the concept of having a lineup: users need to evaluate not only the costs and specifications of a system, but also its value, both on its own and relative to other systems. You need some way for the user to know that their hardware can run any given game, either by some automated benchmark system with comparisons, or by a grading system, though these need to be simple and need to support (almost) every game in the library. In addition, you also need to worry about how these systems and grades will age: what does a “2016 Grade A” machine mean three years from now?
_Valve, effort, and organization_
Valve's organizational structure may be detrimental to creating platforms like Steam Machines, let alone maintaining services like Steam. Their mostly leaderless structure, with people supposedly moving their desks to ad-hoc units working on projects that they alone decide to work on, can be great for creative endeavours, as well as research and development. It's said Valve only hires what they consider to be the “cream of the crop,” with very strict standards, tying them to what they deem more "worthy" work. This view may be inaccurate, though; cliques often exist, the word of Gabe Newell is more important than the “leaked” employee handbook lets on, and people are hired and then fired as needed, as a form of contract work on certain aspects.
However, this leaves projects that aren't glamorous or interesting, but need persistent and often mundane work, to wither on the vine. Customer support for Valve has been a constant headache, leaving enraged users feeling ignored, with Valve sometimes only acting when legally required to do so, as with the automated refund system that was forced into action by Australian and European legislation, or the Counter-Strike: Global Offensive item gambling site controversy involving the gambling commission of Washington State that's still ongoing.
This has affected Steam Machines as a result. With the launch delayed by a year, some partners' hands were forced: Dell launched the Alienware Steam Machine a year earlier as the Alienware Alpha, causing the hardware to be outdated at launch. These delays may have affected game availability as well. The opinions of developers and hardware partners following the delayed and non-monumental launch are not clear. Valve's platform for virtual reality simply wasn't available on Linux, and as such on SteamOS, until recently, even as SteamVR was receiving significant developer effort.
_The “long game”_
Valve is seen as playing a “long game” with Steam Machines and SteamOS, though it appears as if there is no roadmap. An example of Valve aiming for the long term is Steam itself, from its humble and initially reviled beginnings as a patching platform for their games to the popular distribution platform and social network it is today. It also helped that Steam was required to play Valve's games like Half-Life 2 and Counter-Strike 1.6. However, it doesn't seem as if Valve is putting the same effort into Steam Machines as they did with Steam before. There is also entrenched competition that Steam in the early days never really dealt with. Their competition arguably includes Valve itself, with Steam on Windows.
_Gambit_
With the lack of developments in Steam Machines, one wonders if the platform was a bargaining chip of sorts. Valve's Linux efforts originally took fruit because of concerns that Microsoft and Apple would push them out of the market with native app stores; Steam Machines grew out of this so that Valve would have a safe haven in case it happened, and a bargaining chip with which Valve could remind the developers of its host platforms of possible independence. When these threats turned out to be non-threatening, Valve slowed down development. I don't see this, however; Valve has expended a lot of goodwill with hardware partners and developers trying to push this, only to halt it. You could say both Microsoft and Valve called each other's bluffs: Microsoft with a locked-down Windows 8, and Valve with its capability as an independent player.
Even then, who is to say developers wouldn't follow Microsoft with a locked-in platform if they could offer superior deals to publishers, or better customer relationships? In addition, Microsoft is now pushing Xbox-on-Windows integration with cross-buy, Xbox Live integration, and Xbox exclusive games on Windows, all while preserving Windows as an open platform, which is arguably more of a threat to Steam.
Another point you could argue is that all of this with Steam Machines was simply to push Linux adoption in PC gaming, with Steam Machines existing simply to make it more palatable to publishers and developers by implying a large push and continued support. However, this made it an awfully expensive gambit; developers continued to support Linux before and after Steam Machines, and it could have backfired, with developers pulling out of Linux because the Promised Land of SteamOS never came.
**My opinions on what could have been done**
I think there's an interesting product in Steam Machines, and that there is a market for it, but lack of interest and effort, as well as possible confusion about what it should have been, has been damaging for it. I see Steam Machines as a way to cut out the complexity of PC gaming, the worrying about parts, life cycles, and maintenance, while keeping advantages like cheap games, mods, and an open platform that can be fiddled with if the user desires. However, they need to get core aspects like pricing, marketing, lineup, and software right.
I think Steam Machines can make compromises on things like upgradability (though it's possible to preserve this, it should be done with attention to user experience) and choice, to reduce friction. PCs would still exist for those who want these options. The paralysis of choice is a real dilemma, and the sheer number of poorly valued options available as Steam Machines didn't help. Valve needs a flagship machine to lead Steam Machines. Arguably, the Alienware model was close, but it was never made official. There's good industrial design talent at Valve, and if they focused on their own machine and put the effort in, it might be worth it. A company like Dell or HTC could manufacture for Valve, bringing their experience to bear. Defining life cycles and updating only one or two specifications periodically would help, especially if Valve worked with developers to establish these as a baseline that should be supported. I'm not sure about OEMs; if Valve is putting its effort behind one machine, they might be made redundant, ultimately only hindering development of the platform.
Addressing the software issues is essential. The lack of integration with services like Netflix and Twitch, which exist fluidly on consoles and are easily put in place on PC despite living-room user interface issues, is holding Steam Machines back. Although Valve has slowly been acquiring movie licenses for distribution on Steam, people will use existing and trusted streaming sources. This needs to be addressed, especially as people use their consoles as part of their home theatre. Fixing issues with the Steam client and platform is essential, and feature parity with other platforms is a good idea. Performance issues with Linux and its graphics stack are also a problem, though this is slowly improving. Getting ports of games will be another issue. Game porting shops like Feral Interactive and Aspyr Media help the library, but they need to be contracted by publishers and developers, and they often use wrappers that add overhead. Valve has helped studios directly with porting, such as with Rocket League, but this has rarely happened, and when it did, it moved at the typical slow Valve pace. The monolith of AAA games can't be ignored either: the situation has improved dramatically, but studios like Bethesda are still reluctant to port, especially given a small user base, the lack of support from Valve for Steam Machines (even if Linux is doing relatively well), and the lack of extra DRM like Denuvo.
Valve also needs to put effort into the other bits beyond hardware and software. With one machine, they have an interest in it and can subsidize the hardware effectively. This would put it on a par with consoles, and possibly make it cheaper than custom-built PCs. Efforts to market the product to whatever segments would be interested in the machines are essential. (I myself would be interested in the machines. I don't like the hassle of dealing with PC building or the premium on prebuilt machines, but consoles often lack the games I want to play, and I have an existing library of games on Steam that I acquired cheaply.) Retail partners may not be effective, due to their interest in selling and reselling physical copies of games.
Even with my suggestions for the platform and product, I'm not sure how effective they would be in helping Steam Machines achieve their full potential and do well in the marketplace. Ultimately, it comes down to learning not only from your own mistakes, but also from those of previous entrants like 3DO and Pippin, which relied on an open platform or descended from desktop-experience computing and are relevant to Valve's current situation, and from the future of Nintendo's Switch, which steps into the same realm of possible confusion over what the product is.
_Note: Clearing up done by liamdawe, all thoughts are from the submitter._ 
This article was submitted by a guest, we encourage anyone to [submit their own articles][1].
--------------------------------------------------------------------------------
via: https://www.gamingonlinux.com/articles/user-editorial-steam-machines-steamos-after-a-year-in-the-wild.8474
作者:[calvin][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.gamingonlinux.com/profiles/5163
[1]:https://www.gamingonlinux.com/submit-article/
[2]:https://www.gamingonlinux.com/articles/steam-machines-steam-link-steam-controller-officially-released-steamos-sale.6201
[3]:https://www.gamingonlinux.com/articles/valve-announces-steam-machines-you-can-win-one-too.2469

View File

@ -0,0 +1,47 @@
[XFCE GETS A `DO NOT DISTURB` MODE AND PER APPLICATION NOTIFICATION SETTINGS][7]
============================================================
The Xfce developers are busy [porting][3] Xfce applications and components to GTK3, and in the process, they are also adding new features.
**"Do not disturb"**, a much requested feature, landed in _xfce4-notifyd_ 0.3.4 (the Xfce notification daemon) [recently][4]. Using this, you can suppress notification bubbles for a limited time-frame.
Furthermore, **the latest _xfce4-notifyd_ includes an option to enable or disable notifications on a per-application basis**.
After an application sends a notification, the app is added to a list in the notification settings. From here, you can control which applications can show notifications.
Both the "Do not disturb" mode and the application-specific notification settings can be found in _Settings > Notifications_:
[
![](https://1.bp.blogspot.com/-fvSesp1ukaQ/WCR8JQVgfiI/AAAAAAAAYl8/IJ1CshVQizs9aG2ClfraVaNjKP3OyxvAgCLcB/s400/xfce-do-not-disturb.png)
][5]
Right now there's no way of accessing notifications missed due to the "Do not disturb" mode being enabled. However, **a notification logging / persistence feature is expected in a future release.**
And finally, yet **another feature** in _xfce4-notifyd_ 0.3.4 is an **option to display notifications on the primary monitor** (until now, notifications were displayed on the active monitor). This option is not available in the GUI for now; it must be enabled using Xfconf (Settings Editor), by adding a boolean property called "/primary-monitor" (without the quotes) to _xfce4-notifyd_ and setting it to "True":
[
![](https://2.bp.blogspot.com/-M8xZpEHMrq8/WCR9EufvsnI/AAAAAAAAYmA/nLI5JQUtmE0J9TgvNM9ZKGHBdwwBhRH3QCLcB/s400/xfce-xfconf.png)
][6]
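Alternatively, the same property can be created from a terminal with xfconf-query; a one-line sketch, assuming the channel is named `xfce4-notifyd` as described above:

```
# Create the boolean property on the xfce4-notifyd channel and set it to true
$ xfconf-query -c xfce4-notifyd -p /primary-monitor -n -t bool -s true
```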
**_xfce4-notifyd_ 0.3.4 is not available in a PPA right now, but it will probably be added to the [Xfce GTK3 PPA][1] soon.**
**If you want to build it from source, download it from [HERE][2].**
--------------------------------------------------------------------------------
via: http://www.webupd8.org/2016/11/xfce-gets-do-not-disturb-mode-and-per.html
作者:[Andrew ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.webupd8.org/p/about.html
[1]:https://launchpad.net/~xubuntu-dev/+archive/ubuntu/xfce4-gtk3
[2]:http://archive.xfce.org/src/apps/xfce4-notifyd/0.3/
[3]:https://wiki.xfce.org/releng/4.14/roadmap
[4]:http://simon.shimmerproject.org/2016/11/09/xfce4-notifyd-0-3-4-released-do-not-disturb-and-per-application-settings/
[5]:https://1.bp.blogspot.com/-fvSesp1ukaQ/WCR8JQVgfiI/AAAAAAAAYl8/IJ1CshVQizs9aG2ClfraVaNjKP3OyxvAgCLcB/s1600/xfce-do-not-disturb.png
[6]:https://2.bp.blogspot.com/-M8xZpEHMrq8/WCR9EufvsnI/AAAAAAAAYmA/nLI5JQUtmE0J9TgvNM9ZKGHBdwwBhRH3QCLcB/s1600/xfce-xfconf.png
[7]:http://www.webupd8.org/2016/11/xfce-gets-do-not-disturb-mode-and-per.html

View File

@ -0,0 +1,172 @@
### [Can Linux containers save IoT from a security meltdown?][28]
![](http://hackerboards.com/files/internet_of_things_wikimedia1-thm.jpg)
In this final IoT series post, Canonical and Resin.io champion Linux container technology as a solution to IoT security and interoperability challenges.
|
![](http://hackerboards.com/files/samsung_artik710-thm.jpg)
**Artik 7** |
Despite growing security threats, the Internet of Things hype shows no sign of abating. Feeling the FoMo, companies are busily rearranging their roadmaps for IoT. The transition to IoT runs even deeper and broader than the mobile revolution. Everything gets swallowed in the IoT maw, including smartphones, which are often our windows on the IoT world, and sometimes our hubs or sensor endpoints.
New IoT-focused processors and embedded boards continue to reshape the tech landscape. Since our [Linux and Open Source Hardware for IoT][5] story in September, we've seen [Intel Atom E3900][6] "Apollo Lake" SoCs aimed at IoT gateways, as well as [new Samsung Artik modules][7], including a Linux-driven, 64-bit Artik 7 COM for gateways and an RTOS-ready, Cortex-M4 Artik 0. ARM announced [Cortex-M23 and Cortex-M33][8] cores for IoT endpoints featuring ARMv8-M and TrustZone security.
Security is a selling point for these products, and for good reason. The Mirai botnet that recently attacked the Dyn service and blacked out much of the U.S. Internet for a day brought Linux-based IoT into the forefront, and not in a good way. Just as IoT devices can be turned to the dark side via DDoS, the devices and their owners can also be victimized directly by malicious attacks.
|
![](http://hackerboards.com/files/arm_cortexm33m23-thm.jpg)
**Cortex-M33 and -M23** |
The Dyn attack reinforced the view that IoT will more confidently move forward in more controlled and protected industrial environments rather than the home. Its not that consumer [IoT security technology][9] is unavailable, but unless products are designed for security from scratch, as are many of the solutions in our [smart home hub story][10], security adds cost and complexity.
In this final, future-looking segment of our IoT series, we look at two Linux-based, Docker-oriented container technologies that are being proposed as solutions to IoT security. Containers might also help solve the ongoing issues of development complexity and barriers to interoperability that we explored in our story on [IoT frameworks][11].
We spoke with Canonicals Oliver Ries, VP Engineering Ubuntu Client Platform about his companys Ubuntu Core and its Docker-friendly, container-like Snaps package management technology. We also interviewed Resin.io CEO and co-founder Alexandros Marinos about his companys new Docker-based ResinOS for IoT.
**Ubuntu Core Snaps to**
Canonicals IoT-oriented [Snappy Ubuntu Core][12] version of Ubuntu is built around a container-like snap package management mechanism, and offers app store support. The snaps technology was recently [released on its own][13] for other Linux distributions. On November 3, Canonical released [Ubuntu Core 16][14], which improves white label app store and update control services.
<center>
[
![](http://hackerboards.com/files/canonical_ubuntucore16_diagram-sm.jpg)
][15]
**Classic Ubuntu (left) architecture vs. Ubuntu Core 16**
(click image to enlarge)
</center>
The snap mechanism offers automatic updates, and helps block unauthorized updates. Using transactional systems management, snaps ensure that updates either deploy as intended or not at all. In Ubuntu Core, security is further strengthened with AppArmor, and the fact that all application files are kept in separate silos, and are read-only.
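For context, the transactional workflow described above maps to a handful of snap commands; this is a sketch using the upstream snapd CLI (the exact commands are not given in the article, and the "hello" snap is only a placeholder):

```
$ sudo snap install hello     # install a snap from the store
$ sudo snap refresh hello     # apply an update transactionally
$ sudo snap revert hello      # roll back to the previous revision if the update misbehaves
```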
|
![](http://hackerboards.com/files/limesdr-thm.jpg)
**LimeSDR** |
Ubuntu Core, which was part of our recent [survey of open source IoT OSes][16], now runs on Gumstix boards, Erle Robotics drones, Dell Edge Gateways, the [Nextcloud Box][17], LimeSDR, the Mycroft home hub, Intel's Joule, and SBCs compliant with Linaro's 96Boards spec. Canonical is also collaborating with the Linaro IoT and Embedded (LITE) Segment Group on its [96Boards IoT Edition][18]. Initially, 96Boards IE is focused on Zephyr-driven Cortex-M4 boards like Seeed's [BLE Carbon][19], but it will expand to gateway boards that can run Ubuntu Core.
“Ubuntu Core and snaps have relevance from edge to gateway to the cloud,” says Canonicals Ries. “The ability to run snap packages on any major distribution, including Ubuntu Server and Ubuntu for Cloud, allows us to provide a coherent experience. Snaps can be upgraded in a failsafe manner using transactional updates, which is important in an IoT world moving to continuous updates for security, bug fixes, or new features.”
|
![](http://hackerboards.com/files/nextcloud_box3-thm.jpg)
**Nextcloud Box** |
Security and reliability are key points of emphasis, says Ries. “Snaps can run completely isolated from one another and from the OS, making it possible for two applications to securely run on a single gateway,” he says. “Snaps are read-only and authenticated, guaranteeing the integrity of the code.”
Ries also touts the technology for reducing development time. “Snap packages allow a developer to deliver the same binary package to any platform that supports it, thereby cutting down on development and testing costs, deployment time, and update speed,” says Ries. “With snap packages, the developer is in full control of the lifecycle, and can update immediately. Snap packages provide all required dependencies, so developers can choose which components they use.”
**ResinOS: Docker for IoT**
Resin.io, which makes the commercial IoT framework of the same name, recently spun off the framework's Yocto Linux based [ResinOS 2.0][20] as an open source project. Whereas Ubuntu Core runs Docker container engines within snap packages, ResinOS runs Docker on the host. The minimalist ResinOS abstracts away the complexity of working with Yocto code, enabling developers to quickly deploy Docker containers.
<center>
[
![](http://hackerboards.com/files/resinio_resinos_arch-sm.jpg)
][21]
**ResinOS 2.0 architecture**
(click image to enlarge)
</center>
Like the Linux-based CoreOS, ResinOS integrates systemd control services and a networking stack, enabling secure rollouts of updated applications over a heterogeneous network. However, its designed to run on resource constrained devices such as ARM hacker boards, whereas CoreOS and other Docker-oriented OSes like the Red Hat based Project Atomic are currently x86 only and prefer a resource-rich server platform. ResinOS can run on 20 Linux devices and counting, including the Raspberry Pi, BeagleBone, and Odroid-C1.
“We believe that Linux containers are even more important for embedded than for the cloud,” says Resin.ios Marinos. “In the cloud, containers represent an optimization over previous processes, but in embedded they represent the long-delayed arrival of generic virtualization.”
|
![](http://hackerboards.com/files/beaglebone-hand-thm.jpg)
**BeagleBone
Black** |
When applied to IoT, full enterprise virtual machines have performance issues and restrictions on direct hardware access, says Marinos. Mobile VMs like OSGi and Android's Dalvik can be used for IoT, but they require Java, among other limitations.
Using Docker may seem natural for enterprise developers, but how do you convince embedded hackers to move to an entirely new paradigm? “Rather than transferring practices from the cloud wholesale, ResinOS is optimized for embedded,” answers Marinos. In addition, he says, containers are better than typical IoT technologies at containing failure. “If theres a software defect, the host OS can remain functional and even connected. To recover, you can either restart the container or push an update. The ability to update a device without rebooting it further removes failure opportunities.”
According to Marinos, other benefits accrue from better alignment with the cloud, such as access to a broader set of developers. Containers provide “a uniform paradigm across data center and edge, and a way to easily transfer technology, workflows, infrastructure, and even applications to the edge,” he adds.
The inherent security benefits in containers are being augmented with other technologies, says Marinos. “As the Docker community pushes to implement signed images and attestation, these naturally transfer to ResinOS,” he says. “Similar benefits accrue when the Linux kernel is hardened to improve container security, or gains the ability to better manage resources consumed by a container.”
Containers also fit in well with open source IoT frameworks, says Marinos. “Linux containers are easy to use in combination with an almost endless variety of protocols, applications, languages and libraries,” says Marinos. “Resin.io has participated in the AllSeen Alliance, and we have worked with partners who use IoTivity and Thread.”
**Future IoT: Smarter Gateways and Endpoints**
Marinos and Canonicals Ries agree on several future trends in IoT. First, the original conception of IoT, in which MCU-based endpoints communicate directly with the cloud for processing, is quickly being replaced with a fog computing architecture. That calls for more intelligent gateways that do a lot more than aggregate data and translate between ZigBee and WiFi.
Second, gateways and smart edge devices are increasingly running multiple apps. Third, many of these devices will provide onboard analytics, which were seeing in the latest [smart home hubs][22]. Finally, rich media will soon become part of the IoT mix.
<center>
[
![](http://hackerboards.com/files/eurotech_reliagate2026-sm.jpg)
][23] [
![](http://hackerboards.com/files/advantech_ubc221-sm.jpg)
][24]
**Some recent IoT gateways: Eurotechs [ReliaGate 20-26][1] and Advantechs [UBC-221][2]**
(click images to enlarge)
</center>
“Intelligent gateways are taking over a lot of the processing and control functions that were originally envisioned for the cloud,” says Marinos. “Accordingly, were seeing an increased push for containerization, so feature- and security-related improvements can be deployed with a cloud-like workflow. The decentralization is driven by factors such as the mobile data crunch, an evolving legal framework, and various physical limitations.”
Platforms like Ubuntu Core are enabling an “explosion of software becoming available for gateways,” says Canonicals Ries. “The ability to run multiple applications on a single device is appealing both for users annoyed with the multitude of single-function devices, and for device owners, who can now generate ongoing software revenues.”
<center>
[
![](http://hackerboards.com/files/myomega_mynxg-sm.jpg)
][25] [
![](http://hackerboards.com/files/technexion_ls1021aiot_front-sm.jpg)
][26]
**Two more IoT gateways: [MyOmega MYNXG IC2 Controller (left) and TechNexions ][3][LS1021A-IoT Gateway][4]**
(click images to enlarge)
</center>
Its not only gateways — endpoints are getting smarter, too. “Reading a lot of IoT coverage, you get the impression that all endpoints run on microcontrollers,” says Marinos. “But we were surprised by the large amount of Linux endpoints out there like digital signage, drones, and industrial machinery, that perform tasks rather than operate as an intermediary. We call this the shadow IoT.”
Canonicals Ries agrees that a single-minded focus on minimalist technology misses out on the emerging IoT landscape. “The notion of lightweight is very short lived in an industry thats developing as fast as IoT,” says Ries. “Todays premium consumer hardware will be powering endpoints in a matter of months.”
While much of the IoT world will remain lightweight and “headless,” with sensors like accelerometers and temperature sensors communicating in whisper thin data streams, many of the newer IoT applications use rich media. “Media input/output is simply another type of peripheral,” says Marinos. “Theres always the issue of multiple containers competing for a limited resource, but its not much different than with sensor or Bluetooth antenna access.”
Ries sees a trend of “increasing smartness at the edge” in both industrial and home gateways. “We are seeing a large uptick in AI, machine learning, computer vision, and context awareness,” says Ries. “Why run face detection software in the cloud and incur delays and bandwidth and computing costs, when the same software could run at the edge?”
As we explored in our [opening story][27] of this IoT series, there are IoT issues related to security such as loss of privacy and the tradeoffs from living in a surveillance culture. There are also questions about the wisdom of relinquishing ones decisions to AI agents that may be controlled by someone else. These wont be fully solved by containers, snaps, or any other technology.
Perhaps wed be happier if Alexa handled the details of our lives while we sweat the big stuff, and maybe theres a way to balance privacy and utility. For now, were still exploring, and thats all for the good.
--------------------------------------------------------------------------------
via: http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/
作者:[Eric Brown][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/
[1]:http://hackerboards.com/atom-based-gateway-taps-new-open-source-iot-cloud-platform/
[2]:http://hackerboards.com/compact-iot-gateway-runs-yocto-linux-on-quark/
[3]:http://hackerboards.com/wireless-crazed-customizable-iot-gateway-uses-arm-or-x86-coms/
[4]:http://hackerboards.com/iot-gateway-runs-linux-on-qoriq-accepts-arduino-shields/
[5]:http://hackerboards.com/linux-and-open-source-hardware-for-building-iot-devices/
[6]:http://hackerboards.com/intel-launches-14nm-atom-e3900-and-spins-an-automotive-version/
[7]:http://hackerboards.com/samsung-adds-first-64-bit-and-cortex-m4-based-artik-modules/
[8]:http://hackerboards.com/new-cortex-m-chips-add-armv8-and-trustzone/
[9]:http://hackerboards.com/exploring-security-challenges-in-linux-based-iot-devices/
[10]:http://hackerboards.com/linux-based-smart-home-hubs-advance-into-ai/
[11]:http://hackerboards.com/open-source-projects-for-the-internet-of-things-from-a-to-z/
[12]:http://hackerboards.com/lightweight-snappy-ubuntu-core-os-targets-iot/
[13]:http://hackerboards.com/canonical-pushes-snap-as-a-universal-linux-package-format/
[14]:http://hackerboards.com/ubuntu-core-16-gets-smaller-goes-all-snaps/
[15]:http://hackerboards.com/files/canonical_ubuntucore16_diagram.jpg
[16]:http://hackerboards.com/open-source-oses-for-the-internet-of-things/
[17]:http://hackerboards.com/private-cloud-server-and-iot-gateway-runs-ubuntu-snappy-on-rpi/
[18]:http://hackerboards.com/linaro-beams-lite-at-internet-of-things-devices/
[19]:http://hackerboards.com/96boards-goes-cortex-m4-with-iot-edition-and-carbon-sbc/
[20]:http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/%3Ca%20href=
[21]:http://hackerboards.com/files/resinio_resinos_arch.jpg
[22]:http://hackerboards.com/linux-based-smart-home-hubs-advance-into-ai/
[23]:http://hackerboards.com/files/eurotech_reliagate2026.jpg
[24]:http://hackerboards.com/files/advantech_ubc221.jpg
[25]:http://hackerboards.com/files/myomega_mynxg.jpg
[26]:http://hackerboards.com/files/technexion_ls1021aiot_front.jpg
[27]:http://hackerboards.com/an-open-source-perspective-on-the-internet-of-things-part-1/
[28]:http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/

View File

@ -0,0 +1,233 @@
Neofetch Shows Linux System Information with Distribution Logo
============================================================
Neofetch is a cross-platform and easy-to-use [system information command line script][3] that collects your Linux system information and displays it on the terminal next to an image, which could be your distribution's logo or any ASCII art of your choice.
Neofetch is very similar to the [ScreenFetch][4] and [Linux_Logo][5] utilities, but it is highly customizable and comes with some extra features, as discussed below.
Its main features include: it is fast; it prints your distribution's logo in full-color ASCII alongside your system information; it is highly customizable in terms of which information is printed on the terminal, and where and when; and it can take a screenshot of your desktop when the script exits, enabled by a special flag.
#### Required Dependencies:
1. Bash 3.0+ with ncurses support.
2. w3m-img (occasionally packaged with w3m) or iTerm2 or Terminology for printing images.
3. [imagemagick][1]  for thumbnail creation.
4. A [Linux terminal emulator][2] that supports \033[14t, or xdotool, or xwininfo + xprop, or xwininfo + xdpyinfo.
5. On Linux, you need feh, nitrogen or gsettings for wallpaper support.
Important: You can read more about optional dependencies in the Neofetch GitHub repository to check whether your [Linux terminal emulator][6] actually supports \033[14t, and about any extra dependencies needed for the script to work well on your distro.
### How To Install Neofetch in Linux
Neofetch can be easily installed from third-party repositories on almost all Linux distributions by following the respective installation instructions for your distribution below.
#### On Debian
```
$ echo "deb http://dl.bintray.com/dawidd6/neofetch jessie main" | sudo tee -a /etc/apt/sources.list
$ curl -L "https://bintray.com/user/downloadSubjectPublicKey?username=bintray" -o Release-neofetch.key && sudo apt-key add Release-neofetch.key && rm Release-neofetch.key
$ sudo apt-get update
$ sudo apt-get install neofetch
```
#### On Ubuntu and Linux Mint
```
$ sudo add-apt-repository ppa:dawidd0811/neofetch
$ sudo apt-get update
$ sudo apt-get install neofetch
```
#### On RHEL, CentOS and Fedora
You need to have dnf-plugins-core installed on your system, or else install it with the command below:
```
$ sudo yum install dnf-plugins-core
```
Enable COPR repository and install neofetch package.
```
$ sudo dnf copr enable konimex/neofetch
$ sudo dnf install neofetch
```
#### On Arch Linux
You can either install neofetch or neofetch-git from the AUR using packer or Yaourt.
```
$ packer -S neofetch
$ packer -S neofetch-git
OR
$ yaourt -S neofetch
$ yaourt -S neofetch-git
```
#### On Gentoo
Install app-misc/neofetch from Gentoo/Funtoos official repositories. However, in case you need the git version of the package, you can install =app-misc/neofetch-9999.
### How To Use Neofetch in Linux
Once you have installed the package, the general syntax for using it is:
```
$ neofetch
```
Note: If w3m-img or [imagemagick][7] is not installed on your system, [screenfetch][8] will be enabled by default and neofetch will display your [ASCII art logo][9] as in the image below.
#### Linux Mint Information
[
![Linux Mint System Information](http://www.tecmint.com/wp-content/uploads/2016/11/Linux-Mint-System-Information.png)
][10]
Linux Mint System Information
#### Ubuntu Information
[
![Ubuntu System Information](http://www.tecmint.com/wp-content/uploads/2016/11/Ubuntu-System-Information.png)
][11]
Ubuntu System Information
If you want to display the default distribution logo as an image, you should install w3m-img or imagemagick on your system as follows:
```
$ sudo apt-get install w3m-img [On Debian/Ubuntu/Mint]
$ sudo yum install w3m-img [On RHEL/CentOS/Fedora]
```
Then run neofetch again; this time you will see your Linux distribution's default logo displayed as an image.
```
$ neofetch
```
[
![Ubuntu System Information with Logo](http://www.tecmint.com/wp-content/uploads/2016/11/Ubuntu-System-Information-with-Logo.png)
][12]
Ubuntu System Information with Logo
After running neofetch for the first time, it will create a configuration file with all options and settings: `$HOME/.config/neofetch/config`.
This configuration file, through its `printinfo ()` function, lets you alter the system information that you want to print on the terminal. You can type in new lines of information, modify the information lineup, delete certain lines, and also tweak the script using bash code to manage the information to be printed out.
You can open the configuration file using your favorite editor as follows:
```
$ vi ~/.config/neofetch/config
```
Below is an excerpt of the configuration file on my system showing the `printinfo ()` function.
Neofetch Configuration File
```
#!/usr/bin/env bash
# vim:fdm=marker
#
# Neofetch config file
# https://github.com/dylanaraps/neofetch
# Speed up script by not using unicode
export LC_ALL=C
export LANG=C
# Info Options {{{
# Info
# See this wiki page for more info:
# https://github.com/dylanaraps/neofetch/wiki/Customizing-Info
printinfo() {
info title
info underline
info "Model" model
info "OS" distro
info "Kernel" kernel
info "Uptime" uptime
info "Packages" packages
info "Shell" shell
info "Resolution" resolution
info "DE" de
info "WM" wm
info "WM Theme" wmtheme
info "Theme" theme
info "Icons" icons
info "Terminal" term
info "Terminal Font" termfont
info "CPU" cpu
info "GPU" gpu
info "Memory" memory
# info "CPU Usage" cpu_usage
# info "Disk" disk
# info "Battery" battery
# info "Font" font
# info "Song" song
# info "Local IP" localip
# info "Public IP" publicip
# info "Users" users
# info "Birthday" birthday
info linebreak
info cols
info linebreak
}
.....
```
Type the command below to view all the flags and their configuration values that you can use with the neofetch script:
```
$ neofetch --help
```
To launch neofetch with all functions and flags enabled, employ the `--test` flag:
```
$ neofetch --test
```
You can enable the ASCII art logo again using the `--ascii` flag:
```
$ neofetch --ascii
```
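Flags can also be combined; for example, to render another distribution's ASCII logo alongside your own system information (a hypothetical invocation; verify the exact flag name in your version with `neofetch --help`):

```
$ neofetch --ascii --ascii_distro arch
```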
In this article, we have covered a simple and highly configurable/customizable command line script that gathers your system information and displays it on the terminal.
Remember to get in touch with us via the feedback form below to ask any questions or give us your thoughts concerning the neofetch script.
Last but not least, if you know of any similar scripts out there, do not hesitate to let us know, we will be pleased to hear from you.
Visit the [neofetch Github repository][13].
--------------------------------------------------------------------------------
via: http://www.tecmint.com/neofetch-shows-linux-system-information-with-logo
作者:[Aaron Kili ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-imagemagick-in-linux/
[2]:http://www.tecmint.com/linux-terminal-emulators/
[3]:http://www.tecmint.com/screenfetch-system-information-generator-for-linux/
[4]:http://www.tecmint.com/screenfetch-system-information-generator-for-linux/
[5]:http://www.tecmint.com/linux_logo-tool-to-print-color-ansi-logos-of-linux/
[6]:http://www.tecmint.com/linux-terminal-emulators/
[7]:http://www.tecmint.com/install-imagemagick-in-linux/
[8]:http://www.tecmint.com/screenfetch-system-information-generator-for-linux/
[9]:http://www.tecmint.com/linux_logo-tool-to-print-color-ansi-logos-of-linux/
[10]:http://www.tecmint.com/wp-content/uploads/2016/11/Linux-Mint-System-Information.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/Ubuntu-System-Information.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/11/Ubuntu-System-Information-with-Logo.png
[13]:https://github.com/dylanaraps/neofetch

View File

@ -0,0 +1,78 @@
Make KDE Plasma 5 Desktop Look & Feel Like Windows 10 Using These Extensions
============================================================
![kde-plasma-to-windows-10](https://iwf1.com/wordpress/wp-content/uploads/2016/11/KDE-Plasma-to-Windows-10.jpg)
With a few steps, here's how you can turn the KDE Plasma 5 desktop into Windows 10.
Other than the menu, much of the Plasma desktop already resembles Win 10 pretty closely. Therefore, it only requires a few light touches to make the two look almost identical.
### The Start Menu
The first and probably most iconic part of making Plasma look like Win 10 is by achieving the Win 10 Start Menu look.
This can be easily done by installing [Zrens Tiled Menu][1].
#### To install:
1. Right click on Plasma Desktop -> Unlock Widgets
2. Right click on Plasma Desktop -> Add Widgets
3. Get new widgets -> Download New Plasma Widgets
4. Search for “Tiled Menu” -> Install
#### To activate:
1. Right click on your current menu button -> Alternatives…
2. Select “Tiled Menu” -> click Switch
[
![KDE Tiled Menu extension.](http://iwf1.com/wordpress/wp-content/uploads/2016/11/KDE-Tiled-Menu-extension-730x619.jpg)
][2]
KDE Tiled Menu extension.
### The Theme
The next thing you might want after the menu is a theme. Luckily, [K10ne][3] offers you a Win 10 theme experience.
#### To install:
1. Open up “System Settings” from Plasmas menu -> Workspace Theme
2. Select “Desktop Theme” from the sidebar -> Get new Theme
3. Search for “K10ne” -> Install
#### To activate:
1. Open up “System Settings” from Plasmas menu -> Workspace Theme
2. Select “Desktop Theme” from the sidebar -> “K10ne”
3. Apply
### The Task Bar
Lastly, you might also want to incorporate a more Win 10 style task bar, just to have a more complete experience.
This time, the package you need, called “Icons-only Task Manager”, is usually installed by default on most distros. If you don't have it, check your distro's appropriate channels to find out how to get it.
#### To activate:
1. Right click on Plasma Desktop -> Unlock Widgets
2. Right click on Plasma Desktop -> Add Widgets
3. Drag & drop “Icons-only Task Manager” to the suitable place on your desktops panel
--------------------------------------------------------------------------------
via: https://iwf1.com/make-kde-plasma-5-desktop-look-feel-like-windows-10-using-these-extensions/
作者:[Liron][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://iwf1.com/tag/linux
[1]:https://github.com/Zren/plasma-applets/tree/master/tiledmenu
[2]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/KDE-Tiled-Menu-extension.jpg
[3]:https://store.kde.org/p/1153465/

View File

@ -0,0 +1,231 @@
Apache Vs Nginx Vs Node.js And What It Means About The Performance Of WordPress Vs Ghost
============================================================
![Node vs Apache vs Nginx](https://iwf1.com/wordpress/wp-content/uploads/2016/11/Node-vs-Apache-vs-Nginx-730x430.jpg)
Ultimate battle of the giants: can the rising star Node.js prevail against the titans Apache and Nginx?
Just like you, I too have read the various kinds of opinions / facts which are scattered all over the Internet throughout all sorts of sources, some of which I consider reliable, while others, perhaps shady or doubtful.
Many of the sources I read were quite contradictory (ahm, did someone say StackOverflow?[1][2]), while others showed clear yet surprising results[3], which played a crucial role in pushing me towards running my own tests and experiments.
At first, I did some thought experiments, thinking I might avoid all the hassle of building and running physical tests of my own; I was drowning deep in those before I even knew it.
Nonetheless, looking back on it, it seems that my initial thoughts were quite accurate after all and have been reaffirmed by my tests; a fact which reminds me of what I learned back in school regarding Einstein and his photoelectric-effect experiments, where he faced a wave-particle duality and initially concluded that the experiments were affected by his state of mind: when he expected the result to be a wave, then so it was, and vice versa.
That said, I'm pretty sure my results won't prove to be a duality anytime in the near future, although my own state of mind probably did have an effect, to some extent, on them.
### About The Comparison
One of the sources I read came up with a revolutionary way, in my view, to deal with the natural subjectiveness and personal biases an author may have.
A way which I decided to embrace as-well, thus I declare the following in advance:
Developers spend many years honing their craft. Those who reach higher levels usually make their own choice based on a host of factors. Its subjective; youll promote and defend your technology decision.
That said, the point of this comparison is not to become another “use whatever suits you, buddy” article. I will make recommendations based on my own experience, requirements and biases. Youll agree with some points and disagree with others; thats great — your comments will help others make an informed choice.
And thank you to Craig Buckler of [SitePoint][2] for re-enlightening me regarding the purpose of comparisons, a purpose I tend to forget as I try to please all visitors.
### About The Tests
All tests were run locally on:
* An Intel Core i7-2600K machine with 4 cores and 8 threads.
* **[Gentoo Linux][1]** as the operating system used to run the tests.
The tool used for benchmarking: ApacheBench, Version 2.3 <$Revision: 1748469 $>.
The tests included a series of benchmarks, going from 1,000 up to 10,000 requests and a concurrency of 100 to 1,000; the results were quite surprising.
In addition, a stress test to measure server behavior under high load was also run.
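The article does not list the exact invocations, but with ApacheBench a benchmark run and the stress test would look something like this (parameters taken from the ranges above):

```
# A benchmark run: 10,000 requests at a concurrency of 100
$ ab -n 10000 -c 100 http://localhost/index.html

# The stress test: 100,000 requests at a concurrency of 1,000
$ ab -n 100000 -c 1000 http://localhost/index.html
```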
As for the content, the main focus was about a static file containing a number of Lorem Ipsum verses with headings and an image.
[
![Lorem Ipsum and ApacheBenchmark](http://iwf1.com/wordpress/wp-content/uploads/2016/11/Lorem-Ipsum-and-ApacheBenchmark-730x411.jpg)
][3]
Lorem Ipsum and ApacheBenchmark
The reason I decided to focus on static files is that they remove all sorts of rendering factors that may have an effect on the tests, such as the speed of a programming language interpreter, or how well the interpreter is integrated with the server, etc.
Also, based on my own experience, a substantial part of the average page load time is usually spent on static content such as images, so in order to see which server could save us the most of that precious time, it seemed more realistic to focus on that part.
That aside, I also wanted to test a more real case scenario where I benchmarked each server upon running a dynamic page of different CMSs (more details about that later on).
### The Servers
As I'm running Gentoo Linux, you could say that each of my HTTP servers starts from an optimized state to begin with, since I built them using only the USE flags I actually needed. That is, there shouldn't be any unnecessary code or modules loaded or running in the background while I ran my tests.
[
![Apache vs Nginx vs Node.js use-flags](http://iwf1.com/wordpress/wp-content/uploads/2016/10/Apache-vs-Nginx-vs-Node.js-use-flags-730x241.jpg)
][4]
Apache vs Nginx vs Node.js use-flags
### Apache
```
$: curl -i http://localhost/index.html
HTTP/1.1 200 OK
Date: Sun, 30 Oct 2016 15:35:44 GMT
Server: Apache
Last-Modified: Sun, 30 Oct 2016 14:13:36 GMT
ETag: "2cf2-54015b280046d"
Accept-Ranges: bytes
Content-Length: 11506
Cache-Control: max-age=600
Expires: Sun, 30 Oct 2016 15:45:44 GMT
Vary: Accept-Encoding
Content-Type: text/html
```
Apache was configured with “event mpm”.
### Nginx
```
$: curl -i http://localhost/index.html
HTTP/1.1 200 OK
Server: nginx/1.10.1
Date: Sun, 30 Oct 2016 14:17:30 GMT
Content-Type: text/html
Content-Length: 11506
Last-Modified: Sun, 30 Oct 2016 14:13:36 GMT
Connection: keep-alive
Keep-Alive: timeout=20
ETag: "58160010-2cf2"
Accept-Ranges: bytes
```
Nginx included various tweaks, among them: “sendfile on”, “tcp_nopush on” and “tcp_nodelay on”.
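For reference, those directives live in the `http` block of nginx.conf; a minimal sketch based only on the tweaks listed above:

```
http {
    sendfile    on;   # serve static files via the kernel's sendfile()
    tcp_nopush  on;   # coalesce the response header with the start of the file
    tcp_nodelay on;   # disable Nagle's algorithm on keep-alive connections
}
```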
### Node.js
```
$: curl -i http://127.0.0.1:8080
HTTP/1.1 200 OK
Content-Length: 11506
Etag: 15
Last-Modified: Thu, 27 Oct 2016 14:09:58 GMT
Content-Type: text/html
Date: Sun, 30 Oct 2016 16:39:47 GMT
Connection: keep-alive
```
The Node.js server used in the static tests was custom built from scratch, tailor-made to be as lightweight and fast as possible; no external modules (outside of Node's core) were used.
### The Results
Click on the images to enlarge:
[
![Apache vs Nginx vs Node: performance under requests load (per 100 concurrent users)](http://iwf1.com/wordpress/wp-content/uploads/2016/11/requests-730x234.jpg)
][5]
Apache vs Nginx vs Node: performance under requests load (per 100 concurrent users)
[
![Apache vs Nginx vs Node: performance under concurrent users load](http://iwf1.com/wordpress/wp-content/uploads/2016/11/concurrency-730x234.jpg)
][6]
Apache vs Nginx vs Node: performance under concurrent users load (per 1,000 requests)
### Stress Testing
[
![Apache vs Nginx vs Node: time to complete 100,000 requests with concurrency of 1,000](http://iwf1.com/wordpress/wp-content/uploads/2016/11/stress.jpg)
][7]
Apache vs Nginx vs Node: time to complete 100,000 requests with concurrency of 1,000
### What Can We Learn From The Results?
Judging by the results above, it appears that Nginx can complete the highest number of requests in the least amount of time; in other words, **Nginx** is the fastest HTTP server.
Another thing we can learn, which is quite surprising as a matter of fact, is that Node.js can be faster than Nginx and Apache in some cases, given the right number of concurrent users and requests.
To those who wonder, the answer is no: when the number of requests was raised during the concurrency test, Nginx returned to the leading position.
Unlike Apache and Nginx, Node.js, especially clustered Node, seems to be indifferent to the number of concurrent users hitting it. As the chart shows, clustered Node keeps a straight line at around 0.1 seconds, while both Apache and Nginx suffer a variation of about 0.2 seconds.
A conclusion that can be drawn from the above statistics is that the smaller the site, the less it matters which server it uses. However, as the site's audience grows, the impact of the HTTP server becomes more apparent.
The bottom line: when it comes to the raw speed of each server, as depicted by the stress test, my sense is that the most crucial factor behind performance is not some special algorithm but the programming language each server runs on.
Both Apache and Nginx are written in C, which is an AOT (Ahead Of Time) compiled language, whereas Node.js uses JavaScript, which is an interpreted / JIT (Just In Time) compiled language. This means there is additional work for the Node.js server on its way to executing a program.
I base this sense not only on the results above but also on further results, which you'll see below, where I got pretty much the same relative performance even when using an optimized Node.js server built with the popular Express framework.
### The Bigger Picture
At the end of the day, an HTTP server is quite useless without the content it serves. Therefore when looking to compare web servers, a vital part we must take into account is the content we wish to run on top of it.
Although other functions exist as well, the most popular use of an HTTP server is running a website. Hence, to see the real-life implications of each server's performance, I decided to compare WordPress, the most widely used CMS (Content Management System) in the world, with Ghost, a rising star whose gimmick is using JavaScript at its core.
Will a Ghost web-page based on JavaScript alone be able to outperform a WordPress page running on top of PHP and Apache / Nginx?
That's an interesting question, since Ghost has the advantage of using a single, coherent tool for its actions, with no additional layers needed, whereas WordPress needs to rely on the integration between Apache / Nginx and PHP, an integration which might incur significant performance drawbacks.
Adding to that, there is also a significant performance difference between PHP and Node.js in favor of the latter (which I'll briefly talk about below), so things might come out a bit differently than they initially seemed.
### PHP Vs Node.js
In order to compare WordPress and Ghost we must first consider an essential component which affects both.
Essentially, WordPress is a PHP based CMS while Ghost is Node.js (JavaScript) based. Unlike PHP, Node.js enjoys the following advantages:
* Non-blocking I/O
* Event driven
* Modern, less legacy code encumbered
Since there are plenty of comparisons out there explaining and demonstrating Node.js' raw speed over PHP (including PHP 7), I shall not elaborate further on the subject; Google it, I implore you.
So, given that Node.js outperforms PHP in general, will it be significant enough to make a Node.js website faster than Apache / Nginx with PHP?
### WordPress Vs Ghost
When comparing WordPress to Ghost, some would say it's like comparing apples to oranges, and for the most part I'll agree, as WordPress is a fully fledged CMS while Ghost is basically just a blogging platform at the moment.
However, the two still share many overlapping areas where both can be used to publish thoughts to the world.
Given that premise, how can we compare the two when one runs on a totally different code base than the other, including themes and core features?
Indeed, a scientific, lab-conditioned test would be hard to devise. However, in this comparison I'm interested in a more real-life scenario, where WordPress gets to keep its theme and so does Ghost. Thus, the goal here is to have both platforms' web pages as similar in size as possible and to let PHP and Node.js do their magic behind the scenes.
Since the results were measured against different criteria and most importantly not exact same sizes, it wouldnt be fair to display them side by side in a chart. Hence a table is used instead:
[
![Node vs Nginx vs Apache comparison table](http://iwf1.com/wordpress/wp-content/uploads/2016/11/Node-vs-Nginx-vs-Apache-comparison-table-730x185.jpg)
][8]
Node vs Nginx vs Apache running WordPress & Ghost. Top 2 rows are WordPress, bottom 2 are Ghost
As you can see, despite the fact that Ghost (Node.js) loads a smaller page (you'd be surprised how much difference 1kB can make), it still remains slower than WordPress with both Nginx and Apache.
Also, does fronting every Node server hit with an Nginx proxy acting as a load balancer actually contribute to or detract from performance?
Well, according to the table above, if it has any effect at all, it is a detracting one, which is a reasonable outcome, as adding another layer should make things slower. However, the numbers above show it just might be negligible.
But the most important thing the table above shows us is that even though Node.js is faster than PHP, the role an HTTP server plays may surpass the importance of what type of programming language a certain web platform uses.
Of course, on the other hand, if the page loaded were a lot more reliant on server-side scripting, then the results would have wound up a bit different, I suspect.
At the end of it, if a web platform really wants to beat WordPress at its own game, performance-wise that is, the conclusion arising from this comparison is that it will have to have some sort of customized tool a la PHP-FPM that communicates with JavaScript directly (instead of running it as a server), so it can fully harness the power of JS to reach better performance.
--------------------------------------------------------------------------------
via: https://iwf1.com/apache-vs-nginx-vs-node-js-and-what-it-means-about-the-performance-of-wordpress-vs-ghost/
作者:[Liron][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://iwf1.com/tag/linux
[1]:http://iwf1.com/5-reasons-use-gentoo-linux/
[2]:https://www.sitepoint.com/sitepoint-smackdown-php-vs-node-js/
[3]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/Lorem-Ipsum-and-ApacheBenchmark.jpg
[4]:http://iwf1.com/wordpress/wp-content/uploads/2016/10/Apache-vs-Nginx-vs-Node.js-use-flags.jpg
[5]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/requests.jpg
[6]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/concurrency.jpg
[7]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/stress.jpg
[8]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/Node-vs-Nginx-vs-Apache-comparison-table.jpg

View File

@ -0,0 +1,90 @@
Translating by StdioA
How to Check Timezone in Linux
============================================================
In this short article, we will walk newbies through the various simple ways of checking the system timezone in Linux. Time management on a Linux machine, especially on a production server, is always an important aspect of system administration.
There are a number of time management utilities available on Linux, such as the date and timedatectl commands, which can show the current timezone of the system and [synchronize it with a remote NTP server][1] to enable automatic and more accurate system time handling.
Well, let us dive into the different ways of finding out our Linux system timezone.
1. We will start by using the traditional date command to find out the present timezone, as follows:
```
$ date
```
Alternatively, type the command below, where `%Z` format prints the alphabetic timezone and `%z` prints the numeric timezone:
```
$ date +"%Z %z"
```
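Illustrative output on a machine set to UTC (the values will differ according to your configured zone):

```
UTC +0000
```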
[
![Find Linux Timezone](http://www.tecmint.com/wp-content/uploads/2016/10/Find-Linux-Timezone.png)
][2]
Find Linux Timezone
Note: There are many formats in the date man page that you can make use of, to alter the output of the date command:
```
$ man date
```
2. Next, you can likewise use timedatectl; when you run it without any options, the command displays an overview of the system, including the timezone, like so:
```
$ timedatectl
```
More so, try to employ a pipeline and the [grep command][3] to filter out only the timezone, as below:
```
$ timedatectl | grep "Time zone"
```
[
![Find Current Linux Timezone](http://www.tecmint.com/wp-content/uploads/2016/10/Find-Current-Linux-Timezone.png)
][4]
Find Current Linux Timezone
Learn how to [set timezone in Linux using timedatectl][5] command.
3. In addition, display the content of the file `/etc/timezone` using the [cat utility][6] to check your timezone:
```
$ cat /etc/timezone
```
[
![Check Timezone of Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Check-Timezone-of-Linux.png)
][7]
Check Timezone of Linux
For RHEL/CentOS/Fedora users, here is one more command for the same purpose:
```
$ grep ZONE /etc/sysconfig/clock
```
That's all! Do not forget to share your thoughts about the article by means of the feedback form below. Importantly, you should look through this time management guide for Linux to get more insight into handling time on your system; it has simple and easy-to-follow examples.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/check-linux-timezone
作者:[Aaron Kili ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-ntp-server-in-centos/
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-Linux-Timezone.png
[3]:http://www.tecmint.com/12-practical-examples-of-linux-grep-command/
[4]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-Current-Linux-Timezone.png
[5]:http://www.tecmint.com/set-time-timezone-and-synchronize-time-using-timedatectl-command/
[6]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-Timezone-of-Linux.png

View File

@ -0,0 +1,40 @@
Introduction to Eclipse Che, a next-generation, web-based IDE
============================================================
![Introduction to Eclipse Che, a next-generation, web-based IDE](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/EDU_OSDC_OpenClass_520x292_FINAL_JD.png?itok=ETOrrpcP "Introduction to Eclipse Che, a next-generation, web-based IDE")
>Image by : opensource.com
Correctly installing and configuring an integrated development environment, workspace, and build tools in order to contribute to a project can be a daunting or time-consuming task, even for experienced developers. Tyler Jewell, CEO of [Codenvy][1], faced this problem when he was attempting to set up a simple Java project while working on getting his coding skills back after dealing with some health issues and having spent time in managerial positions. After multiple days of struggling, Jewell could not get the project to work, but inspiration struck him. He wanted to make it so that "anyone, anytime can contribute to a project without installing software."
It is this idea that led to the development of [Eclipse Che][2].
Eclipse Che is a web-based integrated development environment (IDE) and workspace. Workspaces in Eclipse Che are bundled with an appropriate runtime stack and serve their own IDE, all in one tightly integrated bundle. A project in one of these workspaces has everything it needs to run without the developer having to do anything more than picking the correct stack when creating a workspace.
The ready-to-go bundled stacks included with Eclipse Che cover most of the modern popular languages. There are stacks for C++, Java, Go, PHP, Python, .NET, Node.js, Ruby on Rails, and Android development. A Stack Library provides even more options and if that is not enough, there is the option to create a custom stack that can provide specialized environments.
Eclipse Che is a full-featured IDE, not a simple web-based text editor. It is built on Orion and the JDT. Intellisense and debugging are both supported, and version control with both Git and Subversion is integrated. The IDE can even be shared by multiple users for pair programming. With just a web browser, a developer can write and debug their code. However, if a developer would prefer to use a desktop-based IDE, it is possible to connect to the workspace over an SSH connection.
One of the major technologies underlying Eclipse Che is [Linux containers][3], using Docker. Workspaces are built using Docker, and installing a local copy of Eclipse Che requires nothing but Docker and a small script file. The first time `che.sh start` is run, the requisite Docker containers are downloaded and run. If setting up Docker to install Eclipse Che is too much work for you, Codenvy does offer online hosting options. They even provide 4GB workspaces for open source projects, for any contributor to the project. Using Codenvy's hosting option or another online hosting method, it is possible to provide a URL to potential contributors that will automatically create a workspace complete with a project's code, all with one click.
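As a sketch, assuming Docker is already installed and the `che.sh` launcher script has been downloaded into the current directory:

```
# The first run downloads the requisite Docker containers, then starts Che
$ ./che.sh start
```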
Beyond Codenvy, contributors to Eclipse Che include Microsoft, Red Hat, IBM, Samsung, and many others. Several of the contributors are working on customized versions of Eclipse Che for their own specific purposes. For example, Samsung's [Artik IDE][4] for IoT projects. A web-based IDE might turn some people off, but Eclipse Che has a lot to offer, and with so many big names in the industry involved, it is worth checking out.
* * *
If you are interested in learning more about Eclipse Che, [CheConf 2016][5] takes place on November 15. CheConf 2016 is an online conference and registration is free. Sessions start at 11:00 am Eastern time (4:00 pm UTC) and end at 5:30 pm Eastern time (10:30 pm UTC).
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/11/introduction-eclipse-che
作者:[Joshua Allen Holm][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/holmja
[1]:http://codenvy.com/
[2]:http://eclipse.org/che
[3]:https://opensource.com/resources/what-are-linux-containers
[4]:http://eclipse.org/che/artik
[5]:https://eclipse.org/che/checonf/

View File

@ -0,0 +1,175 @@
Build, Deploy and Manage Custom Apps with IBM Bluemix
============================================================
![IBM Blue mix logo](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/IBM-Blue-mix-logo.jpg?resize=300%2C266)
_IBM's Bluemix affords developers an opportunity to build, deploy and manage custom apps. Bluemix is built on Cloud Foundry. It supports a number of programming languages as well as OpenWhisk, which allows developers to call any function without the need for resource management._
Bluemix is an open standards, cloud-based platform implemented by IBM. It has an open architecture which enables organisations to create, develop and manage their applications on the cloud. It is based on Cloud Foundry and hence can be considered as a Platform as a Service (PaaS). With Bluemix, developers need not worry about cloud configurations, but can concentrate on their applications. Cloud configurations will be done automatically by Bluemix.
Bluemix also provides a dashboard, with which developers can create, manage and view services and applications, while monitoring resource usage also.
It supports the following programming languages:
* Java
* Python
* Ruby on Rails
* PHP
* Node.js
It also supports OpenWhisk (Function as a Service), an IBM product that allows developers to call any function without requiring any resource management.
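As a rough illustration of that function-as-a-service model, invoking a function with the OpenWhisk CLI looks something like the sketch below, assuming the wsk CLI is installed and configured; the action name and source file are made up for the example.
```
# Register a JavaScript function as an OpenWhisk action (hello.js is a placeholder).
$ wsk action create hello hello.js
# Invoke it and print the result; the platform handles all resource management.
$ wsk action invoke hello --result
```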
![Figure 1 An Overview of IBM Bluemix](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-1-An-Overview-of-IBM-Bluemix.jpg?resize=296%2C307)
Figure 1: An Overview of IBM Bluemix
![Figure 2 The IBM Bluemix architecture](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-2-The-IBM-Bluemix-architecture.jpg?resize=350%2C239)
Figure 2: The IBM Bluemix architecture
![Figure 3 Creating an organisation in IBM Bluemix](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-3-Creating-an-organisation-in-IBM-Bluemix.jpg?resize=350%2C280)
Figure 3: Creating an organisation in IBM Bluemix
**How IBM Bluemix works**
Bluemix is built on top of IBM's SoftLayer IaaS (Infrastructure as a Service). It uses Cloud Foundry as an open source PaaS. It starts by pushing code through Cloud Foundry, which combines the code with a suitable runtime environment based on the programming language in which the application is written. IBM services, third-party services or community-built services can be used for different functionalities. Secure connectors can be used to connect on-premise systems to the cloud.
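As a hedged sketch of that push-based flow, a deployment with the standard Cloud Foundry CLI looks roughly like this; the API endpoint, credentials and memory quota shown here are illustrative.
```
# Point the CLI at a Bluemix Cloud Foundry endpoint (region URL is an assumption).
$ cf api https://api.ng.bluemix.net
# Authenticate with your Bluemix credentials.
$ cf login -u user@example.com
# Upload the code; Cloud Foundry pairs it with a suitable runtime (buildpack).
$ cf push osfy-bluemix-tutorial -m 512M
```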
![Figure 4 Setting up Space in IBM Bluemix](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-4-Setting-up-Space-in-IBM-Bluemix.jpg?resize=350%2C267)
Figure 4: Setting up Space in IBM Bluemix
![Figure 5 The app template](http://i2.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-5-The-app-template.jpg?resize=350%2C135)
Figure 5: The app template
![Figure 6 IBM Bluemix supported programming languages](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-6-IBM-Bluemix-supported-programming-languages.jpg?resize=350%2C173)
Figure 6: IBM Bluemix supported programming languages
**Creating an app in Bluemix**
In this article, we will create a sample Hello World application in IBM Bluemix by using the Liberty for Java starter pack, in just a few simple steps.
1. Go to [_https://console.ng.bluemix.net/registration/_][2].
2. Confirm the Bluemix account.
3. Click on the confirmation link in the mail to complete the sign-up process.
4. Give your email ID and click on _Continue_ to log in.
5. Enter the password and click on _Log in_.
6. Set up the Organisation and Environment; organisations share resources in specific regions.
7. Create a Space to manage access and rollback in Bluemix. We can map Spaces to development stages such as dev, test, uat, pre-prod and prod.
![Figure 7 Naming the app](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-7-Naming-the-app.jpg?resize=350%2C133)
Figure 7: Naming the app
![Figure 8 Knowing when the app is ready](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-8-Knowing-when-the-app-is-ready.jpg?resize=350%2C170)
Figure 8: Knowing when the app is ready
![Figure 9 The IBM Bluemix Java App](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-9-The-IBM-Bluemix-Java-App.jpg?resize=350%2C151)
Figure 9: The IBM Bluemix Java App
8. Once this initial configuration is completed, click on _I'm ready. Good to Go!_
9. Verify the IBM Bluemix dashboard after successfully logging in, specifically sections such as Cloud Foundry Apps, where 2GB is available, and Virtual Server, where 0 instances are available as of now.
10. Click on _Create app_. Choose the template for app creation. In our case, we will go for a Web app.
11. How do you get started? Click on Liberty for Java, and then verify the description.
12. Click on _Continue_.
13. What do you want to name your new app? For this article, let's use osfy-bluemix-tutorial, and click on _Finish_.
14. It will take some time to create resources and to host the application on Bluemix.
15. In a few minutes, your app will be up and running. Note the URL of the application.
16. Visit the application's URL _http://osfy-bluemix-tutorial.au-syd.mybluemix.net/_. Bingo, our first Java application is up and running on IBM Bluemix.
17. To verify the source code, click on _Files_ and navigate to the different files and folders in the portal.
18. The _Logs_ section provides all the activity logs, starting from the application's creation.
19. The _Environment Variables_ section provides details on all the environment variables of VCAP_Services as well as those that are user defined (see the CLI sketch after this list).
20. To verify the application's consumption of resources, go to the Liberty for Java section.
21. The _Overview_ section of each application contains details regarding resources, the application's health, and activity logs, by default.
22. Open Eclipse, go to the Help menu and click on _Eclipse Marketplace_.
23. Find _IBM Eclipse tools for Bluemix_ and click on _Install_.
24. Confirm the selected features and install them in Eclipse.
25. Download the application starter code. Import it into Eclipse by clicking on the _File_ menu, selecting _Import Existing Projects into Workspace_, and start modifying the existing code.
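Where the browser portal is not handy, the environment details from step 19 can also be inspected from the Cloud Foundry CLI; this is a hedged sketch using the app name chosen above.
```
# Print VCAP_SERVICES and user-defined environment variables for the app.
$ cf env osfy-bluemix-tutorial
```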
![Figure 10 The Java app source files](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-10-The-Java-app-source-files.jpg?resize=350%2C173)
Figure 10: The Java app source files
![Figure 11 The Java app logs](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-11-The-Java-app-logs.jpg?resize=350%2C133)
Figure 11: The Java app logs
![Figure 12 Java app -- Liberty for Java](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-12-Java-app-Liberty-for-Java.jpg?resize=350%2C169)
Figure 12: Java app — Liberty for Java
**Why IBM Bluemix?**
Here are some compelling reasons to use IBM Bluemix:
* Supports multiple languages and platforms
* Free trial
1. Minimal registration process
2. No credit card required
3. 30-day trial period with quotas of 2GB of runtime, 20 services and 500 routes
4. Unlimited access to standard support
5. No production use limitations
* Pay only for the use of each runtime and service
* Quick set-up, hence faster time to market
* Continuous delivery of new features
* Secure integration with on-premise resources
* Use cases
1. Web applications and mobile back-ends
2. APIs and on-premise integration
* DevOps services are available as SaaS on the cloud and support continuous delivery of:
1. Web IDE
2. SCM
3. Agile planning
4. Delivery pipeline service
--------------------------------------------------------------------------------
via: http://opensourceforu.com/2016/11/build-deploy-manage-custom-apps-ibm-bluemix/
作者:[MITESH_SONI][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://opensourceforu.com/author/mitesh_soni/
[1]:http://opensourceforu.com/wp-content/uploads/2016/10/Figure-7-Naming-the-app.jpg
[2]:https://console.ng.bluemix.net/registration/

View File

@ -0,0 +1,68 @@
Is Mozilla Firefox Collecting Your Data Without Your Consent?
============================================================
![Mozilla Firefox collects your data](https://iwf1.com/wordpress/wp-content/uploads/2016/11/Mozilla-Firefox-collects-your-data-730x429.jpg)
A geolocation service packaged with the Firefox web browser keeps running in the background even while the browser itself is closed.
We've still not fully recovered from the news about the scandalous browser add-on which was meant to protect users' privacy but instead **[sells their information to third-party companies][1]**, and we may already be facing another outrage, much bigger in scale.
**MLS** is the Mozilla Location Service, which lets devices determine their location based on network infrastructure like WiFi access points, cell towers and Bluetooth beacons.
It is pretty much Mozilla's equivalent to the Google Location Service, which is used when you turn on GPS on Android devices and opt for High accuracy mode.
Those of you who have ever experienced GPS issues will probably appreciate how accurate this mode actually is.
But besides accurately pinpointing your location, the other side of the service is that, through the use of WiFi networks, it is able to collect personally identifiable information about both the **users who knowingly contribute to the database** and the **owners of the WiFi devices being scanned**.
That being said, Mozilla also mentions you can opt out of the service, but can you really?
### When The Background Becomes Your Privacy Foreground
Being a [crowdsourced][2] project, MLS depends on users' contributions in order to be maintained and grown, so Mozilla has developed a number of ways through which users can participate.
One of these ways, meant for end users, is an Android app called Stumbler:
> “Mozilla Stumbler is an open-source wireless network scanner which collects GPS, cellular and wireless network metadata for our crowd-sourced location database.”[1]
Yet Stumbler is not only a standalone app but also a service used by Firefox for Android “to contribute data and enhance” MLS.
The problem with that service lies in the fact that it runs in the background without most users being aware of it, **even when you have disabled it**.
According to Mozilla[1], to enable the service you need to open the Settings menu (in Firefox for Android) -> open the Privacy section -> scroll to the bottom to see the Data Choices, and finally, check the Mozilla Location Service box.
[
![Mozilla Location Services is unchecked yet Stumbler is on](http://iwf1.com/wordpress/wp-content/uploads/2016/11/Mozilla-Location-Services-is-unchecked-yet-Stumler-is-on-730x602.jpg)
][3]
Mozilla Location Services is unchecked yet Stumbler is on
In reality, you'll find that the Stumbler service is **actively running on your device in the background**, practically invisible because it has no interface, even though the MLS box is unchecked; and furthermore, even if all the Data Choices check boxes are unchecked and the Firefox browser itself is closed.
Apparently, the only way to stop Stumbler is by ending it directly. However, to do so you'll first need a way to detect that it is running, and ultimately that is just a temporary solution that only lasts until the device's next reboot.
### How To Stay Safer?
In order to exempt yourself from MLS data collection, there are still a few methods you may try, in the hope that they won't be disregarded by Mozilla just like the MLS check box in Firefox for Android.
Make your wireless network hidden, or add the string "_nomap" to the end of its name, e.g. "myWirelessNetwork" becomes "myWirelessNetwork_nomap". This should signal to Mozilla's applications that you do not wish to participate in their data collection.
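If you run your own access point with hostapd, for example, the opt-out is just the SSID value; this is a minimal, assumed snippet with the rest of the configuration omitted.
```
# /etc/hostapd/hostapd.conf (common default path; only the relevant line is shown)
ssid=myWirelessNetwork_nomap
```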
As for the Stumbler service on Android, since it is a service (as opposed to a process), you probably won't be able to see it in the list of running processes / recent apps. Thus, either use a dedicated app to close it, or enable "Developer Options", go to "Running services", tap on Firefox, and finally stop "stumbler".
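If the device is connected to a computer with USB debugging enabled, one way to double-check is via adb; this is a hedged sketch, and the grep pattern is only a guess at how the service shows up in the list.
```
# List running services and look for the scanner (pattern is an assumption).
$ adb shell dumpsys activity services | grep -i stumbler
# Force-stop Firefox for Android and its services until its next launch.
$ adb shell am force-stop org.mozilla.firefox
```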
--------------------------------------------------------------------------------
via: https://iwf1.com/is-mozilla-firefox-collecting-your-data-without-your-consent/
作者:[Liron][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://iwf1.com/is-mozilla-firefox-collecting-your-data-without-your-consent/
[1]:https://iwf1.com/shock-this-popular-browser-add-on-sells-your-browsing-history/
[2]:https://en.wikipedia.org/wiki/Crowdsourcing
[3]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/Mozilla-Location-Services-is-unchecked-yet-Stumler-is-on.jpg

View File

@ -0,0 +1,177 @@
How to Check Which Apache Modules are Enabled/Loaded in Linux
============================================================
In this guide, we will briefly talk about the Apache web server front-end and how to list or check which Apache modules have been enabled on your server.
Apache is built on the principle of modularity, which enables web server administrators to add different modules to extend its primary functionality and [enhance apache performance][5] as well.
Some of the common Apache modules include:
1. mod_ssl - which offers [HTTPS for Apache][1].
2. mod_rewrite - which allows matching URL patterns with regular expressions, performing transparent redirects using [.htaccess tricks][2], or applying an HTTP status code response.
3. mod_security - which helps you [protect Apache against Brute Force or DDoS attacks][3].
4. mod_status - which allows you to [monitor Apache web server load and page statistics][4].
In Linux, the apachectl or apache2ctl command is used to control the Apache HTTP server; it is a front-end to Apache.
You can display the usage information for apache2ctl as below:
```
$ apache2ctl help
OR
$ apachectl help
```
apachectl help
```
Usage: /usr/sbin/httpd [-D name] [-d directory] [-f file]
[-C "directive"] [-c "directive"]
[-k start|restart|graceful|graceful-stop|stop]
[-v] [-V] [-h] [-l] [-L] [-t] [-S]
Options:
-D name : define a name for use in directives
-d directory : specify an alternate initial ServerRoot
-f file : specify an alternate ServerConfigFile
-C "directive" : process directive before reading config files
-c "directive" : process directive after reading config files
-e level : show startup errors of level (see LogLevel)
-E file : log startup errors to file
-v : show version number
-V : show compile settings
-h : list available command line options (this page)
-l : list compiled in modules
-L : list available configuration directives
-t -D DUMP_VHOSTS : show parsed settings (currently only vhost settings)
-S : a synonym for -t -D DUMP_VHOSTS
-t -D DUMP_MODULES : show all loaded modules
-M : a synonym for -t -D DUMP_MODULES
-t : run syntax check for config files
```
apache2ctl can function in two possible modes, a SysV init mode and a pass-through mode. In the SysV init mode, apache2ctl takes simple, one-word commands in the form below:
```
$ apachectl command
OR
$ apache2ctl command
```
For instance, to start Apache and check its status, run these two commands with root privileges, employing the [sudo command][6] in case you are a normal user:
```
$ sudo apache2ctl start
$ sudo apache2ctl status
```
Check Apache Status
```
tecmint@TecMint ~ $ sudo apache2ctl start
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
httpd (pid 1456) already running
tecmint@TecMint ~ $ sudo apache2ctl status
Apache Server Status for localhost (via 127.0.0.1)
Server Version: Apache/2.4.18 (Ubuntu)
Server MPM: prefork
Server Built: 2016-07-14T12:32:26
-------------------------------------------------------------------------------
Current Time: Tuesday, 15-Nov-2016 11:47:28 IST
Restart Time: Tuesday, 15-Nov-2016 10:21:46 IST
Parent Server Config. Generation: 2
Parent Server MPM Generation: 1
Server uptime: 1 hour 25 minutes 41 seconds
Server load: 0.97 0.94 0.77
Total accesses: 2 - Total Traffic: 3 kB
CPU Usage: u0 s0 cu0 cs0
.000389 requests/sec - 0 B/second - 1536 B/request
1 requests currently being processed, 4 idle workers
__W__...........................................................
................................................................
......................
Scoreboard Key:
"_" Waiting for Connection, "S" Starting up, "R" Reading Request,
"W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
"C" Closing connection, "L" Logging, "G" Gracefully finishing,
"I" Idle cleanup of worker, "." Open slot with no current process
```
And when operating in pass-through mode, apache2ctl can take all the Apache arguments in the following syntax:
```
$ apachectl [apache-argument]
$ apache2ctl [apache-argument]
```
All the Apache arguments can be listed as follows:
```
$ apache2 help [On Debian based systems]
$ httpd help [On RHEL based systems]
```
#### Check Enabled Apache Modules
Therefore, in order to check which modules are enabled on your Apache web server, run the applicable command below for your distribution, where `-t -D DUMP_MODULES` is an Apache argument to show all enabled/loaded modules:
```
--------------- On Debian based systems ---------------
$ apache2ctl -t -D DUMP_MODULES
OR
$ apache2ctl -M
```
```
--------------- On RHEL based systems ---------------
$ apachectl -t -D DUMP_MODULES
OR
$ httpd -M
$ apachectl -M
```
List Apache Enabled Loaded Modules
```
[root@tecmint httpd]# apachectl -M
Loaded Modules:
core_module (static)
mpm_prefork_module (static)
http_module (static)
so_module (static)
auth_basic_module (shared)
auth_digest_module (shared)
authn_file_module (shared)
authn_alias_module (shared)
authn_anon_module (shared)
authn_dbm_module (shared)
authn_default_module (shared)
authz_host_module (shared)
authz_user_module (shared)
authz_owner_module (shared)
authz_groupfile_module (shared)
authz_dbm_module (shared)
authz_default_module (shared)
ldap_module (shared)
authnz_ldap_module (shared)
include_module (shared)
....
```
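To check for one specific module rather than reading through the whole list, the same output can simply be filtered with grep; the module names below are just examples.
```
--------------- On Debian based systems ---------------
$ apache2ctl -M | grep rewrite
--------------- On RHEL based systems ---------------
$ httpd -M | grep ssl
```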
That's all! In this simple tutorial, we explained how to use the Apache front-end tools to list enabled/loaded Apache modules. Keep in mind that you can get in touch using the feedback form below to send us your questions or comments concerning this guide.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/check-apache-modules-enabled
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-lets-encrypt-ssl-certificate-to-secure-apache-on-rhel-centos/
[2]:http://www.tecmint.com/apache-htaccess-tricks/
[3]:http://www.tecmint.com/protect-apache-using-mod_security-and-mod_evasive-on-rhel-centos-fedora/
[4]:http://www.tecmint.com/monitor-apache-web-server-load-and-page-statistics/
[5]:http://www.tecmint.com/apache-performance-tuning/
[6]:http://www.tecmint.com/su-vs-sudo-and-how-to-configure-sudo-in-linux/

View File

@ -1,57 +0,0 @@
What the Rise of Permissive Open Source Licenses Means
====
Why restrictive licenses such as the GNU GPL are increasingly falling out of favor.
"If you use any open source software, the rest of your software has to be open source too." That is what Microsoft CEO Steve Ballmer said back in 2001, and although he was wrong, the statement spread FUD (fear, uncertainty and doubt) about free software. Presumably that was his intent.
This FUD about open source software mainly concerns open source licensing. There are many different licenses today, some more restrictive (some say "more protective") than others. Restrictive licenses such as the GNU General Public License (GPL) use the concept of copyleft. Copyleft grants people the right to freely distribute copies and modified versions of a piece of software, as long as derivative works grant the same rights. Open source projects such as bash and GIMP use the GPL v3. There is also the Affero GPL, which provides copyleft licensing for software served over a network (such as web services).
This means that if you use code under such a license and then add your own proprietary code to it, in some circumstances the whole body of code, your code included, becomes subject to the restrictive open source license. This is probably the kind of license Ballmer had in mind.
Permissive licenses are different. The MIT license, for instance, allows anyone to use the open source code as they wish, including modifying and selling it, as long as attribution is preserved and the developers are not held liable. Another popular permissive open source license, the Apache License 2.0, also provides users a grant of patent rights from contributors. JQuery, .NET Core and Rails use the MIT license, while software using the Apache License 2.0 includes Android, Apache and Swift.
Both license types are ultimately meant to make software more useful. Restrictive licenses promote the open source ideals of participation and sharing, so that everyone gets the most benefit from the software. Permissive licenses ensure people can get the most benefit from the software by allowing them to use it however they like, even if that means they take the code, modify it, keep it to themselves, or even sell it as proprietary software without giving anything back.
Data from Black Duck Software, which manages open source licenses, shows that the most-used open source license last year was the restrictive GPL 2.0, with roughly a 25 percent market share. The permissive MIT and Apache 2.0 licenses came next, with 18 and 16 percent respectively, followed by GPL 3.0 with about 10 percent. That puts restrictive licenses at 35 percent and permissive ones at 34 percent, almost a tie.
But this snapshot misses the trend. Black Duck's data shows that in the six years from 2009 to 2015, the MIT license's market share rose by 15.7 percent and Apache's by 12.4 percent, while the share of GPL v2 and v3 dropped by a staggering 21.4 percent. In other words, a large amount of market share moved from restrictive to permissive licenses during that period.
And the trend is continuing. Black Duck's [latest figures][1] show MIT now at 26 percent, GPL v2 at 21 percent, Apache 2 at 16 percent, and GPL v3 at 9 percent. That is 30 percent restrictive against 42 percent permissive, a major shift from the 35 percent restrictive and 34 percent permissive of the previous year. A [study][2] of license usage on GitHub confirms the shift: it shows MIT as the overwhelmingly most popular license at 45 percent, compared with 13 percent for GPL v2 and 11 percent for Apache.
![](http://images.techhive.com/images/article/2016/09/open-source-licenses.jpg-100682571-large.idge.jpeg)
### Driving the trend
What is behind this big shift from restrictive to permissive licenses? Are companies afraid that, as Ballmer claimed, using restrictively licensed software means losing control of their proprietary software? In fact, that may well be the case. Google, for example, [has banned Affero GPL software][3].
Jim Farmer, chairman of [Instructional Media + Magic][4], a developer of open source technology for education, believes many companies avoid restrictive licenses to steer clear of legal problems. "The problem is complexity. The more complex a license, the more likely somebody will end up in court over some action. Greater complexity is more likely to bring trouble," he says.
He adds that this fear of restrictive licenses is driven by lawyers, many of whom advise their clients to use software under the MIT or Apache 2.0 licenses and explicitly advise against using Affero-licensed software.
This affects software developers, he says, because if companies shun restrictively licensed software, developers who want their software to be used have more incentive to put new software under permissive licenses.
But Greg Soper, CEO of SalesAgility, the company behind the open source SuiteCRM, believes the shift toward permissive licenses is also driven by some developers. "Look at an application like Rocket.Chat. The developers could have chosen the GPL 2.0 or the Affero license, but they chose a permissive one," he says. "That gives the application the best possible chance, because a proprietary vendor can use it without harming its own product and without having to open source its product. So if developers want third-party applications to use their application, they have a reason to choose a permissive license."
Soper points out that restrictive licenses are designed to help open source projects by preventing developers from taking someone else's code, modifying it, and not contributing the result back to the community. "The Affero license is important to our product, because if someone forked it, made it better than ours, and did not contribute the code back, it would kill our product," he says. "For Rocket.Chat it is different, because if it used the Affero license it would contaminate a company's IP, so companies would not use it. Different licenses have different use cases."
Michael Meeks, an open source developer who has worked on Gnome and OpenOffice and now works on LibreOffice, agrees with Jim Farmer that many companies do choose permissively licensed software out of legal concerns. "Copyleft licenses carry risks, but they also bring huge benefits. Unfortunately, people listen to lawyers, and lawyers only talk about risk; they never tell you that some things are safe."
Fifteen years have passed since Ballmer made that incorrect statement, but the FUD it generated still has an effect, even if the shift from restrictive to permissive licenses is not what he intended.
--------------------------------------------------------------------------------
via: http://www.cio.com/article/3120235/open-source-tools/what-the-rise-of-permissive-open-source-licenses-means.html
作者:[Paul Rubens ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.cio.com/author/Paul-Rubens/
[1]: https://www.blackducksoftware.com/top-open-source-licenses
[2]: https://github.com/blog/1964-open-source-license-usage-on-github-com
[3]: http://www.theregister.co.uk/2011/03/31/google_on_open_source_licenses/
[4]: http://immagic.com/

View File

@ -1,51 +1,52 @@
# aria2 (Command Line Downloader) command examples
[aria2][4] is a free, open source, lightweight multi-protocol and multi-source command-line download utility. It supports HTTP/HTTPS, FTP, SFTP, BitTorrent and Metalink. aria2 can be manipulated via built-in JSON-RPC and XML-RPC interfaces. aria2 automatically validates chunks of data while downloading a file. It can download a file from multiple sources/protocols and tries to utilize your maximum download bandwidth. Most Linux distributions include aria2 by default, so we can install it easily from the official repository. Some GUI download managers, such as [uget][3], use aria2 as a plugin to improve download speed.
#### Aria2 Features
* HTTP/HTTPS GET support
* HTTP proxy support
* HTTP BASIC authentication support
* HTTP proxy authentication support
* FTP support (active, passive mode)
* FTP through HTTP proxy (GET command or tunneling)
* Segmented download
* Cookie support
* It can run as a daemon process.
* BitTorrent protocol support with fast extension.
* Selective download in multi-file torrents
* Metalink version 3.0 support (HTTP/FTP/BitTorrent).
* Limiting download/upload speed
#### 1) Install aria2 on Linux
We can easily install the aria2 command-line downloader on all Linux distributions such as Debian, Ubuntu, Mint, RHEL, CentOS, Fedora, SUSE, openSUSE, Arch Linux, Manjaro, Mageia, etc. Just fire the command below to install it. For CentOS and RHEL systems we need to enable the [uget][2] or [RPMForge][1] repository.
```
[For Debian, Ubuntu & Mint]
$ sudo apt-get install aria2
[For CentOS, RHEL, Fedora 21 and older Systems]
# yum install aria2
[Fedora 22 and later systems]
# dnf install aria2
[For SUSE & openSUSE]
# zypper install aria2
[Mageia]
# urpmi aria2
[For Arch Linux & Manjaro]
$ sudo pacman -S aria2
```
#### 2) Download Single File
The command below will download the file from the given URL and store it in the current directory. While downloading the file, we can see its date, time, download speed and download progress.
```
# aria2c https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
@ -62,9 +63,9 @@ Status Legend:
```
#### 3) Save the File with a Different Name
We can save the file under a different name when initiating the download, using the -o (lowercase) option. Here we are going to save the file as owncloud.zip.
```
# aria2c -o owncloud.zip https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
@ -81,9 +82,9 @@ Status Legend:
```
#### 4) Limit download speed
By default aria2 utilizes the full bandwidth for downloading a file, and we can't use anything else on the server before the download completes (which will affect other services accessing that bandwidth). So it is better to use the --max-download-limit option to avoid further issues while downloading big files.
```
# aria2c --max-download-limit=500k https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
@ -100,9 +101,9 @@ Status Legend:
```
#### 5) Download Multiple Files
The command below will download more than one file from the given locations and store them in the current directory. While downloading, we can see the date, time, download speed and progress of each file.
```
# aria2c -Z https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2 ftp://ftp.gnu.org/gnu/wget/wget-1.17.tar.gz
@ -122,11 +123,10 @@ Status Legend:
```
#### 6) Resume an Incomplete Download
Whenever you are going to download a big file (e.g. an ISO image), I advise you to use the -c option, which helps resume an existing incomplete download after any network connectivity issue or system problem, and complete it as usual. Otherwise, when you download it again, aria2 will initiate a fresh download and store it under a different file name (appending .1 to the filename automatically). Note: if the download is interrupted, aria2 saves the file with an .aria2 extension.
```
# aria2c -c https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
[#db0b08 8.2MiB/21MiB(38%) CN:1 DL:3.1MiB ETA:4s]^C
@ -142,7 +142,7 @@ db0b08|INPR| 3.3MiB/s|/opt/owncloud-9.0.0.tar.bz2
Status Legend:
(INPR):download in-progress.
aria2 will resume the download if the transfer is restarted.
# aria2c -c https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
[#873d08 21MiB/21MiB(98%) CN:1 DL:2.7MiB]
@ -158,9 +158,9 @@ Status Legend:
```
#### 7) Get Input from a File
aria2 can also read a list of input URLs from a file and start downloading them. We need to create a file storing each URL on a separate line. Add the -i option to the aria2c command to perform this action.
```
# aria2c -i test-aria2.txt
@ -180,9 +180,9 @@ Status Legend:
```
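For example, a minimal input file might look like this; the URLs are placeholders for illustration.
```
# cat test-aria2.txt
https://example.com/file1.iso
https://example.com/file2.tar.gz
```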
#### 8) Download using 2 connections per host
This controls the maximum number of connections to one server for each download. By default, aria2 establishes one connection to each host. We can establish more than one connection per host to speed up the download by adding the -x2 option (2 means two connections) to the aria2c command.
```
# aria2c -x2 https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
@ -199,9 +199,9 @@ Status Legend:
```
#### 9) Download Torrent Files
We can directly download a torrent file using the aria2c command.
```
# aria2c https://torcache.net/torrent/C86F4E743253E0EBF3090CCFFCC9B56FA38451A3.torrent?title=[kat.cr]irudhi.suttru.2015.official.teaser.full.hd.1080p.pathi.team.sr
@ -221,27 +221,27 @@ Status Legend:
```
#### 10) Download BitTorrent Magnet URI
We can also download a torrent directly through its BitTorrent magnet URI using the aria2c command.
```
# aria2c 'magnet:?xt=urn:btih:248D0A1CD08284299DE78D5C1ED359BB46717D8C'
```
#### 11) Download a Metalink File
We can also directly download a Metalink file using the aria2c command.
```
# aria2c https://curl.haxx.se/metalink.cgi?curl=tar.bz2
```
#### 12) Download a file from password protected site
#### 12) 从密码保护的网站下载一个文件
Alternatively, we can download a file from a password-protected site. The command below will do so.
```
# aria2c --http-user=xxx --http-password=xxx https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
@ -250,9 +250,9 @@ Alternatively we can download a file from password protected site. The below com
```
#### 13) Read more about aria2
If you want to know about more available options, you can check the details in your terminal itself by running the commands below.
```
# man aria2c
@ -261,7 +261,7 @@ or
```
Enjoy… :)
--------------------------------------------------------------------------------
@ -269,7 +269,7 @@ via: http://www.2daygeek.com/aria2-command-line-download-utility-tool/
作者:[MAGESH MARUTHAMUTHU][a]
译者:[译者ID](https://github.com/译者ID)
译者:[yangmingming](https://github.com/yangmingming)
校对:[校对者ID](https://github.com/校对者ID)

View File

@ -21,15 +21,15 @@ Testing of the Windows Subsystem for Linux wrapped up last week with the release of the upgrade
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=186c4d0&c=a8c914bf9b64cf67abc65e319f8e71c7951fb1aa&p=0)
First up was the SQLite embedded database benchmark. The out-of-the-box Ubuntu/Bash on Windows performance was quite slow, but when switching that 14.04 environment to 16.04 LTS, the performance was much faster. However, for this disk-heavy workload the native Ubuntu Linux installations were almost twice as fast as relying upon the Windows Subsystem for Linux.
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=fa40825&c=0912dc3f6d6a9f36da09fdd4c0cf4e330fa40f90&p=0)
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=8419652&c=9b9f6b0822ed5b9dc2977a7f2faf499fce4fba23&p=0)
The CompileBench test profile, as an additional disk-focused workload, shows that it is this particular subsystem that really strains the Ubuntu performance atop Windows 10, being up to multiple times slower.
Next up were some basic system memory speed tests with Stream.
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=9560e6f&c=ebbc6937fa8daf0540e0df353432a29f938cf7ed&p=0)
@ -37,29 +37,29 @@ Next up were some basic system memory speed tests with Stream.
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=5a2e9d2&c=d37eee4c9394fa8104e7e49e26c964af70ec326b&p=0)
Strangely, the Stream memory benchmarks show better performance with Ubuntu on Windows than Ubuntu itself! This happened in both the 14.04 and 16.04 based environments: the Windows results came out faster.
Next are more of the CPU-heavy tests.
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=ee1f01f&c=3e9a67230e0e081b99ee3237e702c0b40ee73d60&p=0)
With the Dolfyn scientific test, the performance between Ubuntu on Windows and Ubuntu installed bare metal was actually quite close. With Ubuntu 16.04 the performance is slower on both platforms due to the newer GCC compiler regressing the performance.
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=dd69257&c=0e31babb8b96be1ae38ea739fbb1346bf9bc4b07&p=0)
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=a02416b&c=c8abb70dee982dd494fb1891bd9dc154fa7a7f47&p=0)
Fhourstones and John The Ripper show that the performance of Ubuntu running on Windows via the Windows Subsystem for Linux can be incredibly close to the bare metal Ubuntu Linux performance!
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=3140e3c&c=f4bf6330a7d58b5939c61cbd91fe5db379c1592a&p=0)
The x264 results were another strange case similar to Stream where the best performance was actually with Ubuntu on Windows 10 via WSL!
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=ad12f0b&c=f50c829c97d731f6926c5a874cf83f8fc5440067&p=0)
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=8b7a7ca&c=3de3e8537d08665e8a41380b6b2298c09f408fa0&p=0)
The timed compilation benchmarks were heavily in favor of the bare metal Ubuntu Linux installations outside of Windows. This is likely because these large program compilations require plenty of disk reads, and the earlier disk-focused benchmarks showed that this is the big area where the Windows Subsystem for Linux is slow.
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=25892d8&c=f6cd3fa4a3497e3d2663106e0bf3fcd227f9b9a3&p=0)
@ -67,11 +67,11 @@ The timed compilation benchmarks were heavily in favor of the bare metal Ubuntu
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=4899bb2&c=80df0e1e749910ebd84b0d6c2688316e5cfb8cda&p=0)
Many of our other common open-source benchmarks show that for the strictly CPU-focused tests, the Windows Subsystem for Linux comes close to -- or even matches -- the native Ubuntu Linux performance running on the actual hardware.
These latest Windows Subsystem for Linux results are actually rather impressive. The big letdown is just the continued slow disk/file-system performance, but for CPU-bound workloads the results are very compelling. There are also the rare cases, with x264 and Stream, where the performance of the Ubuntu user space on Windows appears to clearly outperform that of Ubuntu Linux running on the hardware by itself.
Overall the experience was actually quite pleasant, and I haven't run into any other bugs or annoyances while running Ubuntu/Bash on Windows. If you're interested in more Windows vs. Linux benchmarks, please consider voicing yourself as a Phoronix Premium subscriber.
--------------------------------------------------------------------------------
via: https://www.phoronix.com/scan.php?page=article&item=windows10-anv-wsl&num=1