Merge pull request #21 from LCTT/master

更新 20160513
This commit is contained in:
Chang Liu 2016-05-13 22:59:32 +08:00
commit 57320700b4
190 changed files with 10701 additions and 8558 deletions

View File

@@ -1,9 +1,9 @@
简介
-------------------------------
LCTT是“Linux中国”[https://linux.cn/](https://linux.cn/)的翻译组负责从国外优秀媒体翻译Linux相关的技术、资讯、杂文等内容。
LCTT已经拥有几百名活跃成员并欢迎更多的Linux志愿者加入我们的团队。
![logo](http://img.linux.net.cn/static/image/common/lctt_logo.png)
@@ -52,13 +52,15 @@ LCTT的组成
* 2015/04/19 发起 LFS-BOOK-7.7-systemd 项目。
* 2015/06/09 提升ictlyh和dongfengweixiao为Core Translators成员。
* 2015/11/10 提升strugglingyouth、FSSlc、Vic020、alim0x为Core Translators成员。
* 2016/05/09 提升PurlingNayuki为校对。
活跃成员
-------------------------------
目前 TP 活跃成员有:
- Leader @wxy,
- Source @oska874,
- Proofreader @PurlingNayuki,
- CORE @geekpi,
- CORE @GOLinux,
- CORE @ictlyh,
@@ -71,6 +73,7 @@ LCTT的组成
- CORE @Vic020,
- CORE @dongfengweixiao,
- CORE @alim0x,
- Senior @DeadFire,
- Senior @reinoir,
- Senior @tinyeyeser,
- Senior @vito-L,
@@ -80,41 +83,42 @@ LCTT的组成
- ZTinoZ,
- theo-l,
- luoxcat,
- martin2011qi,
- wi-cuckoo,
- disylee,
- haimingfg,
- KayGuoWhu,
- wwy-hust,
- felixonmars,
- su-kaiyao,
- ivo-wang,
- GHLandy,
- cvsher,
- wyangsun,
- DongShuaike,
- flsf,
- SPccman,
- Stevearzh
- mr-ping,
- Linchenguang,
- Linux-pdz,
- 2q1w2007,
- H-mudcup,
- cposture,
- xiqingongzi,
- goreliu,
- NearTan,
- TxmszLou,
- ZhouJ-sh,
- wangjiezhe,
- icybreaker,
- shipsw,
- johnhoow,
- soooogreen,
- linuhap,
- blueabysm,
- boredivan,
- name1e5s,
- yechunxiao19,
- l3b2w1,
- XLCYun,
@@ -122,43 +126,55 @@ LCTT的组成
- tenght,
- coloka,
- luoyutiantang,
- sonofelice,
- jiajia9linuxer,
- scusjs,
- tnuoccalanosrep,
- woodboow,
- 1w2b3l,
- JonathanKang,
- crowner,
- mtunique,
- dingdongnigetou,
- CNprober,
- hyaocuk,
- szrlee,
- KnightJoker,
- Xuanwo,
- nd0104,
- jerryling315,
- xiaoyu33,
- guodongxiaren,
- ynmlml,
- kylepeng93,
- ggaaooppeenngg,
- Ricky-Gong,
- zky001,
- Flowsnow,
- lfzark,
- 213edu,
- Tanete,
- liuaiping,
- bestony,
- Timeszoro,
- rogetfan,
- itsang,
- JeffDing,
- Yuking-net,
- MikeCoder,
- zhangboyue,
- liaoishere,
- yupmoon,
- Medusar,
- zzlyzq,
- yujianxuechuan,
- ailurus1991,
- tomatoKiller,
- stduolc,
- shaohaolin,
- FineFan,
- kingname,
- jasminepeng,
- CHINAANSHE,
(按提交行数排名前百)
@@ -173,7 +189,7 @@ LFS 项目活跃成员有:
- @KevinSJ
- @Yuking-net
更新于2016/05/09
谢谢大家的支持!

View File

@@ -4,27 +4,27 @@ Linux上的游戏所有你需要知道的
**我能在 Linux 上玩游戏吗?**
这是打算[投奔 Linux 阵营][1]的人最经常问的问题之一。毕竟,在 Linux 上面玩游戏经常被认为有点难以实现。事实上,一些人甚至考虑他们能不能在 Linux 上看电影或者听音乐。考虑到这些,关于 Linux 的平台的游戏的问题是很现实的。
在本文中,我将解答大多数 Linux 新手关于在 Linux 打游戏的问题。例如 Linux 下能不能玩游戏,如果能的话,在哪里**下载游戏**或者如何获取有关游戏的信息。
但是在此之前,我需要说明一下。我不是一个 PC 玩家,或者说我不认为自己是一个 Linux 桌面游戏玩家。我更喜欢在 PS4 上玩游戏,并且我不关心 PC 上的游戏,甚至也不关心手机上的游戏(我没有给我的任何一个朋友安利糖果传奇)。这也就是你很少在 It's FOSS 上看见关于 [Linux 上的游戏][2]的文章的原因。
所以我为什么要提到这个主题?
因为别人问过我几次有关 Linux 上的游戏的问题,并且我想要写出来一个能解答这些问题的 Linux 游戏指南。注意,在这里我不只是讨论在 Ubuntu 上玩游戏。我讨论的是在所有的 Linux 上的游戏。
### 我能在 Linux 上玩游戏吗? ###
是,但不是完全是。
“是”是指你能在Linux上玩游戏“不完全是”是指你不能在 Linux 上玩‘所有的游戏’。
感到迷惑了吗?不必这样。我的意思是你能在 Linux 上玩很多流行的游戏,比如[反恐精英以及地铁:最后的曙光][3]等。但是你可能不能玩到所有在 Windows 上流行的最新游戏,比如[实况足球 2015 ][4]。
在我看来,造成这种情况的原因是 Linux 在桌面系统中仅占不到 2%,这样的占比使得大多数开发者没有开发其游戏的 Linux 版的动力。
这就意味着大多数近年来被提及的比较多的游戏很有可能不能在 Linux 上玩。不要灰心。还有别的方式在 Linux 上玩这些游戏,我们将在下面的章节中讨论这些方法。但是,在此之前,让我们看看在 Linux 上能玩的游戏的种类。
要我说的话,我会把那些游戏分为四类:
@@ -33,7 +33,7 @@ Linux上的游戏所有你需要知道的
3. 浏览器里的游戏
4. 终端里的游戏
让我们从最重要的一类,即 Linux 的原生游戏开始。
---------
@@ -41,15 +41,15 @@ Linux上的游戏所有你需要知道的
原生游戏指的是官方支持 Linux 的游戏。这些游戏有原生的 Linux 客户端并且能像在 Linux 上的其他软件一样不需要附加的步骤就能安装在 Linux 上面(我们将在下一节讨论)。
所以,如你所见,有一些为 Linux 开发的游戏,下一个问题就是在哪能找到这些游戏以及如何安装。我将列出一些让你玩到游戏的渠道。
#### Steam ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Install-Steam-Ubuntu-11.jpeg)
“[Steam][5] 是一个游戏的分发平台。就如同 Kindle 是电子书的分发平台, iTunes 是音乐的分发平台一样, Steam 也具有那样的功能。它提供购买和安装游戏,玩多人游戏以及在它的平台上关注其他游戏的选项。其上的游戏被[ DRM ][6]所保护。”
两年以前,游戏平台 Steam 宣布支持 Linux ,这在当时是一个大新闻。这是 Linux 上玩游戏被严肃对待的一个迹象。尽管这个决定更多地影响了他们自己的基于 Linux 的游戏平台以及一个独立的 Linux 发行版[ Steam OS][7],这仍然是令人欣慰的事情,因为它给 Linux 带来了一大堆游戏。
我已经写了一篇详细的关于安装以及使用 Steam 的文章。如果你想开始使用 Steam 的话,读读那篇文章。
@@ -57,23 +57,23 @@ Linux上的游戏所有你需要知道的
#### GOG.com ####
[GOG.com][9] 是另一个与 Steam 类似的平台。与 Steam 一样,你能在这上面找到数以百计的 Linux 游戏,并购买和安装它们。如果游戏支持好几个平台,你可以在多个操作系统上安装他们。你可以随时游玩使用你的账户购买的游戏,也可以在任何时间下载。
GOG.com 与 Steam 不同的是前者仅提供没有 DRM 保护的游戏以及电影。而且GOG.com 完全是基于网页的,所以你不需要安装类似 Steam 的客户端。你只需要用浏览器下载游戏然后安装到你的系统上。
#### Portable Linux Games ####
[Portable Linux Games][10] 是一个集聚了不少 Linux 游戏的网站。这家网站最特别以及最好的就是你能离线安装这些游戏。
你下载到的文件包含所有的依赖(仅需 Wine 以及 Perl并且他们也是与平台无关的。你所需要的仅仅是下载文件并且双击来启动安装程序。你也可以把文件储存起来以用于将来的安装。如果你网速不够快的话我很推荐你这样做。
#### Game Drift 游戏商店 ####
[Game Drift][11] 是一个只专注于游戏的基于 Ubuntu 的 Linux 发行版。但是如果你不想只为游戏就去安装这个发行版的话,你也可以经常去它的在线游戏商店去看哪个游戏可以在 Linux 上运行并且安装他们。
#### Linux Game Database ####
如其名字所示,[Linux Game Database][12] 是一个收集了很多 Linux 游戏的网站。你能在这里浏览诸多类型的游戏并从游戏开发者的网站下载/安装这些游戏。作为这家网站的会员,你甚至可以为游戏打分。 LGDB 有点像 Linux 游戏界的 IMDB 或者 IGN。
#### Penguspy ####
@@ -81,7 +81,7 @@ GOG.com 与 Steam 不同的是前者仅提供没有 DRM 保护的游戏以及电
#### 软件源 ####
看看你自己的发行版的软件源。其中可能有一些游戏。如果你用 Ubuntu 的话,它的软件中心里有一个游戏的分类。在一些其他的发行版里也有,比如 Linux Mint 等。
----------
@@ -89,19 +89,19 @@ GOG.com 与 Steam 不同的是前者仅提供没有 DRM 保护的游戏以及电
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Wine-Linux.png)
到现在为止,我们一直在讨论 Linux 的原生游戏。但是并没有很多 Linux 上的原生游戏,或者更准确地说,火的不要不要的游戏大多不支持 Linux但是都支持 Windows PC 。所以,如何在 Linux 上玩 Windows 的游戏?
幸好,由于我们有 Wine 、 PlayOnLinux 和 CrossOver 等工具,我们能在 Linux 上玩不少的 Windows 游戏。
#### Wine ####
Wine 是一个能使 Windows 应用在 Linux 、 BSD 和 OS X 等系统上运行的兼容层。在 Wine 的帮助下,你可以在 Linux 下安装以及使用很多 Windows 下的应用。
[在 Ubuntu 上安装 Wine][14]或者在其他 Linux 上安装 Wine 是很简单的,因为大多数发行版的软件源里都有它。这里也有一个很大的[ Wine 支持的应用的数据库][15]供您浏览。
#### CrossOver ####
[CrossOver][16] 是 Wine 的增强版,它给 Wine 提供了专业的技术上的支持。但是与 Wine 不同, CrossOver 不是免费的。你需要购买许可。好消息是它会把更新也贡献到 Wine 的开发者那里,并且事实上加速了 Wine 的开发,使得 Wine 能支持更多的 Windows 上的游戏和应用。如果你可以接受每年支付 48 美元,你可以购买 CrossOver 并得到他们提供的技术支持。
### PlayOnLinux ###
@@ -131,21 +131,21 @@ PlayOnLinux 也基于 Wine 但是执行程序的方式略有不同。它有着
当你了解了不少的在 Linux 上你可以玩到的游戏以及你如何使用他们,下一个问题就是如何保持游戏的版本是最新的。对于这件事,我建议你看看下面的博客,这些博客能告诉你 Linux 游戏世界的最新消息:
- [Gaming on Linux][23]:我认为我把它叫做 Linux 游戏专业门户并没有错误。在这你可以得到关于 Linux 的游戏的最新的传言以及新闻。它经常更新,还有由 Linux 游戏爱好者组成的优秀社区。
- [Free Gamer][24]:一个专注于免费开源的游戏的博客。
- [Linux Game News][25]:一个提供很多的 Linux 游戏的升级的 Tumblr 博客。
#### 还有别的要说的吗? ####
我认为让你知道如何开始在 Linux 上的游戏人生是一个好事。如果你仍然不能被说服,我推荐你做个[双系统][26],把 Linux 作为你的主要桌面系统,当你想玩游戏时,重启到 Windows。这是一个对游戏妥协的解决办法。
现在,这里是你说出你自己的想法的时候了。你在 Linux 上玩游戏吗?你最喜欢什么游戏?你关注了哪些游戏博客?
投票项目:
你怎样在 Linux 上玩游戏?
- 我玩原生 Linux 游戏,也用 Wine 以及 PlayOnLinux 运行 Windows 游戏
- 我喜欢网页游戏
- 我喜欢终端游戏
- 我只玩原生 Linux 游戏
@@ -167,7 +167,7 @@ via: http://itsfoss.com/linux-gaming-guide/
作者:[Abhishek][a]
译者:[name1e5s](https://github.com/name1e5s)
校对:[PurlingNayuki](https://github.com/PurlingNayuki)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -4,8 +4,7 @@ Linux 内核里的数据结构——双向链表
双向链表
--------------------------------------------------------------------------------
Linux 内核中自己实现了双向链表,可以在 [include/linux/list.h](https://github.com/torvalds/linux/blob/master/include/linux/list.h) 找到定义。我们将会首先从双向链表数据结构开始介绍**内核里的数据结构**。为什么?因为它在内核里使用的很广泛,你只需要在 [free-electrons.com](http://lxr.free-electrons.com/ident?i=list_head) 检索一下就知道了。
首先让我们看一下在 [include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h) 里的主结构体:
@@ -25,7 +24,7 @@ struct GList {
};
```
通常来说一个链表结构会包含一个指向某个项目的指针。但是 Linux 内核中的链表实现并没有这样做。所以问题来了:**链表在哪里保存数据呢?**实际上,内核里实现的链表是**侵入式链表Intrusive list**。侵入式链表并不在节点内保存数据:它的节点仅仅包含指向前后节点的指针,节点本身则内嵌在保存数据的结构体中,数据就是这样附加到链表上的。这就使得这个数据结构是通用的,使用起来就不需要考虑节点数据的类型了。
比如:
@@ -36,7 +35,7 @@ struct nmi_desc {
};
```
让我们看几个例子来理解一下在内核里是如何使用 `list_head` 的。如上所述,在内核里有很多很多不同的地方都用到了链表。我们来看一个在杂项字符驱动里面的使用的例子。在 [drivers/char/misc.c](https://github.com/torvalds/linux/blob/master/drivers/char/misc.c) 里的杂项字符驱动 API 被用来编写处理小型硬件或虚拟设备的小驱动。这些驱动共享相同的主设备号:
```C
#define MISC_MAJOR 10
@@ -84,7 +83,7 @@ struct miscdevice
};
```
可以看到结构体`miscdevice`的第四个变量`list` 是所有注册过的设备的链表。在源代码文件的开始可以看到这个链表的定义:
```C
static LIST_HEAD(misc_list);
@@ -103,7 +102,7 @@ static LIST_HEAD(misc_list);
#define LIST_HEAD_INIT(name) { &(name), &(name) }
```
现在来看看注册杂项设备的函数`misc_register`。它在开始就用函数 `INIT_LIST_HEAD` 初始化了`miscdevice->list`。
```C
INIT_LIST_HEAD(&misc->list);
@@ -119,13 +118,13 @@ static inline void INIT_LIST_HEAD(struct list_head *list)
}
```
接下来,在函数`device_create` 创建了设备后我们就用下面的语句将设备添加到设备链表:
```
list_add(&misc->list, &misc_list);
```
内核文件`list.h` 提供了向链表添加新项的 API 接口。我们来看看它的实现:
```C
@@ -138,8 +137,8 @@ static inline void list_add(struct list_head *new, struct list_head *head)
实际上就是使用3个指定的参数来调用了内部函数`__list_add`
* new - 新项。
* head - 新项将会插在`head`的后面。
* head->next - 插入前,`head` 后面的项。
`__list_add`的实现非常简单:
@@ -155,9 +154,9 @@ static inline void __list_add(struct list_head *new,
}
```
这里,我们在`prev`和`next` 之间添加一个新项。所以我们开始时用宏`LIST_HEAD_INIT`定义的`misc` 链表会包含指向`miscdevice->list` 的向前指针和向后指针。
这里还有一个问题:如何得到链表项的内容呢?这里有一个特殊的宏:
```C
#define list_entry(ptr, type, member) \
@@ -166,7 +165,7 @@ static inline void __list_add(struct list_head *new,
使用了三个参数:
* ptr - 指向结构 `list_head` 的指针;
* type - 结构体类型;
* member - 在结构体内类型为`list_head` 的变量的名字;
@@ -205,9 +204,9 @@ int main() {
}
```
最终会打印`2`。
下一点就是`typeof`,它也很简单。就如你从名字所理解的,它仅仅返回了给定变量的类型。当我第一次看到宏`container_of`的实现时,让我觉得最奇怪的就是表达式`((type *)0)`中的`0`。实际上这个表达式借助地址为 `0` 的虚拟结构体,巧妙地计算出了结构体中特定成员的偏移:这里的`0`正是结构体起始处的零偏移。让我们看一个简单的例子:
```C
#include <stdio.h>
@@ -226,33 +225,35 @@ int main() {
结果显示`0x5`。
下一个宏`offsetof`会计算从结构体起始地址到某个给定结构字段的偏移。它的实现和上面类似:
```C
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
```
现在我们来总结一下宏`container_of`。只需给定结构体中`list_head`类型字段的地址、它的名字和结构体容器的类型,它就可以返回结构体的起始地址。在宏定义的第一行,声明了一个与成员`member`类型相同的指针`__mptr`,并且把`ptr` 的值赋给它。现在`ptr` 和`__mptr` 指向了同一个地址。从技术上讲我们并不需要这一行,但是它可以方便地进行类型检查。第一行保证了特定的结构体(参数`type`)包含成员变量`member`。第二行代码会用宏`offsetof`计算成员变量相对于结构体起始地址的偏移,然后从成员变量的地址减去这个偏移,最后就得到了结构体的起始地址。
当然了`list_add` 和 `list_entry`不是`<linux/list.h>`提供的唯一功能。双向链表的实现还提供了如下API 当然了`list_add` 和 `list_entry`不是`<linux/list.h>`提供的唯一功能。双向链表的实现还提供了如下API
* list_add * list\_add
* list_add_tail * list\_add\_tail
* list_del * list\_del
* list_replace * list\_replace
* list_move * list\_move
* list_is_last * list\_is\_last
* list_empty * list\_empty
* list_cut_position * list\_cut\_position
* list_splice * list\_splice
* list_for_each * list\_for\_each
* list_for_each_entry * list\_for\_each\_entry
等等很多其它API。 等等很多其它API。
----
via: https://github.com/0xAX/linux-insides/blob/master/DataStructures/dlist.md
译者:[Ezio](https://github.com/oska874)
校对:[Mr小眼儿](https://github.com/tinyeyeser)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -1,6 +1,6 @@
给系统管理员的15条实用 Linux/Unix 磁带管理命令
================================================================================
磁带设备应只用于定期的文件归档或将数据从一台服务器传送至另一台。通常磁带设备与 Unix 机器连接,用 mt 或 mtx 控制。强烈建议您将所有的数据同时备份到磁盘(也许是云中)和磁带设备中。在本教程中你将会了解到:
- 磁带设备名
- 管理磁带驱动器的基本命令
@@ -8,12 +8,13 @@
### 为什么备份? ###
一个备份计划对定期备份文件来说很有必要,如果你宁愿选择不备份,那么丢失重要数据的风险会大大增加。有了备份,你就有了从磁盘故障中恢复的能力。备份还可以帮助你抵御:
- 意外的文件删除
- 文件或文件系统损坏
- 服务器完全毁坏,包括由于火灾或其他问题导致的同盘备份毁坏
- 硬盘或 SSD 崩溃
- 病毒或勒索软件破坏或删除文件
你可以使用磁带归档备份整个服务器并将其离线存储。
@@ -21,15 +22,15 @@
![Fig.01: Tape file marks](http://s0.cyberciti.org/uploads/cms/2015/10/tape-format.jpg)
*图01磁带文件标记*
每个磁带设备能存储多个备份文件。磁带备份文件通过 cpio、tar、dd 等命令创建。同时,磁带设备可以由多种程序打开、写入数据及关闭。你可以存储若干备份(磁带文件)到一个物理磁带上。在每个磁带文件之间有个“磁带文件标记”。这用来指示一个物理磁带上磁带文件的结尾以及另一个文件的开始。你需要使用 mt 命令来定位磁带(快进、倒带和标记)。
#### 磁带上的数据是如何存储的 ####
![Fig.02: How data is stored on a tape](http://s0.cyberciti.org/uploads/cms/2015/10/how-data-is-stored-on-a-tape.jpg)
*图02磁带上的数据是如何存储的*
所有的数据使用 tar 以连续磁带存储格式连续地存储。第一个磁带归档会从磁带的物理开始端开始存储tar #0)。接下来的就是 tar #1,以此类推。
@@ -59,22 +60,22 @@
输入下列命令:
### Linux更多信息参阅 man ###
lsscsi
lsscsi -g
### IBM AIX ###
lsdev -Cc tape
lsdev -Cc adsm
lscfg -vl rmt*
### Solaris Unix ###
cfgadm a
cfgadm -al
luxadm probe
iostat -En
### HP-UX Unix ###
ioscan Cf
ioscan -funC tape
ioscan -fnC tape
@@ -85,9 +86,9 @@
![Fig.03: Installed tape devices on Linux server](http://s0.cyberciti.org/uploads/cms/2015/10/linux-find-tape-devices-command.jpg)
*图03Linux 服务器上已安装的磁带设备*
### mt 命令示例 ###
在 Linux 和类 Unix 系统上mt 命令用来控制磁带驱动器的操作,比如查看状态或查找磁带上的文件或写入磁带控制标记。下列大多数命令需要作为 root 用户执行。语法如下:
@@ -97,7 +98,7 @@
你可以设置 TAPE shell 变量。这是磁带驱动器的路径名。在 FreeBSD 上默认的(如果变量没有设置,而不是 null是 /dev/nsa0。可以通过 mt 命令的 -f 参数传递变量覆盖它,就像下面解释的那样。
### 添加到你的 shell 配置文件 ###
TAPE=/dev/st1 #Linux
TAPE=/dev/rmt/2 #Unix
TAPE=/dev/nsa3 #FreeBSD
@@ -105,13 +106,13 @@
### 1显示磁带/驱动器状态 ###
mt status ### Use default
mt -f /dev/rmt/0 status ### Unix
mt -f /dev/st0 status ### Linux
mt -f /dev/nsa0 status ### FreeBSD
mt -f /dev/rmt/1 status ### Unix unity 1 也就是 tape device no. 1
你可以像下面一样使用 shell 循环语句遍历一个系统并定位其所有的磁带驱动器:
for d in 0 1 2 3 4 5
do
@@ -133,7 +134,7 @@
mt -f /dev/mt/0 off
mt -f /dev/st0 eject
### 4擦除磁带倒带支持的情况下卸载磁带) ###
mt erase
mt -f /dev/st0 erase #Linux
@@ -179,7 +180,7 @@
bsfm 后退指定的文件标记数目。磁带定位在下一个文件的第一块。
asf 磁带定位在指定文件标记数目的开始位置。定位通过先倒带,再前进指定的文件标记数目来实现。
fsr 前进指定的记录数。
@@ -207,7 +208,7 @@
mt -f /dev/st0 rewind; dd if=/dev/st0 of=-
### tar 格式 ###
tar tvf {DEVICE} {Directory-FileName}
tar tvf /dev/st0
tar tvf /dev/st0 desktop
@@ -215,40 +216,40 @@
### 12使用 dump 或 ufsdump 备份分区 ###
### Unix 备份 c0t0d0s2 分区 ###
ufsdump 0uf /dev/rmt/0 /dev/rdsk/c0t0d0s2
### Linux 备份 /home 分区 ###
dump 0uf /dev/nst0 /dev/sda5
dump 0uf /dev/nst0 /home
### FreeBSD 备份 /usr 分区 ###
dump -0aL -b64 -f /dev/nsa0 /usr
### 12使用 ufsrestore 或 restore 恢复分区 ###
### Unix ###
ufsrestore xf /dev/rmt/0
### Unix 交互式恢复 ###
ufsrestore if /dev/rmt/0
### Linux ###
restore rf /dev/nst0
### 从磁带媒介上的第6个备份交互式恢复 ###
restore isf 6 /dev/nst0
### FreeBSD 恢复 ufsdump 格式 ###
restore -i -f /dev/nsa0
### 13从磁带开头开始写入见图02 ###
### 这会覆盖磁带上的所有数据 ###
mt -f /dev/st1 rewind
### 备份 home ###
tar cvf /dev/st1 /home
### 离线并卸载磁带 ###
mt -f /dev/st0 offline
从磁带开头开始恢复:
@@ -259,22 +260,22 @@
### 14从最后一个 tar 后开始写入见图02 ###
### 这会保留之前写入的数据 ###
mt -f /dev/st1 eom
### 备份 home ###
tar cvf /dev/st1 /home
### 卸载 ###
mt -f /dev/st0 offline
### 15从 tar number 2 后开始写入见图02 ###
### 在 tar number 2 之后写入(应该是 2+1###
mt -f /dev/st0 asf 3
tar cvf /dev/st0 /usr
### asf 等效于 fsf ###
mt -f /dev/st0 rewind
mt -f /dev/st0 fsf 2
@@ -413,7 +414,7 @@ via: http://www.cyberciti.biz/hardware/unix-linux-basic-tape-management-commands
作者Vivek Gite
译者:[alim0x](https://github.com/alim0x)
校对:[Mr小眼儿](https://github.com/tinyeyeser)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -0,0 +1,111 @@
安装 openSUSE Leap 42.1 之后要做的 8 件事
================================================================================
![Credit: Metropolitan Transportation/Flicrk](http://images.techhive.com/images/article/2015/11/things-to-do-100626947-primary.idge.jpg)
*致谢:[Metropolitan Transportation/Flicrk][1]*
> 如果你已经在你的电脑上安装了 openSUSE这就是你接下来要做的。
[openSUSE Leap 确实是个巨大的飞跃][2],它允许用户运行一个和 SUSE Linux 企业版拥有同样基因的发行版。和其它系统一样,为了实现最佳的使用效果,在使用它之前需要做些优化设置。
下面是一些我在我的电脑上安装 openSUSE Leap 之后做的一些事情(不适用于服务器)。这里面没有强制性的设置,基本安装对你来说也可能足够了。但如果你想获得更好的 openSUSE Leap 体验,那就跟着我往下看吧。
### 1. 添加 Packman 仓库 ###
由于专利和授权等原因openSUSE 和许多 Linux 发行版一样不通过官方仓库repos提供一些软件、解码器以及驱动等。取而代之的是通过第三方或社区仓库来提供。第一个也是最重要的仓库是“Packman”。因为这些仓库不是默认启用的我们需要添加它们。你可以通过 YaSTopenSUSE 的特色之一)或者命令行完成(如下方介绍)。
![o42 yast repo](http://images.techhive.com/images/article/2015/11/o42-yast-repo-100626952-large970.idge.png)
*添加 Packman 仓库。*
使用 YaST打开软件源部分。点击“添加”按钮并选择“社区仓库Community Repositories”。点击“下一步”。一旦仓库列表加载出来了选择 Packman 仓库。点击“确认”,然后点击“信任”导入信任的 GnuPG 密钥。
或者在终端里使用以下命令添加并启用 Packman 仓库:
zypper ar -f -n packman http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_42.1/ packman
仓库添加之后,你就可以使用更多的包了。想安装任意软件或包,打开 YaST 软件管理器,搜索并安装即可。
### 2. 安装 VLC ###
VLC 是媒体播放器里的瑞士军刀,几乎可以播放任何媒体文件。你可以从 YaST 软件管理器 或 software.opensuse.org 安装 VLC。你需要安装两个包vlc 和 vlc-codecs。
如果你用终端,运行以下命令:
sudo zypper install vlc vlc-codecs
### 3. 安装 Handbrake ###
如果你需要转码或转换视频文件格式,[Handbrake 是你的不二之选][3]。Handbrake 就在我们启用的仓库中,所以只需要在 YaST 中搜索并安装它即可。
如果你用终端,运行以下命令:
sudo zypper install handbrake-cli handbrake-gtk
提示VLC 也能转码音频和视频文件。)
### 4. 安装 Chrome ###
openSUSE 的默认浏览器是 Firefox。但是因为 Firefox 不能够播放专有媒体,比如 Netflix我推荐安装 Chrome。这需要额外的工作。首先你需要从谷歌导入信任密钥。打开终端执行“wget”命令下载密钥
wget https://dl.google.com/linux/linux_signing_key.pub
然后导入密钥:
sudo rpm --import linux_signing_key.pub
现在到 [Google Chrome 网站][4] 去,下载 64 位 .rpm 文件。下载完成后执行以下命令安装浏览器:
sudo zypper install /PATH_OF_GOOGLE_CHROME.rpm
### 5. 安装 Nvidia 驱动 ###
即便你使用 Nvidia 或 ATI 显卡openSUSE Leap 也能够开箱即用。但是,如果你需要专有驱动来游戏或其它目的,你可以安装这些驱动,但需要一点额外的工作。
首先你需要添加 Nvidia 源;它的步骤和使用 YaST 添加 Packman 仓库是一样的。唯一的不同是你需要在社区仓库部分选择 Nvidia。添加好了之后到 **软件管理 > 附加** 去,并选择“附加/安装所有匹配的推荐包”。
![o42 nvidia](http://images.techhive.com/images/article/2015/11/o42-nvidia-100626950-large.idge.png)
它会打开一个对话框,显示所有将要安装的包,点击确认后按介绍操作。添加了 Nvidia 源之后你也可以通过命令安装需要的 Nvidia 驱动:
sudo zypper inr
(注:我没使用过 AMD/ATI 显卡,所以这方面我没有经验。)
### 6. 安装媒体解码器 ###
你安装 VLC 之后就不需要安装媒体解码器了,但如果你要使用其它软件来播放媒体的话就需要安装了。一些开发者写了脚本/工具来简化这个过程。打开[这个页面][5]并点击合适的按钮安装完整的包。它会打开 YaST 并自动安装包(当然通常你还需要提供 root 权限密码并信任 GnuPG 密钥)。
### 7. 安装你喜欢的电子邮件客户端 ###
openSUSE 自带 Kmail 或 Evolution这取决于你安装的桌面环境。我用的是 KDE Plasma 自带的 Kmail这个邮件客户端还有许多地方有待改进。我建议可以试试 Thunderbird 或 Evolution。所有主要的邮件客户端都能在官方仓库找到。你还可以看看我[精心挑选的 Linux 最佳邮件客户端][6]。
### 8. 在防火墙允许 Samba 服务 ###
相比于其它发行版openSUSE 默认提供了更加安全的系统。但对新用户来说它也需要一点设置。如果你正在使用 Samba 协议分享文件到本地网络的话,你需要在防火墙允许该服务。
![o42 firewall](http://images.techhive.com/images/article/2015/11/o42-firewall-100626948-large970.idge.png)
*在防火墙设置里允许 Samba 客户端和服务端*
打开 YaST 并搜索 Firewall。在防火墙设置里进入到“允许的服务Allowed Services你会在“要允许的服务Service to allow”下面看到一个下拉列表。选择“Samba 客户端”然后点击“添加”。同样方法添加“Samba 服务器”。都添加了之后,点击“下一步”,然后点击“完成”,现在你就可以通过本地网络从你的 openSUSE 分享文件以及访问其它机器了。
这差不多就是我以我喜欢的方式对我的新 openSUSE 系统做的所有设置了。如果你有任何问题,欢迎在评论区提问。
--------------------------------------------------------------------------------
via: http://www.itworld.com/article/3003865/open-source-tools/8-things-to-do-after-installing-opensuse-leap-421.html
作者:[Swapnil Bhartiya][a]
译者:[alim0x](https://github.com/alim0x)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.itworld.com/author/Swapnil-Bhartiya/
[1]:https://www.flickr.com/photos/mtaphotos/11200079265/
[2]:https://www.linux.com/news/software/applications/865760-opensuse-leap-421-review-the-most-mature-linux-distribution
[3]:https://www.linux.com/learn/tutorials/857788-how-to-convert-videos-in-linux-using-the-command-line
[4]:https://www.google.com/intl/en/chrome/browser/desktop/index.html#brand=CHMB&utm_campaign=en&utm_source=en-ha-na-us-sk&utm_medium=ha
[5]:http://opensuse-community.org/
[6]:http://www.itworld.com/article/2875981/the-5-best-open-source-email-clients-for-linux.html

View File

@ -0,0 +1,53 @@
KDE、GNOME 和 XFCE 桌面比较
================================================================================
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2013/07/300px-Xfce_logo.svg_.png)
这么多年来,很多人一直都在他们的 Linux 桌面上使用 KDE 或者 GNOME 桌面环境。在这两个桌面环境多年不断发展的同时其它的桌面也在持续增加它们的用户规模。举个例子说在轻量级桌面环境中XFCE 一举成为了最受欢迎的桌面环境:相较于缺少优美视觉效果的 LXDE默认配置下的 XFCE 在这方面就可以打败前者。XFCE 提供了 GNOME2 下所能得到的功能特性,而且在一些较老的计算机上,它的轻量级特性还能取得更好的效果。
### 桌面主题定制 ###
用户完成安装之后XFCE 看起来可能会有一点无趣,因为它在视觉上还缺少一些吸引力。但是,请不要误解我的话, XFCE 仍然拥有漂亮的桌面,只是对于大多数刚刚接触 XFCE 桌面环境的人来说,它可能看起来显得平淡无奇。不过好消息是当我们想要给 XFCE 安装新的主题的时候,这会是一个十分轻松的过程,因为你能够快速地找到你喜欢的 XFCE 主题之后你就可以将它解压到一个合适的目录中。从这一点上来说XFCE 自带的一个放在“外观”下的重要的图形界面工具可以帮助用户更加容易地选择中意的主题,这可能是目前在 XFCE 上这方面最好用的工具了。如果用户按照上面的建议去做的话,对于想要尝试使用 XFCE 的任何用户来说都将不存在困难。
在 GNOME 桌面上,用户也可以按照类似上面的方法去做。不过,其中最主要的不同点就是在你做之前,用户必须手动下载并安装 GNOME Tweak Tool。当然对于使用来说都不会有什么障碍但是对于用户来说使用 XFCE 安装和激活主题并不需要去额外去下载安装各种调整工具,这可能是他们无法忽略的一个优势。而在 GNOME 上,尤其是在用户已经下载并安装了 GNOME Tweak tool 之后,你仍将必须确保你已经安装了“用户主题扩展”。
同 XFCE 一样,用户需要去搜索并下载自己喜欢的主题,然后,用户可以再次使用 GNOME Tweak tool并点击该工具界面左边的“外观”按钮接着用户便可以直接查看页面底部并点击文件浏览按钮然后浏览到那个压缩的文件夹并打开。当完成这些之后用户将会看到一个告诉用户已经成功应用了主题的对话框这样你的主题便已经安装完成。然后用户就可以简单的使用下拉菜单来选择他们想要的主题。和 XFCE 一样,主题激活的过程也是十分简单的,然而,对于因为要使用一个新的主题而去下载一个没有预先安装到系统里面的应用,这种情况也是需要考虑的。
最后,就是 KDE 桌面主题定制的过程了。和 XFCE 一样不需要去下载额外的工具来安装主题。从这点来看让人有种XFCE 可能要被 KDE 战胜了的感觉。不仅在 KDE 上可以完全使用图形用户界面来安装主题,而且甚至只需要用户点击获取新主题的按钮就可以找到、查看新的主题,并且最后自动安装。
然而,我们应该注意到 KDE 相比 XFCE 而言,是一个更加健壮完善的桌面环境。当然,对于主要以极简设计为目的的桌面来说,缺失一些更多的功能是有一定的道理的。为此,我们要为这样优秀的功能给 KDE 加分。
### MATE 不是一个轻量级的桌面环境 ###
在继续比较 XFCE、GNOME3 和 KDE 之前,对于老手我们需要澄清一下,我们没有将 MATE 桌面环境加入到我们的比较中。MATE 可被看作是 GNOME2 的另一个衍生品,但是它并没有主要作为一款轻量级或者快捷桌面出现。相反,它的主要目的是成为一款更加传统和舒适的桌面环境,并使它的用户在使用它时就像在家里一样舒适。
另一方面XFCE 生来就是要实现他自己的一系列使命。XFCE 给它的用户提供了一个更轻量而仍保持吸引人的视觉体验的桌面环境。然后,对于一些认为 MATE 也是一款轻量级的桌面环境的人来说,其实 MATE 真正的目标并不是成为一款轻量级的桌面环境。这两种选择在各自安装了一款好的主题之后看起来都会让人觉得非常具有吸引力。
### 桌面导航 ###
XFCE 除了桌面,还提供了一套直观的导航方式。任何使用过传统的 Windows 或者 GNOME 2/MATE 桌面环境的用户,都可以在没有任何帮助的情况下,自如地在新安装的 XFCE 桌面环境中穿行。同样,往面板中添加小程序也很直观。至于找一个已经安装的应用程序,直接打开启动器并点击你想要运行的应用程序图标就行。除了 LXDE 和 MATE 之外,还没有其他桌面的导航能做到如此简单。不仅如此,控制面板也非常容易使用,这对刚接触这个桌面的用户来说是一个非常大的好处。如果用户更喜欢用老式的方法来使用他们的桌面,那么 GNOME 就不太合适了:热角取代了最小化按钮,再加上与众不同的应用排布方式,大多数新用户都需要一段时间来适应。
如果用户来自类似 Windows 这样的桌面环境那么这些用户需要摒弃这些习惯不能简单的通过鼠标右击一下就将一个小程序添加到他们的工作空间顶部。与此相反它可以通过使用扩展来实现。GNOME 是可以安装拓展的,并且是非常的容易,这些容易之处体现在只需要用户简单的使用位于 GNOME 扩展页面上的 on/off 开关即可。不过,用户必须知道这个东西,才能真正使用上这个功能。
另一方面GNOME 的外观正体现着它自己的设计理念即为用户提供一个直观、易用的控制面板。你可能认为那不是什么大事但在我看来这确实是值得称赞并且有必要提及的方面。KDE 给它的用户提供了更传统的桌面使用体验,通过相似的启动器以及更接近的软件获取方式来迎合来自 Windows 的用户。添加小部件或者小程序到 KDE 桌面是件非常简单的事情,只需要在桌面上右击即可。唯一的问题是 KDE 的这个功能不好发现,就像 KDE 中的其它东西一样,对用户来说好像是隐藏的。KDE 的用户可能不同意我的观点,但我仍然坚持我的说法。
要增加一个小部件,只要在“我的面板”上右击就可以看见面板选项,但是并不是安装小部件的一个直观的方法。你并不能看见“添加部件”,除非你选择了“面板选项”,然后才能看见“添加部件”。这对我来说不是个问题,但是对于一些用户来说,它变成了不必要的困惑。而使事情变得更复杂的是,在用户能够找到部件区域后,他们后来发现一种称为“活动”的新术语。它和部件在同一个地方,可是它在自己的区域却是另外一种行为。
现在请不要误解我KDE 中的“活动”特性是很不错的,也是很有价值的,但是从可用性的角度看,为了不让新手困惑,把它放到另一个菜单项里会更合适。用户各有不同,但是让新用户多测试一段时间可以让它不断改进。对“活动”的批评先放一边KDE 添加新部件的方法的确很棒。与 KDE 的主题一样,用户可以直接通过自带的图形界面浏览并自动安装部件。这是个非常出色的功能,而且用起来也确实顺手。KDE 的控制面板可能和用户希望的样子不一样,它还不够简单。但是有一点很清楚,这将是他们致力于改进的地方。
### 因此XFCE 是最好的桌面环境,对吗? ###
就我自己而言,我在我的计算机上使用 GNOME、KDE并在我的办公室和家里的电脑上使用 Xfce。我也有一些老机器在使用 Openbox 和 LXDE。每一个桌面的体验都可以给我提供一些有用的东西可以帮助我以适合的方式使用每台机器。对我来说Xfce 是我的心中的挚爱,因为 Xfce 是一个我使用了多年的桌面环境。但对于这篇文章,我是用我日常使用的机器来撰写的,事实上,它用的是 GNOME。
这篇文章的主要思想是,对于那些正在寻找稳定的、传统的、容易理解的桌面环境的用户来说,我还是觉得 Xfce 能提供好一点的用户体验。欢迎您在评论部分和我们分享你的意见。
--------------------------------------------------------------------------------
via: http://www.unixmen.com/kde-vs-gnome-vs-xfce-desktop/
作者:[M.el Khamlichi][a]
译者:[kylepeng93](https://github.com/kylepeng93)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/pirat9/

Debian Live项目的剧变
==================================================================================
尽管围绕 Debian Live 项目发生了很多戏剧性事件,关于 [Debian Live 项目][1]结束的[公告][2]的影响却比该项目首次出现时的公告还要小。主要开发者的离开是最显而易见的损失,而社区对他本人及其项目的态度是很令人困惑的,但是这个项目也许还是会以其他的形式继续下去。所以 Debian 仍然会有更多的工具去创造启动光盘和其他介质。尽管是以这样一种令人遗憾的方式,项目创始人 Daniel Baumann 和 Debian CD 团队以及安装器团队之间出现的长期争论已经被「解决」了。
在 11 月 9 日, Baumann 发表了题为「 Debian Live 项目的突然结束」的一篇公告。在那篇短文中,他一一列举出了自从这个和他有关的[项目被发起][3]以来近 10 年间发生的不同的事件,这些事件可以表明他在 Debian Live 项目上的努力一直没有被重视或没有被足够重视。最具决定性的因素是因为在「包的含义」上存在冲突, R.Learmonth [申请][4]了新的包名,而这侵犯了在 Debian Live 上使用的命名空间。
考虑到最主要的 Debian Live 包之一被命名为 live-build ,而 Learmonth 申请的新包名却是 live-build-ng ,这简直是对 live-build 的挑战。 live-build-ng 是一种围绕 [vmdebootstrap][5]LCTT 译注:一个用来创建真实及虚拟机的 Debian 磁盘映像的工具)的外部包装,用来创建 live 介质(光盘和 USB而这正是 Debian Live 最主要的用途。但是当 Baumann [要求][6] Learmonth 为他的包换一个不同的名字的时候,他得到了一个「有趣」的[回复][7]
```
应该注意到, live-build 不是一个 Debian 项目,它是一个声称自己是官方 Debian 项目的外部项目,这是一个需要我们解决的问题。
这不是命名空间的问题,我们要将以目前维护的 live-config 和 live-boot 包为基础,把它们加入到 Debian 的本地项目。如果迫不得已的话,这将会有很多分支,但是我希望它不要发生,这样的话我们就可以把这些包整合到 Debian 中并继续以一种协作的方式去开发。
live-build 已经被 debian-cd 放弃live-build-ng 将会取代它。至少在一个精简的 Debian 环境中live-build 会被放弃。我们(开发团队)正在与 debian-cd 和 Debian Installer 团队合作开发 live-build-ng 。
```
Debian Live 是一个「官方的」 Debian 项目(也可以说是狭义上的「官方」),尽管它因为思路上的不同产生过争论。除此之外, vmdebootstrap 的维护者 Neil Williams 为脱离 Debian Live 项目[提供了如下的解释][8]
```
为了更好地支持 live-build 的替代者, vmdebootstrap 肯定会得到改进。为了解决 live-build 目前存在的问题,这项工作会由 debian-cd 团队来负责。这些问题包括可靠性问题,以及不能很好地支持多种机器和 UEFI 等。 vmdebootstrap 也存在这些问题,我们会根据对 live-boot 和 live-config 的支持情况来确定 vmdebootstrap 的功能。
```
这些抱怨听起来合情合理,但是它们可能已经在目前的项目中得到了解决。然而一些秘密的项目有很明显的取代 live-build 的意图。正如 Baumann [指出][9]的,这些计划没有被发布到 debian-live 的邮件列表中。人们首次从 Debian Live 项目中获知这些计划,正是因为这一次的 ITP 事件,所以它看起来像是一个「秘密计划」——而这种事情在 Debian 这样的项目中是行不通的。
人们可能已经猜到了,有很多帖子都支持 Baumann [重命名][10] live-build-ng 的请求,但是紧接着,人们就因为他要停止继续在 Debian Live 上工作的决定而变得沮丧。然而 Learmonth 和 Williams 却坚持认为取代 live-build 很有必要。Learmonth 给 live-build-ng 换了一个争议性也许小一些的名字: live-wrapper 。他说他的目标是为 Debian Live 项目加入新的工具(并且「把 Debian Live 项目引入 Debian 里面」),但是完成这件事还需要很大的努力。
```
我向已经被 ITP 问题所困扰的每个人道歉。我们已经说明 live-wrapper 还不足以完全替代 live-build 且开发工作仍在进行,以收集反馈。尽管做了这些说明,我们收到的反馈却并不是我们所需要的。
```
这种对于取代 live-build 的强烈反对,或许早就应该被预见到。自由软件社区的沟通和交流很关键,所以,计划去替换一个项目的核心很容易引起争议——更何况是一个一直不为人所知的计划。从 Baumann 的角度来说,他当然不是完美的,他曾因为上传了一个不合适的 [syslinux 包][11]而导致 wheezy 延迟发布,并且从那以后他被从 Debian 开发者暂时[降级][12]为 Debian 维护者。但是这不意味着他应该受到这种对待。当然,这个项目还有其他人参与,所以不仅仅是 Baumann 受到了影响。
Ben Armstrong 是其他参与者中的一位,在这个事件中,他处理得很圆融,并且设法继续向前看。他从一封[邮件][13]开始,这封邮件是为了庆祝这个项目,以及他和他的团队在过去几年取得的成果。正如他所说, Debian Live 的[下游项目列表][14]是很令人振奋的。在另一封邮件中,他也[指出][15]了这个项目并不是没有生命力的:
```
如果 Debian CD 开发团队通过他们的努力,开发出一个可行的、可靠的、经过完善测试的替代品,一个足以取代 live-build 的候选者,这对于 Debian 项目有利无害。如果他们继续做这件事,他们就不会是「用一个官方改良、但不可靠且几乎没有经过测试的候选者取代 live-build 」。到目前为止,我还没有看到他们那样做的迹象。其间, live-build 仍保留在存档中——它仍然处于良好状态,在一个经过改良的继任者出现来取代它之前,开发团队没有必要急着删除它。
```
11 月 24 号, Armstrong 也在[他的博客][16]上[发布][17]了一个有关 Debian Live 的新消息。它展示了从 Baumann 退出起两周内的令人高兴的进展。甚至有迹象表明 Debian Live 项目与 live-wrapper 开发者开展了合作。博客上也有了一个[计划表][18],同时不可避免地寻求更多的帮助。这让人们有理由相信围绕项目发生的戏剧性事件仅仅是一个小摩擦——也许不可避免,但绝不是像现在看起来这么糟糕。
---------------------------------
via: https://lwn.net/Articles/665839/
作者Jake Edge
译者:[vim-kakali](https://github.com/vim-kakali)
校对:[PurlingNayuki](https://github.com/PurlingNayuki)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]: https://lwn.net/Articles/666127/
[2]: http://live.debian.net/
[3]: https://www.debian.org/News/weekly/2006/08/
[4]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=804315
[5]: http://liw.fi/vmdebootstrap/
[6]: https://lwn.net/Articles/666173/
[7]: https://lwn.net/Articles/666176/
[8]: https://lwn.net/Articles/666181/
[9]: https://lwn.net/Articles/666208/
[10]: https://lwn.net/Articles/666321/
[11]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=699808
[12]: https://nm.debian.org/public/process/14450
[13]: https://lwn.net/Articles/666336/
[14]: http://live.debian.net/project/downstream/
[15]: https://lwn.net/Articles/666338/
[16]: https://lwn.net/Articles/666340/
[17]: http://syn.theti.ca/2015/11/24/debian-live-after-debian-live/
[18]: https://wiki.debian.org/DebianLive/TODO

SELinux 入门
===============================
回到 Kernel 2.6 时代,那时引入了一个新的安全系统,用以提供访问控制安全策略的机制。这个系统就是 [Security Enhanced Linux (SELinux)][1],它是由[美国国家安全局NSA][2]贡献的,它为 Linux 内核子系统引入了一个健壮的强制访问控制Mandatory Access ControlMAC架构。
如果你在之前的 Linux 生涯中都禁用或忽略了 SELinux这篇文章就是专门为你写的这是一篇对存在于你的 Linux 桌面或服务器之下的 SELinux 系统的介绍。它能够限制权限,甚至能消除脆弱的程序或守护进程造成破坏的可能性。
在我开始之前,你应该了解的是: SELinux 主要是红帽Red Hat Linux 及其衍生发行版上的工具,而 Ubuntu 和 SUSE以及它们的衍生发行版使用的是 AppArmor。SELinux 和 AppArmor 有显著的不同。你可以在 SUSE、openSUSE、Ubuntu 等发行版上安装 SELinux但这是项难以置信的挑战除非你十分精通 Linux。
说了这么多,让我来向你介绍 SELinux。
### DAC vs. MAC
Linux 上传统的访问控制标准是自主访问控制Discretionary Access ControlDAC。在这种形式下一个软件或守护进程以 User IDUID或 Set owner User IDSUID的身份运行并且拥有该用户对目标文件、套接字以及其它进程的权限。这使得恶意代码很容易运行在特定权限之下从而取得访问关键子系统的权限。
另一方面强制访问控制Mandatory Access ControlMAC基于保密性和完整性强制信息的隔离以限制破坏。该限制单元独立于传统的 Linux 安全机制运作,并且没有超级用户的概念。
### SELinux 如何工作
考虑一下 SELinux 的相关概念:
- 主体Subjects
- 目标Objects
- 策略Policy
- 模式Mode
当一个主体Subject如一个程序尝试访问一个目标Object如一个文件SELinux 安全服务器SELinux Security Server在内核中从策略数据库Policy Database中运行一个检查。基于当前的模式mode如果 SELinux 安全服务器授予权限,该主体就能够访问该目标。如果 SELinux 安全服务器拒绝了权限,就会在 /var/log/messages 中记录一条拒绝信息。
听起来相对比较简单是不是?实际上过程要更加复杂,但为了简化介绍,只列出了重要的步骤。
### 模式
SELinux 有三个模式(可以由用户设置)。这些模式将规定 SELinux 在主体请求时如何应对。这些模式是:
- Enforcing (强制)— SELinux 策略强制执行,基于 SELinux 策略规则授予或拒绝主体对目标的访问
- Permissive (宽容)— SELinux 策略不强制执行,不实际拒绝访问,但会有拒绝信息写入日志
- Disabled (禁用)— 完全禁用 SELinux
![](https://www.linux.com/images/stories/66866/jack2-selinux_a.png)
*图 1getenforce 命令显示 SELinux 的状态是 Enforcing 启用状态。*
默认情况下,大部分系统的 SELinux 设置为 Enforcing。你要如何知道你的系统当前是什么模式你可以使用一条简单的命令来查看这条命令就是 `getenforce`。这个命令用起来难以置信的简单(因为它仅仅用来报告 SELinux 的模式)。要使用这个工具,打开一个终端窗口并执行 `getenforce` 命令。命令会返回 Enforcing、Permissive或者 Disabled见上方图 1
设置 SELinux 的模式实际上很简单——取决于你想设置什么模式。记住:**永远不推荐关闭 SELinux**。为什么?当你这么做了,就会出现这种可能性:你磁盘上的文件可能会被打上错误的权限标签,需要你重新标记权限才能修复。而且你无法修改一个以 Disabled 模式启动的系统的模式。你的最佳模式是 Enforcing 或者 Permissive。
你可以从命令行或 `/etc/selinux/config` 文件更改 SELinux 的模式。要从命令行设置模式,你可以使用 `setenforce` 工具。要设置 Enforcing 模式,按下面这么做:
1. 打开一个终端窗口
2. 执行 `su` 然后输入你的管理员密码
3. 执行 `setenforce 1`
4. 执行 `getenforce` 确定模式已经正确设置(图 2
![](https://www.linux.com/images/stories/66866/jack-selinux_b.png)
*图 2设置 SELinux 模式为 Enforcing。*
要设置模式为 Permissive这么做
1. 打开一个终端窗口
2. 执行 `su` 然后输入你的管理员密码
3. 执行 `setenforce 0`
4. 执行 `getenforce` 确定模式已经正确设置(图 3
![](https://www.linux.com/images/stories/66866/jack-selinux_c.png)
*图 3设置 SELinux 模式为 Permissive。*
注:通过命令行设置模式会覆盖 SELinux 配置文件中的设置。
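上面两组步骤可以包装成一个小函数,一次完成切换和验证。下面是一个假设性的 sketch函数名 set_selinux_mode 为虚构;真正切换模式需要 root 权限以及装有 SELinux 工具的系统):

```shell
# 假设性封装(函数名为虚构):用 setenforce 切换模式,并立即用 getenforce 验证结果
set_selinux_mode() {
  case "$1" in
    enforcing)  num=1; want=Enforcing ;;
    permissive) num=0; want=Permissive ;;
    *) echo "用法: set_selinux_mode enforcing|permissive" >&2; return 2 ;;
  esac
  # 系统上没有 setenforce 命令,说明没有安装 SELinux 工具,直接报错返回
  if ! command -v setenforce >/dev/null 2>&1; then
    echo "本系统没有 setenforce无法切换" >&2; return 1
  fi
  setenforce "$num" && [ "$(getenforce)" = "$want" ]
}
```

在启用了 SELinux 的系统上,以 root 身份执行 `set_selinux_mode permissive`,即相当于文中第 2、3、4 步的效果。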
如果你更愿意在 SELinux 配置文件中设置模式,用你喜欢的编辑器打开那个文件,找到这一行:
    SELINUX=permissive
你可以按你的偏好设置模式,然后保存文件。
还有第三种方法修改 SELinux 的模式(通过 bootloader但我不推荐新用户这么做。
### 策略类型
SELinux 策略有两种:
- Targeted — 只有目标网络进程dhcpdhttpdnamednscdntpdportmapsnmpdsquid以及 syslogd受保护
- Strict — 对所有进程完全的 SELinux 保护
你可以在 `/etc/selinux/config` 文件中修改策略类型。用你喜欢的编辑器打开这个文件找到这一行:
    SELINUXTYPE=targeted
修改这个选项为 targeted 或 strict 以满足你的需求。
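模式和策略类型都保存在 `/etc/selinux/config` 里。下面这个小片段演示了如何从这种格式的文件中读出这两项设置(为了在没有 SELinux 的机器上也能试验,这里用临时文件模拟配置内容;实际使用时换成真实路径即可):

```shell
# 用临时文件模拟 /etc/selinux/config 的内容
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
# This file controls the state of SELinux on the system.
SELINUX=permissive
SELINUXTYPE=targeted
EOF

# 分别提取 SELINUX模式和 SELINUXTYPE策略类型两项设置
mode=$(grep -E '^SELINUX=' "$cfg" | cut -d= -f2)
policy=$(grep -E '^SELINUXTYPE=' "$cfg" | cut -d= -f2)
echo "mode=$mode policy=$policy"
rm -f "$cfg"
```

输出为 `mode=permissive policy=targeted`,与文件中的两行一一对应。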
### 检查完整的 SELinux 状态
有个方便的 SELinux 工具,你可能想要用它来获取你启用了 SELinux 的系统的详细状态报告。这个命令在终端像这样运行:
    sestatus -v
你可以看到像图 4 那样的输出。
![](https://www.linux.com/images/stories/66866/jack-selinux_d.png)
*图 4sestatus -v 命令的输出。*
### 仅是皮毛
和你预想的一样,我只介绍了 SELinux 的一点皮毛。SELinux 的确是个复杂的系统,想要更扎实地理解它是如何工作的,以及了解如何让它更好地为你的桌面或服务器工作,还需要更深入的学习。我的内容还没有覆盖到疑难解答和创建自定义 SELinux 策略。
SELinux 是所有 Linux 管理员都应该了解的强大工具。现在已经向你介绍了 SELinux我强烈推荐你继续关注 Linux.com 上更多关于此话题的文章,或者看看 [NSA SELinux 文档][3] 以获得更加深入的指南。
LCTT - 相关阅读:[鸟哥的 Linux 私房菜——程序管理与 SELinux 初探][4]
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/docs/ldp/883671-an-introduction-to-selinux
作者:[Jack Wallen][a]
译者:[alim0x](https://github.com/alim0x)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/community/forums/person/93
[1]: http://selinuxproject.org/page/Main_Page
[2]: https://www.nsa.gov/research/selinux/
[3]: https://www.nsa.gov/research/selinux/docs.shtml
[4]: http://vbird.dic.ksu.edu.tw/linux_basic/0440processcontrol_5.php

年轻人,你为啥使用 linux
================================================================================
> 今天的开源综述:是什么带你进入 linux 的世界号外IBM 基于 Linux 的大型机。以及,你应该抛弃 win10 选择 Linux 的原因。
### 当初你为何使用 Linux ###
Linux 越来越流行,很多 OS X 或 Windows 用户都转移到 Linux 阵营了。但是你知道是什么让他们开始使用 Linux 的吗?一个 Reddit 用户在网站上问了这个问题,并且得到了很多有趣的回答。
一个名为 SilverKnight 的用户在 Reddit 的 Linux 版块上问了如下问题:
> 我知道这个问题肯定被问过了,但我还是想听听年轻一代使用 Linux 的原因,以及是什么让他们坚定地成为 Linux 用户。
>
以下是网站上的回复:
> **DoublePlusGood**我12岁开始使用 Backtrack现在改名为 Kali因为我想成为一名黑客LCTT 译注:原文1337 haxor1337 是 leet 的火星文写法,意为'火星文'haxor 为 hackor 的火星文写法,意为'黑客',另一种写法是 1377 h4x0r满满的火星文文化。我现在一直使用 ArchLinux因为它给我无限自由让我对我的电脑可以为所欲为。
>
> **Zack**我记得是12、3岁的时候使用 Linux现在15岁了。
>
> 我很喜欢这个系统。然后在圣诞节的时候我得到树莓派,上面只能跑 Debian还不能支持其它发行版。
>
> **Cqz**我9岁的时候有一次玩 Windows 98结果这货当机了原因未知。我没有 Windows 安装盘,但我爸的一本介绍编程的杂志上有一张随书附赠的光盘,这张光盘上刚好有 Mandrake Linux 的安装软件,于是我瞬间就成为了 Linux 用户。我当时还不知道自己在玩什么,但是玩得很嗨皮。这些年我虽然在电脑上装了多种 Windows 版本,但是 FLOSS 世界才是我的家LCTT 译注FLOSS —— Free/Libre and Open Source Software自由/开源软件)。现在我只把 Windows 装在虚拟机上,用来玩游戏。
>
> **Tosmarcel**15岁那年对'编程'这个概念很好奇,然后我开始了哈佛课程'CS50',这个课程要我们安装 Linux 虚拟机用来执行一些命令。当时我问自己为什么 Windows 没有这些命令?于是我 Google 了 Linux搜索结果出现了 Ubuntu在安装 Ubuntu 的时候不小心把 Windows 分区给删了。。。当时对 Linux 毫无所知适应这个系统非常困难。我现在16岁用 ArchLinux不想用回 Windows我爱 ArchLinux。
>
> **Micioonthet**:第一次听说 Linux 是在我5年级的时候当时去我一朋友家他的笔记本装的就是 MEPISDebian的一个比较老的衍生版而不是 XP。
>
> 我13岁那年还没有自己的笔记本电脑而我另一位朋友总是抱怨他的电脑有多慢所以我打算把它买下来并修好它。我花了20美元买下了这台装着 Windows Vista 系统、跑满病毒、完全无法使用的惠普笔记本。我不想重装讨厌的 Windows 系统,记得 Linux 是免费的,所以我刻了一张 Ubuntu 14.04 光盘,马上把它装起来,然后我被它的高性能给震精了。
>
> 我的世界Minecraft由于它运行在 JAVA 上,所以当时它是 Linux 下为数不多的几个游戏之一)在 Vista 上只能跑5帧每秒而在 Ubuntu 上能跑到25帧。
>
> 我到现在还会偶尔使用一下那台笔记本Linux 可不会在乎你的硬件设备有多老。
>
> **Webtm**:我爹每台电脑都会装多个发行版,有几台是 opensuse 和 Debian他的个人电脑装的是 Slackware。所以我记得很小的时候一直在玩 debian但没有投入很多精力我用了几年的 Windows然后我爹问我有没有兴趣试试 debian。这是个有趣的经历在那之后我一直使用 debian。而现在我不用 Linux转投 freeBSD5个月了用得很开心。
>
> 完全控制自己的系统是个很奇妙的体验。开源有好多酷酷的软件,我认为在自己解决一些问题并且利用这些工具解决其他事情的过程是最有趣的。当然稳定和高效也是吸引我的地方。更不用说它的保密级别了。
>
> **Wyronaut**我今年18第一次玩 Linux 是13岁当时玩的 Ubuntu为啥要碰 Linux因为我想搭一个“我的世界”的服务器来和小伙伴玩游戏,当时“我的世界”可是个新鲜玩意儿。而搭个私服需要用 Linux 系统。
>
> 当时我还是个新手,对着 Linux 的命令行有些傻眼,因为很多东西都要我自己处理。还是多亏了 Google 和维基,我成功地在多台老 PC 上部署了一些简单的服务器,那些早已无人问津的老古董机器又能发挥余热了。
>
> 老实说我对电脑挺感兴趣的,当我还没接触'自由软件哲学'的时候,我认为 free 是免费的意思。我也不认为命令行界面很让人难以接受,因为我小时候就接触过 DOS 系统。
>
> 我第一个发行版是 Mandrake在我11岁还是12岁那年我把家里的电脑弄得乱七八糟然后我一直折腾那台电脑试着让我自己的技能提升一个台阶。现在我在一家公司全职使用 Linux。请允许我耸个肩
>
> **Matto**:我的电脑是旧货市场淘回来的,装 XP跑得慢于是我想换个系统。Google 了一下,发现 Ubuntu。当年我15、6岁现在23了就职的公司内部使用 Linux。
>
--------------------------------------------------------------------------------
via: http://www.itworld.com/article/2972587/linux/why-did-you-start-using-linux.
作者:[Jim Lynch][a]
译者:[bazz2](https://github.com/bazz2)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

一步一脚印GNOME十八年进化史
================================================================================
YouTube 视频:
<iframe width="660" height="371" src="https://www.youtube.com/embed/MtmcO5vRNFQ?feature=oembed" frameborder="0" allowfullscreen></iframe>
[GNOME][1] GNU Network Object Model Environment由两位墨西哥的程序员 Miguel de Icaza 和 Federico Mena 始创于 1997 年 8 月 15 日。GNOME 自由软件计划由志愿者和全职开发者共同开发一个桌面环境及其应用程序。GNOME 桌面环境的所有部分都由开源软件组成,并且支持 Linux、FreeBSD、OpenBSD 等操作系统。
现在就让我穿越到1997年来看看 GNOME 的第一个版本:
### GNOME 1 ###
![GNOME 1.0 - GNOME 发布的第一个重大版本](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/1.0/gnome.png)
*GNOME 1.0 (1997) GNOME 发布的第一个重大版本*
![GNOME 1.2 Bongo](https://raw.githubusercontent.com/paulcarroty/Articles/master/GNOME_History/1.2/1361441938.or.86429.png)
*GNOME 1.2 “Bongo”2000*
![GNOME 1.4 Tranquility](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/1.4/1.png)
*GNOME 1.4 “Tranquility” 2001*
### GNOME 2 ###
![GNOME 2.0](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.0/1.png)
*GNOME 2.0 2002*
重大更新,基于 GTK+ 2。引入了人机界面指南Human Interface Guidelines
![GNOME 2.2](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.2/GNOME_2.2_catala.png)
*GNOME 2.2 2003*
改进了多媒体和文件管理器。
![GNOME 2.4 Temujin](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.4/gnome-desktop.png)
*GNOME 2.4 “Temujin”, 2003*
首次发布 Epiphany 浏览器增添了辅助功能accessibility
![GNOME 2.6](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.6/Adam_Hooper.png)
*GNOME 2.6 2004*
Nautilus 成为主要的文件管理工具,同时引入了新的 GTK+ 对话框。由于对这个版本中变化的不满有人创建了一个存在时间不长的分支GoneME。
![GNOME 2.8](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.8/3.png)
*GNOME 2.8 2004*
改良了对可移动设备的支持,并新增了 Evolution 邮件应用。
![GNOME 2.10](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.10/GNOME-Screenshot-2.10-FC4.png)
*GNOME 2.10 2005*
减小了内存需求和提升了性能。增加了新的面板小应用(调制解调器控制、磁盘挂载器和回收站组件)以及 Totem 影片播放器和 Sound Juicer CD抓取工具。
![GNOME 2.12](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.12/gnome-livecd.jpg)
*GNOME 2.12 2005*
改进了 Nautilus改进了应用程序间的剪切/粘贴功能和 freedesktop.org 的整合。 新增 Evince PDF 阅读器;新默认主题 Clearlooks菜单编辑器、钥匙环管理器和管理员工具。基于 GTK+2.8,支持 Cairo。
![GNOME 2.14](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.14/debian4-stable.jpg)
*GNOME 2.14 2006*
性能提升(某些情况下超过 100%增强用户界面的易用性GStreamer 0.10 多媒体框架。增加了 Ekiga 视频会议应用、Deskbar 搜索工具、Pessulus 权限管理器、快速切换用户功能和 Sabayon 系统管理员工具。
![GNOME 2.16](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.16/Gnome-2.16-screenshot.png)
*GNOME 2.16 2006*
性能提升。增加了 Tomboy 笔记应用、Baobab 磁盘用量分析应用、Orca 屏幕阅读器以及 GNOME 电源管理程序(延长了笔记本电池寿命)。改进了 Totem、Nautilus。Metacity 窗口管理器的合成compositing支持。新的图标主题。基于 GTK+ 2.0 的全新打印对话框。
![GNOME 2.18](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.18/Gnome-2.18.1.png)
*GNOME 2.18 2007*
性能提升。增加了 Seahorse GPG 安全应用可以对邮件和本地文件进行加密。Baobab 改进了环状图表显示方式的支持。Orca 屏幕阅读器。改进了 Evince、Epiphany、GNOME 电源管理、音量控制。增加了两款新游戏GNOME 数独和 glChess 国际象棋。支持 MP3 和 AAC 音频解码。
![GNOME 2.20](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.20/rnintroduction-screenshot.png)
*GNOME 2.20 2007*
发布十周年版本。Evolution 增加了备份功能。改进了 Epiphany、EOG、GNOME 电源管理。Seahorse 中的钥匙环密码管理功能。增加:在 Evince 中可以编辑PDF文档、文件管理界面中整合了搜索模块、自动安装多媒体解码器。
![GNOME 2.22, 2008](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.22/GNOME-2-22-2-Released-2.png)
*GNOME 2.22 2008*
新增 Cheese 应用它是一个可以截取网络摄像头和远程桌面图像的工具。Metacity 支持基本的窗口合成compositing。引入 GVFSLCTT译注GNOME Virtual file systemGNOME 虚拟文件系统。改善了Totem 播放 DVD 和 YouTube 的效果,支持播放 MythTV。时钟小应用支持国际化。在 Evolution 中新增了谷歌日历以及为信息添加标签的功能。改进了 Evince、Tomboy、 Sound Juicer 和计算器。
![GNOME 2.24](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.24/gnome-224.jpg)
*GNOME 2.24 2008*
新增了 Empathy 即时通讯软件。Ekiga 升级至3.0版本。Nautilus 支持标签式浏览,更好的支持了多屏幕显示方式和数字电视功能。
![GNOME 2.26](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.26/gnome226-large_001.jpg)
*GNOME 2.26 2009*
新增光盘刻录应用 Brasero。简化了文件分享的流程。改进了媒体播放器的性能。支持多显示器和指纹识别器。
![GNOME 2.28](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.28/1.png)
*GNOME 2.28 2009*
增加了 GNOME 蓝牙模块。改进了 Epiphany 网页浏览器、Empathy 即时通讯软件、时间追踪器和辅助功能。GTK+ 升级至2.18版本。
![GNOME 2.30](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.30/GNOME2.30.png)
*GNOME 2.30 2010*
改进了 Nautilus 文件管理器、Empathy 即时通讯软件、Tomboy、Evince、时间追踪器、Epiphany 和 Vinagre。借助 GVFS 通过 libimobiledeviceLCTT 译注支持iOS®设备跨平台使用的工具协议库)部分地支持了 iPod 和 iPod Touch 设备。
![GNOME 2.32](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.32/gnome-2-32.png.en_GB.png)
*GNOME 2.32 2010*
新增 Rygel 媒体分享工具和 GNOME 色彩管理器。改进了 Empathy 即时通讯软件、Evince、Nautilus 文件管理器等。由于计划于2010年9月发布3.0版本因此大部分开发者的精力都由2.3x转移至了3.0版本。
### GNOME 3 ###
![GNOME 3.0](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.0/chat-3-0.png)
*GNOME 3.0 2011*
引入 GNOME Shell这是一个重新设计的、具有更简练更集中的选项的框架。基于 Mallard 标记语言的话题导向型帮助系统。支持窗口并列堆叠。启用新的视觉主题和默认字体。采用 GTK+ 3.0,具有更好的语言绑定、主题、触控以及多平台支持。去除了那些长期弃用的 API。
![GNOME 3.2](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.2/gdm.png)
*GNOME 3.2、 2011*
支持在线帐户、“浏览器”应用。新增通讯录应用和文档文件管理器。“文件管理器”支持快速预览。较大的整合,文档更完善,以及对外观的改善和各种性能提升。
![GNOME 3.4](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.4/application-view.png)
*GNOME 3.4, 2012*
全新外观的 GNOME 3 应用程序“文件”、Epiphany更名为“浏览器”、“GNOME 通讯录”。可以在活动概览中搜索本地文件。支持应用菜单。焕然一新的界面元素:新的颜色拾取器、重新设计的滚动条、更易使用的旋钮以及可隐藏的标题栏。支持平滑滚动。全新的动态壁纸。在系统设置中改进了对 Wacom 数位板的支持。更简便的扩展应用管理。更好的硬件支持。面向主题的帮助文档。在 Empathy 中提供了对视频电话和动态信息的支持。更好的辅助功能:提升 Orca 整合度,增强高对比度模式,以及全新的缩放设置。大量的应用增强和对细节的改进。
![GNOME 3.6](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.6/gnome-3-6.png)
*GNOME 3.6, 2012*
全新设计的核心元素新的应用按钮和改进的活动概览布局。新的登录和锁定界面。重新设计的通知栏。通知现在更智能可见性更高同时更容易关闭。改进了系统设置的界面和设定逻辑。用户菜单默认显示关闭电源操作。整合的输入方式。辅助功能一直开启。新的应用Boxes 桌面虚拟化,曾在 GNOME 3.4中发布过预览版。Clocks 时钟可以显示世界时间。更新了磁盘用量分析、Empathy 和字体查看器的外观。改进了 Orca 对布莱叶盲文的支持。 在“浏览器”中,用最常访问页面取代了之前的空白起始页,增添了更好的全屏模式并使用了 WebKit2 测试版引擎。 Evolution 开始使用 WebKit 显示邮件内容。 改进了“磁盘”功能。 改进了“文件”应用(即之前的 Nautilus新增诸如最近访问的文件和搜索等功能。
![GNOME 3.8](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.8/applications-view.png)
*GNOME 3.8, 2013*
令人耳目一新的核心组件新应用界面可以分别显示常用应用及全部应用。窗口布局得到全面改造。新的屏幕即现式OSD输入法开关。通知和信息现在会对屏幕边缘的点击作出回应。为那些喜欢传统桌面的用户提供了经典模式。重新设计了设置界面的工具栏。新的初始化引导流程。“GNOME 在线帐户”添加了对更多供应商的支持。“浏览器”正式启用 WebKit2 引擎,有了一个新的私密浏览模式。“文档”支持双页模式并且整合了 “Google 文档”。“通讯录”的 UI 升级。“GNOME 文件”、“GNOME Boxes”和“GNOME 磁盘”都得到了大幅改进。集成了 ownCloud。两款全新的 GNOME 核心应用“GNOME 时钟”和“GNOME 天气”。
![GNOME 3.10](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.10/GNOME-3-10-Release-Schedule-2.png)
*GNOME 3.10, 2013*
全新打造的系统状态区,能够更直观的纵览全局。一系列新应用,包括 “GNOME 地图”、“GNOME 备忘录”、 “GNOME 音乐”和“GNOME 照片”。新的基于位置的功能,如自动时区和世界时钟。支持高分辨率及智能卡。 基于 GLib 2.38 提供了对 D-Bus 的支持。
![GNOME 3.12](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.12/app-folders.png)
*GNOME 3.12, 2014*
改进了概览中的键盘导航和窗口选择,基于易用性测试对初始设置进行了修改。有线网络图标重新回到了状态栏上,在“应用”视图中可以自定义应用文件夹。在大量应用的对话框中引入了新的 GTK+ 小工具,同时使用了新的 GTK+ 标签风格。“GNOME 视频”、“GNOME 终端”以及 Gedit 都改用了全新外观,更贴合 HIGLCTT 译注Human Interface Guidelines人机界面指南。在 GNOME Shell 的终端仿真器中提供了搜索预测功能。增强了对“GNOME 软件”和高分辨率显示屏的支持。提供了新的录音工具。增加了新的桌面通知接口。在向 Wayland 移植的进度中达到了可用的程度,可以选择性地预览体验。
![GNOME 3.14](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.14/Top-Features-of-GNOME-3-14-Gallery-459893-2.jpg)
*GNOME 3.14, 2014*
更炫酷的桌面环境效果改善了对触摸屏的支持。“GNOME 软件”可以管理安装的插件。在“GNOME 照片”中可以访问 “Google 相册”。重绘了 Evince、数独、扫雷和天气应用的用户界面同时增加了一款叫做 Hitori 的 GNOME 游戏。
![GNOME 3.16](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.16/preview-apps.png)
*GNOME 3.16, 2015*
33000 处改变。主要的修改包括 UI 的配色方案从黑色变成了炭黑色。增加了即现式滚动条。通知窗口中整合了日历应用。对“文件”、图像查看器和“地图”等大量应用进行了微调。可以预览应用程序。进一步从 X11 向 Wayland 移植。
感谢 GNOME Project 及 [Wikipedia][2] 提供的变更日志!感谢阅读!
--------------------------------------------------------------------------------
via: https://tlhp.cf/18-years-of-gnome-evolution/
作者:[Pavlo Rudyi][a]
译者:[Haohong WANG](https://github.com/HaohongWANG)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://tlhp.cf/author/paul/
[1]:https://www.gnome.org/
[2]:https://en.wikipedia.org/wiki/GNOME

如何在 FreeBSD 10.2 上安装使用 Nginx 的 Ghost
================================================================================
Node.js 是用于开发服务器端应用程序的开源运行时环境。Node.js 应用使用 JavaScript 编写,能在任何有 Node.js 运行时的服务器上运行。它跨平台支持 Linux、Windows、OSX、IBM AIX也包括 FreeBSD。Node.js 是 Ryan Dahl 以及在 Joyent 工作的其他开发者于 2009 年创建的。它的设计目标就是构建可扩展的网络应用程序。
Ghost 是使用 Node.js 编写的博客平台。它不仅开源,而且有很漂亮的界面设计、对用户友好并且免费。它允许你快速地在网络上发布内容,或者创建你的混合网站。
    npm start --production
通过访问服务器 ip 和 2368 号端口验证一下。
![Ghost 安装完成](http://blog.linoxide.com/wp-content/uploads/2015/10/Ghost-Installed.png)
### 第五步 - 为 Ghost 安装和配置 Nginx ###
默认情况下ghost 会以独立模式运行,你可以不用 Nginx、apache 或 IIS web 服务器直接运行它。但在这篇指南中我们会安装和配置 nginx 和 ghost 一起使用。
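nginx 在这里承担的角色,就是把 80 端口的请求反向代理到 Ghost 监听的 2368 端口(即上文验证安装时访问的端口)。下面是一个最小化配置的假设性示意(域名 myghost.co 为虚构,具体文件路径和参数请以教程正文为准):

```shell
# 假设性示意:生成一个把请求转发给 Ghost2368 端口)的最小 nginx 虚拟主机配置
# myghost.co 为虚构域名;实际部署时应把配置放进 nginx 的配置目录并重载 nginx
conf=$(mktemp)
cat > "$conf" <<'EOF'
server {
    listen 80;
    server_name myghost.co;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:2368;
    }
}
EOF
grep -c proxy_pass "$conf"   # 简单确认转发指令已写入(应输出 1
```

这样,访问者只需访问 80 端口,无需再带上 2368 端口号。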
用 pkg 命令从 freebsd 库中安装 nginx
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-ghost-nginx-freebsd-10-2/
作者:[Arul][a]
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

网络与安全方面的最佳开源软件
================================================================================
InfoWorld 在部署、运营和保障网络安全领域精选出了年度开源工具获奖者。
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-net-sec-100614459-orig.jpg)
### 最佳开源网络和安全软件 ###
[BIND](https://en.wikipedia.org/wiki/BIND)、[Sendmail](https://en.wikipedia.org/wiki/Sendmail)、[OpenSSH](https://en.wikipedia.org/wiki/OpenSSH)、[Cacti](https://en.wikipedia.org/wiki/Cacti_%28software%29)、[Nagios](https://en.wikipedia.org/wiki/Nagios)、[Snort](https://en.wikipedia.org/wiki/Snort_%28software%29) -- 在这些为网络而生的开源软件中,不少老家伙可谓老而弥坚。在今年该领域的最佳选择中,你会发现中坚、支柱、新人和新贵云集,它们正在完善网络管理、安全监控、漏洞评估、[rootkit](https://en.wikipedia.org/wiki/Rootkit) 检测等许多方面。
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-icinga-100614482-orig.jpg)
### Icinga 2 ###
Icinga 起先只是系统监控应用 Nagios 的一个分支。[Icinga 2][1] 经历了完全的重写,为用户带来了时尚的界面、对多数据库的支持,以及一个集成了众多扩展的 API。凭借着开箱即用的负载均衡、通知和配置文件Icinga 2 缩短了在复杂环境下安装的时间。Icinga 2 原生支持 [Graphite](https://github.com/graphite-project/graphite-web)(系统监控应用),轻松为管理员呈现实时性能图表。不过真正让 Icinga 今年重新火起来的,是 Icinga Web 2 的发布:这是一个支持可拖放定制仪表盘和一些流式监控工具的前端图形界面系统。
管理员可以查看、过滤、并按优先顺序排列发现的问题,同时可以跟踪已经采取的动作。一个新的矩阵视图使管理员能够在单一页面上查看主机和服务。你可以通过查看特定时间段的事件或筛选事件类型来了解哪些事件需要立即关注。虽然 Icinga Web 2 有着全新界面和更为强劲的性能,不过对于传统版 Icinga 和 Web 版 Icinga 的所有常用命令还是照旧支持的。这意味着学习新版工具不耗费额外的时间。
-- Fahmida Rashid
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-zenoss-100614465-orig.jpg)
### Zenoss Core ###
这是另一个强大的开源软件,[Zenoss Core][2] 为网络管理员提供了一个完整的、一站式解决方案来跟踪和管理所有的应用程序、服务器、存储、网络组件、虚拟化工具、以及企业基础架构的其他元素。管理员可以确保硬件的运行效率并利用 ZenPacks 中模块化设计的插件来扩展功能。
在 2015 年 2 月发布的 Zenoss Core 5 保留了原本已经很强大的工具,并进一步增强了用户界面、扩展了仪表盘。基于 Web 的控制台和仪表盘可以高度定制并动态调整,而新版本还能让管理员把多个组件图表混搭到一个图表中,这为根因分析和因果分析提供了更好的工具。
Portlets 为网络映射、设备问题、守护进程、产品状态、监视列表和事件视图等等提供了深入的分析。而且新版 HTML5 图表可以从工具导出。Zenoss 的控制中心支持带外管理并且可监控所有 Zenoss 组件。Zenoss Core 现在拥有一些新工具,用于在线备份和恢复、快照和回滚以及多主机部署等方面。更重要的是,凭借对 Docker 的全面支持,部署起来更快了。
-- Fahmida Rashid
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opennms-100614461-orig.jpg)
### OpenNMS ###
作为一个非常灵活的网络管理解决方案,[OpenNMS][3] 可以处理任何网络管理任务,无论是设备管理、应用性能监控、库存控制,还是事件管理。凭借对 IPv6 的支持、强大的警报系统和录制用户脚本来测试 Web 应用程序的能力OpenNMS 拥有网络管理员和测试人员需要的一切。OpenNMS 现在还有了一款称为 OpenNMS Compass 的移动端仪表盘,可让网络专家随时随地监视他们的网络。
该应用程序的 iOS 版本可从 [iTunes App Store][4] 上获取,可以显示故障、节点和告警。下一个版本将提供更多的事件细节、资源图表、以及关于 IP 和 SNMP 接口的信息。安卓版可从 [Google Play][5] 上获取,可在仪表盘上显示网络可用性、故障和告警,并可以确认、升级或清除告警。移动客户端与 OpenNMS Horizon 1.12 或更高版本以及 OpenNMS Meridian 2015.1.0 或更高版本兼容。
-- Fahmida Rashid
![](http://images.techhive.com/images/article/2015/09/bossies-2015-onion-100614460-orig.jpg)
### Security Onion ###
如同一个洋葱,网络安全监控是由许多层组成。没有任何一个单一的工具可以让你洞察每一次攻击,为你显示对你的公司网络中的每一次侦查或是会话的足迹。[Security Onion][6] 在一个简单易用的 Ubuntu 发行版中打包了许多久经考验的工具,可以让你看到谁留在你的网络里,并帮助你隔离这些坏家伙。
无论你是采取主动式的网络安全监测还是追查可能的攻击Security Onion 都可以帮助你。Onion 由传感器、服务器和显示层组成,结合了基于网络和基于主机的入侵检测,全面的网络数据包捕获,并提供了所有类型的日志以供检查和分析。
这是一个众星云集的网络安全工具链,包括用于网络抓包的 [Netsniff-NG](http://www.netsniff-ng.org/)、基于规则的网络入侵检测系统 Snort 和 [Suricata](https://en.wikipedia.org/wiki/Suricata_%28software%29)、基于分析的网络监控系统 Bro、基于主机的入侵检测系统 OSSEC以及用于显示、分析和日志管理的 Sguil、Squert、Snorby 和 ELSA企业日志搜索和归档Enterprise Log Search and Archive。它是一个经过精挑细选的工具集所有的这些全被打包进一个向导式的安装程序并有完整的文档支持可以帮助你尽可能快地上手监控。
-- Victor R. Garza
![](http://images.techhive.com/images/article/2015/09/bossies-2015-kali-100614458-orig.jpg)
### Kali Linux ###
[Kali Linux][7] 背后的团队今年为这个流行的安全 Linux 发行版发布了新版本使其更快更全能。Kali 采用全新 4.0 版的内核,改进了对硬件和无线驱动程序的支持,并且界面更为流畅。最常用的工具都可从屏幕的侧边栏上轻松找到。而最大的改变是 Kali Linux 现在是一个滚动发行版具有持续不断的软件更新。Kali 的核心系统基于 Debian Jessie该团队会不断地从 Debian 测试版拉取最新的软件包,并持续地在上面添加 Kali 风格的新特性。
该发行版仍然配备了很多的渗透测试、漏洞分析、安全审查、网络应用分析、无线网络评估、逆向工程和漏洞利用工具。现在该发行版具有上游版本检测系统,当有个别工具可更新时,系统会自动通知用户。该发行版还提供了一系列 ARM 设备的镜像,包括树莓派、[Chromebook](https://en.wikipedia.org/wiki/Chromebook) 和 [Odroid](https://en.wikipedia.org/wiki/ODROID),同时也更新了 Android 设备上运行的 [NetHunter](https://www.kali.org/kali-linux-nethunter/) 渗透测试平台。还有其他的变化Metasploit 的社区版/专业版不再包括在内,因为 Kali 2.0 还没有获得 [Rapid7 的官方支持][8]。
-- Fahmida Rashid
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-openvas-100614462-orig.jpg)
### OpenVAS ###
开放式漏洞评估系统Open Vulnerability Assessment System [OpenVAS][9],是一个整合多种服务和工具来提供漏洞扫描和漏洞管理的软件框架。该扫描器可以使用每周更新一次的网络漏洞测试数据,或者你也可以使用商业服务的数据。该软件框架包括一个命令行界面(以使其可以用脚本调用)和一个带 SSL 安全机制的基于 [Greenbone 安全助手][10] 的浏览器界面。OpenVAS 提供了用于附加功能的各种插件。扫描可以预定运行或按需运行。
可通过单一的主控来控制多个安装好 OpenVAS 的系统,这使其成为了一个可扩展的企业漏洞评估工具。该项目兼容的标准使其可以将扫描结果和配置存储在 SQL 数据库中,这样它们可以容易地被外部报告工具访问。客户端工具通过基于 XML 的无状态 OpenVAS 管理协议访问 OpenVAS 管理器,所以安全管理员可以扩展该框架的功能。该软件能以软件包或源代码的方式安装在 Windows 或 Linux 上运行,或者作为一个虚拟应用下载。
-- Matt Sarrel
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-owasp-100614463-orig.jpg)
### OWASP ###
[OWASP][11](开放式 Web 应用程序安全项目Open Web Application Security Project是一个专注于提高软件安全性的在全球各地拥有分会的非营利组织。这个社区性的组织提供测试工具、文档、培训和几乎任何你可以想象到的开发安全软件相关的软件安全评估和最佳实践。有一些 OWASP 项目已成为很多安全从业者工具箱中的重要组件:
[ZAP][12]ZED 攻击代理项目Zed Attack Proxy Project是一个在 Web 应用程序中寻找漏洞的渗透测试工具。ZAP 的设计目标之一是易于使用,以便让那些并非安全领域专家的开发人员和测试人员也能轻松上手。ZAP 提供了自动扫描和一套手动测试工具集。
[Xenotix XSS Exploit Framework][13] 是一个先进的跨站点脚本漏洞检测和漏洞利用框架该框架通过在浏览器引擎内执行扫描以获取真实的结果。Xenotix 扫描模块使用了三个智能模糊器intelligent fuzzers使其可以运行近 5000 种不同的 XSS 有效载荷。它有个 API 可以让安全管理员扩展和定制漏洞测试工具包。
[O-Saft][14]OWASP SSL 高级审查工具OWASP SSL advanced forensic tool是一个查看 SSL 证书详细信息和测试 SSL 连接的 SSL 审计工具。这个命令行工具可以在线或离线运行来评估 SSL 比如算法和配置是否安全。O-Saft 内置提供了常见漏洞的检查,你可以容易地通过编写脚本来扩展这些功能。在 2015 年 5 月加入了一个简单的图形用户界面作为可选的下载项。
[OWTF][15](攻击性 Web 测试框架Offensive Web Testing Framework是一个遵循 OWASP 测试指南和 NIST 和 PTES 标准的自动化测试工具。该框架同时支持 Web 用户界面和命令行,用于探测 Web 和应用服务器的常见漏洞,如配置失当和软件未打补丁。
-- Matt Sarrel
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-beef-100614456-orig.jpg)
### BeEF ###
Web 浏览器已经成为用于针对客户端的攻击中最常见的载体。[BeEF][15]浏览器漏洞利用框架项目Browser Exploitation Framework Project是一种广泛使用的用以评估 Web 浏览器安全性的渗透工具。BeEF 通过浏览器来发起客户端攻击帮助你暴露出客户端系统的安全弱点。BeEF 会建立一个恶意网站,安全管理员用想要测试的浏览器访问该网站。然后 BeEF 发送命令来攻击 Web 浏览器,并使用命令在客户端机器上植入软件。随后管理员就可以像对待一台不设防的机器那样,对客户端发起进一步的攻击了。
BeEF 自带键盘记录器、端口扫描器和 Web 代理这样的常用模块此外你可以编写你自己的模块或直接将命令发送到被控制的测试机上。BeEF 带有少量的演示网页来帮你快速入门,使得编写更多的网页和攻击模块变得非常简单,让你可以因地制宜地自定义你的测试。BeEF 是一个非常有价值的评估浏览器和终端安全、学习如何发起基于浏览器的攻击的测试工具。可以用它向你的用户直观地演示,恶意软件通常是如何感染客户端设备的。
-- Matt Sarrel
![](http://images.techhive.com/images/article/2015/09/bossies-2015-unhide-100614464-orig.jpg)
### Unhide ###
[Unhide][16] 是一个用于定位开放的 TCP/UDP 端口以及隐藏在 UNIX、Linux 和 Windows 上的进程的审查工具。隐藏的端口和进程可能是由运行 Rootkit 或 LKM可加载的内核模块loadable kernel module导致的。Rootkit 可能很难找到并移除,因为它们就是专门针对隐蔽性而设计的,可以在操作系统和用户面前隐藏自己。一个 Rootkit 可以使用 LKM 隐藏其进程或冒充其他进程,让它在机器上运行很长一段时间而不被发现。而 Unhide 则可以使管理员们确信他们的系统是干净的。
Unhide 实际上是两个单独的脚本一个用于进程一个用于端口。该工具查询正在运行的进程、线程和开放的端口并将这些信息与系统中注册的活动比较报告二者之间的差异。Unhide 和 WinUnhide 是非常轻量级的脚本可以在命令行运行并产生文本输出。它们不算优美但是极为有用。Unhide 也包括在 [Rootkit Hunter][17] 项目中。
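Unhide 的这一思路可以用一个极简的 Python 草图来示意(这并非 Unhide 的实现,真实工具会做多次交叉验证,以排除两次采样之间进程启停造成的误报):

```python
import os
import subprocess

def proc_pids():
    """读取 /proc获得内核可见的 PID 集合。"""
    return {int(d) for d in os.listdir("/proc") if d.isdigit()}

def ps_pids():
    """解析 ps 的输出,获得用户态工具报告的 PID 集合。"""
    out = subprocess.run(["ps", "-e", "-o", "pid="],
                         capture_output=True, text=True).stdout
    return {int(p) for p in out.split()}

def diff_hidden(kernel_view, userland_view):
    """两个视图的差集,即可疑的“隐藏”进程。"""
    return kernel_view - userland_view
```

例如 `diff_hidden(proc_pids(), ps_pids())` 若持续返回同一批 PID就值得进一步排查。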
-- Matt Sarrel
![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100614457-orig.jpg)
### 查看更多的开源软件优胜者 ###
InfoWorld 网站的 2015 年最佳开源奖,从底层软件到顶层应用,评选了 100 多个开源项目。通过以下链接可以查看更多开源软件中的翘楚:
[2015 Bossie 评选:最佳开源应用程序][18]
[2015 Bossie 评选:最佳开源应用程序开发工具][19]
[2015 Bossie 评选:最佳开源大数据工具][20]
[2015 Bossie 评选:最佳开源数据中心和云计算软件][21]
[2015 Bossie 评选:最佳开源桌面和移动端软件][22]
[2015 Bossie 评选:最佳开源网络和安全软件][23]
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2982962/open-source-tools/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
作者:[InfoWorld staff][a]
译者:[robot527](https://github.com/robot527)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://www.icinga.org/icinga/icinga-2/
[2]:http://www.zenoss.com/
[3]:http://www.opennms.org/
[4]:https://itunes.apple.com/us/app/opennms-compass/id968875097?mt=8
[5]:https://play.google.com/store/apps/details?id=com.opennms.compass&hl=en
[6]:http://blog.securityonion.net/p/securityonion.html
[7]:https://www.kali.org/
[8]:https://community.rapid7.com/community/metasploit/blog/2015/08/12/metasploit-on-kali-linux-20
[9]:http://www.openvas.org/
[10]:http://www.greenbone.net/
[11]:https://www.owasp.org/index.php/Main_Page
[12]:https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
[13]:https://www.owasp.org/index.php/O-Saft
[14]:https://www.owasp.org/index.php/OWASP_OWTF
[15]:http://www.beefproject.com/
[16]:http://www.unhide-forensics.info/
[17]:http://www.rootkit.nl/projects/rootkit_hunter.html
[18]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[19]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[20]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[21]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[22]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[23]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
@ -0,0 +1,204 @@
五大开源 Web 代理软件横向比较Squid、Privoxy、Varnish、Polipo、Tinyproxy
================================================================================
Web 代理软件转发 HTTP 请求时并不会改变数据流量。它们可以配置成透明代理,而无需客户端配置。它们还可以作为反向代理放在网站的前端;这样缓存服务器可以为一台或多台 web 服务器提供无限量的用户服务。
网站代理功能多样有着宽泛的用途从缓存页面、DNS 和其他查询,到加速 web 服务器响应、降低带宽消耗。代理软件广泛用于大型高访问量的网站,比如纽约时报、卫报,以及 Twitter、Facebook 和 Wikipedia 这样的社交媒体网站。
页面缓存已经成为优化单位时间内所能吞吐的数据量的至关重要的机制。好的 Web 缓存还能降低延迟,尽可能快地响应页面,让终端用户不至于因等待内容的时间过久而失去耐心。它们还能将频繁访问的内容缓存起来以节省带宽。如果你需要降低服务器负载并改善网站内容响应速度,那缓存软件能带来的好处就绝对值得探索一番。
为深入探查 Linux 下可用的相关软件的质量,我列出了下面 5 个优秀的开源 web 代理工具。它们中有些功能完备强大,也有几个只需很低的资源就能运行。
### Squid ###
Squid 是一个高性能、开源的代理缓存服务器和 Web 缓存进程,支持 FTP、Internet Gopher、HTTPS 和 SSL 等多种协议。它通过一个非阻塞的、I/O 事件驱动的单一进程处理所有的 IPv4 或 IPv6 协议请求。
Squid 由一个主服务程序 squid、一个 DNS 查询程序 dnsserver、一些可选的请求重写和执行认证的程序组件以及一些管理和客户端工具构成。
Squid 提供了丰富的访问控制、认证和日志环境, 用于开发 web 代理和内容服务网站应用。
其特性包括:
- Web 代理:
- 通过缓存来降低访问时间和带宽使用
- 将元数据和访问特别频繁的对象缓存到内存中
- 缓存 DNS 查询
- 支持非阻塞的 DNS 查询
- 实现了对失败请求的负面缓存negative caching
- Squid 缓存可架设为层次结构,或网状结构以节省额外的带宽
- 通过广泛的访问控制来执行网站访问策略
- 隐匿请求,如禁用或修改客户端 HTTP 请求头特定属性
- 反向代理
- 媒体范围media-range限制
- 支持 SSL
- 支持 IPv6
- 错误页面的本地化 - Squid 可以根据访问者的语言选项对每个请求展示本地化的错误页面
- 连接固定Connection Pinning用于 NTLM 认证穿透NTLM Auth Passthrough- 一种允许 Web 服务器通过 Web 代理使用 Microsoft NTLM 安全认证替代 HTTP 标准认证的方案
- 支持服务质量 (QoS, Quality of Service) 流
- 选择一个 TOS/Diffserv 值来标记本地命中
- 选择一个 TOS/Diffserv 值来标记对端命中
- 选择性地仅标记同级或上级请求
- 允许任意发往客户端的 HTTP 响应保持由远程服务器处响应的 TOS 值
- 对收到的远程服务器的 TOS 值,在复制之前对指定位进行掩码操作,再发送到客户端
- SSL Bump (用于 HTTPS 过滤和适配) - Squid-in-the-middle在 CONNECT 方式的 SSL 隧道中,用配置化的客户端和服务器端证书,对流量进行解密和加密
- 支持适配模块
- ICAP 旁路和重试增强 - 通过完全的旁路和动态链式路由扩展 ICAP来处理多个适应性服务。
- 支持 ICY 流式协议 - 俗称 SHOUTcast 多媒体流
- 动态 SSL 证书生成
- 支持 ICAP 协议 (Internet Content Adaptation Protocol)
- 完整的请求日志记录
- 匿名连接
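作为补充,下面给出一个极简的 squid.conf 配置示意(其中端口、网段和缓存大小均为假设值,并非完整的生产配置),对应上面提到的缓存与访问控制能力:

```
# 监听端口与内存缓存大小(假设值)
http_port 3128
cache_mem 256 MB
# 仅允许内网网段使用该代理(假设的网段)
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all
```

其中 acl 与 http_access 的组合就是正文所说“通过广泛的访问控制来执行网站访问策略”的最基本形式。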
--
- 网站: [www.squid-cache.org][1]
- 开发: 美国国家应用网络研究实验室(NLANR)和网络志愿者
- 授权: GNU GPL v2
- 版本号: 4.0.1
### Privoxy ###
Privoxy Privacy Enhancing Proxy是一个非缓存类 Web 代理软件,它自带的高级过滤功能可以用来增强隐私保护、修改页面内容和 HTTP 头部信息、进行访问控制,以及去除广告和其它招人反感的互联网垃圾。Privoxy 的配置非常灵活,能充分定制以满足各种各样的需求和偏好。它支持单机和多用户网络两种模式。
Privoxy 使用 action 规则来处理浏览器和远程站点间的数据流。
其特性包括:
- 高度配置化——可以完全定制你的配置
- 广告拦截
- Cookie 管理
- 支持“Connection: keep-alive”。可以无视客户端配置而保持外发的持久连接
- 支持 IPv6
- 标签化Tagging允许按照客户端和服务器的请求头进行处理
- 作为拦截intercepting代理器运行
- 巧妙的动作action和过滤机制用来处理服务器和客户端的 HTTP 头部
- 可以与其他代理软件链式使用
- 整合了基于浏览器的配置和控制工具,能在线跟踪规则和过滤效果,可远程开关
- 页面过滤文本替换、根据尺寸大小删除广告栏、隐藏“web-bugs”元素、HTML 容错等)
- 模块化的配置使得标准配置和用户配置可以存放于不同文件中,这样安装更新就不会覆盖用户的个性化设置
- 配置文件支持 Perl 兼容的正则表达式,以及更为精妙和灵活的配置语法
- GIF 去动画
- 旁路处理大量点击跟踪click-tracking脚本避免脚本重定向
- 大多数代理生成的页面(例如“访问受限”页面)可由用户通过 HTML 模板自定义
- 自动监测配置文件的修改并重新读取
- 大多数功能可以基于每个站点或每个 URL 位置来进行控制
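下面是一个极简的 Privoxy 主配置示意(监听地址与上游 SOCKS5 代理均为假设),演示上文提到的“与其他代理软件链式使用”:

```
# 监听本机 8118 端口Privoxy 的默认端口)
listen-address 127.0.0.1:8118
# 启用默认的动作与过滤文件
actionsfile default.action
filterfile default.filter
# 将所有流量链式转发给本机 9050 端口的 SOCKS5 代理(假设其上运行着 Tor
forward-socks5 / 127.0.0.1:9050 .
```

去掉最后一行forward-socks5Privoxy 就退回为一个普通的本地过滤代理。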
--
- 网站: [www.privoxy.org][2]
- 开发: Fabian Keil开发领导者, David Schmidt, 和众多其他贡献者
- 授权: GNU GPL v2
- 版本号: 3.4.2
### Varnish Cache ###
Varnish Cache 是一个为性能和灵活性而生的 web 加速器。它新颖的架构设计能带来显著的性能提升。根据你的架构,通常情况下它能把响应速度提升 300 - 1000 倍。Varnish 将页面存储到内存,这样 web 服务器就无需重复地创建相同的页面,只需要在页面发生变化后重新生成。页面内容直接从内存中访问,当然比其他方式更快。
此外 Varnish 能大大提升响应 web 页面的速度,用在任何应用服务器上都能使网站访问速度大幅度地提升。
按经验Varnish Cache 比较经济的配置是 1-16GB 内存加 SSD 固态硬盘。
其特性包括:
- 新颖的设计
- VCL - 非常灵活的配置语言。VCL 配置会转换成 C然后编译、加载、运行灵活且高效
- 能使用 round-robin 轮询和随机分发两种方式来负载均衡,两种方式下后端服务器都可以设置权重
- 基于 DNS、随机、散列和客户端 IP 的分发器Director
- 多台后端主机间的负载均衡
- 支持 Edge Side Includes包括拼装压缩后的 ESI 片段
- 重度多线程并发
- URL 重写
- 单 Varnish 能够缓存多个虚拟主机
- 日志数据存储在共享内存中
- 基本的后端服务器健康检查
- 优雅地处理后端服务器“挂掉”
- 命令行界面的管理控制台
- 使用内联 C 语言来扩展 Varnish
- 可以与 Apache 用在相同的系统上
- 单个系统可运行多个 Varnish
- 支持 HAProxy 代理协议。该协议在每个收到的 TCP 请求——例如 SSL 终止过程中——附加一小段 http 头信息,以记录客户端的真实地址
- 冷热 VCL 状态
- 可以用名为 VMOD 的 Varnish 模块来提供插件扩展
- 通过 VMOD 定义后端主机
- Gzip 压缩及解压
- HTTP 流的通过和获取
- 神圣模式和优雅模式。用 Varnish 作为负载均衡器,神圣模式下可以将不稳定的后端服务器在一段时间内打入黑名单,阻止它们继续提供流量服务。优雅模式允许 Varnish 在获取不到后端服务器状态良好的响应时,提供已过期版本的页面或其它内容。
- 实验性支持持久化存储,无需 LRU 缓存淘汰
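上面提到的 VCL 配置语言,可以用一个极简示意来感受一下(后端地址为假设值;该片段只是草图,并非完整的生产配置):

```
vcl 4.0;

# 假设后端 web 服务器运行在本机 8080 端口
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # 带 Cookie 的请求通常不宜缓存,直接透传给后端
    if (req.http.Cookie) {
        return (pass);
    }
}
```

正如正文所说,这段 VCL 会被转换成 C 再编译加载,因此在灵活的同时几乎不损失性能。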
--
- 网站: [www.varnish-cache.org][3]
- 开发: Varnish Software
- 授权: FreeBSD
- 版本号: 4.1.0
### Polipo ###
Polipo 是一个开源的 HTTP 缓存代理,只需要非常低的资源开销。
它监听来自浏览器的 web 页面请求,转发到 web 服务器,然后将服务器的响应转发到浏览器。在此过程中,它能优化和整形网络流量。从本质来讲 Polipo 与 WWWOFFLE 很相似,但其实现技术更接近于 Squid。
Polipo 最开始的目标是作为一个兼容 HTTP/1.1 的代理,理论它能在任何兼容 HTTP/1.1 或更早的 HTTP/1.0 的站点上运行。
其特性包括:
- HTTP 1.1、IPv4 & IPv6、流量过滤和隐私保护增强
- 如确认远程服务器支持的话,则无论收到的请求是管道处理过的还是在多个连接上同时收到的,都使用 HTTP/1.1 管道pipelining
- 下载被中断时缓存起始部分当需要续传时用区间Range请求来完成下载
- 将 HTTP/1.0 的客户端请求升级为 HTTP/1.1,然后按照客户端支持的级别进行升级或降级后回复
- 全面支持 IPv6 (作用域(链路本地)地址除外)
- 作为 IPv4 和 IPv6 网络的网桥
- 内容过滤
- 能使用“穷人的多路复用”技术Poor Man's Multiplexing降低延迟
- 支持 SOCKS 4 和 SOCKS 5 协议
- HTTPS 代理
- 扮演透明代理的角色
- 可以与 Privoxy 或 tor 一起运行
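下面是一个极简的 Polipo 配置示意(缓存目录与上游 SOCKS 代理均为假设值),对应上文提到的缓存与 SOCKS 支持:

```
# 监听本机 8123 端口Polipo 的默认端口)
proxyAddress = "127.0.0.1"
proxyPort = 8123
# 启用磁盘缓存(目录为假设值)
diskCacheRoot = "~/.polipo-cache/"
# 可选:把上游流量交给本机 9050 端口的 SOCKS5 代理
socksParentProxy = "127.0.0.1:9050"
socksProxyType = socks5
```

注释掉最后两行socksParentProxy 相关Polipo 就作为一个独立的缓存代理直连目标站点。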
--
- 网站: [www.pps.univ-paris-diderot.fr/~jch/software/polipo/][4]
- 开发: Juliusz Chroboczek, Christopher Davis
- 授权: MIT License
- 版本号: 1.1.1
### Tinyproxy ###
Tinyproxy 是一个轻量级的开源 web 代理守护进程,其设计目标是快而小。它适用于需要完整 HTTP 代理特性,但系统资源又不足以运行大型代理的场景,比如嵌入式部署。
Tinyproxy 对小规模网络非常有用这样的场合下大型代理会使系统资源紧张或有安全风险。Tinyproxy 的一个关键特性是其缓冲连接的理念。从效果上看Tinyproxy 对服务器的响应进行了高速缓冲,然后按照客户端能够处理的最高速度进行响应。该特性极大地降低了网络延迟带来的问题。
特性:
- 易于修改
- 隐匿模式 - 定义哪些 HTTP 头允许通过,哪些又会被拦截
- 支持 HTTPS - Tinyproxy 允许通过 CONNECT 方法转发 HTTPS 连接,任何情况下都不会修改数据流量
- 远程监控 - 远程访问代理统计数据,让你能清楚了解代理服务当前的忙碌状态
- 平均负载监控 - 通过配置,当服务器的负载接近一定值后拒绝新连接
- 访问控制 - 通过配置,仅允许指定子网或 IP 地址的访问
- 安全 - 运行无需额外权限,减小了系统受到威胁的概率
- 基于 URL 的过滤 - 允许基于域和URL的黑白名单
- 透明代理 - 配置为透明代理,这样客户端就无需任何配置
- 代理链 - 在流量出口处采用上游代理服务器,而不是直接转发到目标服务器,创建我们所说的代理链
- 隐私特性 - 限制允许从浏览器收到的来自 HTTP 服务器的数据(例如 cookies同时限制允许通过的从浏览器到 HTTP 服务器的数据(例如版本信息)
- 低开销 - 使用 glibc 内存开销只有2MBCPU 负载按并发连接数线性增长(取决于网络连接速度)。 Tinyproxy 可以运行在老旧的机器上而无需担心性能问题。
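对应上面的访问控制与隐匿模式,下面给出一个极简的 tinyproxy.conf 示意(端口与网段均为假设值):

```
# 监听 8888 端口Tinyproxy 的默认端口)
Port 8888
# 仅允许内网网段访问(假设的网段)
Allow 192.168.0.0/16
# 隐匿模式:只放行列出的 HTTP 头,其余一律剥除
Anonymous "Host"
Anonymous "Authorization"
```

一旦出现任何 Anonymous 行Tinyproxy 即进入隐匿模式,这正是正文“定义哪些 HTTP 头允许通过”的含义。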
--
- 网站: [banu.com/tinyproxy][5]
- 开发: Robert James Kaes和其他贡献者
- 授权: GNU GPL v2
- 版本号: 1.8.3
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20151101020309690/WebDelivery.html
译者:[fw8899](https://github.com/fw8899)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://www.squid-cache.org/
[2]:http://www.privoxy.org/
[3]:https://www.varnish-cache.org/
[4]:http://www.pps.univ-paris-diderot.fr/%7Ejch/software/polipo/
[5]:https://banu.com/tinyproxy/
@ -1,5 +1,4 @@
Linux 有问必答:如何通过代理服务器安装 Ubuntu 桌面版
Linux 有问必答 - 如何通过代理服务器安装 Ubuntu 桌面
================================================================================ ================================================================================
> **问题**: 我的电脑通过 HTTP 代理连接到公司网络。当我尝试从 CD-ROM 在计算机上安装 Ubuntu 桌面时在检索文件时安装程序会被挂起检索则不会完成这可能是由于代理造成的。然而问题是Ubuntu 的安装程序从不要求我在安装过程中配置代理。那我该怎么使用代理来安装 Ubuntu 桌面? > **问题**: 我的电脑通过 HTTP 代理连接到公司网络。当我尝试从 CD-ROM 在计算机上安装 Ubuntu 桌面时在检索文件时安装程序会被挂起检索则不会完成这可能是由于代理造成的。然而问题是Ubuntu 的安装程序从不要求我在安装过程中配置代理。那我该怎么使用代理来安装 Ubuntu 桌面?
@ -56,7 +55,7 @@ via: http://ask.xmodulo.com/install-ubuntu-desktop-behind-proxy.html
作者:[Dan Nanni][a] 作者:[Dan Nanni][a]
译者:[strugglingyouth](https://github.com/strugglingyouth) 译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,156 @@
如何在 Linux 上使用 Gmail SMTP 服务器发送邮件通知
================================================================================
假定你想配置一个 Linux 应用,用于从你的服务器或桌面客户端发送邮件信息。邮件信息可能是邮件简报、状态更新(如 [Cachet][1])、监控警报(如 [Monit][2])、磁盘事件(如 [RAID mdadm][3])等等。与其建立并维护自己的[邮件发送服务器][4],你可以使用一个免费的公共 SMTP 服务器来传递信息,从而避免遭受维护之苦。
谷歌的 Gmail 服务就是最可靠的 **免费 SMTP 服务器** 之一。想要从应用中发送邮件通知,你仅需在应用中添加 Gmail 的 SMTP 服务器地址和你的身份凭证即可。
使用 Gmail 的 SMTP 服务器会遇到一些限制,这些限制主要用于阻止那些经常滥用服务器来发送垃圾邮件和使用邮件营销的家伙。举个例子,你一次只能给至多 100 个地址发送信息,并且一天不能超过 500 个收件人。同样,如果你不想被标为垃圾邮件发送者,你就不能发送过多的不可投递的邮件。当你达到任何一个限制,你的 Gmail 账户将被暂时的锁定一天。简而言之Gmail 的 SMTP 服务器对于你个人的使用是非常棒的,但不适合商业的批量邮件。
说了这么多,是时候向你们展示 **如何在 Linux 环境下使用 Gmail 的 SMTP 服务器** 了。
### Google Gmail SMTP 服务器设置 ###
如果你想要通过你的应用使用 Gmail 的 SMTP 服务器发送邮件,请牢记接下来的详细说明。
- **邮件发送服务器 SMTP 服务器)**: smtp.gmail.com
- **使用认证**: 是
- **使用安全连接**: 是
- **用户名**: 你的 Gmail 账户 ID比如你的邮箱为 alice@gmail.com则用户名为 “alice”
- **密码**: 你的 Gmail 密码
- **端口**: 587
确切的配置根据应用会有所不同。在本教程的剩余部分,我将向你展示一些在 Linux 上使用 Gmail SMTP 服务器的应用示例。
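在进入具体应用之前,先用 Python 标准库给出一个对应上述参数的发送示意(其中账户、密码和收件人均为占位符,实际发送需要网络与有效凭证,示例默认不执行发送):

```python
import smtplib
from email.mime.text import MIMEText

def build_message(sender, recipient, subject, body):
    """构造一封纯文本邮件。"""
    msg = MIMEText(body)
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    return msg

def send_via_gmail(gmail_id, password, msg):
    """通过 smtp.gmail.com 的 587 端口STARTTLS发送邮件。"""
    with smtplib.SMTP("smtp.gmail.com", 587) as server:
        server.starttls()
        server.login(gmail_id, password)
        server.send_message(msg)

msg = build_message("<gmail-id>@gmail.com", "alice@yahoo.com",
                    "This is an email subject", "This is an email body.")
# send_via_gmail("<gmail-id>", "<gmail-password>", msg)  # 取消注释后才会真正发送
```

587 端口加 STARTTLS 正对应上表中“使用安全连接:是”的设置。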
### 从命令行发送邮件 ###
作为第一个例子,让我们尝试最基本的邮件功能:使用 Gmail SMTP 服务器从命令行发送一封邮件。为此,我将使用一个称为 mutt 的命令行邮件客户端。
先安装 mutt
对于基于 Debian 的系统:
$ sudo apt-get install mutt
对于基于 Red Hat 的系统:
$ sudo yum install mutt
创建一个 mutt 配置文件(~/.muttrc并和下面一样在文件中指定 Gmail SMTP 服务器信息。将 \<gmail-id> 替换成自己的 Gmail ID。注意该配置只是为了发送邮件而已而非接收邮件
$ vi ~/.muttrc
----------
set from = "<gmail-id>@gmail.com"
set realname = "Dan Nanni"
set smtp_url = "smtp://<gmail-id>@smtp.gmail.com:587/"
set smtp_pass = "<gmail-password>"
一切就绪,使用 mutt 发送一封邮件:
$ echo "This is an email body." | mutt -s "This is an email subject" alice@yahoo.com
想在一封邮件中添加附件,使用 "-a" 选项
$ echo "This is an email body." | mutt -s "This is an email subject" alice@yahoo.com -a ~/test_attachment.jpg
![](https://c1.staticflickr.com/1/770/22239850784_5fb0988075_c.jpg)
使用 Gmail SMTP 服务器意味着邮件将显示是从你 Gmail 账户发出的。换句话说,收件人将视你的 Gmail 地址为发件人地址。如果你想要使用自己的域名作为邮件发送方,你需要使用 Gmail SMTP 转发服务。
### 当服务器重启时发送邮件通知 ###
如果你在 [虚拟专用服务器VPS][5] 上跑了些重要的网站,建议监控 VPS 的重启行为。作为一个更为实用的例子,让我们研究如何在你的 VPS 上为每一次重启事件建立邮件通知。这里假设你的 VPS 使用的是 systemd下面将展示如何创建一个自定义的 systemd 启动服务来实现自动邮件通知。
首先创建下面的脚本 reboot_notify.sh用于负责邮件通知。
$ sudo vi /usr/local/bin/reboot_notify.sh
----------
#!/bin/sh
echo "`hostname` was rebooted on `date`" | mutt -F /etc/muttrc -s "Notification on `hostname`" alice@yahoo.com
----------
$ sudo chmod +x /usr/local/bin/reboot_notify.sh
在这个脚本中,我使用 "-F" 选项,用于指定系统级的 mutt 配置文件位置。因此不要忘了创建 /etc/muttrc 文件,并如前面描述的那样填入 Gmail SMTP 信息。
现在让我们创建如下一个自定义的 systemd 服务。
$ sudo mkdir -p /usr/local/lib/systemd/system
$ sudo vi /usr/local/lib/systemd/system/reboot-task.service
----------
[Unit]
Description=Send a notification email when the server gets rebooted
DefaultDependencies=no
Before=reboot.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/reboot_notify.sh
[Install]
WantedBy=reboot.target
在创建服务后,添加并启动该服务。
$ sudo systemctl enable reboot-task
$ sudo systemctl start reboot-task
从现在起,在每次 VPS 重启时,你将会收到一封通知邮件。
![](https://c1.staticflickr.com/1/608/22241452923_2ace9cde2e_c.jpg)
### 通过服务器监控程序发送邮件通知 ###
作为最后一个例子,让我展示一个现实生活中的应用程序 [Monit][6],这是一款极其有用的服务器监控应用程序。它带有全面的 [VPS][7] 监控能力(比如 CPU、内存、进程、文件系统以及邮件通知功能。
如果你想要接收 VPS 上由 Monit 产生的任何事件的邮件通知,你可以在 Monit 配置文件中添加以下 SMTP 信息。
set mailserver smtp.gmail.com port 587
username "<your-gmail-ID>" password "<gmail-password>"
using tlsv12
set mail-format {
from: <your-gmail-ID>@gmail.com
subject: $SERVICE $EVENT at $DATE on $HOST
message: Monit $ACTION $SERVICE $EVENT at $DATE on $HOST : $DESCRIPTION.
Yours sincerely,
Monit
}
# the person who will receive notification emails
set alert alice@yahoo.com
这是一个因为 CPU 负载超载而由 Monit 发送的邮件通知的例子。
![](https://c1.staticflickr.com/1/566/22873764251_8fe66bfd16_c.jpg)
### 总结 ###
如你所见,类似 Gmail 这样免费的 SMTP 服务器有着这么多不同的运用方式 。但再次重申,请牢记免费的 SMTP 服务器不适用于商业用途,仅仅适用于个人项目。无论你正在哪款应用中使用 Gmail SMTP 服务器,欢迎自由分享你的用例。
--------------------------------------------------------------------------------
via: http://xmodulo.com/send-email-notifications-gmail-smtp-server-linux.html
作者:[Dan Nanni][a]
译者:[cposture](https://github.com/cposture)
校对:[martin2011qi](https://github.com/martin2011qi), [wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/setup-system-status-page.html
[2]:http://xmodulo.com/server-monitoring-system-monit.html
[3]:http://xmodulo.com/create-software-raid1-array-mdadm-linux.html
[4]:http://xmodulo.com/mail-server-ubuntu-debian.html
[5]:http://xmodulo.com/go/digitalocean
[6]:http://xmodulo.com/server-monitoring-system-monit.html
[7]:http://xmodulo.com/go/digitalocean
@ -4,7 +4,7 @@
如果你正好拥有全球第一支运行 Ubuntu 的手机并且希望将 **BQ Aquaris E4.5 自带的 Ubuntu 系统换成 Android **,那这篇文章能帮你点小忙。 如果你正好拥有全球第一支运行 Ubuntu 的手机并且希望将 **BQ Aquaris E4.5 自带的 Ubuntu 系统换成 Android **,那这篇文章能帮你点小忙。
有一万种理由来解释为什么要将 Ubuntu 换成主流 Android OS。其中最主要的一个就是这个系统本身仍然处于非常早期的阶段针对的目标用户仍然是开发者和爱好者。不管你的理由是什么要谢谢 bq 提供的工具,让我们能非常轻松地在 BQ Aquaris 上安装 Android OS。 有一万种理由来解释为什么要将 Ubuntu 换成主流 Android OS。其中最主要的一个就是这个系统本身仍然处于非常早期的阶段针对的目标用户仍然是开发者和爱好者。不管你的理由是什么要谢谢 BQ 提供的工具,让我们能非常轻松地在 BQ Aquaris 上安装 Android OS。
下面让我们一起看下在 BQ Aquaris 上安装 Android 需要做哪些事情。 下面让我们一起看下在 BQ Aquaris 上安装 Android 需要做哪些事情。
@ -20,7 +20,7 @@
#### 第一步:下载 Android 固件 #### #### 第一步:下载 Android 固件 ####
首先是下载可以在 BQ Aquaris E4.5 上运行的 Android 固件。幸运的是我们可以在 bq 的技术支持网站找到。可以从下面的链接直接下载,差不多 650 MB 首先是下载可以在 BQ Aquaris E4.5 上运行的 Android 固件。幸运的是我们可以在 BQ 的技术支持网站找到。可以从下面的链接直接下载,差不多 650 MB
- [下载为 BQ Aquaris E4.5 制作的 Android][1] - [下载为 BQ Aquaris E4.5 制作的 Android][1]
@ -28,21 +28,21 @@
我建议去[ bq 的技术支持网站][2]下载最新的固件。 我建议去[ bq 的技术支持网站][2]下载最新的固件。
下载完成后解压。在解压后的目录里,找到一个名字是 **MT6582_Android_scatter.txt** 的文件。后面将要用到它。 下载完成后解压。在解压后的目录里,找到一个名字是 **MT6582\_Android\_scatter.txt** 的文件。后面将要用到它。
#### 第二步:下载刷机工具 #### #### 第二步:下载刷机工具 ####
bq 已经提供了自己的刷机工具Herramienta MTK Flash Tool可以轻松地给设备安装 Andriod 或者 Ubuntu 系统。你可以从下面的链接下载工具: BQ 已经提供了自己的刷机工具Herramienta MTK Flash Tool可以轻松地给设备安装 Andriod 或者 Ubuntu 系统。你可以从下面的链接下载工具:
- [下载 MTK Flash Tool][3] - [下载 MTK Flash Tool][3]
考虑到刷机工具在以后可能会升级,你总是可以从[bq 技术支持网站][4]上找到最新的版本。 考虑到刷机工具在以后可能会升级,你总是可以从 [BQ 技术支持网站][4]上找到最新的版本。
下载完后解压。之后应该可以在目录里找到一个叫 **flash_tool** 的可执行文件。我们稍后会用到。 下载完后解压。之后应该可以在目录里找到一个叫 **flash_tool** 的可执行文件。我们稍后会用到。
#### 第三步:移除冲突的软件包(可选) #### #### 第三步:移除冲突的软件包(可选) ####
如果你正在用最新版本的 Ubuntu 或 基于 Ubuntu 的 Linux 发行版,稍后可能会碰到 “BROM ERROR : S_UNDEFINED_ERROR (1001)” 错误。 如果你正在用最新版本的 Ubuntu 或 基于 Ubuntu 的 Linux 发行版,稍后可能会碰到 “BROM ERROR : S\_UNDEFINED\_ERROR (1001)” 错误。
要避免这个错误,你需要卸载有冲突的软件包。可以使用下面的命令: 要避免这个错误,你需要卸载有冲突的软件包。可以使用下面的命令:
@ -52,7 +52,7 @@ bq 已经提供了自己的刷机工具Herramienta MTK Flash Tool可以轻
sudo service udev restart sudo service udev restart
检查一下内核模块 cdc_acm 可能存在的边际效应,运行下面的命令: 检查一下内核模块 cdc_acm 可能存在的副作用,运行下面的命令:
lsmod | grep cdc_acm lsmod | grep cdc_acm
@ -76,7 +76,7 @@ bq 已经提供了自己的刷机工具Herramienta MTK Flash Tool可以轻
![Replace Ubuntu with Android](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-1.jpeg) ![Replace Ubuntu with Android](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-1.jpeg)
还记得之前第一步里提到的 **MT6582_Android_scatter.txt** 文件吗?这个文本文件就在你第一步中下载的 Android 固件解压后的目录里。点击 Scatter-loading上图中然后选中 MT6582_Android_scatter.txt 文件。 还记得之前第一步里提到的 **MT6582\_Android\_scatter.txt** 文件吗?这个文本文件就在你第一步中下载的 Android 固件解压后的目录里。点击 Scatter-loading上图中然后选中 MT6582\_Android\_scatter.txt 文件。
之后,你将看到类似下面图片里的一些绿色线条: 之后,你将看到类似下面图片里的一些绿色线条:
@ -104,7 +104,7 @@ bq 已经提供了自己的刷机工具Herramienta MTK Flash Tool可以轻
### 总结 ### ### 总结 ###
要感谢厂商提供的工具,让我们可以轻松地 **在 bq Ubuntu 手机上刷 Android**。当然,你可以使用相同的步骤将 Android 替换回 Ubuntu。只是下载的时候选 Ubuntu 固件而不是 Android。 要感谢厂商提供的工具,让我们可以轻松地 **在 BQ Ubuntu 手机上刷 Android**。当然,你可以使用相同的步骤将 Android 替换回 Ubuntu。只是下载的时候选 Ubuntu 固件而不是 Android。
希望这篇文章可以帮你将你的 bq 手机上的 Ubuntu 刷成 Android。如果有什么问题或建议可以在下面留言区里讨论。 希望这篇文章可以帮你将你的 bq 手机上的 Ubuntu 刷成 Android。如果有什么问题或建议可以在下面留言区里讨论。
@ -114,7 +114,7 @@ via: http://itsfoss.com/install-android-ubuntu-phone/
作者:[Abhishek][a] 作者:[Abhishek][a]
译者:[zpl1025](https://github.com/zpl1025) 译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,4 +1,4 @@
Linux 有问必答 - 如何在 Linux 上安装 Node.js Linux 有问必答如何在 Linux 上安装 Node.js
================================================================================ ================================================================================
> **问题**: 如何在你的 Linux 发行版上安装 Node.js > **问题**: 如何在你的 Linux 发行版上安装 Node.js
@ -6,7 +6,7 @@ Linux 有问必答 - 如何在 Linux 上安装 Node.js
在本教程中,我将介绍 **如何在主流 Linux 发行版上安装 Node.js包括 DebianUbuntuFedora 和 CentOS** 在本教程中,我将介绍 **如何在主流 Linux 发行版上安装 Node.js包括 DebianUbuntuFedora 和 CentOS**
Node.js 在一些发行版上作为预构建的程序包Fedora 或 Ubuntu而在其他发行版上你需要源码安装。由于 Node.js 发展比较快,建议从源码安装最新版而不是安装一个过时的预构建的程序包。最新的 Node.js 自带 npmNode.js 的包管理器),让你可以轻松的安装 Node.js 的外部模块。 Node.js 在一些发行版上预构建的程序包Fedora 或 Ubuntu而在其他发行版上你需要通过源码安装。由于 Node.js 发展比较快,建议从源码安装最新版而不是安装一个过时的预构建的程序包。最新的 Node.js 自带 npmNode.js 的包管理器),让你可以轻松的安装 Node.js 的外部模块。
### 在 Debian 上安装 Node.js ### ### 在 Debian 上安装 Node.js ###
@ -64,7 +64,6 @@ Node.js 被包含在 Fedora 的 base 仓库中。因此,你可以在 Fedora
### 在 Arch Linux 上安装 Node.js ### ### 在 Arch Linux 上安装 Node.js ###
Node.js is available in the Arch Linux community repository. Thus installation is as simple as running:
Node.js 在 Arch Linux 的社区库中可以找到。所以安装很简单,只要运行: Node.js 在 Arch Linux 的社区库中可以找到。所以安装很简单,只要运行:
@ -82,7 +81,7 @@ via: http://ask.xmodulo.com/install-node-js-linux.html
作者:[Dan Nanni][a] 作者:[Dan Nanni][a]
译者:[strugglingyou](https://github.com/strugglingyou) 译者:[strugglingyou](https://github.com/strugglingyou)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,57 +1,51 @@
GHLandy Translated
GIMP 过去的 20 年:一点一滴的进步 GIMP 过去的 20 年:一点一滴的进步
================================================================================ ================================================================================
youtube 视频 youtube 视频
<iframe width="660" height="371" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/PSJAzJ6mkVw?feature=oembed"></iframe> <iframe width="660" height="371" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/PSJAzJ6mkVw?feature=oembed"></iframe>
[GIMP][1]GNU 图像处理程序)—— 一流的开源免费图像处理程序。加州大学伯克利分校的 Peter Mattis 和 Spencer Kimball 最早在 1995 年的时候就进行了该程序的开发。到了 1997 年,该程序成为了 [GNU Project][2] 官方的一部分,并正式更名为 GIMP。时至今日GIMP 已经成为了最好的图像编辑器之一,并有最受欢迎的 “GIMP vs Photoshop” 之争。 [GIMP][1]GNU 图像处理程序GNU Image Manipulation Program—— 一流的开源自由的图像处理程序。加州大学伯克利分校的 Peter Mattis 和 Spencer Kimball 早在 1995 年的时候开始了该程序的开发。到了 1997 年,该程序成为了 [GNU Project][2] 官方的一部分,并正式更名为 GIMP。时至今日GIMP 已经成为了最好的图像编辑器之一,并有经常有 “GIMP vs Photoshop” 之争。
1995 年 11 月 21 日,首版发布: ### 1995 年 11 月 21 日,首版发布###
> 发布者: Peter Mattis ```
> From: Peter Mattis
> 发布主题: ANNOUNCE: The GIMP Subject: ANNOUNCE: The GIMP
> Date: 1995-11-21
> 日期: 1995-11-21 Message-ID: <48s543$r7b@agate.berkeley.edu>
> Newsgroups: comp.os.linux.development.apps,comp.os.linux.misc,comp.windows.x.apps
> 消息ID: <48s543$r7b@agate.berkeley.edu>
>
> 新闻组: comp.os.linux.development.apps,comp.os.linux.misc,comp.windows.x.apps
>
> GIMP通用图像处理程序
> ------------------------------------------------
>
> GIMP 是为各种图像编辑操作提供一个直观的图形界面而设计的。
>
> 以下是 GIMP 的主要功能介绍:
>
> 图像查看
> -------------
>
> * 支持 8 位15 位16 位和 24 位颜色
> * 8 位色显示的图像序列的稳定算法
> * 以 RGB 色、灰度和索引色模式查看图像
> * 同时编辑多个图像
> * 实时缩放和全图查看
> * 支持 GIF、JPEG、PNG、TIFF 和 XPM 格式
>
> 图像编辑
> -------------
>
> * 选区工具:包括矩形、椭圆、自由、模糊、贝尔赛曲线以及智能
> * 变换工具:包括旋转、缩放、剪切和翻转
> * 绘画工具:包括油漆桶、笔刷、喷枪、克隆、卷积、混合和文本
> * 效果滤镜:如模糊和边缘检测
> * 通道和颜色操作:叠加、反相和分解
> * 组件功能:允许你方便的添加新的文件格式和效果滤镜
> * 多步撤销/重做功能
1996 年GIMP 0.54 版 GIMP通用图像处理程序
------------------------------------------------
GIMP 是为各种图像编辑操作提供一个直观的图形界面而设计的。
以下是 GIMP 的主要功能介绍:
图像查看
-------------
* 支持 8 位15 位16 位和 24 位颜色
* 8 位色显示图像的排序和 Floyd-Steinberg 抖动算法
* 以 RGB 色、灰度和索引色模式查看图像
* 同时编辑多个图像
* 实时缩放和全图查看
* 支持 GIF、JPEG、PNG、TIFF 和 XPM 格式
图像编辑
-------------
* 选区工具:包括矩形、椭圆、自由、模糊、贝尔赛曲线以及智能
* 变换工具:包括旋转、缩放、剪切和翻转
* 绘画工具:包括油漆桶、笔刷、喷枪、克隆、卷积、混合和文本
* 效果滤镜:如模糊和边缘检测
* 通道和颜色操作:叠加、反相和分解
* 组件功能:允许你方便的添加新的文件格式和效果滤镜
* 多步撤销/重做功能
```
### 1996 年GIMP 0.54 版 ###
![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/054.png) ![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/054.png)
GIMP 0.54 版需要具备 X11 显示、X-server 以及 Motif 1.2 微件,支持 8 位、15 位、16 位和 24 位的颜色深度和灰度,支持 GIF、JPEG、PNG、TIFF 和 XPM 图像格式。 GIMP 0.54 版需要具备 X11 显示、X-server 以及 Motif 1.2 件,支持 8 位、15 位、16 位和 24 位的颜色深度和灰度,支持 GIF、JPEG、PNG、TIFF 和 XPM 图像格式。
基本功能:具备矩形、椭圆、自由、模糊、贝塞尔曲线和智能等选择工具,旋转、缩放、剪切、克隆、混合和翻转等变换工具。 基本功能:具备矩形、椭圆、自由、模糊、贝塞尔曲线和智能等选择工具,旋转、缩放、剪切、克隆、混合和翻转等变换工具。
@ -66,7 +60,7 @@ GIMP 0.54 版可以在 Linux、HP-UX、Solaris 和 SGI IRIX 中运行。
这只是一个开发版本并非面向用户发布的。GIMP 有了新的工具包——GDKGIMP Drawing KitGIMP 绘图工具)和 GTKGIMP ToolkitGIMP 工具包),并弃用 Motif。GIMP 工具包随后也发展成为了 GTK+ 跨平台的微件工具包。新特性: 这只是一个开发版本并非面向用户发布的。GIMP 有了新的工具包——GDKGIMP Drawing KitGIMP 绘图工具)和 GTKGIMP ToolkitGIMP 工具包),并弃用 Motif。GIMP 工具包随后也发展成为了 GTK+ 跨平台的微件工具包。新特性:
- 基本的图层功能 - 基本的图层功能
- 子像素采集 - 子像素取样
- 笔刷间距 - 笔刷间距
- 改进剂喷枪功能 - 改进剂喷枪功能
- 绘制模式 - 绘制模式
@ -75,7 +69,7 @@ GIMP 0.54 版可以在 Linux、HP-UX、Solaris 和 SGI IRIX 中运行。
![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/099.png) ![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/099.png)
从 0.99 版本开始GIMP 有了宏脚本的支持。GTK 及 GTK 功能增强版正式更名为 GTK+。其他更新: 从 0.99 版本开始GIMP 有了宏脚本的支持。GTK 及 GDK 功能增强版正式更名为 GTK+。其他更新:
- 支持大体积图像(大于 100M - 支持大体积图像(大于 100M
- 新增原生格式 XCF - 新增原生格式 XCF
@ -87,8 +81,8 @@ GIMP 0.54 版可以在 Linux、HP-UX、Solaris 和 SGI IRIX 中运行。
GIMP 和 GTK+ 开始分为两个不同的项目。GIMP 官网进行重构,包含新教程、组件和文档。新特性: GIMP 和 GTK+ 开始分为两个不同的项目。GIMP 官网进行重构,包含新教程、组件和文档。新特性:
- 基于瓦片式的内存管理 - 基于瓦片式tile的内存管理
- 组件 API 做了大改变 - 组件 API 做了大改变
- XFC 格式现在支持图层、导航和选择 - XFC 格式现在支持图层、导航和选择
- web 界面 - web 界面
- 在线图像生成 - 在线图像生成
@ -106,7 +100,7 @@ GIMP 和 GTK+ 开始分为两个不同的项目。GIMP 官网进行重构,包
- 保存前可以进行图像预览 - 保存前可以进行图像预览
- 按比例缩放的笔刷进行预览 - 按比例缩放的笔刷进行预览
- 通过路径进行递归选择 - 通过路径进行递归选择
- 新的窗口导航 - 新的导航窗口
- 支持图像拖拽 - 支持图像拖拽
- 支持水印 - 支持水印
@ -138,7 +132,7 @@ GIMP 和 GTK+ 开始分为两个不同的项目。GIMP 官网进行重构,包
- 更新了图形界面 - 更新了图形界面
- 新的选择工具 - 新的选择工具
- 继承了 GEGL GEneric Graphics Library通用图形库 - 集成了 GEGL GEneric Graphics Library通用图形库
- 为 MDI 行为实现了实用程序窗口提示 - 为 MDI 行为实现了实用程序窗口提示
### 2012 年GIMP 2.8 版 ### ### 2012 年GIMP 2.8 版 ###
@ -160,7 +154,7 @@ via: https://tlhp.cf/20-years-of-gimp-evolution/
作者:[Pavlo Rudyi][a] 作者:[Pavlo Rudyi][a]
译者:[GHLandy](https://github.com/GHLandy) 译者:[GHLandy](https://github.com/GHLandy)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,65 @@
如何选择文件系统EXT4、Btrfs 和 XFS
================================================================================
![](http://1969324071.rsc.cdn77.org/wp-content/uploads/2015/09/1385698302_funny_linux_wallpapers.jpg)
老实说人们最不曾思考的问题之一是他们的个人电脑中使用了什么文件系统。Windows 和 Mac OS X 用户更没有理由去考虑,因为对于他们的操作系统,只有一种选择,那就是 NTFS 和 HFS+。相反,对于 Linux 系统而言,有很多种文件系统可以选择,现在默认的是广泛采用的 ext4。然而现在也有改用一种称为 btrfs 文件系统的趋势。那是什么使得 btrfs 更优秀,其它的文件系统又是什么,什么时候我们又能看到 Linux 发行版作出改变呢?
首先让我们对文件系统以及它们真正干什么有个总体的认识,然后我们再对一些有名的文件系统做详细的比较。
### 文件系统是干什么的? ###
如果你不清楚文件系统是干什么的,一句话总结起来也非常简单。文件系统主要用于控制所有程序在不使用数据时如何存储数据、如何访问数据以及有什么其它信息(元数据)和数据本身相关,等等。听起来要编程实现并不是轻而易举的事情,实际上也确实如此。文件系统一直在改进,包括了更多的功能、更高效地完成它需要做的事情。总而言之,它是所有计算机的基本需求、但并不像听起来那么简单。
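顺带一提,想查看当前系统某个挂载点用的是哪种文件系统,可以直接读取 /proc/mounts下面是一个简单的 Python 片段,仅为示意,用 `df -T` 或 `findmnt` 等命令也能得到同样的信息:

```python
def fs_type(mountpoint="/"):
    """从 /proc/mounts 中查出指定挂载点的文件系统类型。"""
    fstype = None
    with open("/proc/mounts") as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 3 and fields[1] == mountpoint:
                fstype = fields[2]  # 同一挂载点以最后一条挂载记录为准
    return fstype

print(fs_type("/"))  # 例如 ext4、xfs或容器环境里常见的 overlay
```

读完本文后,不妨先用它确认一下自己桌面或服务器当前的默认文件系统。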
### 为什么要分区? ###
由于每个操作系统都能创建或者删除分区很多人对分区都有模糊的认识。Linux 操作系统即便使用标准安装过程,在同一块磁盘上仍使用多个分区,这看起来很奇怪,因此需要一些解释。拥有不同分区的一个主要目的就是为了在灾难发生时能获得更好的数据安全性。
通过将硬盘划分为分区,数据会被分隔以及重组。当事故发生的时候,只有存储在被损坏分区上的数据会被破坏,很大可能上其它分区的数据能得以保留。这个原因可以追溯到 Linux 操作系统还没有日志文件系统、任何电力故障都有可能导致灾难发生的时候。
使用分区也考虑到了安全和健壮性原因,因此操作系统部分损坏并不意味着整个计算机就有风险或者会受到破坏。这也是当前采用分区的一个最重要因素。举个例子,用户创建了一些会填满磁盘的脚本、程序或者 web 应用,如果该磁盘只有一个大的分区,如果磁盘满了那么整个系统就不能工作。如果用户把数据保存在不同的分区,那么就只有那个分区会受到影响,而系统分区或者其它数据分区仍能正常运行。
记住拥有一个日志文件系统只能在掉电或者和存储设备意外断开连接时提供数据安全性并不能在文件系统出现坏块或者发生逻辑错误时保护数据。对于这种情况用户可以采用廉价磁盘冗余阵列RAIDRedundant Array of Inexpensive Disks的方案。
### 为什么要切换文件系统? ###
ext4 文件系统由 ext3 文件系统改进而来,而后者又是从 ext2 文件系统改进而来。虽然 ext4 文件系统已经非常稳定,是过去几年中绝大部分发行版的默认选择,但它是基于陈旧的代码开发而来。另外, Linux 操作系统用户也需要很多 ext4 文件系统本身不提供的新功能。虽然通过某些软件能满足这种需求,但性能会受到影响,在文件系统层次做到这些能获得更好的性能。
### Ext4 文件系统 ###
ext4 还有一些明显的限制。最大文件大小是 16 tebibytes大概是 17.6 terabytes这比普通用户当前能买到的硬盘还要大的多。使用 ext4 能创建的最大卷/分区是 1 exbibyte大概是 1,152,921.5 terabytes。通过使用多种技巧 ext4 比 ext3 有很大的速度提升。类似一些最先进的文件系统,它是一个日志文件系统,意味着它会对文件在磁盘中的位置以及任何其它对磁盘的更改做记录。纵观它的所有功能,它还不支持透明压缩、重复数据删除或者透明加密。技术上支持了快照,但该功能还处于实验性阶段。
### Btrfs 文件系统 ###
btrfs 有很多不同的叫法,例如 Better FS、Butter FS 或者 B-Tree FS。它是一个几乎完全从头开发的文件系统。btrfs 出现的原因是它的开发者起初希望扩展文件系统的功能使得它包括快照、池化pooling、校验以及其它一些功能。虽然和 ext4 无关,它也希望能保留 ext4 中能使消费者和企业受益的功能,并整合额外的能使每个人,尤其是企业受益的功能。对于使用大型软件以及大规模数据库的企业,让多种不同的硬盘看起来一致的文件系统能使他们受益并且使数据整合变得更加简单。删除重复数据能降低数据实际使用的空间,当需要镜像一个单一而巨大的文件系统时使用 btrfs 也能使数据镜像变得简单。
用户当然可以继续选择创建多个分区从而无需镜像任何东西。考虑到这种情况btrfs 能横跨多种硬盘,和 ext4 相比,它能支持 16 倍以上的磁盘空间。btrfs 文件系统一个分区最大是 16 exbibytes最大的文件大小也是 16 exbibytes。
### XFS 文件系统 ###
XFS 文件系统是扩展文件系统extent file system的一个扩展。XFS 是 64 位高性能日志文件系统。对 XFS 的支持大概在 2002 年合并到了 Linux 内核,到了 2009 年,红帽企业版 Linux 5.4 也支持了 XFS 文件系统。对于 64 位文件系统XFS 支持的最大文件系统大小为 8 exbibytes。XFS 文件系统有一些缺陷例如它不能压缩删除大量文件时性能低下。目前RHEL 7.0 默认使用 XFS 文件系统。
### 总结 ###
不幸的是,还不知道 btrfs 什么时候能到来。官方说,其下一代文件系统仍然被归类为“不稳定”,但是如果用户下载最新版本的 Ubuntu就可以选择安装到 btrfs 分区上。什么时候 btrfs 会被归类到 “稳定” 仍然是个谜, 直到真的认为它“稳定”之前,用户也不应该期望 Ubuntu 会默认采用 btrfs。有报道说 Fedora 18 会用 btrfs 作为它的默认文件系统,因为到了发布它的时候,应该有了 btrfs 文件系统校验器。由于还没有实现所有的功能,另外和 ext4 相比性能上也比较缓慢btrfs 还有很多的工作要做。
那么,究竟使用哪个更好呢?尽管性能几乎相同,但 ext4 还是赢家。为什么呢?答案在于易用性以及广泛性。对于桌面或者工作站, ext4 仍然是一个很好的文件系统。由于它是默认提供的文件系统,用户可以在上面安装操作系统。同时, ext4 支持最大 1 exabytes 的卷和 16 terabytes 的文件,因此考虑到大小,它也还有很大的进步空间。
btrfs 能提供更大的高达 16 exabytes 的卷以及更好的容错,但是,到现在为止,它感觉更像是一个附加的文件系统,而不是一个完全集成到 Linux 操作系统的文件系统。比如,尽管 btrfs 支持不同的发行版,使用 btrfs 格式化硬盘之前先要有 btrfs-tools 工具,这意味着安装 Linux 操作系统的时候它并不是一个可选项,即便不同发行版之间会有所不同。
尽管传输速率非常重要评价一个文件系统除了文件传输速度之外还有很多因素。btrfs 有很多好用的功能例如写复制Copy-on-Write、扩展校验、快照、清洗、自修复数据、冗余删除以及其它保证数据完整性的功能。和 ZFS 相比 btrfs 缺少 RAID-Z 功能,因此对于 btrfs RAID 还处于实验性阶段。对于单纯的数据存储,和 ext4 相比 btrfs 似乎更加优秀,但时间会验证一切。
迄今为止对于桌面系统而言ext4 似乎是一个更好的选择,因为它是默认的文件系统,传输文件时也比 btrfs 更快。btrfs 当然值得尝试、但要在桌面 Linux 上完全取代 ext4 可能还需要一些时间。数据场和大存储池会揭示关于 ext4、XCF 以及 btrfs 不同的场景和差异。
如果你有不同或者其它的观点,在下面的评论框中告诉我们吧。
--------------------------------------------------------------------------------
via: http://www.unixmen.com/review-ext4-vs-btrfs-vs-xfs/
作者:[M.el Khamlichi][a]
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/pirat9/
@ -1,35 +1,34 @@
如何在 Ubuntu 14.04, 15.10 中安装Light Table 0.8 如何在 Ubuntu 中安装 Light Table 0.8
================================================================================ ================================================================================
![](http://ubuntuhandbook.org/wp-content/uploads/2014/11/LightTable-IDE-logo-icon.png) ![](http://ubuntuhandbook.org/wp-content/uploads/2014/11/LightTable-IDE-logo-icon.png)
Light Table 在经过一年以上的开发,已经推出了新的稳定发行版本。现在它只为 Linux 提供64位的二进制包。 Light Table 在经过一年以上的开发,已经推出了新的稳定发行版本。现在它只为 Linux 提供64位的二进制包。
LightTable 0.8.0的改动: LightTable 0.8.0的改动:
- 更改: 我们从 NW.js 中选择了 Electron - 更改: 我们从 NW.js 切换到了 Electron
- 更改: LTs 发行版本与自更新进程在github上面完全的公开 - 更改: Light Table 的发行与自更新进程完全地公开在github上
- 增加: LT 可以由提供的脚本从源码在支持的不同平台上安装 - 增加: Light Table 可以用提供的脚本在各个支持的平台上从源码构建
- 增加: LTs 大部分的代码库将用npm依赖来安装以取代以forked库安装 - 增加: Light Table 大部分的 node 代码库将通过 npm 依赖来安装,以取代以前采用分叉库的方式
- 增加: 有效文档. 更多详情内容见下面 - 增加: 有效文档更多详情内容见下面
- 修复: 版本号>= OSX 10.10的系统下工作的主要的可用性问题 - 修复: 版本号 >= OSX 10.10的系统下的主要的可用性问题
- 更改: 32位Linux不再提供官方包文件下载从源码安装仍旧将被支持 - 更改: 官方不再提供 32位 Linux 软件包下载,不过仍然支持从源码构建
- 修复: ClojureScript eval 在ClojureScript的现代版本可以正常工作 - 修复: ClojureScript eval 支持 ClojureScript 的现代版本
- 参阅更多 [github.com/LightTable/LightTable/releases][1] - 参阅更多 [github.com/LightTable/LightTable/releases][1]
![LightTable 0.8.0](http://ubuntuhandbook.org/wp-content/uploads/2015/12/lighttable-08.jpg) ![LightTable 0.8.0](http://ubuntuhandbook.org/wp-content/uploads/2015/12/lighttable-08.jpg)
### 如何在Ubuntu中安装Light Table 0.8.0: ### ### 如何在 Ubuntu 中安装 Light Table 0.8.0 ###
下面的步骤回指导你怎么样在Ubuntu下安装官方的二进制包在目前Ubuntu发行版本都适用(**仅仅针对64位**)。 下面的步骤会指导你怎么样在 Ubuntu 下安装官方的二进制包,在目前的 Ubuntu 发行版本中都适用(**仅仅针对64位**)。
在开始之前,如果你安装了之前的版本请做好备份。 在开始之前,如果你安装了之前的版本请做好备份。
**1.** **1.** 从以下链接下载 LightTable Linux 下的二进制文件:
从以下链接下载LightTable Linux下的二进制文件
- [lighttable-0.8.0-linux.tar.gz][2] - [lighttable-0.8.0-linux.tar.gz][2]
**2.** **2.** 从 dash 或是应用启动器,或者是 Ctrl+Alt+T 快捷键打开终端,并且在输入以下命令后敲击回车键:
从dash或是应用启动器或者是Ctrl+Alt+T快捷键打开终端并且在输入以下命令后敲击回车键
gksudo file-roller ~/Downloads/lighttable-0.8.0-linux.tar.gz gksudo file-roller ~/Downloads/lighttable-0.8.0-linux.tar.gz
@ -37,9 +36,7 @@ LightTable 0.8.0的改动:
如果命令不工作的话从 Ubuntu 软件中心安装`gksu`。 如果命令不工作的话从 Ubuntu 软件中心安装`gksu`。
**3.** **3.** 之前的命令使用了 root 用户权限通过档案管理器打开了下载好的存档。
之前的命令使用了root用户权限通过档案管理器打开了下载好的存档。
打开它后,请做以下步骤: 打开它后,请做以下步骤:
@ -58,7 +55,7 @@ LightTable 0.8.0的改动:
gksudo gedit /usr/share/applications/lighttable.desktop gksudo gedit /usr/share/applications/lighttable.desktop
通过Gedit文本编辑器打开文件后, 粘贴下面的内容并保存: 通过 Gedit 文本编辑器打开文件后,粘贴下面的内容并保存:
[Desktop Entry] [Desktop Entry]
Version=1.0 Version=1.0
@ -97,8 +94,8 @@ LightTable 0.8.0的改动:
via: http://ubuntuhandbook.org/index.php/2015/12/install-light-table-0-8-ubuntu-14-04/ via: http://ubuntuhandbook.org/index.php/2015/12/install-light-table-0-8-ubuntu-14-04/
作者:[Ji m][a] 作者:[Ji m][a]
译者:[译者ID](https://github.com/译者ID) 译者:[zky001](https://github.com/zky001)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,10 +1,10 @@
Linux桌面趣闻:召唤一群企鹅在桌面上行走 Linux/Unix 桌面趣事:召唤一群企鹅在桌面上行走
================================================================================ ================================================================================
XPenguins 是一个在窗口播放可爱动物动画的程序。默认情况下,将会从屏幕上方掉落企鹅,沿着你的窗口顶部行走,在窗口上漂浮起来,踩上滑板,和做其他类似的有趣的事情。现在,你可以让这些可爱的小企鹅大军入侵别人的桌面了。
### 安装 XPenguins ###
打开终端(选择程序->附件->终端),接着输入下面的命令来安装 XPenguins。首先输入 `apt-get update` 通过请求配置的仓库刷新包的信息,接着安装需要的程序:
    $ sudo apt-get update
    $ sudo apt-get install xpenguins
![An army of cute little penguins invading the screen](http://files.cyberciti.biz/uploads/tips/2011/07/Workspace-1_002_12_07_2011.png)
*一支可爱企鹅军队正在入侵屏幕。*
![Linux: Cute little penguins walking along the tops of your windows](http://files.cyberciti.biz/uploads/tips/2011/07/Workspace-1_001_12_07_2011.png)
*可爱的小企鹅沿着窗口的顶部行走。*
![Xpenguins Screenshot](http://files.cyberciti.biz/uploads/tips/2011/07/xpenguins-screenshot.jpg)
*Xpenguins 截图*
移动窗口时小心点小家伙们很容易被压坏。如果你中断程序Ctrl-C它们会爆炸。
作者Vivek Gite

译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
2016如何选择 Linux 桌面环境
=============================================
![](http://www.linux.com/images/stories/66866/DE-2.png)
Linux 创建了一个友好的环境,为我们提供了选择的可能。比方说,现代大多数的 Linux 发行版都提供不同桌面环境给我们来选择。在本文中,我将挑选一些你可能会在 Linux 中见到的最棒的桌面环境来介绍。
## Plasma
我认为,[KDE 的 Plasma 桌面](https://www.kde.org/workspaces/plasmadesktop/) 是最先进的桌面环境 (LCTT 译注:译者认为,没有什么是最好的,只有最合适的,毕竟每个人的喜好都不可能完全相同)。它是我见过功能最完善和定制性最高的桌面环境;在用户完全自主控制方面,即使是 Mac OS X 和 Windows 也无法与之比拟。
我爱 Plasma因为它自带了一个非常好的文件管理器 —— Dolphin。而相对应 Gnome 环境,我更喜欢 Plasma 的原因就在于这个文件管理器。使用 Gnome 最大的痛苦就是它的文件管理器——Files——使我无法完成一些基本任务比如说批量文件重命名操作。而这个操作对我来说相当重要因为我喜欢拍摄但 Gnome 却让我无法批量重命名这些图像文件。而使用 Dolphin 的话,这个操作就像在公园散步一样简单。
而且,你可以通过插件来增强 Plasma 的功能。Plasma 有大量的基础软件,如 Krita、Kdenlive、Calligra 办公套件、digiKam、Kwrite 以及由 KDE 社区开发维护的大量应用。
Plasma 桌面环境唯一的缺陷就是它默认的邮件客户端——Kmail。它的设置比较困难我希望 Kmail 设置可以配置地址簿和日历。
包括 openSUSE 在内的多数主流发行版都使用 Plasma 作为默认桌面。
## GNOME
[GNOME](https://www.gnome.org/) (GNU Network Object Model EnvironmentGNU 网络对象模型环境) 由 [Miguel de Icaza](https://en.wikipedia.org/wiki/Miguel_de_Icaza) 和 Federico Mena 在 1997 年的时候创立,这是因为 KDE 使用了 Qt 工具包,而这个工具包是使用专属许可证 (proprietary license) 发布的。不像提供了大量定制的 KDEGNOME 专注于让事情变得简单。因为其自身的简单性和易用性GNOME 变得相当流行。而我认为 GNOME 之所以流行的原因在于Ubuntu——使用 GNOME 作为默认桌面的主流 Linux 发行版之一——对其有着巨大的推动作用。
随着时代变化GNOME 也需要作出相应的改变了。因此,开发者在 GNOME 3 中推出了 GNOME 3 Shell从而引出了它的全新设计规范。但这同时与 Canonical 的 Ubuntu 计划存在者一些冲突,所以 Canonical 为 GNOME 开发了叫做 Unity 的自己的 Shell。最初GNOME 3 Shell 因很多争议 (issues) 而困扰不已——最明显的是升级之后会导致很多扩展无法正常工作。由于设计上的重大改版以及各种问题的出现GNOME 便产生了很多分支fork比如 Cinnamon 和 Mate 桌面。
另外,使得 GNOME 让人感兴趣的是它针对触摸设备做了优化所以如果你有一台触屏笔记本电脑的话GNOME 则是最合适你这台电脑的桌面环境。
在 3.18 版本中GNOME 已经作出了一些令人印象深刻的改动。其中他们所做的最让人感兴趣的是集成了 Google Drive用户可以把他们的 Google Drive 挂载为远程存储设备,这样就不必再使用浏览器来查看里边的文件了。我也很喜欢 GNOME 里边自带的那个优秀的邮件客户端,它带有日历和地址簿功能。尽管有这么多些优秀的特性,但它的文件管理器使我不再使用 GNOME ,因为我无法处理批量文件重命名。我会坚持使用 Plasma一直到 GNOME 的开发者修复了这个小缺陷。
![](http://www.linux.com/images/stories/66866/DE-fig1.png)
## Unity
从技术上来说,[Unity](https://unity.ubuntu.com/) 并不是一个桌面环境,它只是 Canonical 为 Ubuntu 开发的一个图形化 Shell。Unity 运行于 GNOME 桌面之上,并使用很多 GNOME 的应用和工具。Ubuntu 团队分支了一些 GNOME 组件,以便更好的满足 Unity 用户的需求。
Unity 在 Ubuntu 的融合convergence计划中扮演着重要角色 在 Unity 8 中Canonical 公司正在努力将电脑桌面和移动世界结合到一起。Canonical 同时还为 Unity 开发了许多的有趣技术,比如 HUD (Head-up Display平视显示)。他们还在 lenses 和 scopes 中通过一种独特的技术来让用户方便地找到特定内容。
即将发行的 Ubuntu 16.04 将会搭载 Unity 8那时候用户就可以完全体验开发者为该开源软件添加的所有特性了。Unity 最大的争议之一,是它集成了 Amazon 广告和其他服务(用户可以选择关闭)。而在即将发行的版本中Canonical 将从 Dash 中移除 Amazon 广告,默认保障系统的隐私性。
## Cinnamon
最初,[Cinnamon](https://en.wikipedia.org/wiki/Cinnamon_(software\)) 由 [Linux Mint](http://www.linuxmint.com/) 开发 —— 这是 DistroWatch.com 上统计出来最流行的发行版。就像 UnityCinnamon 是 GNOME Shell 的一个分支。但最后进化为一个独立的桌面环境,这是因为 Linux Mint 的开发者分支了 GNOME 桌面中很多的组件到 Cinnamon包括 Files ——以满足自身用户的需求。
由于 Linux Mint 基于普通版本的 Ubuntu开发者仍需要去完成 Ubuntu 尚未完成的目标。结果,尽管前途光明,但 Cinnamon 却充满了 bug 和问题。随着 17.x 版本的发布Linux Mint 开始转移到 Ubuntu 的 LTS 版本上,从而他们可以专注于开发 Cinnamon 的核心组件,而不必再去担心代码库。转移到 LTS 的好处是Cinnamon 变得非常稳定并且基本没有 bug 出现。现在,开发者已经开始向桌面环境中添加更多的新特性了。
对于那些更喜欢在 GNOME 基础上有一个很好的类 Windows 用户界面的用户来说Cinnamon 是他们最好的桌面环境。
## MATE 桌面
[MATE 桌面](http://mate-desktop.com/) 同样是 GNOME 的一个分支,然而,它并不像 Cinnamon 那样由 GNOME 3 分支而来,而是现在已经没有人维护的 GNOME 2 代码库的一个分支。MATE 桌面中的一些开发者并不喜欢 GNOME 3 并且想要“继续坚持” GNOME 2所以他们使用这个代码库创建了 MATE。为避免和 GNOME 3 的冲突他们重命名了全部的包Nautilus 改为 Caja、Gedit 改为 Pluma 以及 Evince 改为 Atril 等。
尽管 MATE 延续了 GNOME 2但这并不意味着他们使用过时的技术相反他们使用了更新的技术来提供一个现代的 GNOME 2 体验。
相当低的资源占用才是 MATE 最令人印象深刻之处。你可将它运行在老旧硬件或者较新但不太强大的硬件上如树莓派Raspberry Pi或者 Chromebook Flip。更有意思的是把它运行在一些强大的硬件上可以把大多数资源节省给其他应用而桌面环境本身只占用很少的资源。
## LXQt
[LXQt](http://lxqt.org/) 继承了 LXDE——最轻量级的桌面环境之一。它融合了 LXDE 和 Razor-Qt 两个开源项目。LXQt 的首个可用版本v0.9)发布于 2015 年。最初,开发者使用了 Qt4之后为了加快开发速度放弃了兼容性迁移到了 Qt5 和 KDE 框架上。我也在自己的 Arch 系统上尝试使用了 LXQt它的确是一个非常好的轻量级桌面环境。但在完全接过 LXDE 的传承之前LXQt 仍有一段很长的路要走。
## Xfce
[Xfce](http://www.xfce.org/) 早于 KDE 桌面环境它是最古老和最轻量级的桌面环境之一。Xfce 的最新版本是 4.12,发布于 2015 年,使用了诸如 GTK+ 3 这样的大量现代技术。很多发行版都使用了 Xfce 环境以满足特定需求,比如 Ubuntu Studio——与 MATE 类似——尽量节省系统资源给其他的应用。并且,许多著名的 Linux 发行版——包括 Manjaro Linux、PC/OS、Salix 和 Mythbuntu——都把它作为默认桌面环境。
## Budgie
[Budgie](https://solus-project.com/budgie/) 是一个新型的桌面环境,由 Solus Linux 团队开发和维护。Solus 是一个从零开始构建的新型发行版,而 Budgie 则是它的一个核心组件。Budgie 使用了大量的 GNOME 组件,从而提供一个华丽的用户界面。由于没有该桌面环境的更多信息,我特地联系了 Solus 的核心开发者—— Ikey Doherty。他解释说“我们搭载了自己的桌面环境—— Budgie 桌面。与其他桌面环境不同的是Budgie 并不是其他桌面的一个分支,它的目标是彻底融入到 GNOME 协议栈之中。它完全从零开始编写,并特意设计来迎合 Solus 提供的体验。我们会尽可能的和 GNOME 的上游团队协同工作,修复 Bugs并提倡和支持他们的工作”。
## Pantheon
我想,[Pantheon](https://elementary.io/) 不需要特别介绍了吧,那个优美的 elementary OS 就使用它作为桌面。类似于 Budgie很多人都认为 Pantheon 也不是 GNOME 的一个分支。elementary OS 团队大多拥有良好的设计从业背景,所以他们会近距离关注每一个细节,这使得 Pantheon 成为一个非常优美的桌面环境。偶尔,它可能缺少像 Plasma 等桌面中的某些特性,但开发者实际上是尽其所能的去坚持设计原则。
![](http://www.linux.com/images/stories/66866/DE-3.png)
## 结论
当我写完本文后,我突然意识到开源和 Linux 的重大好处:总有一些东西适合你。就像 Jon “maddog” Hall 在最近的 SCaLE 14 上说的那样:“是的,现在有 300 多个 Linux 发行版。我可以一个一个去尝试,然后坚持使用我最喜欢的那一个”。
所以,尽情享受 Linux 的多样性吧,最后使用最合你意的那一个。
------------------------------------------------------------------------------
via: http://www.linux.com/news/software/applications/881107-best-linux-desktop-environments-for-2016
作者:[Swapnil Bhartiya][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linux.com/community/forums/person/61003
Ubuntu 16.04 为更好支持容器化而采用 ZFS
=======================================================
![](https://www.phoronix.com/assets/categories/ubuntu.jpg)
Ubuntu 开发者正在为 [Ubuntu 16.04 加上 ZFS 支持](http://www.phoronix.com/scan.php?page=news_item&px=ZFS-For-Ubuntu-16.04) ,并且对该文件系统的所有支持都已经准备就绪。
Ubuntu 16.04 的默认安装将会继续是 ext4但是 ZFS 支持将会自动构建进 Ubuntu 发布中模块将在需要时自动加载zfsutils-linux 将放到 Ubuntu 主分支内,并且通过 Canonical 对商业客户提供支持。
对于那些对 Ubuntu 中的 ZFS 感兴趣的人Canonical 的 Dustin Kirkland 已经写了[一篇新的博客](http://blog.dustinkirkland.com/2016/02/zfs-is-fs-for-containers-in-ubuntu-1604.html)介绍了一些细节及为何“ZFS 是 Ubuntu 16.04 中面向容器使用的文件系统!”
------------------------------------------------------------------------------
via: https://www.phoronix.com/scan.php?page=news_item&px=Ubuntu-ZFS-Continues-16.04&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Phoronix+%28Phoronix%29
作者:[Michael Larabel][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.michaellarabel.com/

View File

@ -1,14 +1,16 @@
Linux最大版本4.1.18 LTS发布带来大量修改 Linux 4.1 系列的最大版本 4.1.18 LTS发布带来大量修改
================================================================================= =================================================================================
LCTT 译注:这是一则过期的消息,但是为了披露更新内容,还是发布出来给大家参考)
**著名的内核维护者 Greg Kroah-Hartman 貌似正在度假中,因为 Sasha Levin 在 2016 年 2 月 16 日早些时候[宣布](http://lkml.iu.edu/hypermail/linux/kernel/1602.2/00520.html)第十八个 Linux 内核维护版本Linux Kernel 4.1 LTS通用版本正式发布。**
作为长期支持的内核分支Linux 4.1 还会在几年内得到更新和补丁,而今天的维护构建版本也证明一点,就是内核开发者们正致力于保持该系列在所有使用该版本的 GNU/Linux 操作系统上稳定和可靠。Linux Kernel 4.1.18 LTS 是一个大的发布版本,它带来了总计达 228 个文件修改,这些修改包含了多达 5304 个插入修改和 1128 个删除修改。
Linux Kernel 4.1.18 LTS更新了什么呢好吧首先是对ARMARM64AArch64MIPSPA-RISCm32rPowerPCPPCs390以及x86等硬件架构的改进。此外还有对BtrfsCIFSNFSXFSOCFS2OverlayFS以及UDF文件系统的加强。对网络堆栈的修复尤其是对mac80211的修复。同时还有多核心、加密和mm等方面的改进和对声音的更新。
“我宣布 4.1.18 内核正式发布,所有 4.1 内核系列的用户都应当升级。”Sasha Levin 说,“更新的 4.1.y git 树可以在这里找到git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git linux-4.1.y并且可以在 kernel.org 的 git 网站http://git.kernel.org/?p=linux/kernel/git/stable/linux-stable.git;a=summary上浏览。
### 大量驱动被更新
除了架构、文件系统、声音、网络、加密、mm和核心内核方面的改进之外Linux Kernel 4.1.18 LTS更新了各个驱动以提供更好的硬件支持特别是像蓝牙、DMA、EDAC、GPU主要是Radeon和Intel i915、无限带宽技术、IOMMU、IRQ芯片、MD、MMC、DVB、网络主要是无线、PCI、SCSI、USB、散热、暂存和Virtio等此类东西。
作者:[Marius Nestor][a]

译者:[GOLinux](https://github.com/GOLinux)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
Mozilla 贡献者为大众创建糖尿病项目
================================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/myopensourcestory.png?itok=6TXlAkFi)
我的开源生涯从我还是一名高中生开始我总想着自己能成为一名黑客没有什么恶意的只是喜欢钻研代码和硬件那种。我第一次接触开源是2001年我安装了我的第一个Linux发行版[Lindows](https://en.wikipedia.org/wiki/Linspire)。当然,我也是[Mozilla Firefox](https://www.mozilla.org/en-US/firefox/new/?utm_source=firefox-com&utm_medium=referral)的早期用户。
由于我很早使用Linux我用的第一个版本是 Lindows 1.0.4如果我没记错的话我就立即爱上了它。我没在Lindows上呆太久而是活跃于多个发行版[Debian](https://www.debian.org/),
[Puppy Linux](http://puppylinux.org/main/Overview%20and%20Getting%20Started.htm), [SUSE](https://www.suse.com/),
[Slackware](http://www.slackware.com/), [Ubuntu](http://ubuntu.com/)),多年来我一直每天使用着开源软件,从青少年时候直到我成年。
最后我坚持使用 Ubuntu。大概是在 Hardy HeronLCTT 译注Ubuntu 8.04 LTS发布的时候我第一次开始为 Ubuntu 做贡献,在 IRC 频道和当地社区帮助那些需要帮助的用户。我是通过 Ubuntu 认识开源的,它在我心里总有着特殊的意义。Ubuntu 背后的社区非常多样化、热情而友好,每个人都在做贡献,分享既是他们共同的目标也是个人目标,这成为他们为开源做贡献的动力。
在为Ubuntu贡献一段时间后我开始了为一些上游项目作贡献比如Debian、[GNOME](https://www.gnome.org/)、 [Ganeti](https://code.google.com/p/ganeti/)还有许多其他的开源项目。在过去的几年里我为超过40个开源项目贡献过有些小的也有很大的。
在Ubuntu项目方向上有些变化之后我最终觉得这不仅对于我是一个尝试新东西的机遇而且也是我给一些新东西贡献的时候。所以我在2009年参与了Mozilla项目在IRC帮忙最终通过参与[Mozilla WebFWD program](https://webfwd.org/),成为一名团队成员,然后是[Mozilla Reps Program](https://reps.mozilla.org/)[Mozilla DevRel Program](https://wiki.mozilla.org/Devrel)刚过两年时间我成为了火狐社区的发布经理负责监督Firefox Nightly和Firefox ESR的发布。相比其他开源项目在为Mozilla贡献中会获得更多有益的经验。在所有我参与过的开源社区中Mozilla是最不同的最大的也是最友好的。
这些年来,我觉得自己越来越遵循自由软件的价值观、捍卫隐私和许可协议合规,以及在开放的氛围下工作。我相信这三个主题对于开源来说是非常重要的,虽然许多人并没有意识到提倡它们的重要性。
今天,我已不再是别人的开源项目的全职贡献者。最近我被诊断出患有糖尿病,我也因此注意到开源软件中健康类软件并不丰富这一缺口。确实,它不像 Linux 发行版或浏览器这样的开源软件应用那样活跃。
我最近创立了自己的开源项目[Glucosio](http://www.glucosio.org/)带给人们糖尿病管理和研究的开源软件。经过几年来对开源项目的贡献和见识过的多种组织结构使得我作为项目领导能够得心应手。我对于Glucosio的未来很兴奋但最重要的是未来的开源将在医疗健康领域发展的如何。
医疗保健软件的创新具有很大潜力,我想我们很快就会看到用于改善医疗卫生保健的开源新方案。
------------------------------------------------------------------------------
via: https://opensource.com/life/15/11/my-open-source-story-ben-kerensa
作者:[Benjamin Kerensa][a]
译者:[ynmlml](https://github.com/ynmll)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/bkerensa
![](http://www.linux.com/images/stories/66866/STM32_Nucleo_expansion_board.jpg)
32 位微控制器世界向 Linux 敞开大门。前一段时间,领先的 ARM Cortex-M 供应商意法半导体ST[发布了](http://www.st.com/web/en/press/p3781)一款自由的 Linux 桌面版开发程序,该软件面向其旗下的 STM32 微控制单元MCU包含了 ST 的 STM32CubeMX 配置器和初始化工具,以及 STM32 [系统工作台SW4STM32](http://www.st.com/web/catalog/tools/FM147/CL1794/SC961/SS1533/PF261797),这个基于 Eclipse 的 IDE 由 Ac6 创建。支撑 SW4STM32 的工具链、论坛、博客以及技术支持由 [openSTM32.org](http://www.openstm32.org/tiki-index.php?page=HomePage) 开发社区提供。
“Linux 社区以吸引富有创意的自由思想者而闻名,他们善于交流心得、高效地克服挑战。” Laurent Desseignes意法半导体微控制器产品部微控制器生态系统市场经理这么说道“我们正着手做的是让他们能极其简单地借力 STM32 系列的特性和性能,施展自己的才能,运用到富有想象力的新产品的创造中去。”
Linux 是物联网IoT网关和枢纽及高端 IoT 终端的领先平台。但是,大部分 IoT 革命,以及可穿戴设备市场基于小型的低功耗微控制器,对 Cortex-M 芯片的运用越来越多。虽然其中的一小部分可以运行精简的 uCLinux (见下文),却没能支持更全面的 Linux 发行版。取而代之的是实时操作系统RTOS们或者有时干脆不用 OS 来控制。固件的开发工作一般会在基于 Windows 的集成开发环境IDE上完成。
通过 ST 的自由工具Linux 开发者们可以更容易的开疆拓土。ST 工具中的一些技术在第二季度应该登录 Mac OS/X 平台,与 [STM32 Nucleo](http://www.st.com/web/en/catalog/tools/FM146/CL2167/SC2003?icmp=sc2003_pron_pr-stm32f446_dec2014&sc=stm32nucleo-pr5) 开发套件以及评估板同时面世。Nucleo 支持 32 针, 64 针, 和 144 针的版本,并且提供类似 Arduino 连接器这样的插件。 通过 ST 的自由工具Linux 开发者们可以更容易的开疆拓土。ST 工具中的一些技术在第二季度应该登录 Mac OS/X 平台,与 [STM32 Nucleo](http://www.st.com/web/en/catalog/tools/FM146/CL2167/SC2003?icmp=sc2003_pron_pr-stm32f446_dec2014&sc=stm32nucleo-pr5) 、开发套件、以及评估板同时面世。Nucleo 支持 32 针、64 针、和 144 针的版本,并且提供类似 Arduino 连接器这样的插件。
STM32CubeMX 配置器和 IDE SW4STM32 使 Linux 开发者能够配置微控制器并开发调试代码。SW4STM32 支持在 Linux 下通过社区更改版的 [OpenOCD](http://openocd.org/) 使用调试工具 ST-LINK/V2。
据 ST 称,软件兼容 STM32Cube 软件包及标准外设库中的微控制器固件。目标是囊括 ST 的全系列 MCU从入门级的 Cortex-M0 内核到高性能的 M7 芯片,包括 M0+、M3 和 DSP 扩展的 M4 内核。
ST 并非首个为 Linux 准备 Cortex-M 芯片 IDE 的 32 位 MCU 供应商,但似乎是第一个免费提供 Linux 平台支持的大厂。例如 NXPMCU 的市场份额随着近期收购 Freescale 的 Kinetis 系列 MCU 而增加)提供了一款 IDE [LPCXpresso IDE](http://www.nxp.com/pages/lpcxpresso-ide:LPCXPRESSO),支持 Linux、Windows 和 Mac。然而LPCXpresso 每份售价 $450。
在其 [SmartFusion FPGA 系统级芯片SoC](http://www.microsemi.com/products/fpga-soc/soc-processors/arm-cortex-m3)上集成了 Cortex-M3 芯片的 Microsemi拥有一款 IDE [Libero IDE](http://www.linux.com/news/embedded-mobile/mobile-linux/884961-st-releases-free-linux-ide-for-32-bit-mcus#device-support),适用于 RHEL 和 Windows。然而Libero 需要许可证才行,并且 RHEL 版缺乏如 FlashPro 和 SoftConsole 的插件。
### 为什么要学习 MCU
即便 Linux 开发者并没有计划在 Cortex-M 上使用 uClinux但是 MCU 的知识总会派上用场。特别是牵扯到复杂的 IoT 工程,需要扩展 MCU 终端至云端。
对于原型和业余爱好者的项目Arduino 板为其访问 MCU 提供了非常便利的接口。然而原型之外,开发者常常就会用更快的 32 位 Cortex-M 芯片以及所带来的附加功能来替代 Arduino 板和板上的那块 8 位 MCU ATmega32u4。这些附加功能包括改进的存储器寻址用于芯片和各种总线的独立时钟设置以及芯片 [Cortex-M7](http://www.electronicsnews.com.au/products/stm32-mcus-with-arm-cortex-m7-processors-and-graph) 自带的入门级显示芯片。
还有些可能需要 MCU 开发技术的地方包括:可穿戴设备,低功耗、低成本和小尺寸给了 MCU 一席之地,还有机器人和无人机这些使用实时处理和电机控制的地方更为受用。在机器人上,你更是有可能看到 Cortex-A 与 Cortex-M 集成在同一个产品中的样子。
对于 SoC 芯片还有这样的一种温和的局势,即将 MCU 加入到 Linux 驱动的 Cortex-A 核心中,就如同 [NXP i.MX6 SoloX](http://linuxgizmos.com/freescales-popular-i-mx6-soc-sprouts-a-cortex-m4-mcu/)。虽然大多数的嵌入式项目并不使用这种混合型 SoC 或者说将应用处理器和 MCU 结合在同一产品中,但开发者会渐渐地发现自己工作的生产线、设计所基于的芯片正渐渐的从低端的 MCU 模块发展到 Linux 或安卓驱动的 Cortex-A。
### uClinux 是 Linux 在 MCU 领域的筹码
随着物联网的兴起,我们见到越来越多的 SBC 和模块计算机,它们在 32 位的 MCU 上运行着 uClinux。不同于其他的 Linux 发行版uClinux 并不需要内存管理单元MMU。然而uClinux 的内存需求比常见 MCU 的片上内存要高,需要更高端的、内置内存控制器的 Cortex-M4 等微控制器来支持外部 DRAM 芯片。
[Amptek](http://www.semiconductorstore.com/Amptek/) SBC 在 NXP LPC Cortex-M3 和 -M4 芯片上运行 uClinux以提供常用的功能类似 WiFi、蓝牙、USB 等众多接口。Arrow 的 [SF2+](http://linuxgizmos.com/iot-dev-kit-runs-uclinux-on-a-microsemi-cortex-m3-fpga-soc/) 物联网开发套件将 uClinux 运行于 SmartFusion2 模块计算机的 Emcraft 系统上,该模块计算机是 Microsemi 的 166MHz Cortex-M3/FPGA SmartFusion2 混合 SoC。
[Emcraft](http://www.emcraft.com/) 销售基于 uClinux 的模块计算机,有 ST 和 NXP 的,也有 Microsemi 的 MCU是在 32 位 MCU 上积极推进 uClinux 的重要角色。uClinux 日益频繁地与 ARM 自家的 [Mbed OS](http://linuxgizmos.com/arm-announces-mbed-os-for-iot-devices/) 展开对抗,至少在需要无线通信和更为复杂操作的高端 MCU 工程中如此。Mbed 以及 FreeRTOS 这类现代开源 RTOS 的支持者认为uClinux 对 RAM 的需求太高,以至于难以压低 IoT 终端的价格;然而 Emcraft 与其他 uClinux 拥趸表示,价格并没有如此夸张,而且扩展 Linux 的无线和接口支持也是相当值得的,即使只是在像 uClinux 这样的精简版上。
当被问及对于这次 ST 发布的看法时Emcraft 的主任工程师 Vladimir Khusainov 表示“ST 决定将这款开发工具移植至 Linux 对于 Emcraft 是个好消息,它使得 Linux 用户能轻易地在嵌入式 STM MCU 上展开工作。我们希望那些有机会熟悉 STM 设备、使用 ST 配置器和嵌入式库的用户,可能对在目标机上使用嵌入式 Linux以 uClinux 的形式)感兴趣。”
最近关于 Cortex-M4 上运行 uClinux 的概述,可以查看去年 Jim Huang 与 Jeff Liaws 在嵌入式 Linux 大会上使用的[幻灯片](http://events.linuxfoundation.org/sites/events/files/slides/optimize-uclinux.pdf)。更多关于 Cortex-M 处理器的信息,可以查看这篇 [AnandTech 总结](http://www.anandtech.com/show/8400/arms-cortex-m-even-smaller-and-lower-power-cpu-cores)。
作者:[Arun Pyasi][a]

译者:[martin2011qi](https://github.com/martin2011qi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
前 Kubuntu 领袖发起了新的 KDE 项目
==============================================
如果你经常阅读 Linux 和[开源新闻](http://itsfoss.com/category/news/)的话,应该会对 Jonathan Riddell 这个人很熟悉。他是 [Kubuntu](http://www.kubuntu.org/) 发行版的创建者及长期的开发领导。他由于敢于质询 Canonical 基金会对 Kubuntu 的资金筹集情况,[而被 Ubuntu 的老板 Mark Shuttleworth 所驱逐](http://www.cio.com/article/2926838/linux/mark-shuttleworth-ubuntu-community-council-ask-kubuntu-developer-to-step-down-as-leader.html)。(据我所知Canonical 从来没有真正回答过他的这个关于财务的问题。)
![](http://itsfoss.com/wp-content/uploads/2016/02/kde-neon-e1454448724263.png)
*KDE neon 标志*
在周日Riddell [宣布](https://dot.kde.org/2016/01/30/fosdem-announcing-kde-neon)了一个新项目:[KDE neon](http://neon.kde.org.uk/)。根据 Riddell 的声明“Neon 将提供一个让用户在最新的 KDE 软件一发布就能用上它们的途径。”
在看了声明和网站后,**neon 似乎主要是一个“快速的软件更新仓库”,它让 KDE 粉丝可以用上最新的软件**。除了等上数月来等到开发者在他们的仓库中发布新的 KDE 软件外,你将可以在软件一出来就得到它。
KDE 的确在 [neon FAQ](http://neon.kde.org.uk/faq) 中声明过这不是一个 KDE 创建的发行版。事实上他们说“KDE 相信与许多发行版协作是很重要的,因为它们每个都能给用户提供独特的价值和专长。这是 KDE 成千上万项目中的一个。”
![](http://itsfoss.com/wp-content/uploads/2016/02/kde-neon-e1454448830870.jpg)
然而,网站和公告显示 neon 会运行在 Ubuntu 15.10 上,直到下一个长期支持版本发布。如果 Canonical 将此项目视作 Kubuntu 的竞争对手,我也不会感到惊奇。如果 KDE 发现 KDE neon 有前景,他们可以把它变成一个完整的发行版。网站和公告声称这是一个 KDE 孵化项目,因此未来可能会包含任何东西。
neon 听上去对你有用么,或者你是否对你当前的发行版的 KDE 发布速度满意?你认为是否还有其他 KDE 发行版的空间(如果 KDE 决定朝这个方向进发)?让我在评论栏知道你们的想法。
------------------------------------------------------------------------------
via: http://itsfoss.com/kde-neon-unveiled/
作者:[JOHN PAUL][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/john/

Manjaro Linux 即将推出支持 ARM 处理器的 Manjaro-ARM
==============================================
![](http://itsfoss.com/wp-content/uploads/2016/02/manjaro-arm.jpg)
最近Manjaro 的开发者为 ARM 处理器发布了一个 [alpha 版本](https://manjaro.github.io/Manjaro-ARM-launched/)。这是这个基于 Archlinux 的发行版的一大进步,在此之前,它只能在 32 位或者 64 位的个人电脑上运行。
根据公告,“[Manjaro Arm](http://manjaro-arm.org/) 项目致力于将简洁可定制的 Manjaro 移植到使用 [ARM 处理器](https://www.arm.com/)的设备上去。这些设备的数量越来越多并且应用范围广泛。这些设备中最出名的是树莓派和 BeagleBoard。”目前 Alpha 版本仅支持树莓派 2但是毫无疑问支持的设备数量会随时间增长。
现在这个项目的开发者有 dodgejcr、Torei、Strit 和 Ringo32。他们正在寻求更多的人来帮助这个项目发展。除了开发者[他们还在寻找维护者、论坛版主、管理员以及设计师](http://manjaro-arm.org/forums/website/looking-for-contributors/?PHPSESSID=876d5c11400e9c25eb727e9965300a9a)。
Manjaro-ARM 将会有四个版本:

- 媒体版将可以运行 Kodi并且允许你用很少的配置就能创建一个媒体中心。
- 服务器版将会预先配置好 SSH、FTP 和 LAMP你能把你的 ARM 设备当作服务器使用。
- 基本版是一个桌面版本,自带一个 XFCE 桌面。
- 如果你想自己从头折腾系统的话,你可以选择迷你版,它没有任何预先配置的软件包,仅仅包含一个 root 用户。

### 我的想法

作为一个 Manjaro 的粉丝(我在 4 台电脑上都安装了 Manjaro听说他们分支出一个 ARM 版我很高兴。ARM 处理器被用到了越来越多的设备当中。如同评论员 Robert Cringely 所说,[设备制造商开始注意到昂贵的因特尔、AMD 处理器之外的便宜得多的 ARM 处理器](http://www.cringely.com/2016/01/21/prediction-8-intel-starts-to-become-irrelevent/)。甚至微软(别打我)都开始考虑将自己的一些软件移植到 ARM 处理器上去。随着 ARM 处理器设备数量的增多Manjaro 将会带给用户良好的体验。

对此,你怎样看?你希望更多的发行版支持 ARM 吗?或者你认为 ARM 将是昙花一现?在评论区告诉我们。
------------------------------------------------------------------------------
作者:[JOHN PAUL][a]

译者:[name1e5s](https://github.com/name1e5s)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
初学者 Vi 备忘单
================================================
![](http://itsfoss.com/wp-content/uploads/2016/01/VI.jpg)
一直以来,我都在给你们分享我使用 Linux 的经验。今天我想分享我的 **Vi 备忘单**。这份备忘单节省了我很多时间,因为我再也不用使用 Google 去搜索这些命令了。
## 基本 Vi 命令
这并不是一个教你使用 [Vi 编辑器](https://en.wikipedia.org/wiki/Vi)的各个方面的详尽教程。事实上,这根本就不是一个教程。这仅仅是一些基本 Vi 命令以及这些命令简单介绍的集合。
命令|解释
:--|:--
`:x`|保存文件并退出
`:q!`|退出但不保存文件
`i`|在光标左侧插入
`a`|在光标右侧插入
`ESC`按键|退出插入模式
光标键|移动光标
`/text`|搜索字符串text大小写敏感
`n`|跳到下一个搜索结果
`x`|删除当前光标处的字符
`dd`|删除当前光标所在的行
`u`|撤销上次改变
`:0`数字 0|将光标移动到文件开头
`:n`|将光标移动到第n行
`G`|将光标移动到文件结尾
`^`|将光标移动到该行开头
`$`|将光标移动到该行结尾
`:set list`|查看文件中特殊字符
`yy`|复制光标所在行
`5yy`|复制从光标所在行开始的5行
`p`|在光标所在行下面粘贴
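表中的 `:` 命令也可以脱离交互界面来体验:`ex`vi 的行编辑器)能从标准输入按行执行这类命令。下面是一个示意性的小例子(文件路径是演示用的假设,并非出自原文):先写入两行文本,用 `1d` 删除第 1 行(交互模式下对应 `dd`),再用 `x` 保存退出:

```shell
# 准备一个两行的演示文件
printf 'hello\nworld\n' > /tmp/vi-demo.txt
# 以脚本方式运行 vi 风格的行命令1d 删除第 1 行x 保存并退出
ex -s /tmp/vi-demo.txt <<'EOF'
1d
x
EOF
cat /tmp/vi-demo.txt
```

运行后文件中只剩下 `world` 这一行,和在 vi 交互界面里按 `dd` 再 `:x` 的效果一致。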
你可以通过下面的链接下载 PDF 格式的 Vi 备忘录:
[下载 Vi 备忘录](https://drive.google.com/file/d/0By49_3Av9sT1X3dlWkNQa3g2b2c/view?usp=sharing)
你可以把它打印出来放到你的办公桌上,或者把它保存到你的电脑上来使用。
## 我为什么要建立这个 Vi 备忘录?
几年前,当我刚刚接触 Linux 终端时,使用命令行编辑器这个主意使我一惊。我之前在我自己的电脑上使用过桌面版本的 Linux所以我很乐意使用像 Gedit 这样的有图形界面的编辑器。但是在工作环境中,我不得不使用命令行,并且无法使用图形界面版的编辑器。
我就这么被强迫地使用 Vi 来对远程 Linux 终端上的文件做一些基本的编辑。从这时候我开始了解并钦佩 Vi 的强大之处。
因为在那时候我还是一个 Vi 新手,所以我经常对 Vi 一些操作很困惑。仍然记得第一次使用 Vi 的时候,由于我不知道如何退出 Vi所以我都无法关闭某个文件。我也只能通过 Google 搜索来找到解决办法。我不得不接受这个尴尬的事实。
从那以后,我就决定制作一个列表来列出我经常会用到的基本 Vi 操作。这个列表,或者你可能称它为备忘录。在我早期使用 Vi 的时候,它对我非常有用。慢慢地,我对 Vi 更加熟悉,我已经可以熟记那些基本编辑命令。到现在,我甚至不需要再去查看我的 Vi 备忘录了。
## 你为什么需要 Vi 备忘录?
我能理解一个刚刚接触 Vi 的人的感受。你最喜欢的 `Ctrl`+`S` 快捷键不能像在其他编辑器那样方便地保存文件。`Ctrl`+`C`和`Ctrl`+`V`理应是通用的用来复制和粘贴的快捷键,但是在 Vi 中却不是这样。
很多人都在使用类似的备忘录帮助他们熟悉各种编程语言或工具,以便让他们可以快速找到常用的下一步或命令。相信我,使用备忘录会给程序员日常工作带来很大便利。
如果你刚刚开始接触 Vi 或者你经常使用但是总是记不住 Vi 操作,那么这份 Vi 备忘录对于你来说是非常有用的。你可以把它保存下来留作以后查询使用。
## 你怎么看待这份备忘录?
至今为止,我一直在克制自己不要过于依赖终端。我想知道你觉得这篇文章怎么样?你是否想让我分享更多类似的备忘单出来以供下载?我很期待你的意见和建议。
------------------------------------------------------------------------------
via: http://itsfoss.com/download-vi-cheat-sheet/
作者:[ABHISHEK][a]
译者:[JonathanKang](https://github.com/JonathanKang)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
Xubuntu 16.04 Beta 1 开发者版本发布
============================================
Ubuntu 发布团队宣布,为选定的社区版本而准备的最新 beta 测试镜像已经可以使用了。新的发布版名称是 16.04 beta 1这个版本只推荐给测试人员使用并不适合普通人进行日常使用。
![](https://www.linux.com/images/stories/66866/xubuntu-small.png)

“这个 beta 特性的镜像主要是为 [Lubuntu][1]、Ubuntu Cloud、[Ubuntu GNOME][2]、[Ubuntu MATE][3]、[Ubuntu Kylin][4]、[Ubuntu Studio][5] 和 [Xubuntu][6] 这几个发布版准备的。Xenial XerusLCTT 译注Ubuntu 16.04 的开发代号)的这个预发布版本,并不推荐那些需要稳定版本的人员使用,同时那些不希望系统运行中偶尔或频繁出现 bug 的人也不建议使用这个版本。这个版本主要还是推荐给那些喜欢 Ubuntu 的开发人员,以及那些想协助我们测试、报告 bug 和修复 bug 的人使用,和我们一起努力让这个发布版本早日准备就绪。”更多的信息可以从[发布日志][7]获取。
--------------------------------------------------------------------------------
OpenSSH 7.2发布,支持 SHA-256/512 的 RSA 签名
========================================================
**2016.2.29OpenBSD 项目很高兴地宣布 OpenSSH 7.2 发布了,并且很快即可在所有支持的平台上下载。**
根据内部[发布公告][1]OpenSSH 7.2 主要是 bug 修复,修改了自 OpenSSH 7.1p2 以来由用户报告和开发团队发现的问题,但是我们可以看到几个新功能。
这其中,我们可以提到:使用 SHA-256 或者 SHA-512 哈希算法的 RSA 签名;新增的 AddKeysToAgent 客户端选项,以便将用于身份验证的私钥添加到 ssh-agent以及实现了一个“restrict”级别的 authorized_keys 选项,用于存储密钥限制。
此外,现在 ssh_config 中的 CertificateFile 选项可以明确列出证书ssh-keygen 现在能够修改所有支持格式的密钥注释,密钥指纹现在可以从标准输入读取,多个公钥也可以放到一个文件中。
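举例来说,新的 AddKeysToAgent 和 CertificateFile 客户端选项可以像下面这样写进 `~/.ssh/config`(这只是一个示意性的片段,主机名与文件路径均为假设,并非出自原文):

    Host example
        HostName example.com
        # 显式指定认证时使用的证书OpenSSH 7.2 新增的 CertificateFile 选项)
        CertificateFile ~/.ssh/id_rsa-cert.pub
        # 私钥认证成功后自动将其加入 ssh-agent7.2 新增)
        AddKeysToAgent yes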
### ssh-keygen 现在支持多证书
除了上面提到的OpenSSH 7.2 增加了 ssh-keygen 多证书的支持,一个一行,实现了 sshd_config ChrootDirectory 及Foreground 的“none”参数“-c”标志允许 ssh-keyscan 获取证书而不是文本密钥。
最后但并非最不重要的OpenSSH 7.2 不再默认启用 rijndael-cbc即 AESblowfish-cbc、cast128-cbc 等古老的算法,同样的还有基于 MD5 和截断的 HMAC 算法;并在 Linux 中支持了 getrandom() 系统调用。[下载 OpenSSH 7.2][2] 并查看更新日志中的更多细节。
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/openssh-7-2-out-now-with-support-for-rsa-signatures-using-sha-256-512-algorithms-501111.shtml
作者:[Marius Nestor][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://news.softpedia.com/editors/browse/marius-nestor
[1]: http://www.openssh.com/txt/release-7.2
[2]: http://linux.softpedia.com/get/Security/OpenSSH-4474.shtml
逾千万使用 https 的站点受到新型解密攻击的威胁
===========================================================================
![](https://www.linux.com/images/stories/66866/drown-explainer.jpg)
低成本的 DROWN 攻击能在数小时内完成数据解密,该攻击对采用了 TLS 的邮件服务器也同样奏效。
一个国际研究小组于周二发出警告,据称逾 1100 万家网站和邮件服务采用的用以保证服务安全的 [传输层安全协议 TLS][1],对于一种新发现的、成本低廉的攻击而言异常脆弱,这种攻击会在几个小时内解密敏感的通信,在某些情况下解密甚至能瞬间完成。 前一百万家最大的网站中有超过 81,000 个站点正处于这种脆弱的 HTTPS 协议保护之下。
这种攻击主要针对依赖于 [RSA 加密系统][2]的 TLS 所保护的通信,密钥会间接的通过 SSLv2 暴露,这是一种在 20 年前就因为自身缺陷而退休了的 TLS 前代协议。该漏洞允许攻击者可以通过反复使用 SSLv2 创建与服务器连接的方式,解密截获的 TLS 连接。
--------------------------------------------------------------------------------
via: https://www.linux.com/news/software/applications/889455--more-than-11-million-https-websites-imperiled-by-new-decryption-attack
作者:[ArsTechnica][a]
译者:[Ezio](https://github.com/oska874)
校对:[martin2011qi](https://github.com/martin2011qi), [wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/community/forums/person/112
[1]: https://en.wikipedia.org/wiki/Transport_Layer_Security
[2]: https://en.wikipedia.org/wiki/RSA_(cryptosystem)

View File

@ -0,0 +1,49 @@
AlphaGo 的首尔之战带来的启示
================================================================================
围棋并不仅仅是一个游戏——它背后是一群活生生的玩家、分析家、爱好者以及传奇大师。在过去的十天里,在韩国首尔,我们有幸亲眼目睹那份难以置信的激动,也有幸目睹了那前所未有的场景:[DeepMind][1] 的 AlphaGo 迎战并战胜了传奇围棋大师李世石职业九段身负 18 个世界头衔),这是人工智能的里程碑。
![Pedestrians checking in on the AlphaGo vs. Lee Sedol Go match on the streets of Seoul (March 13)](https://1.bp.blogspot.com/-vfgUcjyMOmM/Vumk5gXD98I/AAAAAAAASDI/frbYidb1u6gTKGcvFOf8iQVsr9PLoRlBQ/s1600/Press%2BCenter%2BOutdoor%2BScreen%2B2.jpg)
虽说围棋可能是存世的最为悠久的游戏之一,但对于这五盘比赛的关注度还是大大超出了我们的想象。在美国,搜索围棋规则和围棋盘的用户数量迅速飙升;在中国,数以千万计的用户通过直播观看了这场比赛,新浪微博“人机围棋大战”话题的浏览量也突破了 2 亿;韩国的围棋盘销量更是[激增][2]。
然而,我们如此公开地测试 AlphaGo并不仅仅是为了赢棋而已。我们自 2010 年成立 DeepMind为的是创造出具有独立学习能力的通用型人工智能AI并以将其作为工具、协助解决从气候变化到疾病诊断这类最为棘手且急迫的问题为最终目标。
亦如许多前辈学者们一样,我们也是通过游戏来开发并测试我们的算法的。在一月份,我们第一次披露了 [AlphaGo][3]——作为第一个通过使用 [深度学习][4] 和 [强化学习][5],可以在人类发明的最为复杂的棋盘类游戏中击败职业选手的 AI 程序。而 AlphaGo 迎战过去十年间最厉害的围棋选手——李世石,绝对称得上是 [终极挑战][6]。
结果震惊了包括我们在内的每个人AlphaGo 五战四胜。评论家指出了 AlphaGo 下出的许多前所未见、极富创意甚至要用[“漂亮”][7]来形容的妙手。基于我们的数据分析AlphaGo 在第 2 局中的[第 37 手][8],在人类选手中出现的几率仅有万分之一。而李世石一反常态的创新下法,如第 4 局中的[第 78 手][9],同样是万中挑一,这一手也最终为他赢得了一场胜利。
最后比分定格在 4:1。我们因此为支持科学、技术、工程、数学STEM教育和围棋的组织以及 UNICEF联合国儿童基金会赢得了 100 万美元的捐助。
经此一役,我们将收获总结成以下两点:第一,此次测试很好地预示了 AI 有解决其他问题的潜力。AlphaGo 在棋盘上能够兼顾“全局”,并找出人类已被训练得不会去走或想到的妙手。运用 AlphaGo 这类技术,在人类目所不能及的领域中探索,潜力很大。第二,虽说这场比赛已经被广泛地标榜成“人机大战”,但 AlphaGo 却是实实在在的人类成果。李世石和 AlphaGo 团队相互促进,产生了新的想法、观点和答案——长远来看,我们都将从中受益。
但正如韩国对于围棋的观点:“胜而不骄,是以常胜。”这只是让机器变得聪明的漫长道路中小小而显著的一步。我们已经证明了尖端的深度强化学习技术可以用来打造强大的围棋程序和 [Atari][10] 游戏玩家。深度神经网络在 Google 已经被应用到[图像识别][11]、[语音识别][12]以及[搜索排名][13]等具体任务中了。然而,从会学习的机器到可以像人一样全方位灵活地执行智能任务,即真正的[强人工智能][14],此间的道路还很漫长。
![Demis and Lee Sedol hold up the signed Go board from the Google DeepMind Challenge Match](https://4.bp.blogspot.com/-LkxNvsR-e1I/Vumk5gmProI/AAAAAAAASDM/J55Y2psqzOwWZ3kau2Pgz6xmazo7XDj_Q/s1600/A26U6150.jpg)
我们想通过这场比赛来测试 AlphaGo 的极限。李世石大师表现得十分出色——我们在接下来的数周内会研究他与 AlphaGo 的对战细节。同时,因为我们在 AlphaGo 中使用的机器学习方法是通用的,我们十分希望在不久的将来把这种技术应用于其他挑战。游戏开始!
--------------------------------------------------------------------------------
via: https://googleblog.blogspot.com/2016/03/what-we-learned-in-seoul-with-alphago.html
作者:[Demis Hassabis][a]
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://demishassabis.com/
[1]:https://deepmind.com/
[2]:http://www.hankookilbo.com/m/v/3e7deaa26a834f76929a1689ecd388ea
[3]:https://googleblog.blogspot.com/2016/01/alphago-machine-learning-game-go.html
[4]:https://en.wikipedia.org/wiki/Deep_learning
[5]:https://en.wikipedia.org/wiki/Reinforcement_learning
[6]:https://deepmind.com/alpha-go.html
[7]:http://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/
[8]:https://youtu.be/l-GsfyVCBu0?t=1h17m50s
[9]:https://youtu.be/yCALyQRN3hw?t=3h10m25s
[10]:http://googleresearch.blogspot.sg/2015/02/from-pixels-to-actions-human-level.html
[11]:http://googleresearch.blogspot.sg/2013/06/improving-photo-search-step-across.html
[12]:http://googleresearch.blogspot.sg/2015/08/the-neural-networks-behind-google-voice.html
[13]:http://www.bloomberg.com/news/articles/2015-10-26/google-turning-its-lucrative-web-search-over-to-ai-machines
[14]:https://en.wikipedia.org/wiki/Artificial_general_intelligence

View File

@ -0,0 +1,36 @@
ownCloud Pi 设备将运行在 Snappy Ubuntu Core 16.04 LTS 及树莓派3上
===============================================================================
我们去年[报道了][1] ownCloud 正与西部数据Western Digital实验室沟通帮助他们开发一个社区项目将给用户带来可以在家中自托管的云存储设备。
自托管设备背后的理念,是以 ownCloud 服务端软件为核心,把树莓派和西数硬盘整合到一个易于安装、开箱即用的设备中。
社区的反应看上去很积极ownCloud Pi 项目收到了许多好的提议和点子。今天,我们收到了一个更好的消息,首个镜像可以[下载][2]了。
ownCloud Pi 基于最新的 Snappy Ubuntu Core 16.04 LTS 系统,它由 Canonical 为嵌入式和物联网Internet of Things设备所设计包括新的树莓派 3 Model B。
ownCloud 的开发者在今天的[声明][3]中称:“我们正在寻求来自 ownCloud、Ubuntu、树莓派和西数实验室等社区的帮助来测试和提高它们并且可以在下周发布首批 30 台设备”。
### 目前的阻碍、挑战及前进的路
目前团队正致力于在基于 Xenial Xerus 版本的 Snappy Ubuntu 内核上完成他们的 ownCloud Pi 设备方案。这样,新的 64 位树莓派 3 可以帮助他们克服之前在树莓派 2 上遇到的阻碍,比如支持大于 2GB 的文件。
由此看来,最终的 ownCloud Pi 将在今年春天发布预览版,它将会在树莓派 3 上运行。之后我们应该就可以购买首批可用于生产环境的 ownCloud Pi 设备了。
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/owncloud-pi-device-to-run-on-snappy-ubuntu-core-16-04-lts-and-raspberry-pi-3-501904.shtml
作者:[Marius Nestor][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://news.softpedia.com/editors/browse/marius-nestor
[1]: http://news.softpedia.com/news/owncloud-partnerships-with-wd-to-bring-self-hosted-cloud-storage-in-users-homes-497512.shtml
[2]: http://people.canonical.com/~kyrofa/owncloud-pi/
[3]: https://owncloud.org/blog/wd-labs-raspberry-pi-owncloud-and-ubuntu/

View File

@ -0,0 +1,66 @@
将 Ubuntu 和 FreeBSD 融合在一起的发行版 UbuntuBSD
========================================================
![](http://itsfoss.com/wp-content/uploads/2016/03/UbuntuBSD.jpg)
不止是在 Linux 的内核上面你才能体验到 Ubuntu 的快捷方便伙计们。UbuntuBSD 可以让你在 FreeBSD 的内核上面也能体验到那种方便快捷。
UbuntuBSD 称自己是 Unix for human beings这一点也不令人惊讶。如果你还记得的话Ubuntu 使用的标语是 Linux for human beings并且在过去的 11 年里它确实让一个“普通人”有可能用上 Linux。
UbuntuBSD 有着同样的想法。它想让新手能够接触到 Unix ,以及能使用它——如果我能这样说的话。至少,这就是它的目标。
### 什么是 BSD 它和 Linux 有哪些不同? ###
如果你是新手,那么你需要知道 [Unix 和 Linux 的区别][2]。
在 Linux 出现之前Unix 由 [AT&T][3] 的 [Ken Thompson][4]、[Dennis Ritchie][5] 以及他们的团队设计。这是发生在可以算作计算机上古时期的 1970 年前后的事。当你知道 Unix 是一个闭源的专有操作系统时你可能会感到惊讶。AT&T 发放了很多第三方许可,包括给学术机构和企业。
美国加州大学伯克利分校是其中一个拿到许可的学术机构。在那里开发的 Unix 系统叫做 [BSD (Berkeley Software Distribution)][6]。BSD 的最出名的开源分支是 [FreeBSD][7],另一个最流行的闭源分支是苹果的 Mac OS X。
在 1991 年,芬兰的计算机系大学生 Linus Torvalds 从头写出了自己的 Unix 系统克隆。这就是我们今天熟知的 Linux 内核。Linux 发行版在内核的基础上添加了图形界面、GNU 工具集cp、mv、ls、date、bash 等、安装/管理工具、GNU C/C++ 编译器以及很多应用。
### UbuntuBSD 不是这种发行版的开端
在你知道了 Linux、Unix 和 FreeBSD 之间的区别之后,我要告诉你的是, UbuntuBSD 并不是第一个想在 FreeBSD 内核上营造类似 Linux 体验的发行版。
当 Debian 选择使用 [systemd][8] 之后,[Debian GNU/kFreeBSD][9]诞生了。它使用的不是通常的 Linux 内核,而是 将 Debian 移植到了 FreeBSD 内核上。
与 Debian GNU/kFreeBSD 类似UbuntuBSD 是将 Ubuntu 移植到了 FreeBSD 内核上。
### UbuntuBSD Beta 版代号: Escape From SystemD
UbuntuBSD 的第一个版本已经发布代号为“Escape From SystemD”。它基于 Ubuntu 15.10 和 FreeBSD 10.1。
它的默认桌面环境为 [Xfce][10] ,桌面以及服务器均可使用。 对于 [ZFS][11] 的支持也包含在这个版本中。开发者还提供了一个文本界面的安装器。
### 想试试?
我不建议任何人马上就去开心地去尝试这个系统。它仍在开发并且安装器还是文本界面的。不过如果你足够自信的话,直接去下载体验吧。但是如果你是新手的话,请等一段时间,至少不要现在就去尝试:
[UbuntuBSD][12]
你认为 UbuntuBSD 怎么样? 兹瓷不兹瓷它?
--------------------------------------------------------------------------------
via: http://itsfoss.com/ubuntubsd-ubuntu-freebsd/
作者:[ABHISHEK][a]
译者:[name1e5s](https://github.com/name1e5s)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]: http://itsfoss.com/tag/linux/
[2]: https://linux.cn/article-3159-1.html
[3]: https://en.wikipedia.org/wiki/AT%26T
[4]: https://en.wikipedia.org/wiki/Ken_Thompson
[5]: https://en.wikipedia.org/wiki/Dennis_Ritchie
[6]: http://www.bsd.org/
[7]: https://www.freebsd.org/
[8]: https://www.freedesktop.org/wiki/Software/systemd/
[9]: https://www.debian.org/ports/kfreebsd-gnu/
[10]: http://www.xfce.org/
[11]: https://en.wikipedia.org/wiki/ZFS
[12]: https://sourceforge.net/projects/ubuntubsd/

View File

@ -1,25 +0,0 @@
逾千万使用 https 的站点受到新型解密攻击的威胁
===========================================================================
![](https://www.linux.com/images/stories/66866/drown-explainer.jpg)
低成本的 DROWN 攻击能在数小时内完成数据解密,对采用了 TLS 的邮件服务器也同样奏效。
一个国际研究小组于周二发出警告,据称逾 1100 万家网站和邮件服务采用的用以保证服务安全的[传输层安全协议][1],对于一种新发现的、成本低廉的攻击而言异常脆弱,这种攻击会在几个小时内解密敏感的通信,在某些情况下解密甚至能瞬间完成。前一百万家最为流行的网站中有超过 81,000 家正处于这些脆弱的 HTTPS 协议的保护之下。
这种攻击主要针对受 TLS 保护、依赖于 [RSA 加密系统][2]的通信,密钥会间接地通过 SSLv2这是一种在 20 年前就因为自身缺陷而退休了的 TLS 前代协议)暴露。该漏洞允许攻击者通过反复使用 SSLv2 创建与服务器连接的方式,解密截获的 TLS 连接。
--------------------------------------------------------------------------------
via: https://www.linux.com/news/software/applications/889455--more-than-11-million-https-websites-imperiled-by-new-decryption-attack
作者:[ArsTechnica][a]
译者:[Ezio](https://github.com/oska874)
校对:[martin2011qi](https://github.com/martin2011qi)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/community/forums/person/112
[1]: https://en.wikipedia.org/wiki/Transport_Layer_Security
[2]: https://en.wikipedia.org/wiki/RSA_(cryptosystem)

View File

@ -0,0 +1,276 @@
将程序性能提高十倍的 10 条建议
================================================================================
提高 web 应用的性能从来没有比现在更重要过。网络经济的比重一直在增长;全球经济超过 5% 的价值是在因特网上产生的(数据参见下面的资料)。这个时刻在线、高度互联的世界意味着用户的期望值也处于历史最高点。如果你的网站不能及时响应,或者你的应用不能无延时地工作,用户会很快投奔到你的竞争对手那里。
举一个例子:一份亚马逊十年前做过的研究表明,甚至在那个时候,网页加载时间每减少 100 毫秒,收入就会增加 1%。另一个最近的研究特别强调一个事实:超过一半的网站拥有者在调查中承认,他们会因为应用程序性能的问题而流失用户。
网站到底需要多快呢?页面加载时间每增加 1 秒,就有 4% 的用户放弃使用。顶级电子商务站点的首次交互时间为 1 秒到 3 秒,这个区间的速度能提供最高的舒适度。很明显,对 web 应用来说,性能的利害关系很大,而且还在不断增加。
想要提高性能很容易,难的是看到实际结果。为了在你的探索之旅上帮助到你,这篇文章会给你提供 10 条最高可以将网站性能提升 10 倍的建议。这是系列文章的第一篇,介绍如何提高应用程序的性能,包括充分测试过的优化技术和一点 NGINX 的帮助。这个系列也会介绍如何同时提高安全性。
### Tip #1: 通过反向代理来提高性能和增加安全性 ###
如果你的 web 应用运行在单个机器上,那么这个办法会明显的提升性能:只需要换一个更快的机器,更好的处理器,更多的内存,更快的磁盘阵列,等等。然后新机器就可以更快的运行你的 WordPress 服务器, Node.js 程序, Java 程序,以及其它程序。(如果你的程序要访问数据库服务器,那么解决方法依然很简单:添加两个更快的机器,以及在两台电脑之间使用一个更快的链路。)
问题是,机器速度可能并不是问题所在。web 程序运行慢,经常是因为计算机一直在不同的任务之间切换:通过成千上万的连接和用户交互、从磁盘访问文件、运行代码,等等。应用服务器可能会发生抖动thrashing比如说内存不足、将内存数据交换到磁盘以及有多个请求要等待某个任务如磁盘 I/O完成。
你可以采取一个完全不同的方案来替代升级硬件:添加一个反向代理服务器来分担部分任务。[反向代理服务器][1] 位于运行应用的机器的前端,是用来处理网络流量的。只有反向代理服务器是直接连接到互联网的;和应用服务器的通讯都是通过一个快速的内部网络完成的。
使用反向代理服务器可以将应用服务器从等待用户与 web 程序交互解放出来,这样应用服务器就可以专注于为反向代理服务器构建网页,让其能够传输到互联网上。而应用服务器就不需要等待客户端的响应,其运行速度可以接近于优化后的性能水平。
添加反向代理服务器还可以给你的 web 服务器安装带来灵活性。比如,一个某种类型的服务器已经超载了,那么就可以轻松的添加另一个相同的服务器;如果某个机器宕机了,也可以很容易替代一个新的。
因为反向代理带来的灵活性,所以反向代理也是一些性能加速功能的必要前提,比如:
- **负载均衡** (参见 [Tip #2][2]) 负载均衡运行在反向代理服务器上,用来将流量均衡分配给一批应用。有了合适的负载均衡,你就可以添加应用服务器而根本不用修改应用。
- **缓存静态文件** (参见 [Tip #3][3]) 直接读取的文件,比如图片或者客户端代码,可以保存在反向代理服务器,然后直接发给客户端,这样就可以提高速度、分担应用服务器的负载,可以让应用运行的更快。
- **网站安全** 反向代理服务器可以提高网站安全性,以及快速的发现和响应攻击,保证应用服务器处于被保护状态。
NGINX 软件为用作反向代理服务器而专门设计也包含了上述的多种功能。NGINX 使用事件驱动的方式处理请求这会比传统的服务器更加有效率。NGINX plus 添加了更多高级的反向代理特性,比如应用的[健康度检查][4],专门用来处理请求路由、高级缓冲和相关支持。
![NGINX Worker Process helps increase application performance](https://www.nginx.com/wp-content/uploads/2015/10/Graph-11.png)
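作为参考,下面是一个最小的 NGINX 反向代理配置示意(域名、端口和上游地址均为假设的例子):

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # 将请求转发给内部网络里的应用服务器
        proxy_pass http://127.0.0.1:8080;
        # 把客户端的原始信息传递给应用
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```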
### Tip #2: 添加负载平衡 ###
添加一个[负载均衡服务器][5] 是一个相当简单的用来提高性能和网站安全性的的方法。与其将核心 Web 服务器变得越来越大和越来越强,不如使用负载均衡将流量分配到多个服务器。即使程序写的不好,或者在扩容方面有困难,仅是使用负载均衡服务器就可以很好的提高用户体验。
负载均衡服务器首先是一个反向代理服务器(参见 [Tip #1][6]——它接受来自互联网的流量然后转发请求给另一个服务器。具体来说负载均衡服务器管理两个或多个应用服务器使用[分配算法][7]将请求转发给不同的服务器。最简单的负载均衡方法是轮转法round robin每个新的请求都会发给列表里的下一个服务器。其它的负载均衡方法还包括将请求发给活动连接最少的服务器。NGINX Plus 拥有将特定用户的会话保持在同一个服务器上的[能力][8]。
负载均衡可以很好的提高性能是因为它可以避免某个服务器过载而另一些服务器却没有需要处理的流量。它也可以简单的扩展服务器规模,因为你可以添加多个价格相对便宜的服务器并且保证它们被充分利用了。
可以进行负载均衡的协议包括 HTTP、HTTPS、SPDY、HTTP/2、WebSocket、[FastCGI][9]、SCGI、uwsgi、memcached 等,以及几种其它的应用类型,包括基于 TCP 的应用和其它的第 4 层协议的程序。分析你的 web 应用,确定你要使用哪些协议以及哪些地方性能不足。
相同的服务器或服务器群可以被用来进行负载均衡,也可以用来处理其它的任务,如 SSL 末端服务器,支持客户端的 HTTP/1.x 和 HTTP/2 请求,以及缓存静态文件。
NGINX 经常被用于进行负载均衡;要想了解更多的情况,可以下载我们的电子书[《选择软件负载均衡器的五个理由》][10]。你也可以从[《使用 NGINX 和 NGINX Plus 配置负载均衡,第一部分》][11]中了解基本的配置指导,在 NGINX Plus 管理员指南中有完整的 [NGINX 负载均衡][12]文档。我们的商业版本 [NGINX Plus][15] 支持更多优化的负载均衡特性,如基于服务器响应时间的负载路由,以及支持 Microsoft NTLM 协议的负载均衡。
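上面描述的轮转法和最少连接算法,在 NGINX 里大致可以这样配置(上游服务器地址为假设的例子):

```nginx
# 定义一组应用服务器,默认采用轮转法分配请求
upstream backend {
    # least_conn;           # 改用“最少连接”算法时取消注释
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;
    location / {
        # 把请求分发给上面定义的服务器组
        proxy_pass http://backend;
    }
}
```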
### Tip #3: 缓存静态和动态的内容 ###
缓存可以通过加速内容的传输速度来提高 web 应用的性能。它可以采用以下几种策略:当需要的时候预处理要传输的内容,保存数据到速度更快的设备,把数据存储在距离客户端更近的位置,或者将这几种方法结合起来使用。
可以缓存两种不同类型的数据:
- **静态内容缓存**。不经常变化的文件,比如图像(JPEG、PNG) 和代码(CSS,JavaScript),可以保存在外围服务器上,这样就可以快速的从内存和磁盘上提取。
- **动态内容缓存**。很多 web 应用会针对每次网页请求生成一个新的 HTML 页面。在短时间内简单的缓存生成的 HTML 内容,就可以很好的减少要生成的内容的数量,而且这些页面足够新,可以满足你的需要。
举个例子,如果一个页面每秒会被浏览 10 次,而你将它缓存 1 秒90% 请求的页面都会直接从缓存提取。如果你单独缓存静态内容,那么即使是新生成的页面,也可能大部分来自缓存的内容。
下面是 web 应用所使用的三种主要的缓存技术:
- **缩短数据与用户的网络距离**。把一份内容的拷贝放的离用户更近的节点来减少传输时间。
- **提高内容服务器的速度**。内容可以保存在一个更快的服务器上来减少提取文件的时间。
- **从过载服务器上移走数据**。机器经常因为要完成某些其它的任务而造成某个任务的执行速度比测试结果要差。将数据缓存在不同的机器上可以提高缓存资源和非缓存资源的性能,而这是因为主机没有被过度使用。
对 web 应用的缓存机制可以在 web 应用服务器内部实现。首先,缓存动态内容是用来减少应用服务器加载动态内容的时间。其次,缓存静态内容(包括动态内容的临时拷贝)是为了更进一步的分担应用服务器的负载。而且缓存之后会从应用服务器转移到对用户而言更快、更近的机器,从而减少应用服务器的压力,减少提取数据和传输数据的时间。
改进过的缓存方案可以极大地提高应用的速度。对于大多数网页来说,静态数据(比如大图像文件)构成了超过一半的内容。如果没有缓存,那么提取和传输这类数据可能会花费几秒的时间,但是采用了缓存之后,不到 1 秒就可以完成。
举一个在实际中缓存是如何使用的例子, NGINX 和 NGINX Plus 使用了两条指令来[设置缓存机制][16]proxy_cache_path 和 proxy_cache。你可以指定缓存的位置和大小、文件在缓存中保存的最长时间和其它一些参数。使用第三条而且是相当受欢迎的一条指令 proxy_cache_use_stale如果提供新鲜内容的服务器忙碌或者挂掉了你甚至可以让缓存提供较旧的内容这样客户端就不会一无所得。从用户的角度来看这可以很好的提高你的网站或者应用的可用时间。
NGINX plus 有个[高级缓存特性][17],包括对[缓存清除][18]的支持和在[仪表盘][19]上显示缓存状态信息。
要想获得更多关于 NGINX 的缓存机制的信息可以浏览 NGINX Plus 管理员指南中的 [参考文档][20] 和 [NGINX 内容缓存][21] 。
**注意**:缓存机制分布于应用开发者、投资决策者以及实际的系统运维人员之间。本文提到的一些复杂的缓存机制从 [DevOps 的角度][22]来看很有价值,即对集应用开发、架构设计与运维操作于一身的工程师来说,可以满足他们对站点功能性、响应时间、安全性和商业结果(如完成的交易数)等方面的需求。
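正文提到的 proxy_cache_path、proxy_cache 和 proxy_cache_use_stale 指令,组合起来大致如下(路径、缓存区名称和时间数值均为示意):

```nginx
# 定义缓存的存放位置、共享内存区大小和磁盘容量上限
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache app_cache;
        # 对 200 响应做 1 秒的短期动态缓存(即正文说的“缓存 1 秒”思路)
        proxy_cache_valid 200 1s;
        # 上游出错、超时或正在更新时,允许返回较旧的缓存内容
        proxy_cache_use_stale error timeout updating;
        proxy_pass http://127.0.0.1:8080;
    }
}
```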
### Tip #4: 压缩数据 ###
压缩是一个具有很大潜力的性能加速方法。现在已经有一些精心设计的、针对照片JPEG 和 PNG、视频MPEG-4和音乐MP3等各类文件的高压缩率标准。每一个标准都或多或少地减少了文件的大小。
文本数据——包括 HTML包含了纯文本和 HTML 标签、CSS 和代码(比如 Javascript——经常是未经压缩就传输的。压缩这类数据对应用程序的感知性能影响更大对处于慢速或受限的移动网络上的客户端尤其如此。
这是因为文本数据经常是用户与网页交互的有效载荷,而多媒体数据更多的是起支持或者装饰的作用。智能的内容压缩可以减少 HTML、Javascript、CSS 和其它文本内容对带宽的要求,通常可以减少 30% 甚至更多的带宽,并相应缩短页面加载时间。
如果你使用 SSL压缩可以减少需要进行 SSL 编码的数据量,这可以抵消掉压缩数据本身所占用的部分 CPU 时间。
压缩文本数据的方法很多。举个例子,下文 HTTP/2 部分提到的新颖的文本压缩模式,就特别适合压缩头部数据。另一个例子是,你可以在 NGINX 里打开 GZIP 压缩。当你在你的服务里[预先压缩文本数据][25]之后,就可以直接使用 gzip_static 指令来处理压缩过的 .gz 版本。
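在 NGINX 里启用文本压缩和 gzip_static 的一个最小示意如下(具体的类型列表和阈值是假设的例子,应按站点内容调整):

```nginx
# 只压缩文本类内容;图片、视频等本身已是压缩格式,无需再压
gzip on;
gzip_types text/css application/javascript application/json;
gzip_min_length 1000;   # 太小的响应压缩收益不大

# 如果已预先生成了 .gz 文件,直接返回它们而不必在线压缩
gzip_static on;
```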
### Tip #5: 优化 SSL/TLS ###
安全套接字层([SSL][26])协议及其下一代版本传输层安全TLS协议正在被越来越多的网站采用。SSL/TLS 对从源服务器发往用户的数据进行加密提高了网站的安全性。Google 把是否使用 SSL/TLS 作为搜索引擎排名的一个正面影响因素,这也推动了这一趋势。
尽管 SSL/TLS 越来越流行但是使用加密对速度的影响也让很多网站望而却步。SSL/TLS 之所以让网站变的更慢,原因有二:
1. 任何一个连接第一次连接时的握手过程都需要传递密钥。而采用 HTTP/1.x 协议的浏览器在建立多个连接时会对每个连接重复上述操作。
2. 数据在传输过程中需要不断的在服务器端加密、在客户端解密。
为了鼓励使用 SSL/TLSHTTP/2 和 SPDY在[下一章][27]会描述)的作者设计了新的协议来让浏览器只需要对一个浏览器会话使用一个连接。这会大大的减少上述第一个原因所浪费的时间。然而现在可以用来提高应用程序使用 SSL/TLS 传输数据的性能的方法不止这些。
web 服务器有对应的机制优化 SSL/TLS 传输。举个例子NGINX 使用 [OpenSSL][28] 运行在普通的硬件上提供了接近专用硬件的传输性能。NGINX 的 [SSL 性能][29] 有详细的文档,而且把对 SSL/TLS 数据进行加解密的时间和 CPU 占用率降低了很多。
更进一步,参考这篇[文章][30]了解提高 SSL/TLS 性能的更多细节,可以总结为以下几点:
- **会话缓冲**。使用指令 [ssl_session_cache][31] 可以缓存每个新的 SSL/TLS 连接使用的参数。
- **会话票据或者 ID**。把 SSL/TLS 的信息保存在一个票据或者 ID 里可以流畅的复用而不需要重新握手。
- **OCSP 装订stapling**。通过在服务器上缓存 SSL/TLS 证书状态信息来减少握手时间。
NGINX 和 NGINX Plus 可以被用作 SSL/TLS 服务端,用于处理客户端流量的加密和解密,而同时以明文方式和其它服务器进行通信。要设置 NGINX 和 NGINX Plus 作为 SSL/TLS 服务端,参看 [HTTPS 连接][32] 和[加密的 TCP 连接][33]
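正文列出的会话缓存和 OCSP 相关优化,在 NGINX 配置里大致如下证书路径为假设OCSP 装订实际生效还需要可用的 resolver 和支持 OCSP 的证书链):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    # 会话缓存:在工人进程间共享,减少重复的完整握手
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # OCSP 装订:由服务器缓存证书状态信息,缩短握手时间
    ssl_stapling on;
    ssl_stapling_verify on;
}
```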
### Tip #6: 使用 HTTP/2 或 SPDY ###
对于已经使用了 SSL/TLS 的站点HTTP/2 和 SPDY 可以很好的提高性能,因为每个连接只需要一次握手。而对于没有使用 SSL/TLS 的站点来说,从响应速度的角度来说 HTTP/2 和 SPDY 将让迁移到 SSL/TLS 没有什么压力(原本会降低效率)。
Google 在 2012 年开始把 SPDY 作为一个比 HTTP/1.x 更快速的协议来推荐。HTTP/2 是目前 IETF 通过的标准,它基于 SPDY。SPDY 已经被广泛支持,但是很快就会被 HTTP/2 替代。
SPDY 和 HTTP/2 的关键是用单一连接来替代多路连接。单个连接是被复用的,所以它可以同时携带多个请求和响应的分片。
通过使用单一连接,这些协议可以避免像在实现了 HTTP/1.x 的浏览器中一样建立和管理多个连接。单一连接在对 SSL 特别有效,这是因为它可以最小化 SSL/TLS 建立安全链接时的握手时间。
SPDY 协议需要使用 SSL/TLS而 HTTP/2 官方标准并不需要,但是目前所有支持 HTTP/2 的浏览器只有在启用了 SSL/TLS 的情况下才能使用它。这就意味着支持 HTTP/2 的浏览器只有在网站使用了 SSL 并且服务器接收 HTTP/2 流量的情况下才会启用 HTTP/2。否则的话浏览器就会使用 HTTP/1.x 协议。
当你实现 SPDY 或者 HTTP/2 时,你不再需要那些常规的 HTTP 性能优化方案,比如按域分割、资源聚合,以及图像拼合。这些改变可以让你的代码和部署变得更简单和更易于管理。要了解 HTTP/2 带来的这些变化可以浏览我们的[白皮书][34]。
![NGINX Supports SPDY and HTTP/2 for increased web application performance](https://www.nginx.com/wp-content/uploads/2015/10/http2-27.png)
作为支持这些协议的一个例子NGINX 从一开始就支持了 SPDY而且[大部分使用 SPDY 协议的网站][35]运行的都是 NGINX。NGINX 同时也[很早][36]就对 HTTP/2 提供了支持:从 2015 年 9 月开始,开源版 NGINX 和 NGINX Plus 就[支持][37]它了。
随着时间的推移,我们 NGINX 希望更多的站点完全启用 SSL 并且向 HTTP/2 迁移。这将会提高安全性,同时,由于找到并实现了新的优化手段,简化的代码也会表现得更加优异。
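在 NGINX 上,对已启用 SSL/TLS 的站点开启 HTTP/2 通常只需在 listen 指令上加一个参数需要较新的 NGINX 版本,证书路径为假设的例子):

```nginx
server {
    # 在启用了 SSL/TLS 的监听端口上同时开启 HTTP/2
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
}
```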
### Tip #7: 升级软件版本 ###
一个提高应用性能的简单办法是根据软件的稳定性和性能评价来选择你的软件栈。进一步说,因为高性能组件的开发者更愿意追求更高的性能和解决 bug所以值得使用最新版本的软件。新版本往往更受开发者和用户社区的关注也会利用到新的编译器优化包括对新硬件的调优。
稳定的新版本通常比旧版本具有更好的兼容性和更高的性能。持续进行软件更新,可以非常简单地让软件保持最佳的优化状态,解决掉 bug并提高安全性。
一直使用旧版软件也会让你无法利用新的特性。比如上面说到的 HTTP/2目前要求 OpenSSL 1.0.1。从 2016 年中期开始将会要求 1.0.2,而它是在 2015 年 1 月才发布的。
NGINX 用户可以开始迁移到 [NGINX 最新的开源软件][38] 或者 [NGINX Plus][39];它们都包含了最新的能力,如 socket 分割和线程池(见下文),这些都已经为性能优化过了。然后好好看看的你软件栈,把它们升级到你能升级到的最新版本吧。
### Tip #8: Linux 系统性能调优 ###
Linux 是大多数 web 服务器使用的操作系统而且作为你的架构的基础Linux 显然有不少提高性能的可能。默认情况下,很多 Linux 系统都被设置为使用很少的资源,以符合典型的桌面应用使用。这就意味着 web 应用需要一些微调才能达到最大效能。
这里的 Linux 优化是专门针对 web 服务器方面的。以 NGINX 为例,这里有一些在加速 Linux 时需要强调的变化:
- **缓冲队列**。如果你有挂起的连接,那么你应该考虑增加 net.core.somaxconn 的值,它代表了可以排队等待的连接的最大数量。如果连接限制太小,那么你将会看到错误信息,这时可以逐渐增加这个参数,直到错误信息不再出现。
- **文件描述符**。NGINX 对一个连接最多使用 2 个文件描述符。如果你的系统要服务很多连接,你可能就需要提高 sys.fs.file_max 这个系统级的文件描述符数量限制,以支持不断增加的负载。
- **临时端口**。当用作代理时NGINX 会为每个上游服务器创建临时端口。你可以设置 net.ipv4.ip_local_port_range 来扩大可用端口的范围。你也可以通过 net.ipv4.tcp_fin_timeout 降低非活动端口被重新使用前的超时时间,以便更快地周转端口,支撑更大的流量。
对于 NGINX 来说,可以查阅 [NGINX 性能调优指南][40]来学习如何优化你的 Linux 系统,这样它就可以很好地适应大规模网络流量而不会超过工作极限。
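上面几项内核参数通常写在 /etc/sysctl.conf 里,再用 `sysctl -p` 加载。下面是一个示意片段(数值仅为举例,应结合自己的负载逐项测试调整;另外,文件描述符上限在多数发行版中对应的键名是 fs.file-max

```
# /etc/sysctl.conf 片段:数值应结合压测逐项调整
net.core.somaxconn = 4096                   # 增大连接缓冲队列
fs.file-max = 100000                        # 提高系统级文件描述符上限
net.ipv4.ip_local_port_range = 1024 65000   # 扩大临时端口范围
net.ipv4.tcp_fin_timeout = 15               # 更快地回收非活动端口
```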
### Tip #9: web 服务器性能调优 ###
无论你是用哪种 web 服务器,你都需要对它进行优化来提高性能。下面的推荐手段可以用于任何 web 服务器,但是一些设置是针对 NGINX 的。关键的优化手段包括:
- **访问日志**。不要把每个请求的日志都直接写到磁盘,你可以在内存中将日志缓存起来然后批量写回磁盘。对于 NGINX 来说,给指令 **access_log** 添加参数 **buffer=size** 可以让系统在缓存满了的情况下才把日志写到磁盘。如果你添加了参数 **flush=time**,那么缓存内容也会每隔一段时间写回磁盘。
- **缓冲**。缓冲会在内存中存放部分响应,直到缓冲区满了为止,这可以让与客户端的通信更加高效。放不进内存的响应会被写回磁盘,而这会降低性能。当 NGINX [启用][42]了缓冲机制后,你可以使用指令 **proxy_buffer_size** 和 **proxy_buffers** 来管理它。
- **客户端保活**。保活连接可以减少开销,特别是使用 SSL/TLS 时。对于 NGINX 来说,你可以从 **keepalive_requests** 的默认值 100 开始增加最大连接数,这样一个客户端就可以在一个指定的连接上请求多次,而且你也可以通过增加 **keepalive_timeout** 的值来允许保活连接存活更长时间,这样就可以让后来的请求处理的更快速。
- **上游保活**。上游的连接——即连接到应用服务器、数据库服务器等机器的连接——同样也会受益于连接保活。对于上游连接来说,你可以增加 **keepalive**,即每个工人进程的空闲保活连接个数。这就可以提高连接的复用次数,减少需要重新打开全新连接的次数。更多关于保活连接的信息可以参见[这篇“ HTTP 保活连接和性能”][41]。
- **限制**。限制客户端使用的资源可以提高性能和安全性。对于 NGINX 来说,指令 **limit_conn****limit_conn_zone** 限制了给定来源的连接数量,而 **limit_rate** 限制了带宽。这些限制都可以阻止合法用户*扒取*资源,同时也避免了攻击。指令 **limit_req****limit_req_zone** 限制了客户端请求。对于上游服务器来说,可以在 upstream 的配置块里的 server 指令使用 max_conns 参数来限制连接到上游服务器的连接数。 这样可以避免服务器过载。关联的 queue 指令会创建一个队列来在连接数抵达 **max_conn** 限制时在指定长度的时间内保存特定数量的请求。
- **工人进程**。工人进程负责处理请求。NGINX 采用事件驱动模型和操作系统特定的机制来有效地将请求分发给不同的工人进程。这条建议推荐设置 **worker_processes** 为每个 CPU 一个。worker_connections 的最大数默认512可以在大部分系统上根据需要增加可以实验性地找到最适合你的系统的值。
- **套接字分割**。通常,一个套接字监听器会把新连接分配给所有工人进程。套接字分割则会为每个工人进程创建一个套接字监听器,当某个套接字监听器可用时,内核就会将连接分配给它。这可以减少锁竞争,并且提高多核系统的性能。要启用[套接字分割][43],需要在 **listen** 指令里面加上 **reuseport** 参数。
- **线程池**。计算机进程可能被一个单一的缓慢的操作所占用。对于 web 服务器软件来说,磁盘访问会影响很多更快的操作,比如计算或者在内存中拷贝。使用了线程池之后慢操作可以分配到不同的任务集,而主进程可以一直运行快速操作。当磁盘操作完成后结果会返回给主进程的循环。在 NGINX 里有两个操作——read() 系统调用和 sendfile() ——被分配到了[线程池][44]
![Thread pools help increase application performance by assigning a slow operation to a separate set of tasks](https://www.nginx.com/wp-content/uploads/2015/10/Graph-17.png)
**技巧**。当改变任何操作系统或支持服务的设置时,一次只改变一个参数然后测试性能。如果修改引起问题了,或者不能让你的系统更快,那么就改回去。
在[文章“调优 NGINX 性能”][45]里可以看到更详细的 NGINX 调优方法。
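综合上面几条,下面给出一个示意性的 NGINX 配置片段,把工人进程、访问日志缓冲、保活连接和上游保活放在一起其中的路径、IP 和数值都是假设的示例,应结合实际负载测试调整):

```nginx
worker_processes auto;          # 每个 CPU 一个工人进程

events {
    worker_connections 10240;   # 根据系统情况从默认的 512 往上调
}

http {
    # 日志先在内存缓冲,写满或超时后再批量落盘
    access_log /var/log/nginx/access.log combined buffer=32k flush=5s;

    keepalive_requests 1000;    # 单个客户端保活连接允许的请求数
    keepalive_timeout  75s;

    upstream backend {
        server 10.0.0.1:8080;
        keepalive 32;           # 每个工人进程保留的上游空闲保活连接数
    }
}
```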
### Tip #10: 监视系统活动来解决问题和瓶颈 ###
在应用开发中,要使系统达到非常高的性能,关键是监视你的系统在现实世界中的运行表现。你必须能够监控特定设备上以及整个 web 基础设施中的程序活动。
监视活动大体上是被动的——它只会告诉你发生了什么,而发现和最终解决问题还要靠你自己。
监视可以发现几种不同的问题。它们包括:
- 服务器宕机。
- 服务器运行不正常,不断丢弃连接。
- 服务器出现大量的缓存未命中。
- 服务器没有发送正确的内容。
应用的总体性能监控工具,比如 New Relic 和 Dynatrace可以帮助你监控到从远程加载网页的时间而 NGINX 可以帮助你监控到应用交付端。当你需要考虑为基础设施添加容量以满足流量需求时,应用性能数据可以告诉你你的优化措施的确起作用了。
为了帮助开发者快速地发现、解决问题NGINX Plus 增加了[应用感知的健康度检查][46]——对重复出现的常规事件进行综合分析,并在问题出现时向你发出警告。NGINX Plus 还提供[会话排空session draining][47]功能,在已有任务完成之前阻止给服务器发送新的连接;以及慢启动功能,允许一个从错误中恢复的服务器逐步追赶上负载均衡服务器群的进度。使用得当时,健康度检查可以让你在问题变得严重到影响用户体验之前就发现它,而会话排空和慢启动可以让你替换服务器,并且这个过程不会对性能和正常运行时间产生负面影响。下图展示了 NGINX Plus 内建的[实时活动监视][48]仪表盘包括了服务器群、TCP 连接和缓存等 web 架构信息。
![Use real-time application performance monitoring tools to identify and resolve issues quickly](https://www.nginx.com/wp-content/uploads/2015/10/Screen-Shot-2015-10-05-at-4.16.32-PM.png)
### 总结: 看看10倍性能提升的效果 ###
这些性能提升方案对任何一个 web 应用都适用并且效果都很好,而实际效果取决于你的预算、你能投入的时间,以及目前实现方案中存在的差距。所以,你该如何对你自己的应用实现 10 倍的性能提升呢?
为了指导你了解每种优化手段的潜在影响,这里是上面详述的每个优化方法的关键点,虽然你的情况肯定大不相同:
- **反向代理服务器和负载均衡**。没有负载均衡或者负载均衡很差都会造成间歇的性能低谷。增加一个反向代理,比如 NGINX ,可以避免 web 应用程序在内存和磁盘之间波动。负载均衡可以将过载服务器的任务转移到空闲的服务器还可以轻松的进行扩容。这些改变都可以产生巨大的性能提升很容易就可以比你现在的实现方案的最差性能提高10倍对于总体性能来说可能提高的不多但是也是有实质性的提升。
- **缓存动态和静态数据**。如果你有一个负担过重的 web 服务器,那么你的应用服务器多半也是如此。只通过缓存动态数据,就可以在峰值时间提高 10 倍的性能;缓存静态文件还可以再提高几倍的性能。
- **压缩数据**。使用媒体文件压缩格式,比如图像格式 JPEG图形格式 PNG视频格式 MPEG-4音乐文件格式 MP3 可以极大的提高性能。一旦这些都用上了,然后压缩文件数据可以将初始页面加载速度提高两倍。
- **优化 SSL/TLS**。安全握手会对性能产生巨大的影响对它们的优化可能会对初始响应产生2倍的提升特别是对于大量文本的站点。优化 SSL/TLS 下媒体文件只会产生很小的性能提升。
- **使用 HTTP/2 和 SPDY**。当你使用了 SSL/TLS这些协议就可以提高整个站点的性能。
- **对 Linux 和 web 服务器软件进行调优**。比如优化缓存机制,使用保活连接,分配时间敏感型任务到不同的线程池可以明显的提高性能;举个例子,线程池可以加速对磁盘敏感的任务[近一个数量级][49]。
我们希望你亲自尝试这些技术,也希望了解你所取得的各种性能提升。请在下面的评论栏分享你的结果,或者在标签 #NGINX 和 #webperf 下发推分享你的故事。
### 网上资源 ###
[Statista.com Share of the internet economy in the gross domestic product in G-20 countries in 2016][50]
[Load Impact How Bad Performance Impacts Ecommerce Sales][51]
[Kissmetrics How Loading Time Affects Your Bottom Line (infographic)][52]
[Econsultancy Site speed: case studies, tips and tools for improving your conversion rate][53]
--------------------------------------------------------------------------------
via: https://www.nginx.com/blog/10-tips-for-10x-application-performance/
作者:[Floyd Smith][a]
译者:[Ezio](https://github.com/oska874)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.nginx.com/blog/author/floyd/
[1]:https://www.nginx.com/resources/glossary/reverse-proxy-server
[2]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip2
[3]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip3
[4]:https://www.nginx.com/products/application-health-checks/
[5]:https://www.nginx.com/solutions/load-balancing/
[6]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip1
[7]:https://www.nginx.com/resources/admin-guide/load-balancer/
[8]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
[9]:https://www.digitalocean.com/community/tutorials/understanding-and-implementing-fastcgi-proxying-in-nginx
[10]:https://www.nginx.com/resources/library/five-reasons-choose-software-load-balancer/
[11]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
[12]:https://www.nginx.com/resources/admin-guide/load-balancer//
[15]:https://www.nginx.com/products/
[16]:https://www.nginx.com/blog/nginx-caching-guide/
[17]:https://www.nginx.com/products/content-caching-nginx-plus/
[18]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&_ga=1.95342300.1348073562.1438712874#proxy_cache_purge
[19]:https://www.nginx.com/products/live-activity-monitoring/
[20]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&&&_ga=1.61156076.1348073562.1438712874#proxy_cache
[21]:https://www.nginx.com/resources/admin-guide/content-caching
[22]:https://www.nginx.com/blog/network-vs-devops-how-to-manage-your-control-issues/
[23]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6
[24]:https://www.nginx.com/resources/admin-guide/compression-and-decompression/
[25]:http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html
[26]:https://www.digicert.com/ssl.htm
[27]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6
[28]:http://openssl.org/
[29]:https://www.nginx.com/blog/nginx-ssl-performance/
[30]:https://www.nginx.com/blog/improve-seo-https-nginx/
[31]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache
[32]:https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/
[33]:https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-termination/
[34]:https://www.nginx.com/resources/datasheet/datasheet-nginx-http2-whitepaper/
[35]:http://w3techs.com/blog/entry/25_percent_of_the_web_runs_nginx_including_46_6_percent_of_the_top_10000_sites
[36]:https://www.nginx.com/blog/how-nginx-plans-to-support-http2/
[37]:https://www.nginx.com/blog/nginx-plus-r7-released/
[38]:http://nginx.org/en/download.html
[39]:https://www.nginx.com/products/
[40]:https://www.nginx.com/blog/tuning-nginx/
[41]:https://www.nginx.com/blog/http-keepalives-and-web-performance/
[42]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
[43]:https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/
[44]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
[45]:https://www.nginx.com/blog/tuning-nginx/
[46]:https://www.nginx.com/products/application-health-checks/
[47]:https://www.nginx.com/products/session-persistence/#session-draining
[48]:https://www.nginx.com/products/live-activity-monitoring/
[49]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
[50]:http://www.statista.com/statistics/250703/forecast-of-internet-economy-as-percentage-of-gdp-in-g-20-countries/
[51]:http://blog.loadimpact.com/blog/how-bad-performance-impacts-ecommerce-sales-part-i/
[52]:https://blog.kissmetrics.com/loading-time/?wide=1
[53]:https://econsultancy.com/blog/10936-site-speed-case-studies-tips-and-tools-for-improving-your-conversion-rate/

View File

@ -1,6 +1,6 @@
如何在 Debian 中配置 Tripewire IDS 如何在 Debian 中配置 Tripewire IDS
================================================================================ ================================================================================
本文是一篇关于Debian中安装和配置Tripewire的文章。它是Linux环境下基于主机的入侵检测系统IDS。tripwire的高级功能有检测并报告任何Linux中未授权的更改文件和目录。tripewire安装之后会先创建一个基本的数据库tripewire监控并检测新文件的创建修改和谁修改了它等等。如果修改是合法的你可以接受修改并更新tripwire的数据库。 本文是一篇关于 Debian 中安装和配置 Tripewire 的文章。它是 Linux 环境下基于主机的入侵检测系统IDS。tripwire 的高级功能有检测并报告任何 Linux 中未授权的(文件和目录)的更改。tripewire 安装之后会先创建一个基本的数据库tripewire 监控并检测新文件的创建修改和谁修改了它等等。如果修改是合法的,你可以接受修改并更新 tripwire 的数据库。
### 安装和配置 ### ### 安装和配置 ###
@ -14,8 +14,7 @@ tripwire在Debian VM中的安装如下。
#### 站点密钥创建 #### #### 站点密钥创建 ####
tripwire需要一个站点口令来加密tripwire的配置文件tw.cfg和策略文件tw.pol。tripewire使用指定的密码加密两个文件。一个tripewire实例必须指定站点口令。 tripwire 需要一个站点口令site passphrase来加密 tripwire 的配置文件 tw.cfg 和策略文件 tw.pol。tripewire 使用指定的密码加密两个文件。一个 tripewire 实例必须指定站点口令。
![site key1](http://blog.linoxide.com/wp-content/uploads/2015/11/site-key1.png) ![site key1](http://blog.linoxide.com/wp-content/uploads/2015/11/site-key1.png)
@ -25,13 +24,13 @@ tripwire需要一个站点口令来加密tripwire的配置文件tw.cfg和策略
![local key1](http://blog.linoxide.com/wp-content/uploads/2015/11/local-key1.png) ![local key1](http://blog.linoxide.com/wp-content/uploads/2015/11/local-key1.png)
#### Tripwire配置路径 #### #### tripwire 配置路径 ####
tripewire 配置存储在 /etc/tripwire/twcfg.txt。它用于生成加密的配置文件 tw.cfg。 tripewire 配置存储在 /etc/tripwire/twcfg.txt。它用于生成加密的配置文件 tw.cfg。
![configuration file](http://blog.linoxide.com/wp-content/uploads/2015/11/configuration-file.png) ![configuration file](http://blog.linoxide.com/wp-content/uploads/2015/11/configuration-file.png)
**Tripwire策略路径** **tripwire 策略路径**
tripwire 在 /etc/tripwire/twpol.txt 中保存策略文件。它用于生成加密的策略文件 tw.pol。 tripwire 在 /etc/tripwire/twpol.txt 中保存策略文件。它用于生成加密的策略文件 tw.pol。
@ -41,9 +40,9 @@ tripwire在/etc/tripwire/twpol.txt中保存策略文件。它用于生成加密
![installed tripewire1](http://blog.linoxide.com/wp-content/uploads/2015/11/installed-tripewire1.png) ![installed tripewire1](http://blog.linoxide.com/wp-content/uploads/2015/11/installed-tripewire1.png)
#### Tripwire配置文件 (twcfg.txt) #### #### tripwire 配置文件 (twcfg.txt) ####
tripewire配置文件twcfg.txt细节如下图所示。加密策略文件tw.pol,站点密钥site.key和本地密钥hostname-local.key如下所示。 tripewire 配置文件twcfg.txt细节如下图所示。加密策略文件tw.pol、站点密钥site.key和本地密钥hostname-local.key在后面展示。
ROOT =/usr/sbin ROOT =/usr/sbin
@ -79,7 +78,7 @@ tripewire配置文件twcfg.txt细节如下图所示。加密策略文件
TEMPDIRECTORY =/tmp TEMPDIRECTORY =/tmp
#### Tripwire策略配置 #### #### tripwire 策略配置 ####
在生成基础数据库之前先配置 tripwire 配置。有必要经用一些策略如 /dev、 /proc 、/root/mail 等。详细的 twpol.txt 策略文件如下所示。 在生成基础数据库之前先配置 tripwire 配置。有必要经用一些策略如 /dev、 /proc 、/root/mail 等。详细的 twpol.txt 策略文件如下所示。
@ -121,10 +120,10 @@ tripewire配置文件twcfg.txt细节如下图所示。加密策略文件
# vulnerability # vulnerability
# #
# Tripwire Binaries # tripwire Binaries
# #
( (
rulename = "Tripwire Binaries", rulename = "tripwire Binaries",
severity = $(SIG_HI) severity = $(SIG_HI)
) )
{ {
@ -237,9 +236,9 @@ tripewire配置文件twcfg.txt细节如下图所示。加密策略文件
#/proc -> $(Device) ; #/proc -> $(Device) ;
} }
#### Tripwire 报告 #### #### tripwire 报告 ####
**tripwire check** 命令检查twpol.txt文件并基于此文件生成tripwire报告如下。如果twpol.txt中有任何错误tripwire不会生成报告。 **tripwire-check** 命令检查 twpol.txt 文件并基于此文件生成 tripwire 报告如下。如果 twpol.txt 中有任何错误tripwire 不会生成报告。
![tripwire report](http://blog.linoxide.com/wp-content/uploads/2015/11/tripwire-report.png) ![tripwire report](http://blog.linoxide.com/wp-content/uploads/2015/11/tripwire-report.png)
@ -255,7 +254,7 @@ tripewire配置文件twcfg.txt细节如下图所示。加密策略文件
Wrote report file: /var/lib/tripwire/report/VMdebian-20151024-122322.twr Wrote report file: /var/lib/tripwire/report/VMdebian-20151024-122322.twr
Open Source Tripwire(R) 2.4.2.2 Integrity Check Report Open Source tripwire(R) 2.4.2.2 Integrity Check Report
Report generated by: root Report generated by: root
@ -299,13 +298,13 @@ tripewire配置文件twcfg.txt细节如下图所示。加密策略文件
Other binaries 66 0 0 0 Other binaries 66 0 0 0
Tripwire Binaries 100 0 0 0 tripwire Binaries 100 0 0 0
Other libraries 66 0 0 0 Other libraries 66 0 0 0
Root file-system executables 100 0 0 0 Root file-system executables 100 0 0 0
Tripwire Data Files 100 0 0 0 tripwire Data Files 100 0 0 0
System boot changes 100 0 0 0 System boot changes 100 0 0 0
@ -351,9 +350,9 @@ tripewire配置文件twcfg.txt细节如下图所示。加密策略文件
*** End of report *** *** End of report ***
Open Source Tripwire 2.4 Portions copyright 2000 Tripwire, Inc. Tripwire is a registered Open Source tripwire 2.4 Portions copyright 2000 tripwire, Inc. tripwire is a registered
trademark of Tripwire, Inc. This software comes with ABSOLUTELY NO WARRANTY; trademark of tripwire, Inc. This software comes with ABSOLUTELY NO WARRANTY;
for details use --version. This is free software which may be redistributed for details use --version. This is free software which may be redistributed
@ -373,7 +372,7 @@ via: http://linoxide.com/security/configure-tripwire-ids-debian/
作者:[nido][a] 作者:[nido][a]
译者:[geekpi](https://github.com/geekpi) 译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,317 @@
如何在 linux 上配置持续集成服务 - Drone
==============================================================
如果你对一次又一次地克隆、构建、测试和部署代码感到厌倦了,可以考虑一下持续集成。持续集成简称 CI是一种随着我们频繁提交代码而频繁地对代码库进行构建、测试和部署的软件工程实践。CI 可以帮助我们快速地集成新代码到已有的代码库。如果这个过程是自动化进行的那么就会提高开发的速度因为这可以减少开发人员手工构建和测试的时间。[Drone][1] 是一个自由开源项目,用来提供一个非常棒的持续集成服务的环境,采用 Apache 2.0 协议发布。它已经集成了很多代码库提供商,比如 Github、Bitbucket 以及 Google Code它可以从代码库提取代码对包括 PHP、Node、Ruby、Go、Dart、Python、C/C++、Java 等在内的各种语言编译构建。它是一个强大的平台,使用了容器和 docker 技术,让用户每次构建都可以在保证隔离的条件下完全控制他们自己的构建环境。
### 1. 安装 Docker ###
首先,我们要安装 docker因为这是 Drone 的工作流的最关键的元素。Drone 合理的利用了 docker 来构建和测试应用。容器技术提高了应用部署的效率。要安装 docker ,我们需要在不同的 linux 发行版本运行下面对应的命令,我们这里会说明 Ubuntu 14.04 和 CentOS 7 两个版本。
#### Ubuntu ####
要在 Ubuntu 上安装 Docker ,我们只需要运行下面的命令。
# apt-get update
# apt-get install docker.io
安装之后我们需要使用`service` 命令重启 docker 引擎。
# service docker restart
然后我们让 docker 在系统启动时自动启动。
# update-rc.d docker defaults
Adding system startup for /etc/init.d/docker ...
/etc/rc0.d/K20docker -> ../init.d/docker
/etc/rc1.d/K20docker -> ../init.d/docker
/etc/rc6.d/K20docker -> ../init.d/docker
/etc/rc2.d/S20docker -> ../init.d/docker
/etc/rc3.d/S20docker -> ../init.d/docker
/etc/rc4.d/S20docker -> ../init.d/docker
/etc/rc5.d/S20docker -> ../init.d/docker
#### CentOS ####
首先,我们要更新机器上已经安装的软件包。我们可以使用下面的命令。
# sudo yum update
要在 centos 上安装 docker我们可以简单的运行下面的命令。
# curl -sSL https://get.docker.com/ | sh
安装好 docker 引擎之后,我们只需要简单使用下面的 `systemd` 命令启动 docker,因为 centos 7 的默认初始化系统是 systemd。
# systemctl start docker
然后我们要让 docker 在系统启动时自动启动。
# systemctl enable docker
ln -s '/usr/lib/systemd/system/docker.service' '/etc/systemd/system/multi-user.target.wants/docker.service'
### 2. 安装 SQlite 驱动 ###
Drone 默认使用 SQlite3 数据库服务器来保存数据和信息。它会在 /var/lib/drone/ 自动创建名为 drone.sqlite 的数据库来处理数据库模式的创建和迁移。要安装 SQlite3 我们要完成以下几步。
#### Ubuntu 14.04 ####
因为 SQlite3 存在于 Ubuntu 14.04 的默认软件库,我们只需要简单的使用 apt 命令安装它。
# apt-get install libsqlite3-dev
#### CentOS 7 ####
要在 Centos 7 上安装需要使用下面的 yum 命令。
# yum install sqlite-devel
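顺便一提,Drone 的数据就保存在上面提到的 SQLite 数据库文件里。想快速看看库里有哪些表,可以用 Python 标准库的 `sqlite3` 模块写个小脚本。下面只是一个示意的草稿:`/var/lib/drone/drone.sqlite` 是上文提到的默认路径(假设值),具体表结构以实际安装为准。

```python
import sqlite3

def list_tables(db_path):
    """返回 SQLite 数据库中所有表名的列表。"""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
        return [r[0] for r in rows]
    finally:
        conn.close()

# 在 drone 服务器上可以直接指向它的数据库文件(默认路径,假设值):
# print(list_tables("/var/lib/drone/drone.sqlite"))
# 这里用内存数据库演示:空库没有任何表,输出 []
print(list_tables(":memory:"))
```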
### 3. 安装 Drone ###
依赖的软件都安装好之后,我们离安装 Drone 又近了一步。在这一步里我们只需简单的从官方链接下载对应的二进制软件包,然后使用默认软件包管理器安装 Drone。
#### Ubuntu ####
我们将使用 wget 从官方的 [Debian 文件下载链接][2]下载 drone 的 debian 软件包。下面就是下载命令。
# wget downloads.drone.io/master/drone.deb
Resolving downloads.drone.io (downloads.drone.io)... 54.231.48.98
Connecting to downloads.drone.io (downloads.drone.io)|54.231.48.98|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7722384 (7.4M) [application/x-debian-package]
Saving to: 'drone.deb'
100%[======================================>] 7,722,384 1.38MB/s in 17s
2015-11-06 14:09:28 (456 KB/s) - 'drone.deb' saved [7722384/7722384]
下载好之后,我们将使用 dpkg 软件包管理器安装它。
# dpkg -i drone.deb
Selecting previously unselected package drone.
(Reading database ... 28077 files and directories currently installed.)
Preparing to unpack drone.deb ...
Unpacking drone (0.3.0-alpha-1442513246) ...
Setting up drone (0.3.0-alpha-1442513246) ...
Your system ubuntu 14: using upstart to control Drone
drone start/running, process 9512
#### CentOS ####
在 CentOS 机器上我们要使用 wget 命令从[下载链接][3]下载 RPM 包。
# wget downloads.drone.io/master/drone.rpm
--2015-11-06 11:06:45-- http://downloads.drone.io/master/drone.rpm
Resolving downloads.drone.io (downloads.drone.io)... 54.231.114.18
Connecting to downloads.drone.io (downloads.drone.io)|54.231.114.18|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7763311 (7.4M) [application/x-redhat-package-manager]
Saving to: drone.rpm
100%[======================================>] 7,763,311 1.18MB/s in 20s
2015-11-06 11:07:06 (374 KB/s) - drone.rpm saved [7763311/7763311]
然后我们使用 yum 安装 rpm 包。
# yum localinstall drone.rpm
### 4. 配置端口 ###
安装完成之后,我们要先进行配置才能让它工作起来。drone 的配置文件在 **/etc/drone/drone.toml**。默认情况下 drone 的 web 界面使用的是 80 端口,而这也是 http 默认的端口;如果我们要修改它,可以按下面所示修改配置文件里 server 块对应的值。
[server]
port=":80"
### 5. 集成 Github ###
为了运行 Drone,我们必须设置至少一个与 GitHub、GitHub 企业版、Gitlab、Gogs、Bitbucket 关联的集成点。在本文里我们只集成了 github,但是如果我们要集成其他的服务,我们可以在配置文件里做修改。为了集成 github,我们需要在 github 的设置里创建一个新的应用:https://github.com/settings/developers 。
![Registering App Github](http://blog.linoxide.com/wp-content/uploads/2015/11/registering-app-github.png)
要创建一个应用,我们需要在 `New Application` 页面点击 `Register`,然后如下所示填表。
![Registering OAuth app github](http://blog.linoxide.com/wp-content/uploads/2015/11/registering-OAuth-app-github.png)
我们应该保证在应用的配置项里设置了**授权回调链接**,链接看起来类似 `http://drone.linoxide.com/api/auth/github.com`。然后我们点击注册应用。所有都做好之后我们会看到我们需要在我们的 Drone 配置文件里配置的客户端 ID 和客户端密钥。
![Client ID and Secret Token](http://blog.linoxide.com/wp-content/uploads/2015/11/client-id-secret-token.png)
在这些都完成之后我们需要使用文本编辑器编辑 drone 配置文件,比如使用下面的命令。
# nano /etc/drone/drone.toml
然后我们会在 drone 的配置文件里面找到`[github]` 部分,紧接着的是下面所示的配置内容
[github]
client="3dd44b969709c518603c"
secret="4ee261abdb431bdc5e96b19cc3c498403853632a"
# orgs=[]
# open=false
![Configuring Github Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-github-drone-e1446835124465.png)
### 6. 配置 SMTP 服务器 ###
如果我们想让 drone 使用 email 发送通知,那么我们需要在 SMTP 配置里面设置我们的 SMTP 服务器。如果我们已经有了一个 SMTP 服务,那就只需要简单的使用它的配置文件就行了,但是因为我们没有一个 SMTP 服务器,我们需要安装一个 MTA 比如 Postfix然后在 drone 配置文件里配置好 SMTP。
#### Ubuntu ####
在 ubuntu 里使用下面的 apt 命令安装 postfix。
# apt-get install postfix
#### CentOS ####
在 CentOS 里使用下面的 yum 命令安装 postfix。
# yum install postfix
安装好之后,我们需要编辑我们的 postfix 配置文件。
# nano /etc/postfix/main.cf
然后我们要把 myhostname 的值替换为我们自己的 FQDN比如 drone.linoxide.com。
myhostname = drone.linoxide.com
现在开始配置 drone 配置文件里的 SMTP 部分。
# nano /etc/drone/drone.toml
找到`[smtp]` 部分补充上下面的内容。
[smtp]
host = "drone.linoxide.com"
port = "587"
from = "root@drone.linoxide.com"
user = "root"
pass = "password"
![Configuring SMTP Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-smtp-drone.png)
注意:这里的 **user****pass** 参数强烈推荐一定要改成某个具体用户的配置。
### 7. 配置 Worker ###
如我们所知,drone 利用 docker 来完成构建、测试任务,因此我们需要把 docker 配置为 drone 的 worker。要完成这些,需要修改 drone 配置文件里的`[worker]` 部分。
# nano /etc/drone/drone.toml
然后取消底下几行的注释并且补充上下面的内容。
[worker]
nodes=[
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock"
]
这里我们只设置了两个节点这意味着上面的配置文件只能同时执行2 个构建操作。要提高并发性可以增大节点的值。
[worker]
nodes=[
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock"
]
使用上面的配置,drone 被配置为使用本地的 docker 守护程序,可以同时构建 4 个任务。
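节点数和并发数的关系很直接:`nodes` 列表里每个 `unix:///var/run/docker.sock` 条目对应一个并发构建槽位。如果不想手写这段重复的配置,可以参考下面这个纯属演示的 Python 小函数(它与 drone 本身无关,只是把上面的规律写成了代码):

```python
def worker_nodes(concurrency, sock="unix:///var/run/docker.sock"):
    """生成 drone.toml 中 [worker] 一节的 nodes 配置片段,
    每个并发构建槽位对应一行 docker socket 条目。"""
    entries = ['    "%s"' % sock for _ in range(concurrency)]
    body = ",\n".join(entries)
    return "[worker]\nnodes=[\n" + body + "\n]"

# 生成可同时执行 4 个构建的 worker 配置
print(worker_nodes(4))
```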
### 8. 重启 Drone ###
最后,当所有的安装和配置都准备好之后,我们现在要在本地的 linux 机器上启动 drone 服务器。
#### Ubuntu ####
因为 ubuntu 14.04 使用了 sysvinit 作为默认的初始化系统,所以只需要简单执行下面的 service 命令就可以启动 drone 了。
# service drone restart
要让 drone 在系统启动时也自动运行,需要运行下面的命令。
# update-rc.d drone defaults
#### CentOS ####
因为 CentOS 7使用 systemd 作为初始化系统,所以只需要运行下面的 systemd 命令就可以重启 drone。
# systemctl restart drone
要让 drone 自动运行只需要运行下面的命令。
# systemctl enable drone
### 9. 添加防火墙例外规则 ###
众所周知 drone 默认使用了 80 端口,而我们又没有修改它,所以我们需要配置防火墙程序开放 80 端口(http),并允许其他机器通过网络连接。
#### Ubuntu 14.04 ####
iptables 是最流行的防火墙程序,并且 ubuntu 默认安装了它。我们需要修改 iptables 以开放 80 端口,这样我们才能让 drone 的 web 界面在网络上被大家访问。
# iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
# /etc/init.d/iptables save
#### CentOS 7 ####
因为 CentOS 7 默认安装了 systemd它使用 firewalld 作为防火墙程序。为了在 firewalld 上打开80端口http 服务),我们需要执行下面的命令。
# firewall-cmd --permanent --add-service=http
success
# firewall-cmd --reload
success
### 10. 访问 web 界面 ###
现在我们将在我们最喜欢的浏览器上通过 web 界面打开 drone。要完成这些,我们要把浏览器指向运行 drone 的服务器。因为 drone 默认使用 80 端口,而我们也没有修改过,所以我们只需要在浏览器里根据我们的配置输入 `http://ip-address/` 或 `http://drone.linoxide.com` 就行了。在我们正确的完成了上述操作后,我们就可以看到登录界面了。
![Login Github Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/login-github-drone-e1446834688394.png)
因为在上面的步骤里配置了 Github我们现在只需要简单的选择 github 然后进入应用授权步骤,这些完成后我们就可以进入工作台了。
![Drone Dashboard](http://blog.linoxide.com/wp-content/uploads/2015/11/drone-dashboard.png)
这里它会同步我们在 github 上的代码库,然后询问我们要在 drone 上构建哪个代码库。
![Activate Repository](http://blog.linoxide.com/wp-content/uploads/2015/11/activate-repository-e1446835574595.png)
这一步完成后,它会让我们在代码库里添加一个名为 `.drone.yml` 的新文件,在这个文件里定义构建的过程和配置项,比如使用哪个 docker 镜像,执行哪些命令和脚本来编译,等等。
我们按照下面的内容来配置我们的`.drone.yml`。
image: python
script:
- python helloworld.py
- echo "Build has been completed."
这一步完成后我们就可以使用 drone 应用里的 YAML 格式的配置文件来构建我们的应用了。所有对代码库的提交和改变此时都会同步到这个仓库。一旦提交完成了drone 就会自动开始构建。
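上面的 `.drone.yml` 只是最简单的形式。如果构建前还需要安装依赖,脚本列表可以继续扩展,比如下面这个假设性的写法(`requirements.txt` 是假设存在的依赖清单,具体字段请以你所用 Drone 版本的文档为准):

```yaml
image: python
script:
  - pip install -r requirements.txt
  - python helloworld.py
  - echo "Build has been completed."
```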
![Building Application Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/building-application-drone.png)
所有操作都完成后,我们就能在终端看到构建的结果了。
![Build Success Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/build-success-drone.png)
### 总结 ###
在本文中我们学习了如何安装一个可以工作的使用 drone 的持续集成平台。如果我们愿意我们甚至可以从 drone.io 官方提供的服务开始工作。我们可以根据自己的需求从免费的服务或者收费服务开始。它通过漂亮的 web 界面和强大的功能改变了持续集成的世界。它可以集成很多第三方应用和部署平台。如果你有任何问题、建议可以直接反馈给我们,谢谢。
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/setup-drone-continuous-integration-linux/
作者:[Arun Pyasi][a]
译者:[ezio](https://github.com/oska874)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://drone.io/
[2]:http://downloads.drone.io/master/drone.deb
[3]:http://downloads.drone.io/master/drone.rpm
[4]:https://github.com/settings/developers
从 Hello World 容器进阶是件困难的事情
================================================================================

在[我的上一篇文章里][1],我介绍了 Linux 容器背后的技术的概念。我写了我知道的一切。容器对我来说也是比较新的概念。我写这篇文章的目的就是鼓励我真正的来学习这些东西。

我打算在使用中学习。首先实践,然后上手并记录下我是怎么走过来的。我假设这里肯定有很多"Hello World" 这种类型的知识帮助我快速的掌握基础。然后我能够更进一步,构建一个微服务容器或者其它东西。

我想,它应该不会有多难的。

但是我错了。

可能对某些人来说这很简单,因为他们在运维工作方面付出了大量的时间。但是对我来说实际上是很困难的,这从我在 Facebook 上发的充满挫折感的状态就可以看出来。

但是还有一个好消息:我最终搞定了。而且它工作的还不错。所以我准备向你分享我如何制作我的第一个微服务容器。我的痛苦可能会节省你不少时间呢。

如果你曾经发现你也处于过这种境地,不要害怕:像我这样的人都能搞定,所以你也肯定行!

让我们开始吧。

### 一个缩略图微服务 ###

我设计的微服务在理论上很简单。以 JPG 或者 PNG 格式在 HTTP 终端发布一张数字照片,然后获得一个100像素宽的缩略图。

下面是它的流程:

![container-diagram-0](https://deis.com/images/blog-images/containers-hard-0.png)
![container-diagram-1](https://deis.com/images/blog-images/containers-hard-1.png)

我下载了 [Docker Toolbox][3],用它安装了 Docker 的快速启动终端(Docker Quickstart Terminal)。Docker 快速启动终端使得创建容器更简单了。终端会启动一个装好了 Docker 的 Linux 虚拟机,它允许你在一个终端里运行 Docker 命令。

虽然在我的例子里,我的操作系统是 Mac OS X。但是 Windows 下也有相同的工具。
### 第一个小问题和第一个大问题 ###

我用 NodeJS 和 ImageMagick 瞎搞了一通,然后让我的服务在本地运行起来了。

然后我创建了 Dockerfile,这是 Docker 用来构建容器的配置脚本。(我会在后面深入介绍构建过程和 Dockerfile)

这是我运行 Docker 快速启动终端的命令:
呃。

我估摸着过了15分钟,我才反应过来:我忘记了在末尾参数输入一个点 `.`。

正确的指令应该是这样的:

$ docker build -t thumbnailer:0.1 .
但是这不是我遇到的最后一个问题。
我让这个镜像构建好了,然后我在 Docker 快速启动终端输入了 [`run` 命令][4]来启动容器,名字叫 `thumbnailer:0.1`:

$ docker run -d -p 3001:3000 thumbnailer:0.1
参数 `-p 3001:3000` 让 NodeJS 微服务在 Docker 内运行在 3000 端口,而绑定在宿主主机的 3001 端口上。

到目前为止一切都很好,对吧?

错了。事情要马上变糟了。

我通过运行 `docker-machine` 命令获取了这个 Docker 快速启动终端里创建的虚拟机的 ip 地址:

$ docker-machine ip default

这句话返回了默认虚拟机的 IP 地址,它运行在 Docker 快速启动终端里。在我这里,这个 ip 地址是 192.168.99.100。

我浏览网页 http://192.168.99.100:3001/ ,然后找到了我创建的上传图片的网页:
终端告诉我它无法找到我的微服务需要的 `/upload` 目录。

现在,你要知道,我已经在这上面耗费了将近一天的时间来研究问题。我此时感到了一些挫折感。

然后灵光一闪。某人记起来微服务不应该自己做任何数据持久化的工作!保存数据应该是另一个服务的工作。
![container-diagram-5](https://deis.com/images/blog-images/containers-hard-5.png)

这是我用 NodeJS 写的在内存运行、生成缩略图的代码:

// Bind to the packages
var express = require('express');
module.exports = router;

好了,一切回到正轨,已经可以在我的本地机器正常工作了。我该去休息了。

但是,在我测试把这个微服务当作一个普通的 Node 应用运行在本地时...

![Containers Hard](https://deis.com/images/blog-images/containers-hard-6.png)

它工作的很好。现在我要做的就是让它在容器里面工作。

第二天我起床后喝点咖啡,然后创建一个镜像——这次没有忘记那个"."!

$ docker build -t thumbnailer:01 .

我从缩略图项目的根目录开始构建。构建命令使用了根目录下的 Dockerfile。它是这样工作的:把 Dockerfile 放到你想构建镜像的地方,然后系统就默认使用这个 Dockerfile。

下面是我使用的 Dockerfile 的内容:
我搜索了我能想到的所有的错误原因。差不多4个小时后,我想:为什么不重启一下机器呢?

重启了,你猜猜结果?错误消失了!(LCTT 译注:万能的“重启试试”)

继续。

### 将精灵关进瓶子 ###

跳回正题:我需要完成构建工作。
$ docker rmi -f $(docker images | tail -n +2 | awk '{print $3}')

我重新执行了构建镜像、安装容器、运行微服务的整个过程。然后过了充满自我怀疑和沮丧的一个小时,我告诉我自己:这个错误可能不是微服务的原因。

所以我重新看到了这个错误:
这太打击我了:构建脚本好像需要有人从键盘输入 Y! 但是,这是一个非交互的 Dockerfile 脚本啊。这里并没有键盘。

回到 Dockerfile,脚本原来是这样的:

RUN apt-get update
RUN apt-get install -y nodejs nodejs-legacy npm
RUN apt-get install imagemagick libmagickcore-dev libmagickwand-dev
RUN apt-get clean
第二个 `apt-get` 忘记了 `-y` 标志,它用于自动应答提示所需要的“yes”。这才是错误的根本原因。

我在这条命令后面添加了 `-y`:

RUN apt-get update
$ docker run -d -p 3001:3000 thumbnailer:0.1

获取了虚拟机的 IP 地址:

$ docker-machine ip default
在容器里面工作了,我的第一次啊!

### 这让我学到了什么? ###

很久以前,我接受了这样一个道理:当你刚开始尝试某项技术时,即使是最简单的事情也会变得很困难。因此,我不会把自己当成最聪明的那个人,然而最近几天尝试容器的过程就是一个充满自我怀疑的旅程。

但是你想知道一些其它的事情吗?这篇文章是我在凌晨2点完成的,而每一个折磨人的时刻都值得了。为什么?因为这段时间你将自己全身心投入了喜欢的工作里。这件事很难,对于所有人来说都不是很容易就获得结果的。但是不要忘记:你在学习技术,运行世界的技术。

P.S. 了解一下 Hello World 容器的两段视频,这里会有 [Raziel Tabibs][7] 的精彩工作内容。
via: https://deis.com/blog/2015/beyond-hello-world-containers-hard-stuff
作者:[Bob Reselman][a]
译者:[Ezio](https://github.com/oska874)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://deis.com/blog
[1]:https://linux.cn/article-6594-1.html
[2]:https://github.com/rsms/node-imagemagick
[3]:https://www.docker.com/toolbox
[4]:https://docs.docker.com/reference/commandline/run/
微软和 Linux :真正的浪漫还是有毒的爱情?
================================================================================
时不时的我们会读到一个能让你喝咖啡呛到或者把热拿铁喷到你显示器上的新闻故事。微软最近宣布的对 Linux 的钟爱就是这样一个鲜明的例子。
从常识来讲微软和自由开源软件FOSS运动就是恒久的敌人。在很多人眼里微软体现了过分的贪婪而这正为自由开源软件运动FOSS所拒绝。另外之前微软就已经给自由开源软件社区贴上了"一伙强盗"的标签。
我们能够理解为什么微软一直以来都害怕免费的操作系统。当免费的操作系统与挑战微软核心产品线的开源应用相结合时,就威胁到了微软在台式机和笔记本电脑市场的控制地位。
尽管微软有对在台式机主导地位的担忧,在网络服务器市场 Linux 却有着最高的影响力。今天,大多数的服务器都是 Linux 系统。包括世界上最繁忙的站点服务器。对微软来说,看到这么多无法装到兜里的许可证的营收一定是非常痛苦的。
掌上设备是微软输给自由软件的另一个领域。曾几何时,微软的 Windows CE 和 Pocket PC 操作系统走在移动计算的前沿。Windows PDA 设备是最闪亮最豪华的产品。但是这一切在苹果公司发布了 iPhone 之后就都结束了。从那时起,安卓开始进入公众视野,Windows 的移动产品开始被忽略、被遗忘。而安卓平台是建立在自由开源组件的基础上的。
由于安卓平台的开放性,安卓的市场份额在迅速扩大。不像 IOS任何一个手机制造商都可以发布安卓手机。也不像Windows 手机安卓没有许可费用。这对消费者来说是件好事。这也导致了许多强大却又价格低廉的手机制造商在世界各地涌现。这非常明确的证明了自由开源软件FOSS的价值。
在服务器和移动计算的角逐中失利,对微软来说是非常惨重的损失。考虑一下服务器和移动计算这两者加起来所占有的市场规模,台式机市场就显得像一潭死水了。没有人喜欢失败,尤其是涉及到金钱的时候。并且,微软确实正在慢慢失去很多东西。你可能以为微软会因此进行反击。而在过去,它确实是这样做的。
微软使用了各种可以支配的手段来对 Linux 和自由开源软件FOSS进行反击从宣传到专利威胁。尽管这种攻击确实减慢了适配 Linux 的步伐,但却从来没有让 Linux 的脚步停下。
所以当微软在开源大会和重大事件上拿出印有“Microsoft Loves Linux”的T恤和徽章时请原谅我们表现出来的震惊。这是真的吗微软真的爱 Linux
当然公关的口号和免费的T恤并不代表真理。行动胜于雄辩。当你思考一下微软的行动时微软的立场就变得有点模棱两可了。
一方面,微软招募了几百名 Linux 开发者和系统管理员。将 .NET 核心框架作为一个开源的项目进行了发布,并提供了跨平台的支持(这样 .NET 就可以跑在 OS X 和 Linux 上了)。并且,微软与 Linux 公司合作把最流行的发行版本放到了 Azure 平台上。事实上,微软已经走的如此之远以至于要为 Azure 数据中心开发自己的 Linux 发行版了。
另一方面,微软继续直接通过法律或者傀儡公司来对开源项目进行攻击。很明显,微软在与自由软件的所有权较量上并没有发自内心的进行大的道德转变。那为什么要公开申明对 Linux 的钟爱之情呢?
一个显而易见的事实:微软是一个经营性实体。对股东来说是一个投资工具,对雇员来说是收入来源。微软所做的只有一个终极目标:盈利。微软并没有表现出来爱或者恨(尽管这是一个最常见的指控)。
所以问题不应该是"微软真的爱 Linux 吗?"相反,我们应该问,微软是怎么从这一切中获利的。
让我们以 .NET 核心框架的开源发行为例。这一举动使得 .NET 的运行时环境移植到任何平台都很轻松。这使得微软的 .NET 框架所涉及到的范围远远大于 Windows 平台。
开放 .NET 的核心包,最终使得 .NET 开发者开发跨平台的 app 成为可能,比如 OS X、Linux 甚至安卓——都基于同一个核心代码库。
从开发者角度来讲,这使得 .NET 框架比之前更有吸引力了。能够从单一的代码库触及到多个平台,使得使用 .NET 框架开发的任何 app 戏剧性的扩大了潜在的目标市场。
另外,一个强大的开源社区能够提供给开发者一些代码来在他们自己的项目中进行复用。所以,开源项目的可利用性也将会成就 .NET 框架。
更进一步讲,开放 .NET 的核心代码能够减少跨越不同平台所产生的碎片,意味着对消费者来说有对 app 更广的选择。无论是开源软件还是专用的 app都有更多的选择。
从微软的角度来讲,会得到一队开发者大军。微软可以通过销售培训、证书、技术支持、开发者工具(包括 Visual Studio和应用扩展来获利。
我们应该自问的是,这对自由软件社区有利还是有弊?
.NET 框架的大范围适用意味着许多参与竞争的开源项目的消亡,迫使我们会跟着微软的节奏走下去。
先抛开 .NET 不谈,微软正在花费大量的精力在 Azure 云计算平台对 Linux 的支持上。要记得Azure 最初是 Windows 的 Azure。Windows 服务器是唯一能够支持 Azure 的操作系统。今天Azure 也提供了对多个 Linux 发行版的支持。
关于此,有一个原因:付费给需要或者想要 Linux 服务的顾客。如果微软不提供 Linux 虚拟机,那些顾客就会跟别人合作了。
看上去好像是微软意识到了“Linux 就在这里”的现实。微软不能真正的消灭它,所以必须接受它。
这又把我们带回到那个问题:关于微软和 Linux 为什么有这么多的流言?我们在谈论这个问题,因为微软希望我们思考这个问题。毕竟,所有这些谈资都会追溯到微软,不管是在新闻稿、博客还是会议上的公开声明。微软在努力吸引大家对其在 Linux 专业知识方面的注意力。
否则,首席架构师 Kamala Subramaniam 发博文宣布 Azure Cloud Switch 背后还能有什么其他的意图呢?ACS 是一个定制的 Linux 发行版,微软用它来对 Azure 数据中心的交换机硬件进行自动配置。
ACS 不是公开的。它是用于 Azure 内部使用的。别人也不太可能找到这个发行版其他的用途。事实上Subramaniam 在她的博文中也表述了同样的观点。
所以,微软不会通过卖 ACS 来获利,也不会通过赠送它而增加用户基数。相反,微软在 Linux 和 Azure 上花费精力,以加强其在 Linux 云计算平台方面的地位。
微软最近迷上 Linux 对社区来说是好消息吗?
我们不应该慢慢忘记微软的“拥抱、扩展、消灭Embrace, Extend and Exterminate”的诅咒。现在微软处在拥抱 Linux 的初期阶段。微软会通过定制扩展和专有“标准”来分裂社区吗?
发表评论吧,让我们知道你是怎么想的。
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/microsoft-and-linux-true-romance-or-toxic-love-0
作者:[James Darvell][a]
译者:[sonofelice](https://github.com/sonofelice)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxjournal.com/users/james-darvell
如何在 CentOS 7 中添加新磁盘而不用重启系统
================================================================================

对大多数系统管理员来说扩充 Linux 服务器的磁盘空间是日常的工作之一。因此这篇文章会通过使用 Linux 命令,在 CentOS 7 系统上演示一些简单的操作步骤来扩充您的磁盘空间而不需要重启您的生产服务器。关于扩充和增加新的磁盘到 Linux 系统,我们会提及多种方法和多种可行性,可按您所需选择最适用的一种。

### 1. 虚拟机客户端扩充磁盘空间: ###

在为 Linux 系统增加磁盘卷之前,您首先需要添加一块新的物理磁盘,或在 VMware vSphere、VMware 工作站以及你使用的其它虚拟环境软件中进行设置来增加一块虚拟磁盘的容量。

![Increase disk](http://blog.linoxide.com/wp-content/uploads/2016/02/1.png)
### 3. 扩展空间而无需重启虚拟机 ###

现在运行如下命令,通过重新扫描 SCSI (Small Computer System Interface 小型计算机系统接口)总线并添加 SCSI 设备,系统就可以扩展操作系统的物理卷磁盘空间,而且不需要重启虚拟机。

# ls /sys/class/scsi_host/
# echo "- - -" > /sys/class/scsi_host/host0/scan
# echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan
# echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan

如下图所示,会重新扫描 SCSI 总线,随后我们在虚拟机客户端设置的磁盘大小会正常显示。

![Rescan disk device](http://blog.linoxide.com/wp-content/uploads/2016/02/3.png)
### 5. 创建物理卷: ###

根据上述提示,运行 'partprobe' 或 'kpartx' 命令以使分区表生效,然后使用如下的命令来创建新的物理卷。

# partprobe
# pvresize /dev/sda3
# xfs_growfs /dev/mapper/centos-root

'/' 分区的大小已经成功的增加了,可以使用 'df' 命令来检查您磁盘驱动器的大小。如图所示。

![Increase disk space](http://blog.linoxide.com/wp-content/uploads/2016/02/3C.png)
# echo "- - -" > /sys/class/scsi_host/host1/scan
# echo "- - -" > /sys/class/scsi_host/host2/scan

列出您的 SCSI 设备的名称:

# ls /sys/class/scsi_device/
# echo 1 > /sys/class/scsi_device/1\:0\:0\:0/device/rescan
![Scanning new disk](http://blog.linoxide.com/wp-content/uploads/2016/02/3F.png)

一旦新增的磁盘可见,就可以运行下面的命令来创建新的物理卷,然后增加到卷组,如下所示。

# pvcreate /dev/sdb
# vgextend centos /dev/sdb
### 结论: ###

在 Linux CentOS 7 系统上,管理磁盘分区的操作过程是非常简单的,可以使用这篇文章所述的操作步骤来扩充您的任意逻辑卷的磁盘空间。您不需要重启生产线上的服务器,只是简单的重扫描下 SCSI 设备,和扩展您想要的 LVM(逻辑卷管理)。我们希望这文章对您有用。请随意的发表有用的评论和建议。

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/add-new-disk-centos-7-without-rebooting/

作者:[Kashif S][a]
译者:[runningwater](https://github.com/runningwater)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
输错密码?这个 sudo 会“嘲讽”你
===========================================================
你在 Linux 终端中会有很多的乐趣。我今天要讲的不是在[终端中跑火车](http://itsfoss.com/ubuntu-terminal-train/)。
我今天要讲的技巧可以放松你的心情。前面一篇文章中,你学习了[如何在命令行中增加 sudo 命令的超时](http://itsfoss.com/change-sudo-password-timeout-ubuntu/)。今天的文章中,我会向你展示如何让 sudo 在输错密码的时候“嘲讽”你(或者其他人)。
对我讲的感到疑惑?这里,让我们看下这张 gif 来了解下 sudo 是如何在你输错密码之后“嘲讽”你的。
![](http://itsfoss.com/wp-content/uploads/2016/02/sudo-insults-Linux.gif)
那么,为什么要这么做?毕竟,“嘲讽”不会让你的一天开心,不是么?
对我来说,一点小技巧都是有趣的,并且要比以前的“密码错误”的错误提示更有趣。另外,我可以向我的朋友展示来逗弄他们(这个例子中是通过自由开源软件)。我很肯定你有你自己的理由来使用这个技巧的。
## 在 sudo 中启用“嘲讽”
你可以在`sudo`配置中增加下面的行来启用“嘲讽”功能:
```
Defaults insults
```
让我们看看该如何做。打开终端并使用下面的命令:
```
sudo visudo
```
这会在 [nano](http://www.nano-editor.org/)中打开配置文件。
> 是的,我知道传统的 visudo 应该在 vi 中打开 `/etc/sudoers` 文件,但是 Ubuntu 及基于它的发行版会使用 nano 打开。由于我们在讨论 vi,这里有一份 [vi 速查表](http://itsfoss.com/download-vi-cheat-sheet),可以在你决定使用 vi 的时候使用。
回到编辑 sudoers 文件的界面,你需要找到 Defaults 所在的行。很简单,只需要在文件的开头加上 `Defaults insults`,就像这样:
![](http://itsfoss.com/wp-content/uploads/2016/02/sudo-insults-Linux-Mint.png)
如果你正在使用 nano使用`Ctrl+X`来退出编辑器。在退出的时候它会询问你是否保存更改。要保存更改按下“Y”。
一旦你保存了 sudoers 文件之后,打开终端并使用 sudo 运行各种命令。故意输错密码并享受嘲讽吧:)
sudo 可能会生气的。看见没,他甚至在我再次输错之后威胁我。哈哈。
![](http://itsfoss.com/wp-content/uploads/2016/02/sudo-insults-Linux-Mint-1.jpeg)
如果你喜欢这个终端技巧,你也可以查看[其他终端技巧的文章](http://itsfoss.com/category/terminal-tricks/)。如果你有其他有趣的技巧,在评论中分享。
------------------------------------------------------------------------------
via: http://itsfoss.com/sudo-insult-linux/
作者:[ABHISHEK][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
![](http://insidehpc.com/wp-content/uploads/2015/08/beegfs.jpg)

2月23日, ThinkParQ 宣布完整的 [BeeGFS 并行文件系统][1] 的源码现已开源。由于 BeeGFS 是专为要求性能的环境开发的,所以它在开发时十分注重安装的简易性以及高度灵活性,包括融合了在存储服务器同时做计算任务时需要的设置。随着系统中的服务器以及存储设备的增加,文件系统的容量以及性能将是需求的拓展点,无论是小型集群还是多达上千个节点的企业级系统。

官方第一次声明开放 BeeGFS 的源码是在 2013 年的国际超级计算大会上发布的。这个声明是在欧洲的百亿亿次级超算项目 [DEEP-ER][2] 的背景下做出的,在这个项目里为了满足更高的 I/O 要求,做出了一些新的改进。对于运算量高达百亿亿次的系统,不同的软硬件必须有效的协同工作才能得到最佳的拓展性。因此,开源 BeeGFS 是让一个百亿亿次的集群的所有组成部分高效的发挥作用的一步。

“当我们的一些用户对于 BeeGFS 十分容易安装并且不用费心管理而感到高兴时,另外一些用户则想要知道它是如何运行的,以便于更好的优化他们的应用,使得他们可以监控它或者把它移植到其他的平台上,比如 BSD,” Sven Breuner 说道,他是 ThinkParQ (BeeGFS 背后的公司)的 CEO,“而且把 BeeGFS 移植到其他的非 X86 架构,比如 ARM 或者 Power,也是社区即将要做的一件事。”

对于未来的采购来说,ARM 技术的稳步发展确实使得它成为了一个越来越有趣的技术。因此, BeeGFS 的团队也参与了 [ExaNeSt][3],一个来自欧洲的新的百亿亿次级超算计划,这个计划致力于使 ARM 的生态能为高性能的工作负载做好准备。“尽管现在 BeeGFS 在 ARM 处理器上可以算是开箱即用,这个项目也将给我们机会来证明我们在这个架构上也能完全发挥其性能。”, Bernd Lietzow, BeeGFS 中 ExaNeSt 的负责人补充道。
-----------------------------------------------------------------------------------------

via: http://insidehpc.com/2016/02/beegfs-parallel-file-system-now-open-source/

作者:[staff][a]
译者:[name1e5s](https://github.com/name1e5s)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
用 Python 打造你的 Eclipse
==============================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/lightbulb_computer_person_general_.png?itok=ZY3UuQQa)
Eclipse 高级脚本环境([EASE][1]项目虽然还在开发中但是必须要承认它非常强大它让我们可以快速打造自己的Eclipse 开发环境。
依托于 Eclipse 强大的框架,可以通过其内建的插件系统全方位地扩展 Eclipse。然而,编写和部署一个新的插件还是十分麻烦的,即使你只是需要一个额外的小功能。不过,现在依托于 EASE,你可以不用写任何一行 Java 代码就方便的做到这点。EASE 是一个使用 Python 或者 Javascript 这样的脚本语言自动实现这些功能的平台。
本文中,根据我在今年北美 EclipseCon 大会上的[演讲][2],我将介绍如何用 Python 和 EASE 设置你的 Eclipse 环境,并告诉你如何发挥 Python 的能量让你的 IDE 跑得飞起。
### 安装并运行 "Hello World"
本文中的例子使用 Python 的 Java 实现 Jython。你可以将 EASE 直接安装到你已有的 Eclipse IDE 中。本例中使用[Eclipse Mars][3],并安装 EASE 环境本身以及它的模块和 Jython 引擎。
使用 Eclipse 安装对话框(`Help>Install New Software`...),安装 EASE[http://download.eclipse.org/ease/update/nightly][4]
选择下列组件:
- EASE Core feature
- EASE core UI feature
- EASE Python Developer Resources
- EASE modules (Incubation)
这会安装 EASE 及其模块。这里我们要注意一下 Resource 模块,此模块可以访问 Eclipse 工作空间、项目和文件 API。
![](https://opensource.com/sites/default/files/1_installease_nightly.png)
成功安装后,接下来安装 EASE Jython 引擎 [https://dl.bintray.com/pontesegger/ease-jython/][5] 。完成后,测试下。新建一个项目并新建一个 hello.py 文件,输入:
```
print "hello world"
```
选中这个文件右击并选择“Run as -> EASE script”。这样就可以在控制台看到“Hello world”的输出。
现在就可以编写 Python 脚本来访问工作空间和项目了。这种方法可以用于各种定制,下面只是一些思路。
### 提升你的代码质量
管理好代码质量本身是一件非常烦恼的事情,尤其是当需要处理一个大型代码库或者有许多工程师参与的时候。而这些痛苦可以通过脚本来减轻,比如批量格式化一些文件,或者[去掉文件中的 unix 式的行结束符][6],以便在 git 之类的源代码控制系统中更容易地比较差异。另外一个更好的用途是使用脚本来生成 Eclipse 标记(markers),以高亮你可以改善的代码。这里有一个示例脚本,可以用来在 java 文件中所有找到的“printStackTrace”方法处加入任务标记(task markers)。请看[源码][7]。
拷贝该文件到工作空间来运行右击并选择“Run as -> EASE script”。
```
loadModule('/System/Resources')
from org.eclipse.core.resources import IMarker
for ifile in findFiles("*.java"):
    file_name = str(ifile.getLocation())
    print "Processing " + file_name
    with open(file_name) as f:
        for line_no, line in enumerate(f, start=1):
            if "printStackTrace" in line:
                marker = ifile.createMarker(IMarker.TASK)
                marker.setAttribute(IMarker.TRANSIENT, True)
                marker.setAttribute(IMarker.LINE_NUMBER, line_no)
                marker.setAttribute(IMarker.MESSAGE, "Fix in Sprint 2: " + line.strip())
```
如果你的 java 文件中包含了 printStackTraces你就可以看见任务视图和编辑器侧边栏上自动新加的标记。
![](https://opensource.com/sites/default/files/2_codequality.png)
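顺带一提,这个“扫描 printStackTrace”的思路本身并不依赖 Eclipse。下面是一个等价的纯 Python 草稿,可以直接在命令行里运行,列出某个目录下所有 java 文件中含有 printStackTrace 的行(只做演示,不会创建 Eclipse 标记):

```python
import os

def find_print_stack_trace(root):
    """遍历 root 下的所有 .java 文件,
    返回 (文件路径, 行号, 行内容) 元组的列表。"""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".java"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as f:
                for line_no, line in enumerate(f, start=1):
                    if "printStackTrace" in line:
                        hits.append((path, line_no, line.strip()))
    return hits

# 用法示例:扫描当前目录
for path, line_no, text in find_print_stack_trace("."):
    print("%s:%d: %s" % (path, line_no, text))
```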
### 自动构建繁琐任务
当同时工作在多个项目的时候,肯定需要完成许多繁杂、重复的任务。可能你需要在所有源文件头上加入版权信息,或者在采用新框架的时候自动更新文件。例如,当首次切换到 Tycho 和 Maven 的时候,我们需要给每个项目添加 pom.xml 文件。使用几行 Python 代码可以很轻松的完成这个任务。然后当 Tycho 支持无 pom 构建后,我们需要移除不要的 pom 文件。同样,几行代码就可以搞定这个任务。例如,这里有个脚本可以在每一个打开的工作空间项目中加入 README.md。请看源代码 [add_readme.py][8]。
拷贝该文件到工作空间来运行右击并选择“Run as -> EASE script”。
```
loadModule('/System/Resources')
for iproject in getWorkspace().getProjects():
    if not iproject.isOpen():
        continue

    ifile = iproject.getFile("README.md")

    if not ifile.exists():
        contents = "# " + iproject.getName() + "\n\n"
        if iproject.hasNature("org.eclipse.jdt.core.javanature"):
            contents += "A Java Project\n"
        elif iproject.hasNature("org.python.pydev.pythonNature"):
            contents += "A Python Project\n"
        writeFile(ifile, contents)
```
脚本运行的结果会在每个打开的项目中加入 README.mdjava 和 Python 的项目还会自动加上一行描述。
![](https://opensource.com/sites/default/files/3_tedioustask.png)
### 构建新功能
你可以使用 Python 脚本来快速构建一些急需的功能,或者做个原型给团队和用户演示你想要的功能。例如,一个 Eclipse 目前不支持的功能是自动保存你正在工作的文件。即使这个功能将会很快提供,但是你现在就可以马上拥有一个能每隔 30 秒或处于后台时自动保存的编辑器。以下是主要方法的片段。请看下列代码:[autosave.py][9]。
```
def save_dirty_editors():
    workbench = getService(org.eclipse.ui.IWorkbench)
    for window in workbench.getWorkbenchWindows():
        for page in window.getPages():
            for editor_ref in page.getEditorReferences():
                part = editor_ref.getPart(False)
                if part and part.isDirty():
                    print "Auto-Saving", part.getTitle()
                    part.doSave(None)
```
在运行脚本之前,你需要在 Window > Preferences > Scripting 中勾选 'Allow Scripts to run code in UI thread' 设定。然后添加该脚本到工作空间右击并选择“Run as > EASE Script”。每次编辑器自动保存时控制台都会输出一条保存信息。要关掉自动保存脚本只需点击控制台的红色方块停止按钮即可。
![](https://opensource.com/sites/default/files/4_prototype.png)
### 快速扩展用户界面
EASE 最棒的一点是,可以将你的脚本与 IDE 界面上的元素(比如一个新的按钮或菜单)结合起来。不需要编写 java 代码或者安装新的插件,只需要在你的脚本前面增加几行注释。
下面是一个简单的脚本示例,用来创建三个新项目。
```
# name : Create fruit projects
# toolbar : Project Explorer
# description : Create fruit projects
loadModule("/System/Resources")
for name in ["banana", "pineapple", "mango"]:
    createProject(name)
```
上述注释会告诉 EASE 在 Project Explorer 工具条上增加一个按钮。下面这个脚本则增加一个删除这三个项目的按钮。请看源码 [createProjects.py][10] 和 [deleteProjects.py][11]。
```
# name : Delete fruit projects
# toolbar : Project Explorer
# description : Get rid of the fruit projects
loadModule("/System/Resources")

for name in ["banana", "pineapple", "mango"]:
    project = getProject(name)
    project.delete(0, None)
```
为了使按钮显示出来,将这两个脚本加入到一个新项目中,假如叫做 'ScriptsProject'。然后打开 Window > Preferences > Scripting > Script Locations点击 'Add Workspace' 按钮并选择 ScriptsProject 项目。这个项目现在会成为放置脚本的默认位置。你可以发现 Project Explorer 上出现了这两个按钮,这样你就可以通过这两个新加的按钮快速增加和删除项目。
![](https://opensource.com/sites/default/files/5_buttons.png)
### 整合第三方工具
不管怎么说,你总可能需要 Eclipse 生态系统以外的工具(确实如此,虽然 Eclipse 已经很丰富了,但不是什么都有)。这些时候,你会发现将它们包装在一个脚本里来调用会非常方便。这里有一个简单的例子,集成 Windows 资源管理器,并将它加入右键菜单,这样点击菜单项就可以打开资源管理器浏览当前文件所在的目录。请看源码 [explorer.py][12]。
```
# name : Explore from here
# popup : enableFor(org.eclipse.core.resources.IResource)
# description : Start a file browser using current selection
loadModule("/System/Platform")
loadModule('/System/UI')
selection = getSelection()

if isinstance(selection, org.eclipse.jface.viewers.IStructuredSelection):
    selection = selection.getFirstElement()

if not isinstance(selection, org.eclipse.core.resources.IResource):
    selection = adapt(selection, org.eclipse.core.resources.IResource)

if isinstance(selection, org.eclipse.core.resources.IFile):
    selection = selection.getParent()

if isinstance(selection, org.eclipse.core.resources.IContainer):
    runProcess("explorer.exe", [selection.getLocation().toFile().toString()])
```
为了让这个菜单项显示出来,像之前一样将该文件加入一个新项目,比如说 'ScriptsProject'。然后打开 Window > Preferences > Scripting > Script Locations点击“Add Workspace”并选择 'ScriptsProject' 项目。当你在文件上右击鼠标,就会看到弹出菜单中出现了新的菜单项,点击它就会打开资源管理器。(注意,这个功能已经出现在 Eclipse 中了,但是你可以在这个例子中换成其它第三方工具。)
![](https://opensource.com/sites/default/files/6_explorer.png)
Eclipse 高级脚本环境EASE提供了一套很棒的扩展功能使得 Eclipse IDE 能用 Python 来轻松扩展。虽然这个项目还处于早期,但是[这个项目][13]更多更棒的功能也正在加紧开发中,如果你想为它做出贡献,请到[论坛][14]参与讨论。
我会在 2016 年的 [Eclipsecon North America][15] 会议上介绍更多 EASE 细节。我的演讲 [Scripting Eclipse with Python][16] 不单会介绍 Jython也会介绍 C-Python以及这个功能如何在科学领域进行扩展。
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/2/how-use-python-hack-your-ide
作者:[Tracy Miranda][a]
译者:[VicYu/Vic020](http://vicyu.net)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/tracymiranda
[1]: https://eclipse.org/ease/
[2]: https://www.eclipsecon.org/na2016/session/scripting-eclipse-python
[3]: https://www.eclipse.org/downloads/packages/eclipse-ide-eclipse-committers-451/mars1
[4]: http://download.eclipse.org/ease/update/nightly
[5]: https://dl.bintray.com/pontesegger/ease-jython/
[6]: http://code.activestate.com/recipes/66434-change-line-endings/
[7]: https://gist.github.com/tracymiranda/6556482e278c9afc421d
[8]: https://gist.github.com/tracymiranda/f20f233b40f1f79b1df2
[9]: https://gist.github.com/tracymiranda/e9588d0976c46a987463
[10]: https://gist.github.com/tracymiranda/55995daaea9a4db584dc
[11]: https://gist.github.com/tracymiranda/baa218fc2c1a8e898194
[12]: https://gist.github.com/tracymiranda/8aa3f0fc4bf44f4a5cd3
[13]: https://eclipse.org/ease/
[14]: https://dev.eclipse.org/mailman/listinfo/ease-dev
[15]: https://www.eclipsecon.org/na2016
[16]: https://www.eclipsecon.org/na2016/session/scripting-eclipse-python
Ubuntu Budgie 将在 Ubuntu 16.10 中成为新官方分支发行版
===
> Budgie-Remix Beta 2 已经就绪以供测试。
上个月我们介绍了一个新 GNU/Linux 发行版 [Budgie-Remix][1],它的终极目标是成为一个 Ubuntu 官方分支发行版,可能会使用 Ubuntu Budgie 这个名字。
![Budgie-Remix 16.04 Beta 2](http://i1-news.softpedia-static.com/images/news2/ubuntu-budgie-could-be-the-new-flavor-of-ubuntu-linux-as-part-of-ubuntu-16-10-502573-2.jpg)
今天Budgie-Remix 的开发者 David Mohammed 向 Softpedia 通报了项目进度,以及为即将到来的 16.04 发布的第二个 Beta 版本。Canonical 的创始人 [Mark Shuttleworth 说过][2],如果能够形成围绕这个发行版的社区,它肯定会得到支持。
自我们[最初的报道][3]以来David Mohammed 似乎与 Ubuntu MATE 项目的领导者 Martin Wimpress 取得了联系,后者敦促他以 Ubuntu 16.10 作为他还未正式命名的 Ubuntu 分支的官方版本目标。这个分支发行版构建于 Budgie 桌面环境之上,该桌面环境是由超赞的 [Solus][4] 开发者团队创建的。
“我们本周完成了 Beta 2 版本的开发以及很多其它东西,而且我们还有 Martin Wimpress Ubuntu MATE 项目领导者的支持”David Mohammed 对 Softpedia 独家爆料,“他还敦促我们以 16.10 作为成为官方版本的目标——那当然是个重大的挑战——我们还需要社区的帮助/加入我们,来让这一切成为现实!”
### Ubuntu Budgie 16.10 可能在 2016 年 10 月到来 ###
4 月 21 日Canonical 将会发布 Ubuntu Linux 的下一个 LTSLong Term Support长期支持版本 Xenial Xerus好客的非洲地松鼠也就是 Ubuntu 16.04。我们可能可以提前体验到 Budgie-Remix 16.04,它以后也许会成为官方分支 Ubuntu Budgie。但在那之前你可以帮助开发者[测试 Beta 2 版本][5]。
在 Ubuntu 16.04 LTSXenial Xerus发布之后Ubuntu 的开发者们就会立即将注意力转移到下一个版本的开发上。下一个版本 Ubuntu 16.10 应该会在 10 月底到来,并且 Ubuntu Budgie 也可能宣布成为 Ubuntu 官方分支发行版。
![Budgie Applications Menu](http://i1-news.softpedia-static.com/images/news2/ubuntu-budgie-could-be-the-new-flavor-of-ubuntu-linux-as-part-of-ubuntu-16-10-502573-3.jpg)
![Budgie Raven notification and customization center](http://i1-news.softpedia-static.com/images/news2/ubuntu-budgie-could-be-the-new-flavor-of-ubuntu-linux-as-part-of-ubuntu-16-10-502573-4.jpg)
![Nautilus file manager](http://i1-news.softpedia-static.com/images/news2/ubuntu-budgie-could-be-the-new-flavor-of-ubuntu-linux-as-part-of-ubuntu-16-10-502573-5.jpg)
----------------------------------
via: http://news.softpedia.com/news/ubuntu-budgie-could-be-the-new-flavor-of-ubuntu-linux-as-part-of-ubuntu-16-10-502573.shtml
作者Marius Nestor
译者:[alim0x](https://github.com/alim0x)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]: https://launchpad.net/budgie-remix
[2]: https://plus.google.com/+programmerslab/posts/CSvbSvgcdcv
[3]: http://news.softpedia.com/news/budgie-remix-could-become-ubuntu-budgie-download-and-test-it-501231.shtml
[4]: https://solus-project.com/
[5]: https://sourceforge.net/projects/budgie-remix/files/beta2/
因 SUSE Linux Enterprise 12Linux Kernel 3.12 的支持延长至 2017 年
==================================================================================
>Linux 内核开发者 Jiri Slaby 已经宣布了 Linux 3.12 系列内核的第 58 个长期支持维护版本,以及关于生命周期状态的一些修改。
Linux Kernel 3.12.58 LTS长期支持版已经可用那些运行该版本内核的 Linux 操作系统应尽快升级。有个好消息是 Linux 3.12 分支的支持将延长一年至 2017 年,因为 SUSE Linux Enterprise (SLE) 12 Service Pack 1 正是基于该分支。
“因为 SLE12-SP1 基于 3.12 之上,并且它的生命周期持续到 2017 年,所以将 3.12 的生命周期结束也修改到 2017 年”Jiri Slaby 在发到 Linux 内核邮件列表的一个[补丁声明][1]中说道同时他还公布了其它内核分支的生命周期结束时间EOLend of life比如 Linux 3.14 为 2016 年 8 月Linux 3.18 为 2017 年 1 月Linux 4.1 是 2017 年 9 月。
### Linux kernel 3.12.58 LTS 改动
如果你想知道 Linux kernel 3.12.58 LTS 中有哪些改动,我们能告诉你的是,通过[附加的短日志][2]可知,这是一个健康的更新,改动涉及 114 个文件,有 835 行插入和 298 行删除。在这些改动之中,我们注意到多数是声音和网络栈的改进,以及许多驱动的更新。
更新中还有多个对 x86 硬件架构的改进,以及一些对 Xtensa 和 s390 的小修复。NFS、OCFS2、Btrfs、JBD2 还有 XFS 等文件系统也都收到了不同的修复。在驱动的更新中可以提到的还有 USB、Xen、MD、MTD、媒体、SCSI、TTY、网络、ATA、蓝牙、hwmon、EDAC 以及 CPUFreq。
和往常一样你现在就可以从我们网站linux.com或直接从 kernel.org [下载 Linux kernel 3.12.58 LTS 的源码][3]。如果您的 GNU/Linux 操作系统运行的是 Linux 3.12 系列内核,请尽快升级您的内核。
------------------------------------------------------------------------------
via: http://www.linux.com/news/featured-blogs/191-linux-training/834644-7-steps-to-start-your-linux-sysadmin-career
作者:[Marius Nestor][a]
译者:[alim0x](https://github.com/alim0x)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/marius-nestor
[1]: http://lkml.iu.edu/hypermail/linux/kernel/1604.1/00792.html
[2]: http://www.spinics.net/lists/stable/msg128634.html
[3]: http://linux.softpedia.com/get/System/Operating-Systems/Kernels/Linux-Kernel-Stable-1960.shtml
LFCS 系列第一讲:如何在 Linux 上使用 GNU sed 等命令来创建、编辑和操作文件
================================================================================
Linux 基金会宣布了一个全新的 LFCSLinux Foundation Certified SysadminLinux 基金会认证系统管理员)认证计划。这一计划旨在帮助世界各地的人获得其在处理 Linux 系统管理任务上能力的认证。这些能力包括支持已运行的系统和服务,第一手的故障诊断和分析,以及决定何时将问题上报给工程师团队。
![Linux Foundation Certified Sysadmin](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-1.png)
*Linux 基金会认证系统管理员——第一讲*
请观看下面关于 Linux 基金会认证计划的演示:
<embed src="http://static.video.qq.com/TPout.swf?vid=l0163eohhs9&auto=0" allowFullScreen="true" quality="high" width="480" height="400" align="middle" allowScriptAccess="always" type="application/x-shockwave-flash"></embed>
该系列将命名为《LFCS 系列第一讲》至《LFCS 系列第十讲》并覆盖关于 Ubuntu、CentOS 以及 openSUSE 的下列话题。
- 第一讲:如何在 Linux 上使用 GNU sed 等命令来创建、编辑和操作文件
- 第二讲:如何安装和使用 vi/m 全功能文字编辑器
- 第三讲:归档文件/目录并在文件系统中寻找文件
- 第四讲:为存储设备分区,格式化文件系统和配置交换分区
- 第五讲:在 Linux 中挂载/卸载本地和网络Samba & NFS文件系统
- 第六讲:组合分区作为 RAID 设备——创建&管理系统备份
- 第七讲:管理系统启动进程和服务(使用 SysVinit、Systemd 和 Upstart
- 第八讲:管理用户和组,文件权限和属性以及启用账户的 sudo 权限
- 第九讲:用 YumRPMAptDpkgAptitudeZypper 进行 Linux 软件包管理
- 第十讲:学习简单的 Shell 脚本编程和文件系统故障排除
重要提示:由于自 2016 年 2 月开始 LFCS 认证要求有所变化,我们增加发布了下列必需的内容。要准备这个考试,推荐你也看看我们的 LFCE 系列。
- 第十一讲:怎样使用 vgcreate、lvcreate 和 lvextend 命令创建和管理 LVM
- 第十二讲:怎样安装帮助文档和工具来探索 Linux
- 第十三讲:怎样配置和排错 GRUB
本文是覆盖这个参加 LFCS 认证考试的所必需的范围和技能的十三个教程的第一讲。话说了那么多,快打开你的终端,让我们开始吧!
### 处理 Linux 中的文本流 ###
Linux 将程序中的输入和输出当成字符流或者字符序列。在开始理解重定向和管道之前,我们必须先了解三种最重要的 I/OInput and Output输入和输出事实上它们都是特殊的文件根据 UNIX 和 Linux 中的约定,数据流和外围设备(设备文件)也被视为普通文件)。
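标准输出和标准错误的区别可以用下面的小演示体会一下(路径 /no/such/dir 为演示用的假设,并非本文的例子):

```shell
echo 'to stdout'                  # 写到标准输出(文件描述符 1
echo 'to stderr' 1>&2             # 把输出送到标准错误(文件描述符 2
ls /no/such/dir 2>/dev/null || echo 'stderr was silenced'   # 只重定向标准错误
```

可以看到,标准输出和标准错误可以分别重定向,这也是后文重定向操作的基础。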
重定向操作符 `>` 和管道操作符 `|` 之间的区别是:前者将命令与文件相连接,而后者将命令的输出和另一个命令相连接。
# command > file
# command1 | command2
由于重定向操作符会静默地创建或覆盖文件,我们必须特别小心谨慎地使用它,并且永远不要把它和管道混淆起来。在 Linux 和 UNIX 系统上管道的优势是:第一个命令的输出不会写入一个文件而是直接被第二个命令读取。
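这一区别可以用一个小演示来体会(示例文件 /tmp/lines.txt 为演示用的假设):

```shell
printf 'b\na\nc\n' > /tmp/lines.txt   # 重定向:静默创建(或覆盖)文件
sort /tmp/lines.txt | head -n 1       # 管道sort 的输出直接被 head 读取,不写入文件
```

第二条命令输出 a整个过程没有生成任何中间文件。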
在下面的操作练习中我们将会使用这首诗——《A happy child》作者未知)
![cat command](http://www.tecmint.com/wp-content/uploads/2014/10/cat-command.png)
*cat 命令样例*
#### 使用 sed ####
sed 是流编辑器stream editor的缩写。为那些不懂术语的人额外解释一下流编辑器是用来对输入流文件或者来自管道的输入执行基本文本转换的工具。
sed 最基本的用法是字符替换。我们将通过把每个出现的小写 y 改写为大写 Y并且将输出重定向到 ahappychild2.txt 来开始。g 标志表示 sed 应该替换每一行中所有出现的实例。如果省略了这个标志sed 将只替换每一行中第一次出现的实例。
**基本语法:**
# sed 's/term/replacement/flag' file
**我们的样例:**
# sed 's/y/Y/g' ahappychild.txt > ahappychild2.txt
![sed command](http://www.tecmint.com/wp-content/uploads/2014/10/sed-command.png)
*sed 命令样例*
如果你要在查找或替换的文本中使用特殊字符(如 /、\、&),你需要使用反斜杠对它进行转义。

例如,我们将用 & 符号来替换单词 “and”与此同时把一行最开始出现的第一个 I 替换为 You。
# sed 's/and/\&/g;s/^I/You/g' ahappychild.txt
![sed replace string](http://www.tecmint.com/wp-content/uploads/2014/10/sed-replace-string.png)
*sed 替换字符串*
在上面的命令中,^(插入符号)是正则表达式中用来表示一行开头的符号。

正如你所看到的,我们可以用分号分隔两个或者更多的替换命令(并在其中使用正则表达式),再用单引号把整个表达式包裹起来。
另一种 sed 的用法是显示或者删除文件中选中的一部分。在下面的样例中,将会显示 /var/log/messages 中以“Jun 8”开头的行的前五行。
# sed -n '/^Jun 8/ p' /var/log/messages | sed -n 1,5p
请注意在默认的情况下sed 会打印每一行。我们可以使用 -n 选项来改变这一行为,并告诉 sed 只打印(用 p 来指定文件或管道中匹配的部分第一个命令中指定以“Jun 8” 开头的行,第二个命令中指定第一到五行)。
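-n 和 p 的配合可以用内联输入快速验证(示例日志行为虚构):

```shell
# 不加 -n 时 sed 打印每一行;-n 配合 p 只打印匹配“^Jun 8”的行
printf 'Jun 8 boot\nJun 9 halt\nJun 8 login\n' | sed -n '/^Jun 8/p'
```

只有两行以“Jun 8”开头的记录会被输出。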
最后一个可能有用的技巧是:当检查脚本或者配置文件的时候,可以在保留文件本身的同时删除注释来查看。下面的单行 sed 命令删除d空行或者以 `#` 开头的行(| 字符对两个正则表达式进行布尔 OR 操作)。
# sed '/^#\|^$/d' apache2.conf
![sed match string](http://www.tecmint.com/wp-content/uploads/2014/10/sed-match-string.png)
*sed 匹配字符串*
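这个删除注释和空行的技巧同样可以用内联输入验证(示例配置内容为虚构):

```shell
# 删除空行(^$)和以 # 开头的行GNU sed 的 \| 表示“或”)
printf '# a comment\n\nKeepAlive On\n' | sed '/^#\|^$/d'
```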
#### uniq 命令 ####
uniq 命令允许我们报告或者删除文件中重复的行默认写到标准输出。必须注意的是除非两个重复的行相邻否则uniq 命令不会删除它们。因此uniq 经常和一个前置的 sort 命令用来对文本行进行排序的工具搭配使用。默认情况下sort 使用第一个字段以空格分隔作为关键字段。要指定一个不同的关键字段我们需要使用 -k 选项。
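uniq 只处理相邻重复行这一点,可以用一个小演示来确认:

```shell
printf 'b\na\nb\n' | uniq | wc -l          # 重复的 b 不相邻,三行都保留
printf 'b\na\nb\n' | sort | uniq | wc -l   # 排序后重复的 b 相邻,只剩两行
```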
**样例**
du -sch /path/to/directory/* 命令将会以人类可读的格式返回指定目录下每一个子文件夹和文件的磁盘空间使用情况(也会显示每个目录的总体情况),而且输出不是按照大小,而是按照子文件夹和文件的名称排序。我们可以使用下面的命令来按大小排序。
# du -sch /var/* | sort -h
![sort command](http://www.tecmint.com/wp-content/uploads/2014/10/sort-command.jpg)
*sort 命令样例*
你可以通过使用下面的命令,告诉 uniq 只比较每一行的前 6 个字符(-w 6这里指的是日期来统计日志事件的个数并在每一行的开头输出出现的次数-c
# cat /var/log/mail.log | uniq -c -w 6
![Count Numbers in File](http://www.tecmint.com/wp-content/uploads/2014/10/count-numbers-in-file.jpg)
*文件中的统计数字*
最后,你可以组合使用 sort 和 uniq 命令(通常如此)。看看下面文件中捐助者、捐助日期和金额的列表。假设我们想知道有多少个不同的捐助者。我们可以使用下面的命令来截取第一个字段(字段由冒号分隔),按名称排序并且删除重复的行。
# cat sortuniq.txt | cut -d: -f1 | sort | uniq
![Find Unique Records in File](http://www.tecmint.com/wp-content/uploads/2014/10/find-uniqu-records-in-file.jpg)
*寻找文件中不重复的记录*
- 也可阅读: [13个“cat”命令样例][1]
#### grep 命令 ####
grep 在文件(或命令输出)中搜索指定正则表达式,并且在标准输出中输出匹配的行。
**样例**
显示文件 /etc/passwd 中用户 gacanepa 的信息,忽略大小写。
# grep -i gacanepa /etc/passwd
![grep Command](http://www.tecmint.com/wp-content/uploads/2014/10/grep-command.jpg)
*grep 命令样例*
显示 /etc 文件夹下所有 rc 开头并跟随任意数字的内容。
# ls -l /etc | grep rc[0-9]
![List Content Using grep](http://www.tecmint.com/wp-content/uploads/2014/10/list-content-using-grep.jpg)
*使用 grep 列出内容*
- 也可阅读: [12个“grep”命令样例][2]
#### tr 命令使用技巧 ####
tr 命令可以用来从标准输入中转换(改变)或者删除字符,并将结果写入到标准输出中。
**样例**
把 sortuniq.txt 文件中所有的小写改为大写。
# cat sortuniq.txt | tr [:lower:] [:upper:]
![Sort Strings in File](http://www.tecmint.com/wp-content/uploads/2014/10/sort-strings.jpg)
*排序文件中的字符串*
将 `ls -l` 输出中的连续分隔符压缩为一个空格。
# ls -l | tr -s ' '
![Squeeze Delimiter](http://www.tecmint.com/wp-content/uploads/2014/10/squeeze-delimeter.jpg)
*压缩分隔符*
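除了转换和压缩字符tr 还可以用 -d 选项删除字符(示例输入为虚构):

```shell
# 删除输入中的所有数字
echo 'abc123def' | tr -d '[:digit:]'
```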
#### cut 命令使用方法 ####
cut 命令可以基于字节(-b选项、字符-c或者字段-f提取部分输入从标准输入或者文件中并且将结果输出到标准输出。在最后一种情况下基于字段默认的字段分隔符是一个制表符但可以由 -d 选项来指定不同的分隔符。
**样例**
从 /etc/passwd 中提取用户账户和他们被分配的默认 shell-d 选项允许我们指定分界符,-f 选项指定那些字段将被提取)。
# cat /etc/passwd | cut -d: -f1,7
![Extract User Accounts](http://www.tecmint.com/wp-content/uploads/2014/10/extract-user-accounts.jpg)
*提取用户账户*
将以上命令结合起来,我们将使用 last 命令输出中的第一和第三个非空字段创建一个文本流。我们将使用 grep 作为第一个过滤器来检查用户 gacanepa 的会话然后将分隔符压缩至一个空格tr -s ' ')。下一步,我们将使用 cut 来提取第一和第三个字段,最后使用第二个字段(本样例中指的是 IP 地址)来排序,之后再用 uniq 去重。
# last | grep gacanepa | tr -s ' ' | cut -d' ' -f1,3 | sort -k2 | uniq
![last command](http://www.tecmint.com/wp-content/uploads/2014/10/last-command.png)
*last 命令样例*
上面的命令显示了如何将多个命令和管道结合起来,以便根据我们的要求得到过滤后的数据。你也可以逐步地使用它以帮助你理解输出是如何从一个命令传输到下一个命令的(顺便说一句,这是一个非常好的学习经验!)
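逐步搭建管道的做法可以这样练习:每加一段管道就观察一次输出(示例数据为虚构):

```shell
printf 'u1:x\nu2:x\nu1:y\n' | cut -d: -f1               # 第一步:提取第一个字段
printf 'u1:x\nu2:x\nu1:y\n' | cut -d: -f1 | sort | uniq # 第二步:排序并去重
```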
### 总结 ###
尽管这个例子(以及当前教程中的其他实例)第一眼看上去可能不是非常有用,但是它们是体验在 Linux 命令行中创建、编辑和操作文件的一个非常好的开始。请随时留下你的问题和意见——不胜感激!
#### 参考链接 ####
- [关于 LFCS][3]
- [为什么需要 Linux 基金会认证?][4]
- [注册 LFCS 考试][5]
--------------------------------------------------------------------------------
via: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
作者:[Gabriel Cánepa][a]
译者:[Xuanwo](https://github.com/Xuanwo)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:https://linux.cn/article-2336-1.html
[2]:https://linux.cn/article-2250-1.html
[3]:https://training.linuxfoundation.org/certification/LFCS
[4]:https://training.linuxfoundation.org/certification/why-certify-with-us
[5]:https://identity.linuxfoundation.org/user?destination=pid/1
[6]:http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/
LFCS 系列第二讲:如何安装和使用纯文本编辑器 vi/vim
================================================================================
几个月前, Linux 基金会发起了 LFCSLinux Foundation Certified System administratorLinux 基金会认证系统管理员)认证,以帮助世界各地的人验证他们能够在 Linux 系统上完成从基础到中级的系统管理任务:如系统支持、第一手的故障诊断和处理,以及决定何时将问题上报给上游支持团队。
![Learning VI Editor in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/LFCS-Part-2.png)
*在 Linux 中学习 vi 编辑器*
请简要观看以下视频,里边介绍了 Linux 基金会认证计划。
youtube 视频
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
这篇文章是系列教程的第二讲,在这个部分中,我们会介绍 vi/vim 基本的文件编辑操作,帮助读者理解编辑器中的三个模式,这是 LFCS 认证考试中必须掌握的。
### 使用 vi/vim 执行基本的文件编辑操作 ###
vi 是为 Unix 而生的第一个全屏文本编辑器。它的设计小巧简单,对于仅仅使用过诸如 NotePad++ 或 gedit 等图形界面的文本编辑器的用户来说,使用起来可能存在一些困难。
为了使用 vi我们必须首先理解这个强大的程序操作中的三种模式方便我们后边学习这个强大的文本处理软件的相关操作。
请注意,大多数的现代 Linux 发行版都集成了 vi 的变种——vimVi IMproved改进版 vi相比于 vi它有更多新功能。所以我们会在本教程中交替使用 vi 和 vim。
如果你的发行版还没有安装 vim你可以通过以下方法来安装
- Ubuntu 及其衍生版apt-get update && apt-get install vim
- 以 Red-Hat 为基础的发行版yum update && yum install vim
- openSUSE zypper update && zypper install vim
### 我为什么要学习 vi ###
至少有以下两个理由:
1. 因为它是 POSIX 标准的一部分,所以不管你使用什么发行版 vi 总是可用的。
2. vi 基本不消耗多少系统资源,并且允许我们仅仅通过键盘来完成任何可能的任务。
此外vi 有着非常丰富的内置帮助手册,程序打开后就可以通过 `:help` 命令来查看。这个内置帮助手册比 vi/vim 的 man 页面包含了更多信息。
![vi Man Pages](http://www.tecmint.com/wp-content/uploads/2014/10/vi-man-pages.png)
*vi Man 页面*
#### 启动 vi ####
可以通过在命令提示符下输入 vi 来启动。
![Start vi Editor](http://www.tecmint.com/wp-content/uploads/2014/10/start-vi-editor.png)
*使用 vi 编辑器*
然后按下字母 i你就可以开始输入了。或者通过下面的方法来启动 vi
# vi filename
这样会打开一个名为 filename 的缓存区buffer稍后会详细介绍缓存区在你编辑完成之后就可以存储在磁盘中了。
#### 理解 vi 的三个模式 ####
1. 在命令command模式中vi 允许用户浏览该文件并输入由一个或多个字母组成的、简短的、大小写敏感的 vi 命令。这些命令的大部分都可以增加一个前缀数字表示执行次数。
比如:`yy`(或`Y` 复制当前的整行,`3yy`(或`3Y` 复制当前整行和下边紧接着的两行总共3行。通过 `Esc` 键可以随时进入命令模式(而不管当前工作在什么模式下)。事实上,在命令模式下,键盘上所有的输入都被解释为命令而非文本,这往往使得初学者困惑不已。
2. 在末行ex模式中我们可以处理文件包括保存当前文件和运行外部程序。我们必须在命令模式下输入一个冒号`:`),才能进入这个模式,紧接着是要在末行模式下使用的命令。执行之后 vi 自动回到命令模式。
3. 在文本输入insert模式通常在命令模式下使用字母 `i` 进入这个模式)中,我们可以随意输入文本。大多数的键入将以文本形式输出到屏幕(一个重要的例外是`Esc`键,它将退出文本编辑模式并回到命令模式)。
![vi Insert Mode](http://www.tecmint.com/wp-content/uploads/2014/10/vi-insert-mode.png)
*vi 文本插入模式*
#### vi 命令 ####
下面的表格列出常用的 vi 命令。文件编辑的命令可以通过添加叹号的命令强制执行(如,`:q!` 命令强制退出编辑器而不保存文件)。
|关键命令|描述|
|------|:--:|
|`h` 或 ←|光标左移一个字符|
|`j` 或 ↓|光标下移一行|
|`k` 或 ↑|光标上移一行|
|`l` (小写字母 L) 或 →|光标右移一个字符|
|`H`|光标移至屏幕顶行|
|`L`|光标移至屏幕末行|
|`G`|光标移至文件末行|
|`w`|光标右移一个词|
|`b`|光标左移一个词|
|`0` (数字零)|光标移至行首|
|`^`|光标移至当前行第一个非空格字符|
|`$`|光标移至当前行行尾|
|`Ctrl-B`|向后翻页|
|`Ctrl-F`|向前翻页|
|`i`|在光标所在位置插入文本|
|`I` (大写字母 i)|在当前行首插入文本|
|`J` (大写字母 j)|将下一行与当前行合并(下一行上移到当前行)|
|`a`|在光标所在位置后追加文本|
|`o` (小写字母 o)|在当前行下边插入空白行|
|`O` (大写字母 O)|在当前行上边插入空白行|
|`r`|替换光标所在位置的一个字符|
|`R`|从光标所在位置开始覆盖插入文本|
|`x`|删除光标所在位置的字符|
|`X`|立即删除光标所在位置之前(左边)的一个字符|
|`dd`|剪切当前整行文本(为了之后进行粘贴)|
|`D`|剪切光标所在位置到行末的文本(该命令等效于 `d$`|
|`yX`|给出一个移动命令 X (如 `h`、`j`、`H`、`L` 等),复制适当数量的字符、单词或者从光标开始到一定数量的行|
|`yy` 或 `Y`|复制当前整行|
|`p`|粘贴在光标所在位置之后(下一行)|
|`P`|粘贴在光标所在位置之前(上一行)|
|`.` (句点)|重复最后一个命令|
|`u`|撤销最后一个命令|
|`U`|撤销最后一行的最后一个命令,只有光标仍在最后一行才能执行。|
|`n`|在查找中跳到下一个匹配项|
|`N`|在查找中跳到前一个匹配项|
|`:n`|下一个文件,编辑多个指定文件时,该命令加载下一个文件。|
|`:e file`|加载新文件来替代当前文件|
|`:r file`|将新文件的内容插入到光标所在位置的下一行|
|`:q`|退出并放弃更改|
|`:w file`|将当前打开的缓存区保存为 file。如果要追加到已存在的文件中则使用 `:w >> file` 命令|
|`:wq`|保存当前文件的内容并退出。等效于 `x!``ZZ`|
|`:r! command`|执行 command 命令,并将命令的输出插入到光标所在位置的下一行|
#### vi 选项 ####
下列选项可以让你在运行 Vim 的时候很方便(需要写入到 `~/.vimrc` 文件):
# echo set number >> ~/.vimrc
# echo syntax on >> ~/.vimrc
# echo set tabstop=4 >> ~/.vimrc
# echo set autoindent >> ~/.vimrc
![vi Editor Options](http://www.tecmint.com/wp-content/uploads/2014/10/vi-options.png)
*vi编辑器选项*
- set number 当 vi 打开或新建文件时,显示行号。
- syntax on 打开语法高亮(对应多个文件扩展名),以便源码文件和配置文件更具可读性。
- set tabstop=4 设置制表符间距为 4 个空格(默认为 8
- set autoindent 将前一行的缩进应用于下一行。
#### 查找和替换 ####
vi 具有通过查找将光标移动到(在单独一行或者整个文件中的)指定位置。它还可自动或者通过用户确认来执行文本替换。
a) 在行内查找。`f` 命令在当前行查找指定字符,并将光标移动到指定字符出现的位置。
例如,命令 `fh` 会在本行中将光标移动到字母`h`下一次出现的位置。注意,字母 `f` 和你要查找的字符都不会出现在屏幕上,但是当你按下回车的时候,要查找的字符会被高亮显示。
比如,以下是在命令模式按下 `f4` 之后的结果。
![Search String in Vi](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-string.png)
*在 vi 中查找字符*
b) 在整个文件内查找。使用 `/` 命令,紧接着需要查找的单词或短语。这个查找可以通过使用 `n` 命令或者 `N` 重复查找上一个查找的字符串。以下是在命令模式键入 `/Jane` 的查找结果。
![Vi Search String in File](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-line.png)
*在 vi 中查找字符*
c) vi 可以通过命令来完成多行或者整个文件的替换操作(类似于 sed。我们可以使用以下命令将整个文件中的单词 “old” 替换为 “young”。
:%s/old/young/g
**注意**:冒号位于命令的最前面。
![Vi Search and Replace](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-and-replace.png)
*vi 的查找和替换*
冒号(`:`)进入末行模式,在本例中 `s` 表示替换,`%` 表示从第一行到最后一行也可以使用“nm”表示范围即第 n 行到第 m 行old 是查找模式young 是用来替换的文本,`g` 表示在每个查找出来的字符串上都进行替换。
另外,在命令最后增加一个 `c`,可以在每一个匹配项替换前进行确认。
:%s/old/young/gc
将旧文本替换为新文本前vi/vim 会向我们显示以下信息:
![Replace String in Vi](http://www.tecmint.com/wp-content/uploads/2014/10/vi-replace-old-with-young.png)
*vi 中替换字符串*
- `y`: 执行替换yes
- `n`: 跳过这个匹配字符的替换并转到下一个no
- `a`: 在当前匹配字符及后边的相同项全部执行替换
- `q``Esc`: 取消替换
- `l` (小写 L): 执行本次替换并退出
- `Ctrl-e`, `Ctrl-y`: 下翻页,上翻页,查看相应的文本来进行替换
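仅作对照vi 中 `:%s/old/young/g` 的整体替换效果,与命令行上的这条 sed 命令类似(示例文本为虚构,并非 vi 本身的用法):

```shell
printf 'old dog\nold cat\n' | sed 's/old/young/g'
```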
#### 同时编辑多个文件 ####
我们在命令提示符输入 vim file1 file2 file3 如下:
# vim file1 file2 file3
vim 会首先打开 file1要跳到 file2 需用 :n 命令。当需要打开前一个文件时,:N 就可以了。
为了从 file1 跳到 file3
a) `:buffers` 命令会显示当前正在编辑的文件列表
:buffers
![Edit Multiple Files](http://www.tecmint.com/wp-content/uploads/2014/10/vi-edit-multiple-files.png)
*编辑多个文件*
b) `:buffer 3` 命令(后边没有 s会打开第三个文件 file3 进行编辑。
在上边的图片中,标记符号 `#` 表示该文件当前已被打开,但是在后台,而 `%a` 标记的是正在被编辑的文件。另外,文件号(如上边例子的 3后边的空格表示该文件还没有被打开。
#### vi 的临时缓存区 ####
为了复制连续的多行(比如,假设为 4 行)到一个名为 a 的临时缓存区(与文件无关),并且还要将这些行粘贴到在当前 vi 会话文件中的其它位置,我们需要:
1. 按下 `Esc` 键以确认 vi 处在命令模式
2. 将光标放在我们希望复制的第一行文本
3. 输入 `"a4yy` 复制当前行和接下来的 3 行,存入名为 a 的缓存区。我们可以继续编辑我们的文件——不需要立即插入刚刚复制的行。
4. 当到了需要使用刚刚复制的那些行的位置,在 `p`(小写)或 `P`(大写)命令前加上 `"a`,将复制行从名为 a 的缓存区插入:
- 输入 `"ap`,复制行将插入到光标位置所在行的下一行。
- 输入 `"aP`,复制行将插入到光标位置所在行的上一行。
如果愿意,我们可以重复上述步骤,将缓存区 a 中的内容插入到我们文件的多个位置。像本节中这样的一个临时缓存区,会在当前窗口关闭时释放掉。
### 总结 ###
像我们看到的一样vi/vim 是一个在命令行界面下强大而灵活的文本编辑器。通过以下链接,随时分享你自己的技巧和评论。
#### 参考链接 ####
- [关于 LFCS][1]
- [为什么需要 Linux 基金会认证?][2]
- [注册 LFCS 考试][3]
--------------------------------------------------------------------------------
via: http://www.tecmint.com/vi-editor-usage/
作者:[Gabriel Cánepa][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[东风唯笑](https://github.com/dongfengweixiao), [wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:https://training.linuxfoundation.org/certification/LFCS
[2]:https://training.linuxfoundation.org/certification/why-certify-with-us
[3]:https://identity.linuxfoundation.org/user?destination=pid/1
LFCS 系列第三讲:如何在 Linux 中归档/压缩文件及目录、设置文件属性和搜索文件
================================================================================

最近Linux 基金会发起了一个全新的 LFCSLinux Foundation Certified SysadminLinux 基金会认证系统管理员)认证,旨在让遍布全世界的人都有机会参加该认证的考试,通过考试的人将表明他们有能力在 Linux 上执行基本的中级系统管理任务。这项认证包括了对已运行的系统和服务的支持、一流水平的问题解决和分析以及决定何时将问题反映给工程师团队的能力。

![Linux Foundation Certified Sysadmin Part 3](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-3.png)

*LFCS 系列第三讲*

请看以下视频,这里将会给出 Linux 基金会认证程序的一些想法。

youtube 视频
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>

本讲是系列教程中的第三讲,在这一讲中,我们会涵盖如何在文件系统中归档/压缩文件及目录、设置文件属性和搜索文件等内容,这些都是 LFCS 认证中必须掌握的知识。

### 归档和压缩的相关工具 ###

文件归档工具将一堆文件整合到一个单独的归档文件之后,我们可以将归档文件备份到不同类型的介质,或者通过网络传输和发送 Email 来备份。在 Linux 中使用频率最高的归档实用工具是 tar。当归档工具和压缩工具一起使用的时候可以减少同一文件和信息在硬盘中的存储空间。

#### tar 使用工具 ####

tar 将一组文件打包到一个单独的归档文件(通常叫做 tar 文件或者 tarball。tar 这个名称最初代表磁带存档程序tape archiver但现在我们可以用它来归档任意类型的可读写介质上的数据而不是只能归档磁带数据。tar 通常与 gzip、bzip2 或者 xz 等压缩工具一起使用,生成一个压缩的 tarball。

**基本语法:**

# tar [选项] [路径名 ...]

其中 ... 代表指定对哪些文件进行归档操作的表达式。

#### tar 的常用命令 ####
|长选项|简写|描述|
|-----|:---:|:---|
| --create| c| 创建 tar 归档文件|
| --concatenate| A| 将一个存档与已有的存档合并|
| --append| r| 把要存档的文件追加到归档文件的末尾|
| --update| u| 更新新文件到归档文件中去|
| --diff 或 --compare| d| 比较存档与当前文件的不同之处|
| --file archive| f| 使用档案文件或归档设备|
| --list| t| 列出 tarball 中的内容|
| --extract 或 --get| x| 从归档文件中释放文件|
#### 常用的操作修饰符 ####

|长选项|缩写|描述|
|-----|:--:|:--|
| --directory dir| C| 执行归档操作前,先转到指定目录|
| --same-permissions| p| 保持原始的文件权限|
| --verbose| v| 列出所有读取或提取的文件。这个标识符与 --list 一起使用的时候,还会显示出文件大小、属主和时间戳的信息|
| --verify| W| 写入存档后进行校验|
| --exclude file| | 不把指定文件包含在内|
| --exclude=pattern| X| 以 PATTERN 模式排除文件|
| --gzip 或 --gunzip| z| 通过 gzip 压缩归档|
| --bzip2| j| 通过 bzip2 压缩归档|
| --xz| J| 通过 xz 压缩归档|
Gzip 是最古老的压缩工具压缩率最小bzip2 的压缩率稍微高一点。另外xz 是最新的压缩工具压缩率最好。xz 具有最佳压缩率的代价是:完成压缩操作花费最多时间,压缩过程中占用较多系统资源。

**通过 gzip、bzip2 和 xz 压缩归档**

归档当前工作目录的所有文件,并以 gzip、bzip2 和 xz 压缩刚刚的归档文件(请注意,用正则表达式来指定哪些文件应该归档——这是为了防止归档工具将前一步生成的文件打包进来)。

# tar czf myfiles.tar.gz file[0-9]
# tar cjf myfiles.tar.bz2 file[0-9]
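上面提到的压缩效果可以这样粗略验证(示例文件 /tmp/sample.txt 为演示用的假设,内容高度重复,因此压缩率很高):

```shell
# 生成一个内容重复的示例文件,归档并用 gzip 压缩,再比较大小
yes 'a repetitive line' | head -n 5000 > /tmp/sample.txt
tar czf /tmp/sample.tar.gz -C /tmp sample.txt
ls -l /tmp/sample.txt /tmp/sample.tar.gz
```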
@ -167,7 +74,7 @@ Gzip 是最古老的压缩工具压缩率最小bzip2 的压缩率稍微高
![Compress Multiple Files Using tar](http://www.tecmint.com/wp-content/uploads/2014/10/Compress-Multiple-Files.png) ![Compress Multiple Files Using tar](http://www.tecmint.com/wp-content/uploads/2014/10/Compress-Multiple-Files.png)
压缩多个文件 *压缩多个文件*
**列举 tarball 中的内容和更新/追加文件到归档文件中** **列举 tarball 中的内容和更新/追加文件到归档文件中**
@ -177,7 +84,7 @@ Gzip 是最古老的压缩工具压缩率最小bzip2 的压缩率稍微高
![Check Files in tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/List-Archive-Content.png)

*列举归档文件中的内容*

运行以下任意一条命令:
@ -206,7 +113,7 @@ Gzip 是最古老的压缩工具压缩率最小bzip2 的压缩率稍微高
假设你现在需要备份用户的家目录。一个有经验的系统管理员会选择忽略所有视频和音频文件再备份(也可能是公司规定)。

可能你最先想到的方法是在备份的时候忽略扩展名为 .mp3 和 .mp4或者其他格式的文件。但如果你有些自作聪明的用户将扩展名改为 .txt 或者 .bkp那你的方法就不灵了。为了发现并排除音频或者视频文件你需要先检查文件类型。以下 shell 脚本可以代你完成类型检查:

#!/bin/bash
# 把需要进行备份的目录传递给 $1 参数。
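原脚本的其余部分在此被折叠了,下面给出一个思路相同的示意实现(函数名 list_backup_files 为演示假设,依赖 file 命令按文件内容而非扩展名识别类型):

```shell
#!/bin/bash
# 示意脚本:筛选出目录中的非音频/视频文件,供备份使用
list_backup_files() {
    # $1要检查的目录。file 命令根据内容识别类型,
    # 因此把 .mp3 改名为 .txt 也无法蒙混过关
    find "$1" -type f -exec file {} + \
        | grep -ivE 'audio|video' \
        | cut -d: -f1
}

# 用法示例(目标文件列表可再交给 tar 的 -T 选项):
# list_backup_files /home/user > files_to_backup.txt
```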
@ -218,7 +125,7 @@ Gzip 是最古老的压缩工具压缩率最小bzip2 的压缩率稍微高
![Exclude Files in tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/Exclude-Files-in-Tar.png)

*排除文件进行备份*

**使用 tar 保持文件的原有权限进行恢复**
@ -228,7 +135,7 @@ Gzip 是最古老的压缩工具压缩率最小bzip2 的压缩率稍微高
![Restore Files from tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/Restore-tar-Backup-Files.png)

*从归档文件中恢复*

**扩展阅读:**
@ -247,27 +154,27 @@ find 命令用于递归搜索目录树中包含指定字符的文件和目录,
**通过文件大小递归搜索文件**

以下命令会搜索当前目录(.)及其下两层子目录(-maxdepth 3包含当前目录及往下两层的子目录中大于 2 MB-size +2M的所有普通文件-type f

# find . -maxdepth 3 -type f -size +2M
![Find Files by Size in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Files-Based-on-Size.png)

*通过文件大小搜索文件*
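可以借助 dd 生成一大一小两个临时文件,来验证 -size 选项的效果(演示用,文件名均为假设):

```shell
#!/bin/sh
# 演示 find 按大小搜索:生成一个 3MB 文件和一个 10KB 文件,再查找大于 2MB 的文件
set -e
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/big.bin"   bs=1M count=3  2>/dev/null   # 3 MB
dd if=/dev/zero of="$demo/small.bin" bs=1K count=10 2>/dev/null   # 10 KB

# -maxdepth 3 限制搜索深度,-type f 只匹配普通文件,-size +2M 匹配大于 2MB 的文件
find "$demo" -maxdepth 3 -type f -size +2M
```

输出中应当只有 big.binsmall.bin 因为小于 2MB 而被过滤掉。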
**搜索符合一定规则的文件并将其删除**

777 权限的文件通常会为外部攻击者打开便利之门。不管是以何种方式,让所有人都可以对文件进行任意操作都是不安全的。对此,我们采取一个相对激进的方法——删除这些文件('{}' + 用来“聚集”搜索的结果)。

# find /home/user -perm 777 -exec rm '{}' +
![Find all 777 Permission Files](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Files-with-777-Permission.png)

*搜索 777 权限的文件*
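在对真实数据执行删除之前,可以先在临时目录中演练一遍(演示用,文件名均为假设):

```shell
#!/bin/sh
# 演示:搜索并删除 777 权限的文件,其余文件保持不动
set -e
home_demo=$(mktemp -d)
touch "$home_demo/safe.txt" "$home_demo/risky.sh"
chmod 644 "$home_demo/safe.txt"    # 普通权限,不应被删除
chmod 777 "$home_demo/risky.sh"    # 危险权限,应被删除

# '{}' + 把搜索结果成批传给 rm比逐个调用 rm 更高效
find "$home_demo" -type f -perm 777 -exec rm '{}' +

ls "$home_demo"
```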
**按访问时间和修改时间搜索文件**

搜索 /etc 目录下访问时间(-atime或修改时间-mtime大于 180 天(+180、小于 180 天(-180或者刚好 180 天180的配置文件180 天约合 6 个月)。

按照下面例子对命令进行修改:
@ -275,7 +182,7 @@ find 命令用于递归搜索目录树中包含指定字符的文件和目录,
![Find Files by Modification Time](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Modified-Files.png)

*按修改时间搜索文件*

- 扩展阅读: [35 Practical Examples of Linux find Command][3]
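GNU 系统上可以用 touch 的 -d 选项伪造文件时间,来验证 -mtime 的行为(演示用,假设使用 GNU coreutils 与 findutils

```shell
#!/bin/sh
# 演示按修改时间搜索:伪造一个 200 天前修改的文件和一个刚修改的文件
set -e
etc_demo=$(mktemp -d)
touch "$etc_demo/new.conf"                      # 刚刚修改
touch -d "200 days ago" "$etc_demo/old.conf"    # 200 天前修改GNU touch 语法)

# -mtime +180匹配修改时间在 180 天(约 6 个月)之前的文件
find "$etc_demo" -type f -mtime +180
```

输出中应当只出现 old.conf。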
@ -301,11 +208,11 @@ new_mode 可以是 3 位八进制数值或者对应权限的表达式。
八进制数值可以从二进制数值进行等值转换,通过下列方法来计算文件属主、同组用户和其他用户权限对应的二进制数值:

一个确定权限的二进制数值表现为 2 的幂r=2\^2w=2\^1x=2\^0当权限缺省时二进制数值为 0。如下
![Linux File Permissions](http://www.tecmint.com/wp-content/uploads/2014/10/File-Permissions.png)

*文件权限*

使用八进制数值设置上图的文件权限,请输入:
@ -313,7 +220,6 @@ new_mode 可以是 3 位八进制数值或者对应权限的表达式。
通过 u、g 和 o 分别代表用户、同组用户和其他用户,你可以使用权限表达式来单独对用户设置文件的权限模式,也可以通过 a 代表所有用户来设置文件权限。通过 + 号或者 - 号相应地赋予或移除文件权限。

**为所有用户撤销一个 shell 脚本的执行权限**

正如之前解释的那样,我们可以通过 - 号为需要移除权限的属主、同组用户、其他用户或者所有用户去掉指定的文件权限。下面命令中的短横线(-)可以理解为:移除(-所有用户a的 backup.sh 文件执行权限x
@ -324,9 +230,11 @@ new_mode 可以是 3 位八进制数值或者对应权限的表达式。
当我们使用 3 位八进制数值为文件设置权限的时候,第一位数字代表属主权限,第二位数字代表同组用户权限,第三位数字代表其他用户的权限:

- 属主:(r=2\^2 + w=2\^1 + x=2\^0 = 7)
- 同组用户:(r=2\^2 + w=2\^1 + x=2\^0 = 7)
- 其他用户:(r=2\^2 + w=0 + x=0 = 4)

命令如下:

# chmod 774 myfile
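可以动手验证一下 774 对应的权限位(演示用,在临时目录中进行):

```shell
#!/bin/sh
# 演示八进制权限7 = r(4)+w(2)+x(1)4 = r(4)
set -e
perm_demo=$(mktemp -d)
touch "$perm_demo/myfile"

# 属主 rwx7、同组用户 rwx7、其他用户 r--4
chmod 774 "$perm_demo/myfile"

ls -l "$perm_demo/myfile"
```

ls -l 输出的前十个字符应为 `-rwxrwxr--`,与上面的计算一致。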
@ -336,7 +244,7 @@ new_mode 可以是 3 位八进制数值或者对应权限的表达式。
![Linux File Listing](http://www.tecmint.com/wp-content/uploads/2014/10/Linux-File-Listing.png)

*列举 Linux 文件*

通过 chown 命令可以对文件的归属权进行更改,可以同时或者分开更改属主和属组。其基本语法为:
@ -367,9 +275,9 @@ new_mode 可以是 3 位八进制数值或者对应权限的表达式。
先行感谢!

参考链接

- [关于 LFCS][4]
- [为什么需要 Linux 基金会认证?][5]
- [注册 LFCS 考试][6]

--------------------------------------------------------------------------------
@ -377,7 +285,7 @@ via: http://www.tecmint.com/compress-files-and-finding-files-in-linux/
作者:[Gabriel Cánepa][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -1,13 +1,11 @@
LFCS 系列第四讲:对存储设备分区、格式化文件系统和配置交换分区
================================================================================
去年八月份Linux 基金会发起了 LFCSLinux Foundation Certified SysadminLinux 基金会认证系统管理员)认证,给所有系统管理员一个展现自己的机会。通过基础考试后,他们可以胜任在 Linux 上的整体运维工作:包括系统支持、一流水平的诊断和监控以及在必要之时向其他支持团队提交帮助请求等。

![Linux Foundation Certified Sysadmin Part 4](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-4.png)
*LFCS 系列第四讲*

需要注意的是Linux 基金会认证是非常严格的,通过与否完全要看个人能力。通过在线链接,你可以随时随地参加 Linux 基金会认证考试。所以,你再也不用到考试中心了,只需要不断提高自己的专业技能和经验就可去参加考试了。
@ -16,13 +14,13 @@ LFCS 系列第四讲
youtube 视频
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>

本讲是系列教程中的第四讲。在本讲中,我们将涵盖对存储设备进行分区、格式化文件系统和配置交换分区等内容,这些都是 LFCS 认证中的必备知识。

### 对存储设备分区 ###

分区是一种将单独的硬盘分成一个或多个区的手段。一个分区只是硬盘的一部分,我们可以认为这部分是独立的磁盘,里边包含一个单一类型的文件系统。分区表则是将硬盘上这些分区与分区标识符联系起来的索引。

在 Linux IBM PC 兼容系统里边用于管理传统 MBR到 2009 年)分区的工具是 fdisk对于 GPT2010 年至今)分区,我们使用 gdisk。这两个工具都可以通过程序名后面加上设备名称如 /dev/sdb进行调用。

#### 使用 fdisk 管理 MBR 分区 ####
@ -34,17 +32,17 @@ LFCS 系列第四讲
![fdisk Help Menu](http://www.tecmint.com/wp-content/uploads/2014/10/fdisk-help.png)

*fdisk 帮助菜单*

上图中,使用频率最高的选项已高亮显示。你可以随时按下 “p” 显示分区表。

![Check Partition Table in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Show-Partition-Table.png)

*显示分区表*

Id 列显示由 fdisk 分配给每个分区的分区类型(分区 id。一个分区类型代表一种文件系统的标识符简单来说包括该分区上数据的访问方法。

请注意,每个分区类型的全面讲解超出了本教程的范围——本系列教材主要专注于 LFCS 测试,以考试为主。

**下面列出一些 fdisk 常用选项:**
@ -58,25 +56,25 @@ Id 列显示由 fdisk 分配给每个分区的分区类型(分区 id。一
![fdisk Command Options](http://www.tecmint.com/wp-content/uploads/2014/10/fdisk-options.png)

*fdisk 命令选项*

按下 “n” 后接着按下 “p” 会创建一个新的主分区。最后,你可以使用所有的默认值(这将占用所有的可用空间),或者像下面一样自定义分区大小。

![Create New Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-New-Partition.png)

*创建新分区*

若 fdisk 分配的分区 Id 并不是我们想用的,可以按下 “t” 来更改。

![Change Partition Name in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Change-Partition-Name.png)

*更改分区类型*

全部设置好分区后,按下 “w” 将更改保存到硬盘分区表上。

![Save Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Partition-Changes.png)

*保存分区更改*
#### 使用 gdisk 管理 GPT 分区 ####
@ -88,7 +86,7 @@ fdisk 命令选项
![Create GPT Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-GPT-Partitions.png)

*创建 GPT 分区*

使用 GPT 分区方案,我们可以在同一个硬盘上创建最多 128 个分区,单个分区最大以 PB 为单位,而 MBR 分区方案最大只能 2TB。
@ -96,7 +94,7 @@ fdisk 命令选项
![gdisk Command Options](http://www.tecmint.com/wp-content/uploads/2014/10/gdisk-options.png)

*gdisk 命令选项*

### 格式化文件系统 ###
@ -106,12 +104,12 @@ gdisk 命令选项
![Check Filesystems Type in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Filesystems.png)

*检查文件系统类型*

选择文件系统取决于你的需求。你应该考虑到每个文件系统的优缺点以及其特点。选择文件系统需要看的两个重要属性:

- 日志支持,允许从系统崩溃事件中快速恢复数据。
- 安全增强式 LinuxSELinux支持按照项目 wiki 所说,“安全增强式 Linux 允许用户和管理员更好地控制访问权限”。

在接下来的例子中,我们通过 mkfs 在 /dev/sdb1 上创建 ext4 文件系统(支持日志和 SELinux标卷为 Tecmint。mkfs 基本语法如下:
@ -121,7 +119,7 @@ gdisk 命令选项
![Create ext4 Filesystems in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystems.png)

*创建 ext4 文件系统*

### 创建并启用交换分区 ###
@ -129,7 +127,7 @@ gdisk 命令选项
下面列出选择交换分区大小的经验法则:

> 物理内存不高于 2GB 时,取两倍物理内存大小即可;物理内存在 2GB 以上时,取一倍物理内存大小即可;并且所取大小应该大于 32MB。

所以,如果:
@ -163,15 +161,15 @@ M为物理内存大小S 为交换分区大小,单位 GB那么
![Create-Swap-Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Swap-Partition.png)

*创建交换分区*

![Add Swap Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Swap-Partition.png)

*启用交换分区*
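上面的经验法则可以写成一个小函数来快速验算(演示用,函数名 swap_size 为假设,单位 GB

```shell
#!/bin/sh
# 按经验法则计算交换分区大小M 为物理内存GB返回建议的交换分区大小GB
swap_size() {
    M=$1
    if [ "$M" -le 2 ]; then
        # 物理内存不高于 2GB取两倍
        echo $((M * 2))
    else
        # 物理内存在 2GB 以上:取一倍
        echo "$M"
    fi
}

# 用法示例:
# swap_size 1   # 1GB 内存 -> 2GB 交换分区
# swap_size 8   # 8GB 内存 -> 8GB 交换分区
```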
### 结论 ###

在你的系统管理员之路上,创建分区(包括交换分区)和格式化文件系统是非常重要的一步。我希望本文中所给出的技巧指导你达到你的管理员目标。随时在本讲评论区中发表你的技巧和想法,一起为社区做贡献。

参考链接
@ -185,7 +183,7 @@ via: http://www.tecmint.com/create-partitions-and-filesystems-in-linux/
作者:[Gabriel Cánepa][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -1,22 +1,18 @@
LFCS 系列第五讲:如何在 Linux 中挂载/卸载本地文件系统和网络文件系统Samba 和 NFS
================================================================================
Linux 基金会已经发起了一个全新的 LFCSLinux Foundation Certified SysadminLinux 基金会认证系统管理员)认证,旨在让来自世界各地的人有机会参加到 LFCS 测试,获得关于有能力在 Linux 系统中执行中间系统管理任务的认证。该认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时向上游团队请求支持的决策能力。

![Linux Foundation Certified Sysadmin Part 5](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-5.png)
*LFCS 系列第五讲*

请看以下视频,这里边介绍了 Linux 基金会认证程序。

youtube 视频
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>

本讲是系列教程中的第五讲,在这一讲里边,我们会解释如何在 Linux 中挂载/卸载本地和网络文件系统。这些都是 LFCS 认证中的必备知识。
### 挂载文件系统 ###
@ -26,20 +22,19 @@ LFCS 系列第五讲
换句话说,管理存储设备的第一步就是把设备关联到文件系统树。要完成这一步,通常可以这样:用 mount 命令来进行临时挂载(用完的时候,使用 umount 命令来卸载),或者通过编辑 /etc/fstab 文件之后重启系统来永久性挂载,这样每次开机都会进行挂载。

不带任何选项的 mount 命令,可以显示当前已挂载的文件系统。

# mount

![Check Mounted Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/check-mounted-filesystems.png)
*检查已挂载的文件系统*

另外mount 命令通常用来挂载文件系统。其基本语法如下:

# mount -t type device dir -o options

该命令会指引内核将设备上找到的文件系统(如已格式化为指定类型的文件系统)挂载到指定目录。像这样的形式mount 命令不会再到 /etc/fstab 文件中进行确认。

除非像下面,挂载指定的目录或者设备:
@ -59,20 +54,17 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
读作:

设备 dev/mapper/debian-home 挂载在 /home 下,它被格式化为 ext4并且有以下挂载选项rwrelatimeuser_xattrbarrier=1data=ordered。

**mount 命令选项**

下面列出 mount 命令的常用选项:
- async允许在将要挂载的文件系统上进行异步 I/O 操作。
- auto标示该文件系统通过 mount -a 命令挂载,与 noauto 相反。
- defaults该选项相当于 `async,auto,dev,exec,nouser,rw,suid` 的组合。注意多个选项必须由逗号隔开并且中间没有空格。倘若你不小心在两个选项中间输入了一个空格mount 命令会把后边的字符解释为另一个参数。
- loop将镜像文件如 .iso 文件)挂载为 loop 设备。该选项可以用来模拟显示光盘中的文件内容。
- noexec阻止该文件系统中可执行文件的执行。与 exec 选项相反。
- nouser阻止任何用户除 root 用户外)挂载或卸载文件系统。与 user 选项相反。
- remount重新挂载文件系统。
- ro只读模式挂载。
@ -91,7 +83,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount Device in Read Write Mode](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Device-Read-Write.png)

*可读写模式挂载设备*

**以默认模式挂载设备**
@ -102,26 +94,25 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount Device in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Device.png)

*挂载设备*

在这个例子中,我们发现写入文件和命令都完美执行了。

### 卸载设备 ###

使用 umount 命令卸载设备,意味着将所有“在使用”的数据全部写入到文件系统,然后可以安全移除文件系统。请注意,倘若你移除一个没有事先正确卸载的设备,就会有造成设备损坏和数据丢失的风险。

也就是说,你必须“离开”设备的块设备描述符或者挂载点,才能卸载设备。换言之,你的当前工作目录不能是需要卸载设备的挂载点。否则,系统将返回设备繁忙的提示信息。

![Unmount Device in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Unmount-Device.png)

*卸载设备*

离开需卸载设备的挂载点最简单的方法就是,运行不带任何选项的 cd 命令,这样会回到当前用户的家目录。

### 挂载常见的网络文件系统 ###

最常用的两种网络文件系统是 SMBServer Message Block服务器消息块和 NFSNetwork File System网络文件系统。如果你只向类 Unix 客户端提供共享,用 NFS 就可以了;如果是向 Windows 和其他类 Unix 客户端提供共享服务,就需要用到 Samba 了。
扩展阅读
@ -130,13 +121,13 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
下面的例子中,假设 Samba 和 NFS 已经在地址为 192.168.0.10 的服务器上架设好了(请注意,架设 NFS 服务器也是 LFCS 考试中需要考核的能力,我们会在后边中提到)。

#### 在 Linux 中挂载 Samba 共享 ####

第一步:在 Red Hat 和 Debian 系发行版中安装 samba-client、samba-common 和 cifs-utils 软件包,如下:

# yum update && yum install samba-client samba-common cifs-utils
# aptitude update && aptitude install samba-client samba-common cifs-utils

然后运行下列命令,查看服务器上可用的 Samba 共享。

# smbclient -L 192.168.0.10
@ -145,7 +136,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount Samba Share in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Samba-Share.png)

*挂载 Samba 共享*

上图中,已经对可以挂载到我们本地系统上的共享进行了高亮显示。你只需要一个远程服务器上的合法用户名及密码就可以访问共享了。
@ -164,7 +155,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount Password Protect Samba Share](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Password-Protect-Samba-Share.png)

*挂载有密码保护的 Samba 共享*

#### 在 Linux 系统中挂载 NFS 共享 ####
@ -185,7 +176,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount NFS Share in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-NFS-Share.png)

*挂载 NFS 共享*

### 永久性挂载文件系统 ###
@ -197,13 +188,12 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
其中:

- \<file system>: 第一个字段指定挂载的设备。大多数发行版本都通过分区的标卷label或者 UUID 来指定。这样做可以避免分区号改变时带来的错误。
- \<mount point>: 第二个字段指定挂载点。
- \<type>:文件系统的类型代码与 mount 命令挂载文件系统时使用的类型代码是一样的。通过 auto 类型代码可以让内核自动检测文件系统,这对于可移动设备来说非常方便。注意,该选项可能不是对所有文件系统可用。
- \<options>: 一个(或多个)挂载选项。
- \<dump>: 你通常会把这个字段设置为 0否则设置为 1以禁止系统启动时用 dump 工具dump 程序曾经是一个常用的备份工具,但现在越来越少用了)对文件系统进行备份。
- \<pass>: 这个字段指定启动系统时是否通过 fsck 来检查文件系统的完整性。0 表示 fsck 不对文件系统进行检查。数字越大,优先级越低。因此,根分区(/)最可能使用数字 1其他所有需要检查的分区则是使用数字 2。
**mount 命令示例**
@ -211,7 +201,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
LABEL=TECMINT /mnt ext4 rw,noexec 0 0

2. 若你想在系统启动时挂载 DVD 光驱中的内容,添加以下语句。

/dev/sr0 /media/cdrom0 iso9660 ro,user,noauto 0 0
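在把记录写入 /etc/fstab 之前,可以先用 awk 把一条演示记录的六个字段拆开检查一遍(下面的记录仅为示例,并非真实系统配置):

```shell
#!/bin/sh
# 用 awk 检查一条 fstab 记录的六个字段fstab 以空白分隔字段)
set -e
entry='LABEL=TECMINT /mnt ext4 rw,noexec 0 0'

echo "$entry" | awk '{
    print "设备(file system):  " $1
    print "挂载点(mount point): " $2
    print "类型(type):          " $3
    print "选项(options):       " $4
    print "dump:                " $5
    print "pass:                " $6
}'
```

字段数不是 6 的记录说明写法有误,系统启动时可能因此报错。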
@ -219,7 +209,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
### 总结 ###

不用怀疑,在命令行中挂载/卸载本地和网络文件系统将是你作为系统管理员的日常责任的一部分。同时,你需要掌握 /etc/fstab 文件的编写。希望本文对你有帮助。随时在下边发表评论(或者提问),并分享本文到你的朋友圈。

参考链接
@ -234,7 +224,7 @@ via: http://www.tecmint.com/mount-filesystem-in-linux/
作者:[Gabriel Cánepa][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -0,0 +1,283 @@
LFCS 系列第六讲组装分区为RAID设备——创建和管理系统备份
=========================================================
Linux 基金会已经发起了一个全新的 LFCSLinux Foundation Certified SysadminLinux 基金会认证系统管理员)认证,旨在让来自世界各地的人有机会参加到 LFCS 测试,获得关于有能力在 Linux 系统中执行中级系统管理任务的认证。该认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时向上游团队请求支持的决策能力。
![Linux Foundation Certified Sysadmin Part 6](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-6.png)
*LFCS 系列第六讲*
以下视频介绍了 Linux 基金会认证程序。
youtube 视频
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
本讲是系列教程中的第六讲,在这一讲里,我们将会解释如何将分区组装为 RAID 设备——创建和管理系统备份。这些都是 LFCS 认证中的必备知识。
### 了解RAID ###
这种被称为独立磁盘冗余阵列Redundant Array of Independent Disks(RAID)的技术是将多个硬盘组合成一个单独逻辑单元的存储解决方案,它提供了数据冗余功能并且改善硬盘的读写操作性能。
然而,实际的容错能力和磁盘 I/O 性能取决于如何将多个硬盘组装成磁盘阵列。根据可用的设备和容错/性能的需求RAID 被分为不同的级别,你可以参考 RAID 系列文章以获得每个 RAID 级别更详细的解释。
- [在 Linux 下使用 RAID介绍 RAID 的级别和概念][1]
我们选择用于创建、组装、管理、监视软件 RAID 的工具,叫做 mdadm (multiple disk admin 的简写)。
```
---------------- Debian 及衍生版 ----------------
# aptitude update && aptitude install mdadm
```
```
---------------- Red Hat 和基于 CentOS 的系统 ----------------
# yum update && yum install mdadm
```
```
---------------- openSUSE 上 ----------------
# zypper refresh && zypper install mdadm #
```
#### 将分区组装成 RAID 设备 ####
组装已有分区作为 RAID 设备的过程由以下步骤组成。
**1. 使用 mdadm 创建阵列**
如果先前其中一个分区已经格式化,或者作为了另一个 RAID 阵列的一部分,你会被提示以确认创建一个新的阵列。假设你已经采取了必要的预防措施以避免丢失重要数据,那么可以安全地输入 Y 并且按下回车。
```
# mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1
```
![Creating RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Creating-RAID-Array.png)
*创建 RAID 阵列*
**2. 检查阵列的创建状态**
在创建了 RAID 阵列之后,你可以使用以下命令检查阵列的状态。

# cat /proc/mdstat
或者
# mdadm --detail /dev/md0	[更详细的概述]
![Check RAID Array Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Array-Status.png)
*检查 RAID 阵列的状态*
**3. 格式化 RAID 设备**
如本系列[第四讲][2]所介绍的,按照你的需求/要求采用某种文件系统格式化你的设备。
**4. 监控 RAID 阵列服务**
让监控服务时刻监视你的 RAID 阵列。把`# mdadm --detail --scan`命令输出结果添加到 `/etc/mdadm/mdadm.conf`(Debian及其衍生版)或者`/etc/mdadm.conf`(Cent0S/openSUSE),如下。
# mdadm --detail --scan
![Monitor RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Monitor-RAID-Array.png)
*监控 RAID 阵列*
# mdadm --assemble --scan 	[组装阵列]
为了确保服务能够开机启动,需要以 root 权限运行以下命令。
**Debian 及其衍生版**
Debian 及其衍生版能够通过下面步骤使服务默认开机启动:
# update-rc.d mdadm defaults
并在 `/etc/default/mdadm` 文件中添加下面这一行
AUTOSTART=true
**CentOS 和 openSUSE(systemd-based)**
# systemctl start mdmonitor
# systemctl enable mdmonitor
**CentOS 和 openSUSE(SysVinit-based)**
# service mdmonitor start
# chkconfig mdmonitor on
**5. 检查RAID磁盘故障**
在支持冗余的 RAID 级别中,在需要时会替换故障的驱动器。当磁盘阵列中的设备出现故障时,仅当存在我们第一次创建阵列时预留的备用设备时,磁盘阵列才会自动启动重建。
![Check RAID Faulty Disk](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Faulty-Disk.png)
*检查 RAID 故障磁盘*
否则,我们需要手动将一个额外的物理驱动器插入到我们的系统,并且运行。
# mdadm /dev/md0 --add /dev/sdX1
/dev/md0 是出现了问题的阵列,而 /dev/sdx1 是新添加的设备。
**6. 拆解一个工作阵列**
如果你需要使用工作阵列的设备创建一个新的阵列,你可能不得不去拆解已有工作阵列——(可选步骤)
# mdadm --stop /dev/md0 				#  停止阵列
# mdadm --remove /dev/md0 				# 移除 RAID 设备
# mdadm --zero-superblock /dev/sdX1 	# 用零覆盖已有的 md 超级块
**7. 设置邮件通知**
你可以配置一个用于发送通知的有效邮件地址或者系统账号(确保在 mdadm.conf 文件中有下面这一行)。——(可选步骤)
MAILADDR root
在这种情况下,来自 RAID 后台监控程序所有的通知将会发送到你的本地 root 账号的邮件箱中。其中一个类似的通知如下。
说明:此次通知事件和第 5 步中的例子相关。此处一个设备被标记为故障,并且一个空闲的设备自动地被 mdadm 加入到阵列。我们用完了所有“健康的”空闲设备,因此我们得到了通知。
![RAID Monitoring Alerts](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Monitoring-Alerts.png)
*RAID 监控通知*
#### 了解 RAID 级别 ####
**RAID 0**
阵列总大小是最小分区大小的 n 倍n 是阵列中独立磁盘的个数(你至少需要两个驱动器/磁盘)。运行下面命令,使用 /dev/sdb1 和 /dev/sdc1 分区组装一个 RAID 0 阵列。
# mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1
常见用途:用于支持性能比容错更重要的实时应用程序的设置
**RAID 1 (又名镜像)**
阵列总大小等于最小分区大小(你至少需要两个驱动器/磁盘)。运行下面命令,使用 /dev/sdb1 和 /dev/sdc1 分区组装一个 RAID 1 阵列。
# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
常见用途:操作系统的安装或者重要的子文件夹,例如 /home
**RAID 5 (又名奇偶校验码盘)**
阵列总大小将是最小分区大小的 (n-1) 倍。所减少的大小用于奇偶校验(冗余)计算(你至少需要3个驱动器/磁盘)。
说明:你可以指定一个空闲设备 (/dev/sde1) 替换问题出现时的故障部分(分区)。运行下面命令,使用 /dev/sdb1、/dev/sdc1、/dev/sdd1 和 /dev/sde1 组装一个 RAID 5 阵列,其中 /dev/sde1 作为空闲分区。
# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1
常见用途Web 和文件服务
**RAID 6 (又名双重奇偶校验码盘)**
阵列总大小为(n*s)-2*s其中n为阵列中独立磁盘的个数s为最小磁盘大小。
说明:你可以指定一个空闲分区(在这个例子为 /dev/sdf1)替换问题出现时的故障部分(分区)。
运行下面命令,使用 /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1 和 /dev/sdf1 组装 RAID 6 阵列,其中 /dev/sdf1 作为空闲分区。
# mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --spare-devices=1 /dev/sdf1
常见用途:大容量、高可用性要求的文件服务器和备份服务器。
**RAID 1+0 (又名镜像条带)**
因为 RAID 1+0 是 RAID 0 和 RAID 1 的组合,所以阵列总大小是基于两者的公式计算的。首先,计算每一个镜像的大小,然后再计算条带的大小。
# mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 --spare-devices=1 /dev/sdf1
常见用途:需要快速 IO 操作的数据库和应用服务器
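上面各个级别的容量公式可以写成一个小函数来自行验算(演示用,函数名 raid_size 为假设;实际组建阵列仍需使用 mdadm

```shell
#!/bin/sh
# 按本文公式计算各 RAID 级别的阵列总容量
# 参数:级别 n磁盘数 s最小磁盘容量单位任意
raid_size() {
    level=$1; n=$2; s=$3
    case $level in
        0)  echo $((n * s)) ;;          # RAID 0条带容量全部可用
        1)  echo "$s" ;;                # RAID 1镜像容量等于最小磁盘
        5)  echo $(((n - 1) * s)) ;;    # RAID 5一块盘容量用于奇偶校验
        6)  echo $((n * s - 2 * s)) ;;  # RAID 6(n*s)-2*s两块盘容量用于双重校验
        10) echo $((n / 2 * s)) ;;      # RAID 1+0先镜像后条带
    esac
}

# 用法示例4 块 100GB 磁盘的 RAID 6
# raid_size 6 4 100
```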
#### 创建和管理系统备份 ####
请记住RAID 虽然功能强大,但它并不能替代备份!(如果有必要,你可以在黑板上把这句话写上 1000 遍。)无论何时都要记住这一点。在我们开始之前,我们必须注意的是,没有一个放之四海皆准的针对所有系统备份的解决方案,但这里有一些东西,是你在规划一个备份策略时需要考虑的。
- 你的系统将用于什么?(桌面或者服务器?如果系统是应用于后者,那么最重要的服务是什么?哪个配置是痛点?)
- 你每隔多久备份你的系统?
- 你需要备份的数据是什么(比如文件/文件夹/数据库转储)?你还可以考虑是否需要备份大型文件(比如音频和视频文件)。
- 这些备份将会存储在哪里(物理位置和媒体)
**备份你的数据**
方法1使用 dd 命令备份整个磁盘。你可以在任意时间点通过创建一个准确的镜像来备份一整个硬盘或者是分区。注意当设备是离线时,这种方法效果最好,也就是说它没有被挂载并且没有任何进程的 I/O 操作访问它。
这种备份方法的缺点是,镜像文件将具有和磁盘或分区一样的大小,即使实际数据只占用其中很小的比例。比如,如果你想为一个 20GB 但只使用了 10% 空间的分区创建镜像,那么镜像文件仍旧是 20GB。换句话来讲它不仅包含了备份的实际数据而且也包含了整个分区。如果你想完整备份你的设备那么你可以考虑使用这个方法。
**从现有的设备创建一个镜像文件**
# dd if=/dev/sda of=/system_images/sda.img
或者
--------------------- 可选地,你可以压缩镜像文件 -------------------
# dd if=/dev/sda | gzip -c > /system_images/sda.img.gz
**从镜像文件恢复备份**
# dd if=/system_images/sda.img of=/dev/sda
或者
--------------------- 根据你创建镜像文件时的选择(译者注:比如压缩) ----------------
# gzip -dc /system_images/sda.img.gz | dd of=/dev/sda
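由于对真实设备执行 dd 需要 root 权限并且有数据风险,下面用一个普通文件代替 /dev/sda 演练同样的备份/恢复流程(路径均为演示假设):

```shell
#!/bin/sh
# 用普通文件模拟"设备",演示 dd + gzip 的备份与恢复
set -e
img_demo=$(mktemp -d)
printf 'important data' > "$img_demo/disk"      # 充当"设备"的普通文件

# 备份:读出内容并压缩为镜像文件
dd if="$img_demo/disk" 2>/dev/null | gzip -c > "$img_demo/disk.img.gz"

# 模拟数据丢失:清空"设备"
: > "$img_demo/disk"

# 恢复:解压镜像并写回"设备"
gzip -dc "$img_demo/disk.img.gz" | dd of="$img_demo/disk" 2>/dev/null

cat "$img_demo/disk"
```

恢复后文件内容应与备份前完全一致。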
方法2使用 tar 命令备份确定的文件/文件夹——已经在本系列[第三讲][3]中讲了。如果你想要备份指定的文件/文件夹(配置文件,用户主目录等等),你可以使用这种方法。
方法3使用 rsync 命令同步文件。rsync 是一种多功能远程和本地文件复制工具。如果你想要从网络设备备份或同步文件rsync 是一种选择。
无论你是在同步两个本地文件夹,还是在本地文件夹和挂载在本地文件系统的远程文件夹之间同步,其基本语法是一样的。
# rsync -av source_directory destination_directory
在这里,-a 递归遍历子目录(如果它们存在的话),维持符号链接、时间戳、权限以及原本的属主/属组,-v 显示详细过程。
![rsync Synchronizing Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronizing-Files.png)
*rsync 同步文件*
除此之外,如果你想增加在网络上传输数据的安全性,你可以通过 ssh 协议使用 rsync。
**通过 ssh 同步本地到远程文件夹**
# rsync -avzhe ssh backups root@remote_host:/remote_directory/
这个示例,本地主机上的 backups 文件夹将与远程主机上的 /root/remote_directory 的内容同步。
在这里,-h 选项以易读的格式显示文件的大小,-e 标志用于表示一个 ssh 连接。
![rsync Synchronize Remote Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronize-Remote-Files.png)
*rsync 同步远程文件*
**通过ssh同步远程到本地文件夹**
在这种情况下,交换前面示例中的 source 和 destination 文件夹。
# rsync -avzhe ssh root@remote_host:/remote_directory/ backups
请注意这些只是 rsync 用法的三个示例而已(你可能遇到的最常见的情形)。对于更多有关 rsync 命令的示例和用法 ,你可以查看下面的文章。
- [在 Linux 下同步文件的10个 rsync命令][4]
### 总结 ###
作为一个系统管理员,你需要确保你的系统表现得尽可能好。如果你做好了充分准备,并且如果你的数据完整性能被诸如 RAID 和系统日常备份的存储技术支持,那你将是安全的。
如果你有有关完善这篇文章的问题、评论或者进一步的想法,可以在下面畅所欲言。除此之外,请考虑通过你的社交网络简介分享这系列文章。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/creating-and-managing-raid-backups-in-linux/
作者:[Gabriel Cánepa][a]
译者:[cpsoture](https://github.com/cposture)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:https://linux.cn/article-6085-1.html
[2]:https://linux.cn/article-7187-1.html
[3]:https://linux.cn/article-7171-1.html
[4]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
View File
@ -1,53 +0,0 @@
# Recognizing correct code
Automatic bug-repair system fixes 10 times as many errors as its predecessors.
------
DongShuaike is translating.
MIT researchers have developed a machine-learning system that can comb through repairs to open-source computer programs and learn their general properties, in order to produce new repairs for a different set of programs.
The researchers tested their system on a set of programming errors, culled from real open-source applications, that had been compiled to evaluate automatic bug-repair systems. Where those earlier systems were able to repair one or two of the bugs, the MIT system repaired between 15 and 18, depending on whether it settled on the first solution it found or was allowed to run longer.
While an automatic bug-repair tool would be useful in its own right, professor of electrical engineering and computer science Martin Rinard, whose group developed the new system, believes that the work could have broader ramifications.
“One of the most intriguing aspects of this research is that we've found that there are indeed universal properties of correct code that you can learn from one set of applications and apply to another set of applications,” Rinard says. “If you can recognize correct code, that has enormous implications across all software engineering. This is just the first application of what we hope will be a brand-new, fabulous technique.”
Fan Long, a graduate student in electrical engineering and computer science at MIT, presented a paper describing the new system at the Symposium on Principles of Programming Languages last week. He and Rinard, his advisor, are co-authors.
Users of open-source programs catalogue bugs they encounter on project websites, and contributors to the projects post code corrections, or “patches,” to the same sites. So Long was able to write a computer script that automatically extracted both the uncorrected code and patches for 777 errors in eight common open-source applications stored in the online repository GitHub.
**Feature performance**
As with [all][1] machine-learning systems, the crucial aspect of Long and Rinards design was the selection of a “[feature set][2]” that the system would analyze. The researchers concentrated on values stored in memory — either variables, which can be modified during a programs execution, or constants, which cant. They identified 30 prime characteristics of a given value: It might be involved in an operation, such as addition or multiplication, or a comparison, such as greater than or equal to; it might be local, meaning it occurs only within a single block of code, or global, meaning that its accessible to the program as a whole; it might be the variable that represents the final result of a calculation; and so on.
Long and Rinard wrote a computer program that evaluated all the possible relationships between these characteristics in successive lines of code. More than 3,500 such relationships constitute their feature set. Their machine-learning algorithm then tried to determine what combination of features most consistently predicted the success of a patch.
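As a rough illustration only (not Prophet's actual 30-characteristic, 3,500-relationship feature set), pairing coarse properties of a patch line with properties of the surrounding code might look like this:

```python
# Illustrative sketch: drastically simplified value characteristics and
# pairwise relationship features, in the spirit of the description above.

def value_features(line):
    """Map a line of code to a set of coarse characteristics of its values."""
    feats = set()
    if any(op in line for op in ("+", "*", "-", "/")):
        feats.add("arithmetic-op")
    if any(cmp in line for cmp in ("==", ">=", "<=", ">", "<")):
        feats.add("comparison")
    if "return" in line:
        feats.add("final-result")
    return feats

def relationship_features(patch_line, context_line):
    """Pairwise relationships between a patch line and a neighboring line."""
    a, b = value_features(patch_line), value_features(context_line)
    return {f"{x}&{y}" for x in a for y in b}

# A bounds-check patch inserted before a buffer access:
feats = relationship_features("if (n >= len)", "return buf[n];")
```

A learned model would then weight such relationship features to score how "correct-looking" a candidate patch is in its context.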
“All the features were trying to look at are relationships between the patch you insert and the code you are trying to patch,” Long says. “Typically, there will be good connections in the correct patches, corresponding to useful or productive program logic. And there will be bad patterns that mean disconnections in program logic or redundant program logic that are less likely to be successful.”
**Ranking candidates**
In earlier work, Long had developed an algorithm that attempts to repair program bugs by systematically modifying program code. The modified code is then subjected to a suite of tests designed to elicit the buggy behavior. This approach may find a modification that passes the tests, but it could take a prohibitively long time. Moreover, the modified code may still contain errors that the tests dont trigger.
Long and Rinards machine-learning system works in conjunction with this earlier algorithm, ranking proposed modifications according to the probability that they are correct before subjecting them to time-consuming tests.
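The division of labor can be sketched as follows; the scores below are invented stand-ins for the probabilities the learned model would produce:

```python
# Sketch of the rank-then-validate idea: order candidate patches by a
# learned score so the expensive test suite is run on the best ones first.

def rank_and_validate(candidates, score, passes_tests, budget=None):
    """Try candidate patches in descending score order; return the first
    one that passes the bug-eliciting test suite, or None."""
    ordered = sorted(candidates, key=score, reverse=True)
    if budget is not None:
        ordered = ordered[:budget]   # time limit on how many to try
    for patch in ordered:
        if passes_tests(patch):      # the slow step the ranking amortizes
            return patch
    return None

# Toy usage: the highest-scored candidate fails, so the next is tried.
cands = {"patch_a": 0.2, "patch_b": 0.9, "patch_c": 0.6}
best = rank_and_validate(cands, score=cands.get,
                         passes_tests=lambda p: p != "patch_b")
```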
The researchers tested their system, which they call Prophet, on a set of 69 program errors that had cropped up in eight popular open-source programs. Of those, 19 are amenable to the type of modifications that Longs algorithm uses; the other 50 have more complicated problems that involve logical inconsistencies across larger swaths of code.
When Long and Rinard configured their system to settle for the first solution that passed the bug-eliciting tests, it was able to correctly repair 15 of the 19 errors; when they allowed it to run for 12 hours per problem, it repaired 18.
Of course, that still leaves the other 50 errors in the test set untouched. In ongoing work, Long is working on a machine-learning system that will look at more coarse-grained manipulation of program values across larger stretches of code, in the hope of producing a bug-repair system that can handle more complex errors.
“A revolutionary aspect of Prophet is how it leverages past successful patches to learn new ones,” says Eran Yahav, an associate professor of computer science at the Technion in Israel. “It relies on the insight that despite differences between software projects, fixes — patches — applied to projects often have commonalities that can be learned from. Using machine learning to learn from big code holds the promise to revolutionize many programming tasks — code completion, reverse-engineering, et cetera.”
--------------------------------------------------------------------------------
via: http://news.mit.edu/2016/faster-automatic-bug-repair-code-errors-0129
作者Larry Hardesty
译者:[译者ID](https://github.com/翻译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://news.mit.edu/2013/teaching-computers-to-see-by-learning-to-see-like-computers-0919
[2]:http://news.mit.edu/2015/automating-big-data-analysis-1016


Zephyr Project for Internet of Things, releases from Facebook, IBM, Yahoo, and more news
===========================================================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/weekly_news_roundup_tv.png?itok=eqUoW1gU)
In this week's edition of our open source news roundup, we take a look at the new IoT project from the Linux Foundation, three big corporations releasing open source, and more.
**News roundup for February 21 - 26, 2016**
### Linux Foundation unveils the Zephyr Project
The Internet of Things (IoT) is shaping up to be the next big thing in consumer technology. At the moment, most IoT solutions are proprietary and closed source. Open source is making numerous inroads into the IoT world, and that's undoubtedly going to accelerate now that the Linux Foundation has [announced the Zephyr Project][1].
The Zephyr Project, according to ZDNet, "hopes to bring vendors and developers together under a single operating system which could make the development of connected devices an easier, less expensive and more stable process." The Project "aims to incorporate input from the open source and embedded developer communities and to encourage collaboration on the RTOS (real-time operating system)," according to the [Linux Foundation's press release][2].
Currently, Intel Corporation, NXP Semiconductors N.V., Synopsys, Inc., and UbiquiOS Technology Limited are the main supporters of the project. The Linux Foundation intends to attract other IoT vendors to this effort as well.
### Releases from Facebook, IBM, Yahoo
As we all know, open source isn't just about individuals or small groups hacking on code and hardware. Quite a few large corporations have significant investments in open source. This past week, three of them affirmed their commitment to open source.
Yahoo again waded into open source waters this week with the [release of CaffeOnSpark][3] artificial intelligence software under an Apache 2.0 license. CaffeOnSpark performs "a popular type of AI called 'deep learning' on the vast swaths of data kept in its Hadoop open-source file system for storing big data," according to VentureBeat. If you're curious, you can [find the source code on GitHub][4].
Earlier this week, Facebook "[unveiled a new project that seeks not only to accelerate the evolution of technologies that drive our mobile networks, but to freely share this work with the world's telecoms][5]," according to Wired. The company plans to build "everything from new wireless radios to new optical fiber equipment." The designs, according to Facebook, will be open source so any telecom firm can use them.
As part of the [Open Mainframe Project][6], IBM has open sourced the code for its Anomaly Detection Engine (ADE) for Linux logs. [According to IBM][7], "ADE detects anomalous time slices and messages in Linux logs using statistical learning" to detect suspicious behaviour. You can grab the [source code for ADE][8] from GitHub.
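ADE's actual statistics are more sophisticated than this, but the basic idea of flagging anomalous time slices in a log can be illustrated with a simple deviation-from-the-mean check (a sketch, not ADE's API):

```python
# Minimal illustration of statistical anomaly detection on log data:
# flag time slices whose message count strays far from the historical mean.
from statistics import mean, stdev

def anomalous_slices(counts_per_slice, threshold=2.0):
    """Return indices of time slices whose message count deviates from
    the mean by more than `threshold` standard deviations."""
    mu, sigma = mean(counts_per_slice), stdev(counts_per_slice)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts_per_slice)
            if abs(c - mu) / sigma > threshold]

# e.g. a sudden burst of messages in the sixth hourly slice
flagged = anomalous_slices([12, 11, 13, 12, 11, 60, 12, 13])
```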
### European Union to fund research
The European Research Council, the European Union's science and technology funding body, is [funding four open source research projects][9] to the tune of about €2 million. According to joinup.ec.europa.eu, the projects being funded are:
- A code audit of Mozilla's open source Rust programming language
- An initiative at INRIA (France's national computer science research center) studying secure programming
- A project at Austria's Technische Universität Graz testing "ways to secure code against attacks that exploit certain properties of the computer hardware"
- The "development of techniques to prove popular cryptographic protocols and schemes" at IST Austria
### In other news
- [Infosys' newest weapon: open source][10]
- [Intel demonstrates Android smartphone running a Linux desktop][11]
- [BeeGFS file system goes open source][12]
A big thanks, as always, to the Opensource.com moderators and staff for their help this week.
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/2/weekly-news-feb-26
作者:[Scott Nesbitt][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[1]: http://www.zdnet.com/article/the-linux-foundations-zephyr-project-building-an-operating-system-for-iot-devices/
[2]: http://www.linuxfoundation.org/news-media/announcements/2016/02/linux-foundation-announces-project-build-real-time-operating-system
[3]: http://venturebeat.com/2016/02/24/yahoo-open-sources-caffeonspark-deep-learning-framework-for-hadoop/
[4]: https://github.com/yahoo/CaffeOnSpark
[5]: http://www.wired.com/2016/02/facebook-open-source-wireless-gear-forge-5g-world/
[6]: https://www.openmainframeproject.org/
[7]: http://openmainframeproject.github.io/ade/
[8]: https://github.com/openmainframeproject/ade
[9]: https://joinup.ec.europa.eu/node/149541
[10]: http://www.businessinsider.in/Exclusive-Infosys-is-using-Open-Source-as-its-mostlethal-weapon-yet/articleshow/51109129.cms
[11]: http://www.theregister.co.uk/2016/02/23/move_over_continuum_intel_shows_android_smartphone_powering_bigscreen_linux/
[12]: http://insidehpc.com/2016/02/beegfs-parallel-file-system-now-open-source/

Node.js 5.7 released ahead of impending OpenSSL updates
=============================================================
![](http://images.techhive.com/images/article/2014/09/nodejs-100449932-primary.idge.jpg)
>Once again, OpenSSL fixes must be evaluated by keepers of the popular server-side JavaScript platform
The Node.js Foundation is gearing up this week for fixes to OpenSSL that could mean updates to Node.js itself.
OpenSSL releases due on Tuesday will fix defects deemed to be of "high" severity, Rod Vagg, foundation technical steering committee director, said [in a blog post][1] on Monday. Within a day of the OpenSSL releases, the Node.js crypto team will assess their impact; Vagg wrote, "Please be prepared for the possibility of important updates to Node.js v0.10, v0.12, v4 and v5 soon after Tuesday, the 1st of March."
The high severity status actually means the issues are of lower risks than critical, perhaps affecting less-common configurations or less likely to be exploitable. Due to an embargo, the exact nature of these fixes and their impact on Node.js remain uncertain, said Vagg. "Node.js v0.10 and v0.12 both use OpenSSL v1.0.1, and Node.js v4 and v5 both use OpenSSL v1.0.2, and releases from nodejs.org and some other popular distribution sources are statically compiled. Therefore, all active release lines are impacted by this update." OpenSSL also impacted Node.js in December, [when two critical vulnerabilities were fixed][4].
The latest OpenSSL developments follow [the release of Node.js 5.7.0][5], which is clearing a path for the upcoming Node.js 6. Version 5 is the main focus for active development, said foundation representative Mikeal Rogers, "However, v5 won't be supported long-term, and most users will want to wait for v6, which will be released by the end of April, for the new features that are landing in v5."
Release 5.7 has more predictability for C++ add-ons' interactions with JavaScript. Node.js can invoke JavaScript code from C++ code, and in version 5.7, the C++ node::MakeCallback() API is now re-entrant; calling it from inside another MakeCallback() call no longer causes the nextTick queue or Promises microtask queue to be processed out of order, [according to release notes][6].
Also fixed is an HTTP bug where handling headers mistakenly triggers an "upgrade" event when the server merely advertises protocols. The bug can prevent HTTP clients from communicating with HTTP2-enabled servers. Version 5.7 also brings performance improvements in the path, querystring, streams, and process.nextTick modules.
This story, "Node.js 5.7 released ahead of impending OpenSSL updates" was originally published by [InfoWorld][7].
--------------------------------------------------------------------------------
via: http://www.itworld.com/article/3039005/security/nodejs-57-released-ahead-of-impending-openssl-updates.html
作者:[Paul Krill][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.itworld.com/author/Paul-Krill/
[1]: https://nodejs.org/en/blog/vulnerability/openssl-march-2016/
[2]: http://www.infoworld.com/resources/17273/security-management/how-to-rethink-security-for-the-new-world-of-it#tk.ifw-infsb
[3]: http://www.infoworld.com/newsletters/signup.html#tk.ifw-infsb
[4]: http://www.infoworld.com/article/3012157/security/why-nodejs-waited-for-openssl-security-update-before-patching.html
[5]: https://nodejs.org/en/blog/release/v5.7.0/
[6]: https://nodejs.org/en/blog/release/v5.7.0/
[7]: http://www.infoworld.com/

OpenSSH 7.2 Out Now with Support for RSA Signatures Using SHA-256/512 Algorithms
========================================================
**Today, February 29, 2016, the OpenBSD project had the great pleasure of announcing the release and immediate availability for download of OpenSSH 7.2 for all supported platforms.**
According to the internal [release notes][1], also attached at the end of the article for reference, OpenSSH 7.2 is primarily a bugfix release, fixing most of the issues reported by users or discovered by the development team since the release of OpenSSH 7.1p2, but we can see several new features as well.
Among these, we can mention support for RSA signatures using SHA-256 or SHA-512 hash algorithms, the addition of an AddKeysToAgent client option to add private keys used for authentication to the ssh-agent, and the implementation of the "restrict" authorized_keys option for storing key restrictions.
Furthermore, there is now an ssh_config CertificateFile option for explicitly listing certificates, ssh-keygen can now change the key comment for all supported formats, and fingerprinting is now allowed from standard input and for multiple public keys in a file.
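For illustration, the new options might be used as follows (host name and key file paths are placeholders; the option names come from the release notes):

```
# ~/.ssh/config -- illustrative values
Host example
    AddKeysToAgent yes
    CertificateFile ~/.ssh/id_ed25519-cert.pub

# ~/.ssh/authorized_keys -- the new "restrict" option turns off all
# forwarding and pty allocation for connections made with this key
restrict ssh-ed25519 AAAA... user@host
```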
### ssh-keygen now supports multiple certificates
In addition to the changes mentioned above, OpenSSH 7.2 adds support for multiple certificates to ssh-keygen, one per line, implements the "none" argument for sshd_config ChrootDirectory and Foreground, and the "-c" flag allows ssh-keyscan to fetch certificates instead of plain keys.
Last but not least, OpenSSH 7.2 no longer enables by default the rijndael-cbc aliases for AES, the blowfish-cbc and cast128-cbc legacy ciphers, or the MD5-based and truncated HMAC algorithms. The getrandom() syscall is now supported under Linux. [Download OpenSSH 7.2][2] and check the changelog below for some additional details about exactly what has been fixed in this major release.
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/openssh-7-2-out-now-with-support-for-rsa-signatures-using-sha-256-512-algorithms-501111.shtml
作者:[Marius Nestor][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://news.softpedia.com/editors/browse/marius-nestor
[1]: http://www.openssh.com/txt/release-7.2
[2]: http://linux.softpedia.com/get/Security/OpenSSH-4474.shtml

5 best open source board games to play online
================================================================================
I have always had a fascination with board games, in part because they are a device of social interaction, they challenge the mind and, most importantly, they are great fun to play. In my misspent youth, a group of friends and I gathered together to escape the horrors of the classroom and indulge in a little escapism. The time provided an outlet for tension and rivalry. Board games help teach diplomacy and how to make and break alliances, bring families and friends together, and teach valuable lessons.
I had a penchant for abstract strategy games such as chess and draughts, as well as word games. I can still never resist a game of Escape from Colditz, a strategy card and dice-based board game, or Risk; two timeless multi-player strategy board games. But Catan remains my favourite board game.
Board games have seen a resurgence in recent years, and Linux has a good range of board games to choose from. There is a credible implementation of Catan called Pioneers. But for my favourite implementations of classic board games to play online, check out the recommendations below.
----------
### TripleA ###
![TripleA in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-TripleA.png)
TripleA is an open source online turn-based strategy game. It allows people to implement and play various strategy board games (e.g., Axis & Allies). The TripleA engine has full networking support for online play, support for sounds, XML support for game files, and its own imaging subsystem that allows customized, user-editable maps to be used. TripleA is versatile, scalable and robust.
TripleA started out as a World War II simulation, but now includes different conflicts, as well as variations and mods of popular games and maps. TripleA comes with multiple games and over 100 more games can be downloaded from the user community.
Features include:
- Good interface and attractive graphics
- Optional scenarios
- Multiplayer games
- TripleA comes with the following supported games that use its game engine (just to name a few):
- Axis & Allies : Classic edition (2nd, 3rd with options enabled)
- Axis & Allies : Revised Edition
- Pact of Steel A&A Variant
- Big World 1942 A&A Variant
- Four if by Sea
- Battle Ship Row
- Capture The Flag
- Minimap
- Hot-seat
- Play-by-email (PBEM) mode allows players to play a game via email without having to be connected to each other online
- More time to think out moves
- Only need to come online to send your turn to the next player
- Dice rolls are done by a dedicated dice server that is independent of TripleA
- All dice rolls are PGP-verified and emailed to every player
- Every move and every dice roll is logged and saved in TripleA's History Window
- An online game can be later continued under PBEM mode
- Hard for others to cheat
- Hosted online lobby
- Utilities for editing maps
- Website: [triplea.sourceforge.net][1]
- Developer: Sean Bridges (original developer), Mark Christopher Duncan
- License: GNU GPL v2
- Version Number: 1.8.0.7
----------
### Domination ###
![Domination in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-Domination.png)
Domination is an open source game that shares common themes with the hugely popular Risk board game. It has many game options and includes many maps.
In the classic “World Domination” game of military strategy, you are battling to conquer the world. To win, you must launch daring attacks, defend yourself on all fronts, and sweep across vast continents with boldness and cunning. But remember, the dangers, as well as the rewards, are high. Just when the world is within your grasp, your opponent might strike and take it all away!
Features include:
- Simple to learn
- Domination - you must occupy all countries on the map, and thereby eliminate all opponents. These can be long, drawn-out games
- Capital - each player has a country they have selected as a Capital. To win the game, you must occupy all Capitals
- Mission - each player draws a random mission. The first to complete their mission wins. Missions may include the elimination of a certain colour, occupation of a particular continent, or a mix of both
- Map editor
- Simple map format
- Multiplayer network play
- Single player
- Hotseat
- 5 user interfaces
- Game types:
- Play online
- Website: [domination.sourceforge.net][2]
- Developer: Yura Mamyrin, Christian Weiske, Mike Chaten, and many others
- License: GNU GPL v3
- Version Number: 1.1.1.5
----------
### PyChess ###
![Micro-Max in action](http://www.linuxlinks.com/portal/content/reviews/Games/Screenshot-Pychess.jpg)
PyChess is a GNOME-inspired chess client written in Python.
The goal of PyChess is to provide a fully featured, nice-looking, easy-to-use chess client for the GNOME desktop.
The client should be usable by those totally new to chess, those who want to play an occasional game, and those who want to use the computer to further enhance their play.
Features include:
- Attractive interface
- Chess Engine Communication Protocol (CECP) and Universal Chess Interface (UCI) engine support
- Free online play on the Free Internet Chess Server (FICS)
- Reads and writes PGN, EPD and FEN chess file formats
- Built-in Python based engine
- Undo and pause functions
- Board and piece animation
- Drag and drop
- Tabbed interface
- Hints and spyarrows
- Opening book sidepanel using sqlite
- Score plot sidepanel
- "Enter game" in pgn dialog
- Optional sounds
- Legal move highlighting
- Internationalised or figure pieces in notation
- Website: [www.pychess.org][3]
- Developer: Thomas Dybdahl Ahle
- License: GNU GPL v2
- Version Number: 0.12 Anderssen rc4
----------
### Scrabble ###
![Scrabble in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-Scrabble3D.png)
Scrabble3D is a highly customizable Scrabble game that not only supports Classic Scrabble and Superscrabble but also 3D games and custom boards. You can play locally against the computer or connect to a game server to find other players.
Scrabble is a board game in which the goal is to place letters crossword-style. Up to four players take part, each receiving a limited set of letters (usually 7 or 8). In turn, each player tries to arrange his or her letters into one or more words that connect with the words already placed on the board. The value of a move depends on the letters (rarer letters score more points) and on bonus squares that multiply the value of a letter or of the whole word. The player with the most points wins.
Scrabble3D extends this idea into the third dimension. Of course, a classic game on a 15x15 board or Superscrabble on a 21x21 board can be played, and you can configure any board layout yourself. You can play against the computer, against other local players, or over the internet. Last but not least, it's possible to connect to a game server to find other players and to obtain a rating. Most options are configurable, including the number and value of the letters, the dictionary used, the language of the dialogs and, of course, colors, fonts, etc.
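The core scoring rule described above, letter values multiplied by letter and word bonus squares, can be sketched in a few lines (toy letter values, not an official tile set):

```python
# Toy Scrabble scoring sketch; real letter values and board layouts
# differ by edition and language.

LETTER_VALUES = {"a": 1, "e": 1, "t": 1, "r": 1, "q": 10, "z": 10}

def word_score(word, letter_mult=None, word_mult=1):
    """Score a word: per-letter bonus squares multiply a letter's value,
    word bonus squares multiply the whole word's total."""
    letter_mult = letter_mult or {}          # position -> multiplier
    total = sum(LETTER_VALUES.get(ch, 1) * letter_mult.get(i, 1)
                for i, ch in enumerate(word))
    return total * word_mult

# 'z' on a double-letter square, whole word on a triple-word square
score = word_score("zeta", letter_mult={0: 2}, word_mult=3)
```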
Features include:
- Configurable board, letterset and design
- Board in OpenGL graphics with user-definable wavefront model
- Game against computer with support of multithreading
- Post-hoc game analysis with calculation of best move by computer
- Match with other players connected on a game server
- NSA rating and highscore at game server
- Time limit of games
- Localization; use of non-standard digraphs like CH, RR, LL and right-to-left reading
- Multilanguage help / wiki
- Network games are buffered and asynchronous games are possible
- Running games can be kibitzed
- International rules including Italian "Cambio Secco"
- Challenge mode, What-if-variant, CLABBERS, etc
- Website: [sourceforge.net/projects/scrabble][4]
- Developer: Heiko Tietze
- License: GNU GPL v3
- Version Number: 3.1.3
----------
### Backgammon ###
![Backgammon in action](http://www.linuxlinks.com/portal/content/reviews/Games/Screenshot-gnubg.png)
GNU Backgammon (gnubg) is a strong backgammon program (world-class with a bearoff database installed) usable either as an engine by other programs or as a standalone backgammon game. It is able to play and analyze both money games and tournament matches, evaluate and roll out positions, and more.
In addition to supporting simple play, it also has extensive analysis features, a tutor mode, adjustable difficulty, and support for exporting annotated games.
It currently plays at about the level of a championship flight tournament player and is gradually improving.
gnubg can be played on numerous on-line backgammon servers, such as the First Internet Backgammon Server (FIBS).
Features include:
- A command line interface (with full command editing features if GNU readline is available) that lets you play matches and sessions against GNU Backgammon with a rough ASCII representation of the board on text terminals
- Support for a GTK+ interface with a graphical board window. Both 2D and 3D graphics are available
- Tournament match and money session cube handling and cubeful play
- Support for both 1-sided and 2-sided bearoff databases: 1-sided bearoff database for 15 checkers on the first 6 points and optional 2-sided database kept in memory. Optional larger 1-sided and 2-sided databases stored on disk
- Automated rollouts of positions, with lookahead and race variance reduction where appropriate. Rollouts may be extended
- Functions to generate legal moves and evaluate positions at varying search depths
- Neural net functions for giving cubeless evaluations of all other contact and race positions
- Automatic and manual annotation (analysis and commentary) of games and matches
- Record keeping of statistics of players in games and matches (both native inside GNU Backgammon and externally using relational databases and Python)
- Loading and saving analyzed games and matches as .sgf files (Smart Game Format)
- Exporting positions, games and matches to: (.eps) Encapsulated Postscript, (.gam) Jellyfish Game, (.html) HTML, (.mat) Jellyfish Match, (.pdf) PDF, (.png) Portable Network Graphics, (.pos) Jellyfish Position, (.ps) PostScript, (.sgf) Gnu Backgammon File, (.tex) LaTeX, (.txt) Plain Text, (.txt) Snowie Text
- Import of matches and positions from a number of file formats: (.bkg) Hans Berliner's BKG Format, (.gam) GammonEmpire Game, (.gam) PartyGammon Game, (.mat) Jellyfish Match, (.pos) Jellyfish Position, (.sgf) Gnu Backgammon File, (.sgg) GamesGrid Save Game, (.tmg) TrueMoneyGames, (.txt) Snowie Text
- Python Scripting
- Native language support; 10 languages complete or in progress
- Website: [www.gnubg.org][5]
- Developer: Joseph Heled, Oystein Johansen, Jonathan Kinsey, David Montgomery, Jim Segrave, Joern Thyssen, Gary Wong and contributors
- License: GPL v2
- Version Number: 1.05.000
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20150830011533893/BoardGames.html
作者Frazer Kline
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://triplea.sourceforge.net/
[2]:http://domination.sourceforge.net/
[3]:http://www.pychess.org/
[4]:http://sourceforge.net/projects/scrabble/
[5]:http://www.gnubg.org/

Bossie Awards 2015: The best open source application development tools
================================================================================
InfoWorld's top picks among platforms, frameworks, databases, and all the other tools that programmers use
![](http://images.techhive.com/images/article/2015/09/bossies-2015-app-dev-100613767-orig.jpg)
### The best open source development tools ###
There must be a better way, right? The developers are the ones who find it. This year's winning projects in the application development category include client-side frameworks, server-side frameworks, mobile frameworks, databases, languages, libraries, editors, and yeah, Docker. These are our top picks among all of the tools that make it faster and easier to build better applications.
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-docker-100613773-orig.jpg)
### Docker ###
The darling of container fans almost everywhere, [Docker][2] provides a low-overhead way to isolate an application or services environment, which serves its stated goal of being an open platform for building, shipping, and running distributed applications. Docker has been widely supported, even among those seeking to replace the Docker container format with an alternative, more secure runtime and format, specifically Rkt and AppC. Heck, Microsoft Visual Studio now supports deploying into a Docker container too.
Dockers biggest impact has been on virtual machine environments. Since Docker containers run inside the operating system, many more Docker containers than virtual machines can run in a given amount of RAM. This is important because RAM is usually the scarcest and most expensive resource in a virtualized environment.
There are hundreds of thousands of runnable public images on Docker Hub, of which a few hundred are official, and the rest are from the community. You describe Docker images with a Dockerfile and build images locally from the Docker command line. You can add both public and private image repositories to Docker Hub.
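A minimal Dockerfile, with placeholder file and image names, looks like this:

```dockerfile
# Minimal illustrative Dockerfile; base image and script are placeholders.
FROM python:3-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```

You would build and run it from the Docker command line with `docker build -t myapp .` followed by `docker run myapp`.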
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-nodejs-iojs-100613778-orig.jpg)
### Node.js and io.js ###
[Node.js][2] -- and its recently reunited fork [io.js][3] -- is a platform built on [Google Chrome's V8 JavaScript runtime][4] for building fast, scalable, network applications. Node uses an event-driven, nonblocking I/O model without threads. In general, Node tends to take less memory and CPU resources than other runtime engines, such as Java and the .Net Framework. For example, a typical Node.js Web server can run well in a 512MB instance on Cloud Foundry or a 512MB Docker container.
The Node repository on GitHub has more than 35,000 stars and more than 8,000 forks. The project, sponsored primarily by Joyent, has more than 600 contributors. Some of the more famous Node applications are 37Signals, [Ancestry.com][5], Chomp, the Wall Street Journal online, FeedHenry, [GE.com][6], Mockingbird, [Pearson.com][7], Shutterstock, and Uber. The popular IoT back-end Node-RED is built on Node, as are many client apps, such as Brackets and Nuclide.
-- Martin Heller
![](http://images.techhive.com/images/article/2015/09/bossies-2015-angularjs-100613766-orig.jpg)
### AngularJS ###
[AngularJS][8] (or simply Angular, among friends) is a Model-View-Whatever (MVW) JavaScript AJAX framework that extends HTML with markup for dynamic views and data binding. Angular is especially good for developing single-page Web applications and linking HTML forms to models and JavaScript controllers.
The weird-sounding Model-View-Whatever pattern is an attempt to include the Model-View-Controller, Model-View-ViewModel, and Model-View-Presenter patterns under one moniker. The differences among these three closely related patterns are the sorts of topics that programmers love to argue about fiercely; the Angular developers decided to opt out of the discussion.
Basically, Angular automatically synchronizes data from your UI (view) with your JavaScript objects (model) through two-way data binding. To help you structure your application better and make it easy to test, AngularJS teaches the browser how to do dependency injection and inversion of control.
Angular was created by Google and open-sourced under the MIT license; there are currently more than 1,200 contributors to the project on GitHub, and the repository has more than 40,000 stars and 18,000 forks. The Angular site lists [210 “neat things” built with Angular][9].
-- Martin Heller
![](http://images.techhive.com/images/article/2015/09/bossies-2015-react-100613782-orig.jpg)
### React ###
[React][10] is a JavaScript library for building a UI or view, typically for single-page applications. Note that React does not implement anything having to do with a model or controller. React pages can render on the server or the client; rendering on the server (with Node.js) is typically much faster. People often combine React with AngularJS to create complete applications.
React combines JavaScript and HTML in a single file, optionally a JSX component. React fans like the way JSX components combine views and their related functionality in one file, though that flies in the face of the last decade of Web development trends, which were all about separating the markup and the code. React fans also claim that you cant understand it until youve tried it. Perhaps you should; the React repository on GitHub has 26,000 stars.
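React's central idea, that the view is a pure function of application state, can be sketched without React at all. The string-returning render below is a hypothetical stand-in; real React components return virtual DOM trees that React diffs against the previous render:

```javascript
// React's core idea, sketched without React itself: the view is a pure
// function of state, re-run whenever the state changes.
// (Hypothetical string-returning render; real React returns a virtual
// DOM tree and diffs it against the previous one.)
function Greeting(props) {
  return `<h1>Hello, ${props.name}!</h1>`;
}

function render(state) {
  return Greeting({ name: state.name });
}

let html = render({ name: "world" });
console.log(html);                 // "<h1>Hello, world!</h1>"

html = render({ name: "React" }); // new state, new view -- no manual DOM edits
```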
[React Native][11] implements React with native iOS controls; the React Native command line uses Node and Xcode. [ReactJS.Net][12] integrates React with [ASP.Net][13] and C#. React is available under a BSD license with a patent license grant from Facebook.
-- Martin Heller
![](http://images.techhive.com/images/article/2015/09/bossies-2015-atom-100613768-orig.jpg)
### Atom ###
[Atom][14] is an open source, hackable desktop editor from GitHub, based on Web technologies. Its a full-featured tool with a fuzzy finder; fast projectwide search and replace; multiple cursors and selections; multiple panes; snippets; code folding; and the ability to import TextMate grammars and themes. Out of the box, Atom displayed proper syntax highlighting for every programming language on which I tried it, except for F# and C#; I fixed that easily by loading those packages from within Atom. Not surprisingly, Atom has tight integration with GitHub.
The skeleton of Atom has been separated from the guts and called the Electron shell, providing an open source way to build cross-platform desktop apps with Web technologies. Visual Studio Code is built on the Electron shell, as are a number of proprietary and open source apps, including Slack and Kitematic. Facebook Nuclide adds significant functionality to Atom, including remote development and support for Flow, Hack, and Mercurial.
On the downside, updating Atom packages can become painful, especially if you have many of them installed. The Nuclide packages seem to be the worst offenders -- they not only take a long time to update, they run CPU-intensive Node processes to do so.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-brackets-100613769-orig.jpg)
### Brackets ###
[Brackets][15] is a lightweight editor for Web design that Adobe developed and open-sourced, drawing heavily on other open source projects. The idea is to build better tooling for JavaScript, HTML, CSS, and related open Web technologies. Brackets itself is written in JavaScript, HTML, and CSS, and the developers use Brackets to build Brackets. The editor portion is based on another open source project, CodeMirror, and the Brackets native shell is based on Googles Chromium Embedded Framework.
Brackets features a clean UI, with the ability to open a quick inline editor that displays all of the related CSS for some HTML, or all of the related JavaScript for some scripting, and a live preview for Web pages that you are editing. New in Brackets 1.4 are instant search in files, easier preferences editing, the ability to enable and disable extensions individually, improved text rendering on Macs, and Greek and Cyrillic character support. Last November, Adobe started shipping a preview version of Extract for Brackets, which can pull out design information from Photoshop files, as part of the default download for Brackets.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-typescript-100613786-orig.jpg)
### TypeScript ###
[TypeScript][16] is a portable, duck-typed superset of JavaScript that compiles to plain JavaScript. The goal of the project is to make JavaScript usable for large applications. In pursuit of that goal, TypeScript adds optional types, classes, and modules to JavaScript, and it supports tools for large-scale JavaScript applications. Typing gets rid of some of the nonsensical and potentially buggy default behavior in JavaScript, for example:
    > 1 + "1"
    '11'
“Duck” typing means that the type checking focuses on the shape of the data values; TypeScript describes basic types, interfaces, and classes. While the current version of JavaScript does not support traditional, class-based, object-oriented programming, the ECMAScript 6 specification does. TypeScript compiles ES6 classes into plain, compatible JavaScript, with prototype-based objects, unless you enable ES6 output using the `--target` compiler option.
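To see why that default matters in practice, consider a value that arrives as a string, as form input always does. The TypeScript error quoted in the comments is the kind of diagnostic the compiler produces for such a call (shown here as an assumption about the message text, not verbatim output):

```javascript
// The coercion above in action: JavaScript's `+` silently switches from
// addition to string concatenation when either operand is a string.
const fromForm = "1";            // form inputs arrive as strings
console.log(1 + 1);              // 2
console.log(1 + fromForm);       // "11" -- concatenation, not addition

// With TypeScript annotations the same bug is caught at compile time:
//   function add(a: number, b: number): number { return a + b; }
//   add(1, fromForm);  // error TS2345: Argument of type 'string' is not
//                      // assignable to parameter of type 'number'.
const total = 1 + Number(fromForm);  // explicit conversion restores addition
console.log(total);                  // 2
```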
Visual Studio includes TypeScript in the box, starting with Visual Studio 2013 Update 2. You can also edit TypeScript in Visual Studio Code, WebStorm, Atom, Sublime Text, and Eclipse.
When using an external JavaScript library, or new host API, you'll need to use a declaration file (.d.ts) to describe the shape of the library. You can often find declaration files in the [DefinitelyTyped][17] repository, either by browsing, using the [TSD definition manager][18], or using NuGet.
TypeScripts GitHub repository has more than 6,000 stars.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-swagger-100613785-orig.jpg)
### Swagger ###
[Swagger][19] is a language-agnostic interface to RESTful APIs, with tooling that gives you interactive documentation, client SDK generation, and discoverability. Its one of several recent attempts to codify the description of RESTful APIs, in the spirit of WSDL for XML Web Services (2000) and CORBA for distributed object interfaces (1991).
The tooling makes Swagger especially interesting. [Swagger-UI][20] automatically generates beautiful documentation and a live API sandbox from a Swagger-compliant API. The [Swagger codegen][21] project allows generation of client libraries automatically from a Swagger-compliant server.
[Swagger Editor][22] lets you edit Swagger API specifications in YAML inside your browser and preview documentations in real time. Valid Swagger JSON descriptions can then be generated and used with the full Swagger tooling.
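For a sense of what a Swagger-compliant description looks like, here is a minimal spec for a hypothetical one-endpoint pet API, written as a JavaScript object; Swagger 2.0 JSON of this shape is what Swagger-UI renders documentation from and Swagger codegen consumes:

```javascript
// A minimal, hand-written Swagger 2.0 description for a hypothetical
// pet API with a single GET endpoint.
const spec = {
  swagger: "2.0",
  info: { title: "Pet API", version: "1.0.0" },
  paths: {
    "/pets/{id}": {
      get: {
        summary: "Fetch a pet by ID",
        parameters: [
          { name: "id", in: "path", required: true, type: "integer" },
        ],
        responses: {
          200: { description: "The requested pet" },
          404: { description: "No such pet" },
        },
      },
    },
  },
};

console.log(JSON.stringify(spec, null, 2));
```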
The [Swagger JS][23] library is a fast way to enable a JavaScript client to communicate with a Swagger-enabled server. Additional clients exist for Clojure, Go, Java, .Net, Node.js, Perl, PHP, Python, Ruby, and Scala.
The [Amazon API Gateway][24] is a managed service for API management at scale. It can import Swagger specifications using an open source [Swagger Importer][25] tool.
Swagger and friends use the Apache 2.0 license.
-- Martin Heller
![](http://images.techhive.com/images/article/2015/09/bossies-2015-polymer-100613781-orig.jpg)
### Polymer ###
The [Polymer][26] library is a lightweight, “sugaring” layer on top of the Web components APIs to help in building your own Web components. It adds several features for greater ease in building complex elements, such as creating custom element registration, adding markup to your element, configuring properties on your element, setting the properties with attributes, data binding with mustache syntax, and internal styling of elements.
Polymer also includes libraries of prebuilt elements. The Iron library includes elements for working with layout, user input, selection, and scaffolding apps. The Paper elements implement Google's Material Design. The Gold library includes elements for credit card input fields for e-commerce, the Neon elements implement animations, the Platinum library implements push messages and offline caching, and the Google Web Components library is exactly what it says; it includes wrappers for YouTube, Firebase, Google Docs, Hangouts, Google Maps, and Google Charts.
Polymer Molecules are elements that wrap other JavaScript libraries. The only Molecule currently implemented is for marked, a Markdown library. The Polymer repository on GitHub currently has 12,000 stars. The software is distributed under a BSD-style license.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-ionic-100613775-orig.jpg)
### Ionic ###
The [Ionic][27] framework is a front-end SDK for building hybrid mobile apps, using Angular.js and Cordova, PhoneGap, or Trigger.io. Ionic was designed to be similar in spirit to the Android and iOS SDKs, and to do a minimum of DOM manipulation and use hardware-accelerated transitions to keep the rendering speed high. Ionic is focused mainly on the look and feel and UI interaction of your app.
In addition to the framework, Ionic encompasses an ecosystem of mobile development tools and resources. These include Chrome-based tools, Angular extensions for Cordova capabilities, back-end services, a development server, and a shell View App to enable testers to use your Ionic code on their devices without the need for you to distribute beta apps through the App Store or Google Play.
Appery.io integrated Ionic into its low-code builder in July 2015. Ionics GitHub repository has more than 18,000 stars and more than 3,000 forks. Ionic is distributed under an MIT license and currently runs in UIWebView for iOS 7 and later, and in Android 4.1 and up.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-cordova-100613771-orig.jpg)
### Cordova ###
[Apache Cordova][28] is the open source project spun off when Adobe acquired PhoneGap from Nitobi. Cordova is a set of device APIs, plus some tooling, that allows a mobile app developer to access native device functionality like the camera and accelerometer from JavaScript. When combined with a UI framework like Angular, it allows a smartphone app to be developed with only HTML, CSS, and JavaScript. By using Cordova plug-ins for multiple devices, you can generate hybrid apps that share a large portion of their code but also have access to a wide range of platform capabilities. The HTML5 markup and code runs in a WebView hosted by the Cordova shell.
Cordova is one of the cross-platform mobile app options supported by Visual Studio 2015. Several companies offer online builders for Cordova apps, similar to the Adobe PhoneGap Build service. Online builders save you from having to install and maintain most of the device SDKs on which Cordova relies.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-famous-100613774-orig.jpg)
### Famous Engine ###
The high-performance Famo.us JavaScript framework introduced last year has become the [Famous Engine][29] and [Famous Framework][30]. The Famous Engine runs in a mixed mode, with the DOM and WebGL under a single coordinate system. As before, Famous structures applications in a scene graph hierarchy, but now it produces very little garbage (reducing the garbage collector overhead) and sustains 60FPS animations.
The Famous Physics engine has been refactored to its own, fine-grained module so that you can load only the features you need. Other improvements since last year include streamlined eventing, improved sizing, decoupling the scene graph from the rendering pipeline by using a draw command buffer, and switching to a fully open MIT license.
The new Famous Framework is an alpha-stage developer preview built on the Famous Engine; its goal is creating reusable, composable, and interchangeable UI widgets and applications. Eventually, Famous hopes to replace the jQuery UI widgets with Famous Framework widgets, but while it's promising, the Famous Framework is nowhere near production-ready.
-- Martin Heller
![](http://images.techhive.com/images/article/2015/09/bossies-2015-mongodb-rev-100614248-orig.jpg)
### MongoDB ###
[MongoDB][31] is no stranger to the Bossies or to the ever-growing and ever-competitive NoSQL market. If you still aren't familiar with this very popular technology, here's a brief overview: MongoDB is a cross-platform document-oriented database, favoring JSON-like documents with dynamic schemas that make data integration easier and faster.
MongoDB has attractive features, including but not limited to ad hoc queries, flexible indexing, replication, high availability, automatic sharding, load balancing, and aggregation.
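What a dynamic schema means in practice can be sketched with plain JavaScript objects. This is an illustration of the document model only, not the MongoDB driver API; a real ad hoc query would look like `db.collection('users').find({ age: { $gt: 30 } })`:

```javascript
// What "dynamic schemas" buys you, sketched with plain objects: documents
// in the same collection need not share fields, and an ad hoc query can
// still filter on whatever fields happen to exist.
// (Plain-JS illustration of the document model, not the MongoDB API.)
const users = [
  { name: "Ada",   age: 36, languages: ["Analytical Engine"] },
  { name: "Grace", age: 45, rank: "Rear Admiral" },  // extra field, no migration
  { name: "Linus", age: 21 },                        // missing fields are fine too
];

const over30 = users.filter((u) => u.age > 30).map((u) => u.name);
console.log(over30); // ["Ada", "Grace"]
```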
The big, bold move with [version 3.0 this year][32] was the new WiredTiger storage engine. We can now have document-level locking. This makes “normal” applications a whole lot more scalable and makes MongoDB available to more use cases.
MongoDB has a growing open source ecosystem with such offerings as the [TokuMX engine][33], from the famous MySQL bad boys Percona. The long list of MongoDB customers includes heavy hitters such as Craigslist, eBay, Facebook, Foursquare, Viacom, and the New York Times.
-- Andrew Oliver
![](http://images.techhive.com/images/article/2015/09/bossies-2015-couchbase-100614851-orig.jpg)
### Couchbase ###
[Couchbase][34] is another distributed, document-oriented database that has been making waves in the NoSQL world for quite some time now. Couchbase and MongoDB often compete, but they each have their sweet spots. Couchbase tends to outperform MongoDB when doing more in memory is possible.
Additionally, Couchbases mobile features allow you to disconnect and ship a database in compact format. This allows you to scale down as well as up. This is useful not just for mobile devices but also for specialized applications, like shipping medical records across radio waves in Africa.
This year Couchbase added N1QL, a SQL-based query language that did away with Couchbases biggest obstacle: the requirement for static views. The new release also introduced multidimensional scaling, which allows individual scaling of services such as querying, indexing, and data storage to improve performance, instead of adding an entire, duplicate node.
-- Andrew C. Oliver
![](http://images.techhive.com/images/article/2015/09/bossies-2015-cassandra-100614852-orig.jpg)
### Cassandra ###
[Cassandra][35] is the other white meat of column family databases. HBase might be included with your favorite Hadoop distribution, but Cassandra is the one people deliberately deploy for specialized applications. There are good reasons for this.
Cassandra was designed for high workloads of both writes and reads where millisecond consistency isn't as important as throughput. HBase is optimized for reads and greater write consistency. To a large degree, Cassandra tends to be used for operational systems and HBase more for data warehouse and batch-system-type use cases.
While Cassandra has not received as much attention as other NoSQL databases and slipped into a quiet period a couple years back, it is widely used and deployed, and it's a great fit for time series, product catalog, recommendations, and other applications. If you want to keep a cluster up “no matter what” with multiple masters and multiple data centers, and you need to scale with lots of reads and lots of writes, Cassandra might just be your Huckleberry.
-- Andrew C. Oliver
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-orientdb-100613780-orig.jpg)
### OrientDB ###
[OrientDB][36] is an interesting hybrid in the NoSQL world, combining features from a document database, where individual documents can have multiple fields without necessarily defining a schema, and a graph database, which consists of a set of nodes and edges. At a basic level, OrientDB considers the document as a vertex, and relationships between fields as graph edges. Because the relationships between elements are part of the record, no costly joins are required when querying data.
Like most databases today, OrientDB offers linear scalability via a distributed architecture. Adding capacity is a matter of simply adding more nodes to the cluster. Queries are written in a variant of SQL that is extended to support graph concepts. It's not exactly SQL, but data analysts shouldn't have too much trouble adapting. Language bindings are available for most commonly used languages, such as R, Scala, .Net, and C, and those integrating OrientDB into their applications will find an active user community to get help from.
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-rethinkdb-100613783-orig.jpg)
### RethinkDB ###
[RethinkDB][37] is a scalable, real-time JSON database with the ability to continuously push updated query results to applications that subscribe to changes. There are official RethinkDB drivers for Ruby, Python, and JavaScript/Node.js, and community-supported drivers for more than a dozen other languages, including C#, Go, and PHP.
Its tempting to confuse RethinkDB with real-time sync APIs, such as Firebase and PubNub. RethinkDB can be run as a cloud service like Firebase and PubNub, but you can also install it on your own hardware or Docker containers. RethinkDB does more than synchronize: You can run arbitrary RethinkDB queries, including table joins, subqueries, geospatial queries, and aggregation. Finally, RethinkDB is designed to be accessed from an application server, not a browser.
Where MongoDB requires you to poll the database to see changes, RethinkDB lets you subscribe to a stream of changes to a query result. You can shard and scale RethinkDB easily, unlike MongoDB. Also unlike relational databases, RethinkDB does not give you full ACID support or strong schema enforcement, although it can perform joins.
The RethinkDB repository has 10,000 stars on GitHub, a remarkably high number for a database. It is licensed with the Affero GPL 3.0; the drivers are licensed with Apache 2.0.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-rust-100613784-orig.jpg)
### Rust ###
[Rust][38] is a syntactically C-like systems programming language from Mozilla Research that guarantees memory safety and offers painless concurrency (that is, no data races). It does not have a garbage collector and has minimal runtime overhead. Rust is strongly typed with type inference. This is all promising.
Rust was designed for performance. It doesnt yet demonstrate great performance, however, so now the mantra seems to be that it runs as fast as C++ code that implements all the safety checks built into Rust. Im not sure whether I believe that, as in many cases the strictest safety checks for C/C++ code are done by static and dynamic analysis and testing, which dont add any runtime overhead. Perhaps Rust performance will come with time.
So far, the only tools for Rust are the Cargo package manager and the rustdoc documentation generator, plus a couple of simple Rust plug-ins for programming editors. As far as we have heard, there is no shipping software that was actually built with Rust. Now that Rust has reached the 1.0 milestone, we might expect that to change.
Rust is distributed with a dual Apache 2.0 and MIT license. With 13,000 stars on its GitHub repository, Rust is certainly attracting attention, but when and how it will deliver real benefits remains to be seen.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opencv-100613779-orig.jpg)
### OpenCV ###
[OpenCV][39] (Open Source Computer Vision Library) is a computer vision and machine learning library that contains about 500 algorithms, such as face detection, moving object tracking, image stitching, red-eye removal, machine learning, and eye movement tracking. It runs on Windows, Mac OS X, Linux, Android, and iOS.
OpenCV has official C++, C, Python, Java, and MATLAB interfaces, and wrappers in other languages such as C#, Perl, and Ruby. CUDA and OpenCL interfaces are under active development. OpenCV was originally (1999) an Intel Research project in Russia; from there it moved to the robotics research lab Willow Garage (2008) and finally to [OpenCV.org][39] (2012) with a core team at Itseez, current source on GitHub, and stable snapshots on SourceForge.
Users of OpenCV include Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, and Toyota. There are currently more than 6,000 stars and 5,000 forks on the GitHub repository. The project uses a BSD license.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-llvm-100613777-orig.jpg)
### LLVM ###
The [LLVM Project][40] is a collection of modular and reusable compiler and tool chain technologies, which originated at the University of Illinois. LLVM has grown to include a number of subprojects, several of which are interesting in their own right. LLVM is distributed with Debian, Ubuntu, and Apple Xcode, among others, and its used in commercial products from the likes of Adobe (including After Effects), Apple (including Objective-C and Swift), Cray, Intel, NVIDIA, and Siemens. A few of the open source projects that depend on LLVM are PyPy, Mono, Rubinius, Pure, Emscripten, Rust, and Julia. Microsoft has recently contributed LLILC, a new LLVM-based compiler for .Net, to the .Net Foundation.
The main LLVM subprojects are the core libraries, which provide optimization and code generation; Clang, a C/C++/Objective-C compiler thats about three times faster than GCC; LLDB, a much faster debugger than GDB; libc++, an implementation of the C++ 11 Standard Library; and OpenMP, for parallel programming.
-- Martin Heller
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100613823-orig.jpg)
### Read about more open source winners ###
InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
[Bossie Awards 2015: The best open source applications][41]
[Bossie Awards 2015: The best open source application development tools][42]
[Bossie Awards 2015: The best open source big data tools][43]
[Bossie Awards 2015: The best open source data center and cloud software][44]
[Bossie Awards 2015: The best open source desktop and mobile software][45]
[Bossie Awards 2015: The best open source networking and security software][46]
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2982920/open-source-tools/bossie-awards-2015-the-best-open-source-application-development-tools.html
作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://www.docker.com/
[2]:https://nodejs.org/en/
[3]:https://iojs.org/en/
[4]:https://developers.google.com/v8/?hl=en
[5]:http://www.ancestry.com/
[6]:http://www.ge.com/
[7]:https://www.pearson.com/
[8]:https://angularjs.org/
[9]:https://builtwith.angularjs.org/
[10]:https://facebook.github.io/react/
[11]:https://facebook.github.io/react-native/
[12]:http://reactjs.net/
[13]:http://asp.net/
[14]:https://atom.io/
[15]:http://brackets.io/
[16]:http://www.typescriptlang.org/
[17]:http://definitelytyped.org/
[18]:http://definitelytyped.org/tsd/
[19]:http://swagger.io/
[20]:https://github.com/swagger-api/swagger-ui
[21]:https://github.com/swagger-api/swagger-codegen
[22]:https://github.com/swagger-api/swagger-editor
[23]:https://github.com/swagger-api/swagger-js
[24]:http://aws.amazon.com/cn/api-gateway/
[25]:https://github.com/awslabs/aws-apigateway-importer
[26]:https://www.polymer-project.org/
[27]:http://ionicframework.com/
[28]:https://cordova.apache.org/
[29]:http://famous.org/
[30]:http://famous.org/framework/
[31]:https://www.mongodb.org/
[32]:http://www.infoworld.com/article/2878738/nosql/first-look-mongodb-30-for-mature-audiences.html
[33]:http://www.infoworld.com/article/2929772/nosql/mongodb-crossroads-growth-or-openness.html
[34]:http://www.couchbase.com/nosql-databases/couchbase-server
[35]:https://cassandra.apache.org/
[36]:http://orientdb.com/
[37]:http://rethinkdb.com/
[38]:https://www.rust-lang.org/
[39]:http://opencv.org/
[40]:http://llvm.org/
[41]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[42]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[43]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[44]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[45]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[46]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html


Bossie Awards 2015: The best open source applications
================================================================================
InfoWorld's top picks in open source business applications, enterprise integration, and middleware
![](http://images.techhive.com/images/article/2015/09/bossies-2015-applications-100614669-orig.jpg)
### The best open source applications ###
Applications -- ERP, CRM, HRM, CMS, BPM -- are not only fertile ground for three-letter acronyms, they're the engines behind every modern business. Our top picks in the category include back- and front-office solutions, marketing automation, lightweight middleware, heavyweight middleware, and other tools for moving data around, mixing it together, and magically transforming it into smarter business decisions.
![](http://images.techhive.com/images/article/2015/09/bossies-2015-xtuple-100614684-orig.jpg)
### xTuple ###
Small and midsize companies with light manufacturing or distribution needs have a friend in [xTuple][1]. This modular ERP/CRM combo bundles operations and financial control, product and inventory management, and CRM and sales support. Its relatively simple install lets you deploy all of the modules or only what you need today -- helping trim support costs without sacrificing customization later.
This summers release brought usability improvements to the UI and a generous number of bug fixes. Recent updates also yielded barcode scanning and label printing for mobile warehouse workers, an enhanced workflow module (built with Plv8, a wrapper around Googles V8 JavaScript engine that lets you write stored procedures for PostgreSQL in JavaScript), and quality management tools that are sure to get mileage on shop floors.
The xTuple codebase is JavaScript from stem to stern. The server components can all be installed locally, in xTuples cloud, or deployed as an appliance. A mobile Web client and mobile CRM features augment a good native desktop client.
-- James R. Borck
![](http://images.techhive.com/images/article/2015/09/bossies-2015-odoo-100614678-orig.jpg)
### Odoo ###
[Odoo][2] used to be known as OpenERP. Last year the company raised private capital and broadened its scope. Today Odoo is a one-stop shop for back office and customer-facing applications -- replete with content management, business intelligence, and e-commerce modules.
Odoo 8 fronts accounting, invoicing, project management, resource planning, and customer relationship management tools with a flexible Web interface that can be tailored to your companys workflow. Add-on modules for warehouse management and HR, as well as for live chat and analytics, round out the solution.
This year saw Odoo focused primarily on usability updates. A recently released sales planner helps sales groups track KPIs, and a new tips feature lends in-context help. Odoo 9 is right around the corner with alpha builds showing customer portals, Web form creation tools, mobile and VoIP services, and integration hooks to eBay and Amazon.
Available for Windows and Linux, and as a SaaS offering, Odoo gives small and midsized companies an accessible set of tools to manage virtually every aspect of their business.
-- James R. Borck
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-idempiere-100614673-orig.jpg)
### iDempiere ###
Small and midsize companies have great choices in Odoo and xTuple. Larger manufacturing and distribution companies will need something more. For them, theres [iDempiere][3] -- a well maintained offshoot of ADempiere with OSGi modularity.
iDempiere implements a fully loaded ERP, supply chain, and CRM suite right out of the box. Built with Java, iDempiere supports both PostgreSQL and Oracle Database, and it can be customized extensively through modules built to the OSGi specification. iDempiere is perfectly suited to managing complex business scenarios involving multiple partners, requiring dynamic reporting, or employing point-of-sale and warehouse services.
Being enterprise-ready comes with a price. iDempieres feature-rich tools and complexity impose a steep learning curve and require a commitment to integration support. Of course, those costs are offset by savings from the softwares free GPL2 licensing. iDempieres easy install script, small resource footprint, and clean interface also help alleviate some of the startup pains. Theres even a virtual appliance available on Sourceforge to get you started.
-- James R. Borck
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-suitecrm-100614680-orig.jpg)
### SuiteCRM ###
SugarCRM held the sweet spot in open source CRM since, well, forever. Then last year Sugar announced it would no longer contribute to the open source Community Edition. Into the ensuing vacuum rushed [SuiteCRM][4], a fork of the final Sugar code.
SuiteCRM 7.2 creates an experience on a par with SugarCRM Professionals marketing, sales, and service tools. With add-on modules for workflow, reporting, and security, as well as new innovations like Lucene-driven search, taps for social media, and a beta reveal of new desktop notifications, SuiteCRM is on solid footing.
The Advanced Open Sales module provides a familiar migration path from Sugar, while commercial support is available from the likes of [SalesAgility][5], the company that forked SuiteCRM in the first place. In little more than a year, SuiteCRM rescued the code, rallied an inspired community, and emerged as a new leader in open source CRM. Who needs Sugar?
-- James R. Borck
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-civicrm-100614671-orig.jpg)
### CiviCRM ###
We typically focus attention on CRM vis-à-vis small and midsize business requirements. But nonprofit and advocacy groups need to engage with their “customers” too. Enter [CiviCRM][6].
CiviCRM addresses the needs of nonprofits with tools for fundraising and donation processing, membership management, email tracking, and event planning. Granular access control and security bring role-based permissions to views, keeping paid staff and volunteers partitioned and productive. This year CiviCRM continued to develop with new features like simple A/B testing and monitoring for email campaigns.
CiviCRM deploys as a plug-in to your WordPress, Drupal, or Joomla content management system -- a dead-simple install if you already have one of these systems in place. If you dont, CiviCRM is an excellent reason to deploy the CMS. Its a niche-filling solution that allows nonprofits to start using smarter, tailored tools for managing constituencies, without steep hurdles and training costs.
-- James R. Borck
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-mautic-100614677-orig.jpg)
### Mautic ###
For marketers, the Internet -- Web, email, social, all of it -- is the stuff dreams are made on. [Mautic][7] allows you to create Web and email campaigns that track and nurture customer engagement, then roll all of the data into detailed reports to gain insight into customer needs and wants and how to meet them.
Open source options in marketing automation are few, but Mautics extensibility stands out even against closed solutions like IBMs Silverpop. Mautic even integrates with popular third-party email marketing solutions (MailChimp, Constant Contact) and social media platforms (Facebook, Twitter, Google+, Instagram) with quick-connect widgets.
The developers of Mautic could stand to broaden the features for list segmentation and improve the navigability of their UI. Usability is also hindered by sparse documentation. But if youre willing to rough it out long enough to learn your way, youll find a gem -- and possibly even gold -- in Mautic.
-- James R. Borck
![](http://images.techhive.com/images/article/2015/09/bossies-2015-orangehrm-100614679-orig.jpg)
### OrangeHRM ###
The commercial software market in the human resource management space is rather fragmented, with Talent, HR, and Workforce Management startups all vying for a slice of the pie. It's little wonder the open source world hasn't found much direction either, with the most ambitious HRM solutions often locked inside larger ERP distributions. [OrangeHRM][8] is a standout.
OrangeHRM tackles employee administration from recruitment and applicant tracking to performance reviews, with good audit trails throughout. An employee portal provides self-serve access to personal employment information, time cards, leave requests, and personnel documents, helping reduce demands on HR staff.
OrangeHRM doesn't yet address niche aspects like talent management (social media, collaboration, knowledge banks), but it's remarkably full-featured. Professional and Enterprise options offer more advanced functionality (in areas such as recruitment, training, on/off-boarding, document management, and mobile device access), while community modules are available for the likes of Active Directory/LDAP integration, advanced reporting, and even insurance benefit management.
-- James R. Borck
![](http://images.techhive.com/images/article/2015/09/bossies-2015-libreoffice-100614675-orig.jpg)
### LibreOffice ###
[LibreOffice][9] is the easy choice for best open source office productivity suite. Originally forked from OpenOffice, Libre has been moving at a faster clip than OpenOffice ever since, drawing more developers and producing more new features than its rival.
LibreOffice 5.0, released only last month, offers UX improvements that truly enhance usability (like visual previews of style changes in the sidebar), brings document editing to Android devices (previously a view-only prospect), and finally delivers on a 64-bit Windows codebase.
LibreOffice still lacks a built-in email client and a personal information manager, not to mention the real-time collaborative document editing available in Microsoft Office. But Libre can run off of a USB flash disk for portability, natively supports a greater number of graphic and file formats, and creates hybrid PDFs with embedded ODF files for full-on editing. Libre even imports Apple Pages documents, in addition to opening and saving all Microsoft Office formats.
LibreOffice has done a solid job of tightening its codebase and delivering enhancements at a regular clip. With a new cloud version under development, LibreOffice will soon be more liberating than ever.
-- James R. Borck
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-bonita-100614672-orig.jpg)
### Bonita BPM ###
Open source BPM has become a mature, cost-effective alternative to the top proprietary solutions. Having led the charge since 2009, Bonitasoft continues to raise the bar. The new [Bonita BPM 7][10] release impresses with innovative features that simplify code generation and shorten development cycles for BPM app creation.
Most important to the new version, though, is better abstraction of underlying core business logic from UI and data components, allowing UIs and processes to be developed independently. This new MVC approach reduces downtime for live upgrades (no more recompilation!) and eases application maintenance.
Bonita contains a winning set of connectors to a broad range of enterprise systems (ERP, CRM, databases) as well as to Web services. Complementing its process weaving tools, a new form designer (built on AngularJS/Bootstrap) goes a long way toward improving UI creation for the Web-centric and mobile workforce.
-- James R. Borck
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-camunda-100614670-orig.jpg)
### Camunda BPM ###
Many open source solutions, like Bonita BPM, offer solid, drop-in functionality. Dig into the code base, though, and you may find it's not the cleanest to build upon. Enterprise Java developers who hang out under the hood should check out [Camunda BPM][11].
Forked from Alfresco Activiti (a creation of former Red Hat jBPM developers), Camunda BPM delivers a tight, Java-based BPMN 2.0 engine in support of human workflow activities, case management, and systems process automation that can be embedded in your Java apps or run as a container service in Tomcat. Camunda's ecosystem offers an Eclipse plug-in for process modeling, and the Cockpit dashboard brings real-time monitoring and management of running processes.
The Enterprise version adds WebSphere and WebLogic Server support. Additional incentives for the Enterprise upgrade include Saxon-driven XSLT templating (sidestepping the scripting engine) and add-ons to improve process management and exception handling.
Camunda is a solid BPM engine ready for build-out and one of the first open source process managers to introduce DMN (Decision Model and Notation) support, which helps to simplify complex rules-based modeling alongside BPMN. DMN support is currently at the alpha stage.
-- James R. Borck
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-talend-100614681-orig.jpg)
### Talend Open Studio ###
No open source ETL or EAI solution comes close to [Talend Open Studio][12] in functionality, performance, or support of modern integration trends. This year Talend unleashed Open Studio 6, a new version with a streamlined UI and smarter tooling that brings it more in line with Talend's cloud-based offering.
Using Open Studio you can visually design, test, and debug orchestrations that connect, transform, and synchronize data across a broad range of real-time applications and data resources. Talend's wealth of connectors provides support for most any endpoint -- from flat files to Hadoop to Amazon S3. Packaged editions focus on specific scenarios such as big data integration, ESB, and data integrity monitoring.
New support for Java 8 brings a speed boost. The addition of support for MariaDB and for in-memory processing with MemSQL, as well as updates to the ESB engine, keep Talend in step with the community's needs. Version 6 was a long time coming, but no less welcome for that. Talend Open Studio is still first in managing complex data integration -- in-house, in the cloud, or increasingly, a combination of the two.
-- James R. Borck
![](http://images.techhive.com/images/article/2015/09/bossies-2015-warewolf-100614683-orig.jpg)
### Warewolf ESB ###
Complex integration patterns may demand the strengths of a Talend to get the job done. But for many lightweight microservices, the overhead of a full-fledged enterprise integration solution is extreme overkill.
[Warewolf ESB][13] combines a streamlined .Net-based process engine with visual development tools to provide for dead simple messaging and application payload routing in a native Windows environment. The Warewolf ESB is an “easy service bus,” not an enterprise service bus.
Drag-and-drop tooling in the design studio makes quick work of configuring connections and logic flows. Built-in wizardry handles Web services definitions and database calls, and it can even tap Windows DLLs and the command line directly. Using the visual debugger, you can inspect execution streams (if not yet actually step through them), then package everything for remote deployment.
Warewolf is still a 0.40.5 release and undergoing major code changes. It also lacks native connectors, easy transforms, and any means of scalability management. Be aware that the precompiled install demands collection of some usage statistics (I wish they would stop that). But Warewolf ESB is fast, free, and extensible. It's a quirky, upstart project that offers definite benefits to Windows integration architects.
-- James R. Borck
![](http://images.techhive.com/images/article/2015/09/bossies-2015-knime-100614674-orig.jpg)
### KNIME ###
[KNIME][14] takes a code-free approach to predictive analytics. Using a graphical workbench, you wire together workflows from an abundant library of processing nodes, which handle data access, transformation, analysis, and visualization. With KNIME, you can pull data from databases and big data platforms, run ETL transformations, perform data mining with R, and produce custom reports in the end.
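The wire-nodes-together model is easy to picture as a function pipeline. Here is a minimal pure-Python sketch of the idea; the node names and data are invented for illustration, and KNIME itself is configured graphically, not in code:

```python
# A KNIME-style workflow: data flows through a chain of processing nodes
# (data access -> transformation -> analysis). These nodes are hypothetical.

def read_node(rows):
    """Data access: pretend these rows came from a database."""
    return list(rows)

def transform_node(rows):
    """ETL step: drop incomplete records and normalize the amount field."""
    return [{**r, "amount": float(r["amount"])}
            for r in rows if r.get("amount") is not None]

def analysis_node(rows):
    """Analysis step: aggregate amounts per category."""
    totals = {}
    for r in rows:
        totals[r["category"]] = totals.get(r["category"], 0.0) + r["amount"]
    return totals

# Wire the nodes together, as one would on the graphical workbench.
workflow = [read_node, transform_node, analysis_node]
data = [{"category": "a", "amount": "1.5"}, {"category": "b", "amount": "2"},
        {"category": "a", "amount": "0.5"}, {"category": "b", "amount": None}]
result = data
for node in workflow:
    result = node(result)
print(result)  # {'a': 2.0, 'b': 2.0}
```

In KNIME the same chain would be three nodes on the canvas, with the output port of each wired to the input port of the next.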
The company was busy this year rolling out the KNIME 2.12 update. The new release introduces MongoDB support, XPath nodes with autoquery creation, and a new view controller (based on the D3 JavaScript library) that creates interactive data visualizations on the fly. It also includes additional statistical nodes and a REST interface (KNIME Server edition) that provides services-based access to workflows.
KNIMEs core analytics engine is free open source. The company offers several fee-based extensions for clustering and collaboration. (A portion of your licensing fee actually funds the open source project.) KNIME Server (on-premise or cloud) ups the ante with security, collaboration, and workflow repositories -- all serving to inject analytics more productively throughout your business lines.
-- James R. Borck
![](http://images.techhive.com/images/article/2015/09/bossies-2015-teiid-100614682-orig.jpg)
### Teiid ###
[Teiid][15] is a data virtualization system that allows applications to use data from multiple, heterogeneous data stores. Currently a JBoss project, Teiid is backed by years of development from MetaMatrix and a long history of addressing the data access needs of the largest enterprise environments. I even see [uses for Teiid in Hadoop and big data environments][16].
In essence, Teiid allows you to connect all of your data sources into a “virtual” mega data source. You can define caching semantics, transforms, and other “configuration not code” transforms to load from multiple data sources using plain old SQL, XQuery, or procedural queries.
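The “virtual mega data source” idea can be approximated in miniature with SQLite's ATTACH standing in for Teiid's federation layer. This is only a sketch: the table names are hypothetical, and real Teiid federates genuinely heterogeneous stores (JDBC sources, Web services) rather than two SQLite files.

```python
import sqlite3

# Two separate "sources" exposed behind one SQL interface.
conn = sqlite3.connect(":memory:")                 # the "virtual" layer
conn.execute("ATTACH DATABASE ':memory:' AS erp")  # a second source

conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Acme"), (2, "Globex")])
conn.execute("CREATE TABLE erp.orders (customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO erp.orders VALUES (?, ?)",
                 [(1, 100.0), (1, 50.0), (2, 75.0)])

# One plain-SQL query spanning both "sources", which is the experience
# Teiid provides across heterogeneous back ends.
rows = conn.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN erp.orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)  # [('Acme', 150.0), ('Globex', 75.0)]
```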
Teiid is primarily accessible through JDBC and has built-in support for Web services. Red Hat sells Teiid as [JBoss Data Virtualization][17].
-- Andrew C. Oliver
![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100614676-orig.jpg)
### Read about more open source winners ###
InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
[Bossie Awards 2015: The best open source applications][18]
[Bossie Awards 2015: The best open source application development tools][19]
[Bossie Awards 2015: The best open source big data tools][20]
[Bossie Awards 2015: The best open source data center and cloud software][21]
[Bossie Awards 2015: The best open source desktop and mobile software][22]
[Bossie Awards 2015: The best open source networking and security software][23]
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2982622/open-source-tools/bossie-awards-2015-the-best-open-source-applications.html
作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:http://xtuple.org/
[2]:http://odoo.com/
[3]:http://idempiere.org/
[4]:http://suitecrm.com/
[5]:http://salesagility.com/
[6]:http://civicrm.org/
[7]:https://www.mautic.org/
[8]:http://www.orangehrm.com/
[9]:http://libreoffice.org/
[10]:http://www.bonitasoft.com/
[11]:http://camunda.com/
[12]:http://talend.com/
[13]:http://warewolf.io/
[14]:http://www.knime.org/
[15]:http://teiid.jboss.org/
[16]:http://www.infoworld.com/article/2922180/application-development/database-virtualization-or-i-dont-want-to-do-etl-anymore.html
[17]:http://www.jboss.org/products/datavirt/overview/
[18]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[19]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[20]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[21]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[22]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[23]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html

Bossie Awards 2015: The best open source big data tools
================================================================================
InfoWorld's top picks in distributed data processing, streaming analytics, machine learning, and other corners of large-scale data analytics
![](http://images.techhive.com/images/article/2015/09/bossies-2015-big-data-100613944-orig.jpg)
### The best open source big data tools ###
How many Apache projects can sit on a pile of big data? Fire up your Hadoop cluster, and you might be able to count them. Among this year's Bossies in big data, you'll find the fastest, widest, and deepest newfangled solutions for large-scale SQL, stream processing, sort-of stream processing, and in-memory analytics, not to mention our favorite maturing members of the Hadoop ecosystem. It seems everyone has a nail to drive into MapReduce's coffin.
![](http://images.techhive.com/images/article/2015/09/bossies-2015-spark-100613962-orig.jpg)
### Spark ###
With hundreds of contributors, [Spark][1] is one of the most active and fastest-growing Apache projects, and with heavyweights like IBM throwing their weight behind the project and major corporations bringing applications into large-scale production, the momentum shows no signs of letting up.
The sweet spot for Spark continues to be machine learning. Highlights since last year include the replacement of the SchemaRDD with a DataFrames API, similar to those found in R and Pandas, making data access much simpler than with the raw RDD interface. Also new are ML pipelines for building repeatable machine learning workflows, expanded and optimized support for various storage formats, simpler interfaces to machine learning algorithms, improvements in the display of cluster resource usage, and task tracking.
On by default in Spark 1.5 is the off-heap memory manager, Tungsten, which offers much faster processing by fine-tuning data structure layout in memory. Finally, the new website, [spark-packages.org][2], with more than 100 third-party libraries, adds many useful features from the community.
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-storm-100614149-orig.jpg)
### Storm ###
[Apache Storm][3] is a Clojure-based distributed computation framework primarily for streaming real-time analytics. Storm is based on the [disruptor pattern][4] for low-latency complex event processing created by LMAX. Unlike Spark, Storm can process single events as opposed to “micro-batches,” and it has a lower memory footprint. In my experience, it scales better for streaming, especially when you're mainly streaming to ingest data into other data sources.
Storm's profile has been eclipsed by Spark, but Spark is inappropriate for many streaming applications. Storm is frequently used with Apache Kafka.
-- Andrew C. Oliver
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-h2o-100613950-orig.jpg)
### H2O ###
[H2O][5] is a distributed, in-memory processing engine for machine learning that boasts an impressive array of algorithms. Previously only available for R users, version 3.0 adds Python and Java language bindings, as well as a Spark execution engine for the back end. The best way to view H2O is as a very large memory extension of your R environment. Instead of working directly on large data sets, the R extensions communicate via a REST API with the H2O cluster, where H2O does the heavy lifting.
Several useful R packages such as ddply have been wrapped, allowing you to use them on data sets larger than the amount of RAM on the local machine. You can run H2O on EC2, on a Hadoop/YARN cluster, and on Docker containers. With Sparkling Water (Spark plus H2O) you can access Spark RDDs on the cluster alongside H2O to, for example, process a data frame with Spark before passing it to an H2O machine learning algorithm.
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-apex-100613943-orig.jpg)
### Apex ###
[Apex][6] is an enterprise-grade, big data-in-motion platform that unifies stream processing as well as batch processing. A native YARN application, Apex processes streaming data in a scalable, fault-tolerant manner and provides all the common stream operators out of the box. One of the best things about Apex is that it natively supports the common event processing guarantees (exactly once, at least once, at most once). Formerly a commercial product by DataTorrent, Apex's roots show in the quality of the documentation, examples, code, and design. Devops and application development are cleanly separated, and user code generally doesn't have to be aware that it is running in a streaming cluster.
A related project, [Malhar][7], offers more than 300 commonly used operators and application templates that implement common business logic. The Malhar libraries significantly reduce the time it takes to develop an Apex application, and there are connectors (operators) for storage, file systems, messaging systems, databases, and nearly anything else you might want to connect to from an application. The operators can all be extended or customized to meet an individual business's requirements. All Malhar components are available under the Apache license.
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-druid-100613947-orig.jpg)
### Druid ###
[Druid][8], which moved to a commercially friendly Apache license in February of this year, is best described as a hybrid, “event streams meet OLAP” solution. Originally developed to analyze online events for ad markets, Druid allows users to do arbitrary and interactive exploration of time series data. Some of the key features include low-latency ingest of events, fast aggregations, and approximate and exact calculations.
At the heart of Druid is a custom data store that uses specialized nodes to handle each part of the problem. Real-time ingest is managed by real-time nodes (JVMs) that eventually flush data to historical nodes that are responsible for data that has aged. Broker nodes direct queries in a scatter-gather fashion to both real-time and historical nodes to give the user a complete picture of events. Benchmarked at a sustained 500K events per second and 1 million events per second peak, Druid is ideal as a real-time dashboard for ad-tech, network traffic, and other activity streams.
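The broker's scatter-gather step can be sketched in a few lines of Python. The node contents and hours below are made up, and Druid's actual query interface is JSON over HTTP, so treat this purely as an illustration of the merge:

```python
# A query fans out to real-time and historical nodes (scatter), and the
# partial per-hour counts come back and are merged into one answer (gather).
realtime_node = {"2015-09-21T10:00": 120, "2015-09-21T11:00": 90}
historical_nodes = [
    {"2015-09-20T10:00": 300, "2015-09-20T11:00": 280},
    {"2015-09-19T10:00": 150},
]

def scatter_gather(query_hours, nodes):
    merged = {}
    for node in nodes:                    # scatter the query to every node
        for hour, count in node.items():  # gather the partial results
            if hour in query_hours:
                merged[hour] = merged.get(hour, 0) + count
    return merged

result = scatter_gather({"2015-09-21T10:00", "2015-09-20T10:00"},
                        [realtime_node] + historical_nodes)
print(result)  # {'2015-09-21T10:00': 120, '2015-09-20T10:00': 300}
```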
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-flink-100613949-orig.jpg)
### Flink ###
At its core, [Flink][9] is a data flow engine for event streams. Although superficially similar to Spark, Flink takes a different approach to in-memory processing. First, Flink was designed from the start as a stream processor. Batch is simply a special case of a stream with a beginning and an end, and Flink offers APIs for dealing with each case, the DataSet API (batch) and the DataStream API. Developers coming from the MapReduce world should feel right at home working with the DataSet API, and porting applications to Flink should be straightforward. In many ways Flink mirrors the simplicity and consistency that helped make Spark so popular. Like Spark, Flink is written in Scala.
The developers of Flink clearly thought out usage and operations too: Flink works natively with YARN and Tez, and it uses an off-heap memory management scheme to work around some of the JVM limitations. A peek at the Flink JIRA site shows a healthy pace of development, and you'll find an active community on the mailing lists and on StackOverflow as well.
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-elastic-100613948-orig.jpg)
### Elasticsearch ###
[Elasticsearch][10] is a distributed document search server based on [Apache Lucene][11]. At its heart, Elasticsearch builds indices on JSON-formatted documents in nearly real time, enabling fast, full-text, schema-free queries. Combined with the open source Kibana dashboard, you can create impressive visualizations of your real-time data in a simple point-and-click fashion.
Elasticsearch is easy to set up and easy to scale, automatically making use of new hardware by rebalancing shards as required. The query syntax isn't at all SQL-like, but it is intuitive enough for anyone familiar with JSON. Most users won't be interacting at that level anyway. Developers can use the native JSON-over-HTTP interface or one of the several language bindings available, including Ruby, Python, PHP, Perl, .Net, Java, and JavaScript.
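To give a flavor of that JSON interface, here is the kind of query body a client builds and POSTs to a search endpoint such as `/articles/_search`. The index name, field, and values are hypothetical; the `match` clause is the basic full-text query:

```python
import json

# Sketch of an Elasticsearch full-text query body, sent as JSON over HTTP.
query = {
    "query": {"match": {"title": "open source"}},  # full-text match on a field
    "size": 10,                                    # return at most 10 hits
}
body = json.dumps(query)
# A client would POST this body to http://localhost:9200/articles/_search
print(body)
```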
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-slamdata-100613961-orig.jpg)
### SlamData ###
If you are seeking a user-friendly tool to visualize and understand your newfangled NoSQL data, take a look at [SlamData][12]. SlamData allows you to query nested JSON data using familiar SQL syntax, without relocation or transformation.
One of the technology's main features is its connectors. From MongoDB to HBase, Cassandra, and Apache Spark, SlamData taps external data sources with the industry's most advanced “pushdown” processing technology, performing transformations and analytics close to the data.
While you might ask, “Wouldn't I be better off building a data lake or data warehouse?” consider the companies that were born in NoSQL. Skipping the ETL and simply connecting a visualization tool to a replica offers distinct advantages -- not only in terms of how up-to-date the data is, but in how many moving parts you have to maintain.
-- Andrew C. Oliver
![](http://images.techhive.com/images/article/2015/09/bossies-2015-drill-100613946-orig.jpg)
### Drill ###
[Drill][13] is a distributed system for interactive analysis of large-scale data sets, inspired by [Google's Dremel][14]. Designed for low-latency analysis of nested data, Drill has a stated design goal of scaling to 10,000 servers and querying petabytes of data and trillions of records.
Nested data can be obtained from a variety of data sources (such as HDFS, HBase, Amazon S3, and Azure Blobs) and in multiple formats (including JSON, Avro, and protocol buffers), and you don't need to specify a schema up front (“schema on read”).
Drill uses ANSI SQL:2003 for its query language, so there's no learning curve for data engineers to overcome, and it allows you to join data across multiple data sources (for example, joining a table in HBase with logs in HDFS). Finally, Drill offers ODBC and JDBC interfaces to connect your favorite BI tools.
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-hbase-100613951-orig.jpg)
### HBase ###
[HBase][15] reached the 1.x milestone this year and continues to improve. Like other nonrelational distributed datastores, HBase excels at returning search results very quickly and for this reason is often used to back search engines, such as the ones at eBay, Bloomberg, and Yahoo. As a stable and mature software offering, HBase does not get fresh features as frequently as newer projects, but that's often good for enterprises.
Recent improvements include the addition of high-availability region servers, support for rolling upgrades, and YARN compatibility. Features in the works include scanner updates that promise to improve performance and the ability to use HBase as a persistent store for streaming applications like Storm and Spark. HBase can also be queried SQL style via the [Phoenix][16] project, now out of incubation, whose SQL compatibility is steadily improving. Phoenix recently added a Spark connector and the ability to add custom user-defined functions.
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-hive-100613952-orig.jpg)
### Hive ###
Although stable and mature for several years, [Hive][17] reached the 1.0 version milestone this year and continues to be the best solution when really heavy SQL lifting (many petabytes) is required. The community continues to focus on improving the speed, scale, and SQL compliance of Hive. Now at version 1.2, Hive has seen significant improvements since its last Bossie, including full ACID semantics, cross-data center replication, and a cost-based optimizer.
Hive 1.2 also brought improved SQL compliance, making it easier for organizations to use it to off-load ETL jobs from their existing data warehouses. In the pipeline are speed improvements with an in-memory cache called LLAP (which, from the looks of the JIRAs, is about ready for release), the integration of Spark machine learning libraries, and improved SQL constructs like nonequi joins, interval types, and subqueries.
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-kylin-100613955-orig.jpg)
### Kylin ###
[Kylin][18] is an application developed at eBay for processing very large OLAP cubes via ANSI SQL, a task familiar to most data analysts. If you think about how many items are on sale now and in the past at eBay, and all the ways eBay might want to slice and dice data related to those items, you will begin to understand the types of queries Kylin was designed for.
Like most other analysis applications, Kylin supports multiple access methods, including JDBC, ODBC, and a REST API for programmatic access. Although Kylin is still in incubation at Apache, and the community nascent, the project is well documented and the developers are responsive and eager to understand customer use cases. Getting up and running with a starter cube was a snap. If you have a need for analysis of extremely large cubes, you should take a look at Kylin.
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-cdap-100613945-orig.jpg)
### CDAP ###
[CDAP][19] (Cask Data Application Platform) is a framework running on top of Hadoop that abstracts away the complexity of building and running big data applications. CDAP is organized around two core abstractions: data and applications. CDAP Datasets are logical representations of data that behave uniformly regardless of the underlying storage layer; CDAP Streams provide similar support for real-time data.
Applications use CDAP services for things such as distributed transactions and service discovery to shield developers from the low-level details of Hadoop. CDAP comes with a data ingestion framework and a few prebuilt applications and “packs” for common tasks like ETL and website analytics, along with support for testing, debugging, and security. Like most formerly commercial (closed source) projects, CDAP benefits from good documentation, tutorials, and examples.
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-ranger-100613960-orig.jpg)
### Ranger ###
Security has long been a sore spot with Hadoop. It isn't (as is frequently reported) that Hadoop is “insecure” or “has no security.” Rather, the truth was that Hadoop had too much security, though not in a good way: every component had its own authentication and authorization implementation that wasn't integrated with the rest of the platform.
Hortonworks acquired XA/Secure in May, and [a few renames later][20] we have [Ranger][21]. Ranger pulls many of the key components of Hadoop together under one security umbrella, allowing you to set a “policy” that ties your Hadoop security to your existing ACL-based Active Directory authentication and authorization. Ranger gives you one place to manage Hadoop access control, one place to audit, one place to manage the encryption, and a pretty Web page to do it from.
-- Andrew C. Oliver
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-mesos-100613957-orig.jpg)
### Mesos ###
[Mesos][22], developed at the same U.C. Berkeley [AMPLab][23] that brought us Spark, takes a different approach to managing cluster computing resources. The best way to describe Mesos is as a distributed microkernel for the data center. Mesos provides a minimal set of operating system mechanisms like inter-process communications, disk access, and memory to higher-level applications, called “frameworks” in Mesos-speak, that run in what is analogous to user space. Popular frameworks for Mesos include [Chronos][24] and [Aurora][25] for building ETL pipelines and job scheduling, and a few big data processing applications including Hadoop, Storm, and Spark, which have been ported to run as Mesos frameworks.
Mesos applications (frameworks) negotiate for cluster resources using a two-level scheduling mechanism, so writing a Mesos application is unlikely to feel like a familiar experience to most developers. Although Mesos is a young project, momentum is growing, and with Spark being an exceptionally good fit for Mesos, we're likely to see more from Mesos in the coming years.
-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-nifi-100613958-orig.jpg)
### NiFi ###
[NiFi][26] is an incubating Apache project to automate the flow of data between systems. It doesn't operate in the traditional space that Kafka and Storm do, but rather in the space between external devices and the data center. NiFi was originally developed by the NSA and donated to the open source community in 2014. It has a strong community of developers and users within various government agencies.
NiFi isn't like anything else in the current big data ecosystem. It is much closer to a traditional EAI (enterprise application integration) tool than a data processing platform, although simple transformations are possible. One interesting feature is the ability to debug and change data flows in real time. Although not quite a REPL (read, eval, print loop), this kind of paradigm dramatically shortens the development cycle by not requiring a compile-deploy-test-debug workflow. Other interesting features include a strong “chain of custody,” where each piece of data can be tracked from beginning to end, along with any changes made along the way. You can also prioritize data flows so that time-sensitive information can be received as quickly as possible, bypassing less time-critical events.
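That prioritization behavior amounts to a priority queue between processors. A toy sketch with Python's `heapq`, using invented event names and priorities rather than anything from NiFi's actual configuration:

```python
import heapq

# Time-sensitive events jump the queue ahead of bulk traffic.
# Lower number = higher priority; names and priorities are made up.
flow_queue = []
for priority, event in [(5, "bulk-log-batch"), (1, "alert"),
                        (3, "metrics"), (1, "heartbeat")]:
    heapq.heappush(flow_queue, (priority, event))

# Drain the queue: urgent items come out first, bulk traffic last.
drained = [heapq.heappop(flow_queue)[1] for _ in range(len(flow_queue))]
print(drained)  # ['alert', 'heartbeat', 'metrics', 'bulk-log-batch']
```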
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-kafka-100613954-orig.jpg)
### Kafka ###
[Kafka][27] has emerged as the de facto standard for distributed publish-subscribe messaging in the big data space. Its design allows brokers to support thousands of clients at high rates of sustained message throughput, while maintaining durability through a distributed commit log. Kafka does this by maintaining what is essentially a single append-only log per topic partition, replicated across brokers. Because the log is replicated across machines, Kafka is protected against the failure of any single broker.
When consumers want to read messages, Kafka looks up their offset in the central log and sends them. Because messages are not deleted immediately, adding consumers or replaying historical messages does not impose additional costs. Kafka has been benchmarked at 2 million writes per second by its developers at LinkedIn. Despite Kafka's sub-1.0 version number, it is a mature and stable product, in use in some of the largest clusters in the world.
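The offset mechanics described above are simple enough to sketch in memory. This toy single-partition log is not Kafka's API, only an illustration of why replay and extra consumers are cheap:

```python
# The broker appends every message to an append-only log and never deletes
# on read; each consumer merely tracks its own offset into that log.

class CommitLog:
    def __init__(self):
        self.messages = []                   # append-only log, one partition

    def append(self, msg):
        self.messages.append(msg)
        return len(self.messages) - 1        # offset of the new message

    def read(self, offset, max_count=10):
        return self.messages[offset:offset + max_count]

log = CommitLog()
for msg in ("m0", "m1", "m2"):
    log.append(msg)

offset = 0                     # one consumer's position
batch = log.read(offset)
offset += len(batch)           # the consumer commits its new offset
assert batch == ["m0", "m1", "m2"]

replay = log.read(0)           # a new consumer replays from the start
assert replay == batch         # no extra cost: nothing was deleted on read
```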
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opentsdb-100613959-orig.jpg)
### OpenTSDB ###
[OpenTSDB][28] is a time series database built on HBase. It was designed specifically for analyzing data collected from applications, mobile devices, networking equipment, and other hardware devices. The custom HBase schema used to store the time series data has been designed for fast aggregations and minimal storage requirements.
By using HBase as the underlying storage layer, OpenTSDB gains the distributed and reliable characteristics of that system. Users don't interact with HBase directly; instead events are written to the system via the time series daemon (TSD), which can be scaled out as required to handle high-throughput situations. There are a number of prebuilt connectors to publish data to OpenTSDB, and clients to read data from Ruby, Python, and other languages. OpenTSDB isn't strong on creating interactive graphics, but several third-party tools fill that gap. If you are already using HBase and want a simple way to store event data, OpenTSDB might be just the thing.
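As a concrete sketch, the TSD's telnet-style `put` protocol takes one data point per line (`put <metric> <timestamp> <value> <tags>`); the helper and the metric/tag names below are illustrative:

```python
def tsdb_put_line(metric, timestamp, value, **tags):
    """Format one data point for the TSD's line-based 'put' command."""
    tag_str = " ".join(f"{k}={v}" for k, v in sorted(tags.items()))
    return f"put {metric} {timestamp} {value} {tag_str}"

line = tsdb_put_line("sys.cpu.user", 1356998400, 42.5, host="web01", cpu=0)
```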
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-jupyter-100613953-orig.jpg)
### Jupyter ###
Everybody's favorite notebook application went generic. [Jupyter][29] is “the language-agnostic parts of IPython” spun out into an independent package. Although Jupyter itself is written in Python, the system is modular. Now you can have an IPython-like interface, along with notebooks for sharing code, documentation, and data visualizations, for nearly any language you like.
At least [50 language][30] kernels are already supported, including LISP, R, Ruby, F#, Perl, and Scala. In fact, even IPython itself is simply a Python module for Jupyter. Communication with the language kernel is via a REPL (read, eval, print loop) protocol, similar to [nREPL][31] or [Slime][32]. It is nice to see such a useful piece of software receiving significant [nonprofit funding][33] to further its development, such as parallel execution and multi-user notebooks. Behold, open source at its best.
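The read-eval-print cycle that a kernel wraps in protocol messages can be reduced to a toy sketch (real kernels speak ZeroMQ and isolate user code; this does neither, and `repl_step` is an invented name):

```python
def repl_step(expr, env):
    """One read-eval-print cycle: evaluate an expression string
    in an environment and return its printed representation."""
    result = eval(expr, env)  # toy only; a real kernel sandboxes this
    return repr(result)

env = {}
printed = repl_step("1 + 2", env)
```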
-- Steven Nunez
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-zeppelin-100613963-orig.jpg)
### Zeppelin ###
While still in incubation, [Apache Zeppelin][34] is nevertheless stirring the data analytics and visualization pot. The Web-based notebook enables users to ingest, discover, analyze, and visualize their data. The notebook also allows you to collaborate with others to make data-driven, interactive documents incorporating a growing number of programming languages.
This technology also boasts an integration with Spark and an interpreter concept allowing any language or data processing back end to be plugged into Zeppelin. Currently Zeppelin supports interpreters such as Scala, Python, SparkSQL, Hive, Markdown, and Shell.
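The interpreter concept is essentially pluggable dispatch: a paragraph names a back end, and the notebook routes the body to it. A minimal sketch (this registry and the `%upper` interpreter are invented for illustration, not Zeppelin's API):

```python
interpreters = {}

def register(name, fn):
    interpreters[name] = fn

def run_paragraph(text):
    # A leading %name selects the back end, as in a Zeppelin paragraph.
    if text.startswith("%"):
        name, _, body = text.partition("\n")
        return interpreters[name[1:]](body.strip())
    return interpreters["default"](text.strip())

register("default", lambda s: s)
register("upper", lambda s: s.upper())

out = run_paragraph("%upper\nhello zeppelin")
```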
Zeppelin is still immature. I wanted to put a demo up but couldn't find an easy way to disable “shell” as an execution option (among other things). However, it already looks better visually than IPython Notebook, which is the popular incumbent in this space. If you don't want to spring for DataBricks Cloud or need something open source and extensible, this is the most promising distributed computing notebook around -- especially if you're a Sparky type.
-- Andrew C. Oliver
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100613956-orig.jpg)
### Read about more open source winners ###
InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
[Bossie Awards 2015: The best open source applications][35]
[Bossie Awards 2015: The best open source application development tools][36]
[Bossie Awards 2015: The best open source big data tools][37]
[Bossie Awards 2015: The best open source data center and cloud software][38]
[Bossie Awards 2015: The best open source desktop and mobile software][39]
[Bossie Awards 2015: The best open source networking and security software][40]
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2982429/open-source-tools/bossie-awards-2015-the-best-open-source-big-data-tools.html
作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://spark.apache.org/
[2]:http://spark-packages.org/
[3]:https://storm.apache.org/
[4]:https://lmax-exchange.github.io/disruptor/
[5]:http://h2o.ai/product/
[6]:https://www.datatorrent.com/apex/
[7]:https://github.com/DataTorrent/Malhar
[8]:https://druid.io/
[9]:https://flink.apache.org/
[10]:https://www.elastic.co/products/elasticsearch
[11]:http://lucene.apache.org/
[12]:http://teiid.jboss.org/
[13]:https://drill.apache.org/
[14]:http://research.google.com/pubs/pub36632.html
[15]:http://hbase.apache.org/
[16]:http://phoenix.apache.org/
[17]:https://hive.apache.org/
[18]:https://kylin.incubator.apache.org/
[19]:http://cdap.io/
[20]:http://www.infoworld.com/article/2973381/application-development/apache-ranger-chuck-norris-hadoop-security.html
[21]:https://ranger.incubator.apache.org/
[22]:http://mesos.apache.org/
[23]:https://amplab.cs.berkeley.edu/
[24]:http://nerds.airbnb.com/introducing-chronos/
[25]:http://aurora.apache.org/
[26]:http://nifi.apache.org/
[27]:https://kafka.apache.org/
[28]:http://opentsdb.net/
[29]:http://jupyter.org/
[30]:https://github.com/ipython/ipython/wiki/IPython-kernels-for-other-languages
[31]:https://github.com/clojure/tools.nrepl
[32]:https://github.com/slime/slime
[33]:http://blog.jupyter.org/2015/07/07/jupyter-funding-2015/
[34]:https://zeppelin.incubator.apache.org/
[35]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[36]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[37]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[38]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[39]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[40]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
@ -1,261 +0,0 @@
Bossie Awards 2015: The best open source data center and cloud software
================================================================================
InfoWorld's top picks of the year in open source platforms, infrastructure, management, and orchestration software
![](http://images.techhive.com/images/article/2015/09/bossies-2015-data-center-cloud-100613986-orig.jpg)
### The best open source data center and cloud software ###
You might have heard about this new thing called Docker containers. Developers love them because you can build them with a script, add services in layers, and push them right from your MacBook Pro to a server for testing. It works because they're superlightweight, unlike those now-archaic virtual machines. Containers -- and other lightweight approaches to deliver services -- are changing the shape of operating systems, applications, and the tools to manage them. Our Bossie winners in data center and cloud are leading the charge.
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-docker-100613987-orig.jpg)
### Docker Machine, Compose, and Swarm ###
Docker's open source container technology has been adopted by the major public clouds and is being built into the next version of Windows Server. Allowing developers and operations teams to separate applications from infrastructure, Docker is a powerful data center automation tool.
However, containers are only part of the Docker story. Docker also provides a series of tools that allow you to use the Docker API to automate the entire container lifecycle, as well as handling application design and orchestration.
[Machine][1] allows you to automate the provisioning of Docker containers. Starting with a command line, you can use a single line of code to target one or more hosts, deploy the Docker engine, and even join it to a Swarm cluster. There's support for most hypervisors and cloud platforms -- all you need are your access credentials.
[Swarm][2] handles clustering and scheduling, and it can be integrated with Mesos for more advanced scheduling capabilities. You can use Swarm to build a pool of container hosts, allowing your apps to scale out as demand increases. Applications and all of their dependencies can be defined with [Compose][3], which lets you link containers together into a distributed application and launch them as a group. Compose descriptions work across platforms, so you can take a developer configuration and quickly deploy in production.
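A Compose description of the kind mentioned above might look like this minimal sketch (2015-era v1 file format; the service names, build context, ports, and image are illustrative):

```yaml
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - redis
redis:
  image: redis
```

Running `docker-compose up` against such a file launches the linked containers as a group.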
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-coreos-rkt-100613985-orig.jpg)
### CoreOS and Rkt ###
A thin, lightweight server OS, [CoreOS][4] is based on Google's Chromium OS. Instead of using a package manager to install functions, it's designed to be used with Linux containers. By using containers to extend a thin core, CoreOS allows you to quickly deploy applications, working well on cloud infrastructures.
CoreOS's container management tooling, fleet, is designed to treat a cluster of CoreOS servers as a single unit, with tools for managing high availability and for deploying containers to the cluster based on resource availability. A cross-cluster key/value store, etcd, handles device management and supports service discovery. If a node fails, etcd can quickly restore state on a new replica, giving you a distributed configuration management platform that's linked to CoreOS's automated update service.
While CoreOS is perhaps best known for its Docker support, the CoreOS team is developing its own container runtime, rkt, with its own container format, the App Container Image. Also compatible with Docker containers, rkt has a modular architecture that allows different containerization systems (even hardware virtualization, in a proof of concept from Intel) to be plugged in. However, rkt is still in the early stages of development, so it isn't quite production-ready.
-- Simon Bisson
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-rancheros-100613997-orig.jpg)
### RancherOS ###
As we abstract more and more services away from the underlying operating system using containers, we can start thinking about what tomorrow's operating system will look like. Similar to our applications, it's going to be a modular set of services running on a thin kernel, self-configuring to offer only the services our applications need.
[RancherOS][5] is a glimpse of what that OS might look like. Blending the Linux kernel with Docker, RancherOS is a minimal OS suitable for hosting container-based applications in cloud infrastructures. Instead of using standard Linux packaging techniques, RancherOS leverages Docker to host Linux user-space services and applications in separate container layers. A low-level Docker instance is first to boot, hosting system services in their own containers. Users' applications run in a higher-level Docker instance, separate from the system containers. If one of your containers crashes, the host keeps running.
RancherOS is only 20MB in size, so it's easy to replicate across a data center. It's also designed to be managed using automation tools, not manually, with API-level access that works with Docker's management tools as well as with Rancher Labs' own cloud infrastructure and management tools.
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-kubernetes-100613991-orig.jpg)
### Kubernetes ###
Google's [Kubernetes][6] container orchestration system is designed to manage and run applications built in Docker and Rocket containers. Focused on managing microservice applications, Kubernetes lets you distribute your containers across a cluster of hosts, while handling scaling and ensuring managed services run reliably.
With containers providing an application abstraction layer, Kubernetes is an application-centric management service that supports many modern development paradigms, with a focus on user intent. That means you launch applications, and Kubernetes will manage the containers to run within the parameters you set, using the Kubernetes scheduler to make sure it gets the resources it needs. Containers are grouped into pods and managed by a replication engine that can recover failed containers or add more pods as applications scale.
Kubernetes powers Google's own Container Engine, and it runs on a range of other cloud and data center services, including AWS and Azure, as well as vSphere and Mesos. Containers can be either loosely or tightly coupled, so applications not designed for cloud PaaS operations can be migrated to the cloud as a tightly coupled set of containers. Kubernetes also supports rapid deployment of applications to a cluster, giving you an endpoint for a continuous delivery process.
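A replication-controller manifest of the sort Kubernetes manages might look like this minimal sketch (v1 API; the name, label, image, and replica count are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
```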
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-mesos-100613993-orig.jpg)
### Mesos ###
Turning a data center into a private or public cloud requires more than a hypervisor. It requires a new operating layer that can manage the data center resources as if they were a single computer, handling resources and scheduling. Described as a “distributed systems kernel,” [Apache Mesos][7] allows you to manage thousands of servers, using containers to host applications and APIs to support parallel application development.
At the heart of Mesos is a set of daemons that expose resources to a central scheduler. Tasks are distributed across nodes, taking advantage of available CPU and memory. One key approach is the ability for applications to reject offered resources if they don't meet requirements. It's an approach that works well for big data applications, and you can use Mesos to run Hadoop and Cassandra distributed databases, as well as Apache's own Spark data processing engine. There's also support for the Jenkins continuous integration server, allowing you to run build and test workers in parallel on a cluster of servers, dynamically adjusting the tasks depending on workload.
Designed to run on Linux and Mac OS X, Mesos has also recently been ported to Windows to support the development of scalable parallel applications on Azure.
-- Simon Bisson
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-smartos-100614849-orig.jpg)
### SmartOS and SmartDataCenter ###
Joyent's [SmartDataCenter][8] is the software that runs its public cloud, adding a management platform on top of its [SmartOS][9] thin server OS. A descendant of OpenSolaris that combines Zones containers and the KVM hypervisor, SmartOS is an in-memory operating system, quick to boot from a USB stick and run on bare-metal servers.
Using SmartOS, you can quickly deploy a set of lightweight servers that can be programmatically managed via a set of JSON APIs, with functionality delivered via virtual machines, downloaded by built-in image management tools. Through the use of VMs, all userland operations are isolated from the underlying OS, reducing the security exposure of both the host and guests.
SmartDataCenter runs on SmartOS servers, with one server running as a dedicated management node, and the rest of a cluster operating as compute nodes. You can get started with a Cloud On A Laptop build (available as a VMware virtual appliance) that lets you experiment with the management server. In a live data center, you'll deploy SmartOS on your servers, using ZFS to handle storage -- which includes your local image library. Services are deployed as images, with components stored in an object repository.
The combination of SmartDataCenter and SmartOS builds on the experience of Joyent's public cloud, giving you a tried and tested set of tools that can help you bootstrap your own cloud data center. It's an infrastructure focused on virtual machines today, but laying the groundwork for tomorrow. A related Joyent project, [sdc-docker][10], exposes an entire SmartDataCenter cluster as a single Docker host, driven by native Docker commands.
-- Simon Bisson
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-sensu-100614850-orig.jpg)
### Sensu ###
Managing large-scale data centers isn't about working with server GUIs, it's about automating scripts based on information from monitoring tools and services, routing information from sensors and logs, and then delivering actions to applications. One tool that's beginning to offer this functionality is [Sensu][11], often described as a “monitoring router.”
Scripts running across your data center deliver information to Sensu, which then routes it to the appropriate handler, using a publish-and-subscribe architecture based on RabbitMQ. Servers can be distributed, delivering published check results to handler code. You might see results in email, or in a Slack room, or in Sensus own dashboards. Message formats are defined in JSON files, or mutators used to format data on the fly, and messages can be filtered to one or more event handlers.
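The check results flowing through that pipeline are plain JSON. A small helper following Sensu's check-result convention (the field set is trimmed and the check name is made up):

```python
import json

def check_result(name, output, status):
    """Sensu-style check result; status codes: 0=OK, 1=warning, 2=critical."""
    return json.dumps({"name": name, "output": output, "status": status})

msg = check_result("check_disk", "disk usage 91%", 2)
```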
Sensu is still a relatively young tool, but it's one that shows a lot of promise. If you're going to automate your data center, you're going to need a tool like this not only to show you what's happening, but to deliver that information where it's most needed. A commercial option adds support for integration with third-party applications, but much of what you need to manage a data center is in the open source release.
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-prometheus-100613996-orig.jpg)
### Prometheus ###
Managing a modern data center is a complex task. Racks of servers need to be treated like cattle rather than pets, and you need a monitoring system designed to handle hundreds and thousands of nodes. Monitoring applications presents special challenges, and that's where [Prometheus][12] comes into play. A service monitoring system designed to deliver alerts to operators, Prometheus can run on everything from a single laptop to a highly available cluster of monitoring servers.
Time series data is captured and stored, then compared against patterns to identify faults and problems. You'll need to expose data on HTTP endpoints, using a YAML file to configure the server. A browser-based reporting tool handles displaying data, with an expression console where you can experiment with queries. Dashboards can be created with a GUI builder, or written using a series of templates, letting you deliver application consoles that can be managed using version control systems such as Git.
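Such a YAML server configuration might look like this minimal sketch (the job name and target are illustrative, and exact keys vary between Prometheus releases):

```yaml
scrape_configs:
  - job_name: 'node'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:9100']
```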
Captured data can be managed using expressions, which make it easy to aggregate data from several sources -- for example, letting you bring performance data from a series of Web endpoints into one store. An experimental alert manager module delivers alerts to common collaboration and devops tools, including Slack and PagerDuty. Official client libraries for common languages like Go and Java mean it's easy to add Prometheus support to your applications and services, while third-party options extend Prometheus to Node.js and .Net.
-- Simon Bisson
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-elk-100613988-orig.jpg)
### Elasticsearch, Logstash, and Kibana ###
Running a modern data center generates a lot of data, and it requires tools to get information out of that data. That's where the combination of Elasticsearch, Logstash, and Kibana, often referred to as the ELK stack, comes into play.
Designed to handle scalable search across a mix of content types, including structured and unstructured documents, [Elasticsearch][13] builds on Apache's Lucene information retrieval tools, with a RESTful JSON API. It's used to provide search for sites like Wikipedia and GitHub, using a distributed index with automated load balancing and routing.
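That RESTful JSON API accepts queries as JSON documents; the body below is a minimal sketch of what a client would POST to an index's `_search` endpoint (the field name and search terms are illustrative):

```python
import json

# Match documents whose "title" field contains the given terms.
query = {"query": {"match": {"title": "distributed search"}}}
body = json.dumps(query)
```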
Under the fabric of a modern cloud is a physical array of servers, running as VM hosts. Monitoring many thousands of servers needs centralized logs. [Logstash][14] harvests and filters the logs generated by those servers (and by the applications running on them), using a forwarder on each physical and virtual machine. Logstash-formatted data is then delivered to Elasticsearch, giving you a search index that can be quickly scaled as you add more servers.
At a higher level, [Kibana][15] adds a visualization layer to Elasticsearch, providing a Web dashboard for exploring and analyzing the data. Dashboards can be created around custom searches and shared with your team, providing a quick, easy-to-digest devops information feed.
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-ansible-100613984-orig.jpg)
### Ansible ###
Managing server configuration is a key element of any devops approach to managing a modern data center or a cloud infrastructure. Configuration management tooling that takes a desired-state approach simplifies systems management at cloud scale, using server and application descriptions to handle server and application deployment.
[Ansible][16] offers a minimal management service, using SSH to manage Unix nodes and PowerShell to work with Windows servers, with no need to deploy agents. An Ansible Playbook describes the state of a server or service in YAML, deploying Ansible modules to servers that handle configuration and removing them once the service is running. You can use Playbooks to orchestrate tasks -- for example, deploying several Web endpoints with a single script.
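A Playbook of the kind described might look like this minimal sketch in YAML (the host group, package, and service names are illustrative; 2015-era key=value module syntax):

```yaml
- hosts: webservers
  tasks:
    - name: ensure nginx is installed
      apt: name=nginx state=present
    - name: ensure nginx is running
      service: name=nginx state=started
```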
It's possible to make module creation and Playbook delivery part of a continuous delivery process, using build tools to deliver configurations and automate deployment. Ansible can pull in information from cloud service providers, simplifying management of virtual machines and networks. Monitoring tools in Ansible are able to trigger additional deployments automatically, helping manage and control cloud services, as well as working to manage resources used by large-scale data platforms like Hadoop.
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-jenkins-100613990-orig.jpg)
### Jenkins ###
Getting continuous delivery right requires more than a structured way of handling development; it also requires tools for managing test and build. That's where the [Jenkins][17] continuous integration server comes in. Jenkins works with your choice of source control, your test harnesses, and your build server. It's a flexible tool, initially designed for working with Java but now extended to support Web and mobile development and even to build Windows applications.
Jenkins is perhaps best thought of as a switching network, shunting files through a test and build process, and responding to signals from the various tools you're using, thanks to a library of more than 1,000 plug-ins. These include tools for integrating Jenkins with both local Git instances and GitHub so that it's possible to extend a continuous development model into your build and delivery processes.
Using an automation tool like Jenkins is as much about adopting a philosophy as it is about implementing a build process. Once you commit to continuous integration as part of a continuous delivery model, you'll be running test and build cycles as soon as code is delivered to your source control release branch and delivering it to users as soon as it's in the main branch.
-- Simon Bisson
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-nodejs-iojs-100613995-orig.jpg)
### Node.js and io.js ###
Modern cloud applications are built using different design patterns from the familiar n-tier enterprise and Web apps. They're distributed, event-driven collections of services that can be quickly scaled and can support many thousands of simultaneous users. One key technology in this new paradigm is [Node.js][18], used by many major cloud platforms and easy to install as part of a thin server or container on cloud infrastructure.
Key to the success of Node.js is the Npm package format, which allows you to quickly install extensions to the core Node.js service. These include frameworks like Express and Seneca, which help build scalable applications. A central registry handles package distribution, and dependencies are automatically installed.
While the [io.js][19] fork exposed issues with project governance, it also allowed a group of developers to push forward adding ECMAScript 6 support to an Npm-compatible engine. After reconciliation between the two teams, the Node.js and io.js codebases have been merged, with new releases now coming from the io.js code repository.
Other forks, like Microsoft's io.js fork to add support for its 64-bit Chakra JavaScript engine alongside Google's V8, are likely to be merged back into the main branch over the next year, keeping the Node.js platform evolving and cementing its role as the preferred host for cloud-scale microservices.
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-seneca-100613998-orig.jpg)
### Seneca ###
The developers of the [Seneca][20] microservice framework have a motto: “Build it now, scale it later!” It's an apt maxim for anyone thinking about developing microservices, as it allows you to start small, then add functionality as your service grows.
Seneca is at heart an implementation of the [actor/message design pattern][21], focused on using Node.js as a switching engine that takes in messages, processes their contents, and sends an appropriate response, either to the message originator or to another service. By focusing on the message patterns that map to business use cases, it's relatively easy to take Seneca and quickly build a minimum viable product for your application. A plug-in architecture makes it easy to integrate Seneca with other tools and to quickly add functionality to your services.
You can easily add new patterns to your codebase or break existing patterns into separate services as the needs of your application grow or change. One pattern can also call another, allowing quick code reuse. It's also easy to add Seneca to a message bus, so you can use it as a framework for working with data from Internet of things devices, as all you need to do is define a listening port where JSON data is delivered.
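The pattern-based dispatch at the heart of this model can be sketched language-agnostically; here in Python for consistency with this article's other sketches (Seneca itself is JavaScript, and `add_pattern`/`act` are invented names echoing its API):

```python
handlers = []

def add_pattern(pattern, fn):
    handlers.append((pattern, fn))

def act(msg):
    # Route the message to the first handler whose pattern is a
    # subset of the message -- the actor/message dispatch idea.
    for pattern, fn in handlers:
        if all(msg.get(k) == v for k, v in pattern.items()):
            return fn(msg)
    raise LookupError("no matching pattern")

add_pattern({"role": "math", "cmd": "sum"},
            lambda m: m["left"] + m["right"])

total = act({"role": "math", "cmd": "sum", "left": 2, "right": 3})
```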
Services may not be persistent, and Seneca gives you the option of using a built-in object relational mapping layer to handle data abstraction, with plug-ins for common databases.
-- Simon Bisson
![](http://images.techhive.com/images/article/2015/09/bossies-2015-netcore-aspnet-100613994-orig.jpg)
### .Net Core and ASP.Net vNext ###
Microsoft's [open-sourcing of .Net][22] is bringing much of the company's Web platform into the open. The new [.Net Core][23] release runs on Windows, on OS X, and on Linux. Currently migrating from Microsoft's Codeplex repository to GitHub, .Net Core offers a more modular approach to .Net, allowing you to install the functions you need as you need them.
Currently under development is [ASP.Net 5][24], an open source version of the Web platform, which runs on .Net Core. You can work with it as the basis of Web apps using Microsoft's MVC 6 framework. There's also support for the new SignalR libraries, which add support for WebSockets and other real-time communications protocols.
If you're planning on using Microsoft's new Nano server, you'll be writing code against .Net Core, as it's designed for thin environments. The new DNX, the .Net Execution environment, simplifies deployment of ASP.Net applications on a wide range of platforms, with tools for packaging code and for booting a runtime on a host. Features are added using the NuGet package manager, letting you use only the libraries you want.
Microsoft's open source .Net is still very young, but there's a commitment in Redmond to ensure it's successful. Support in Microsoft's own next-generation server operating systems means it has a place in both the data center and the cloud.
-- Simon Bisson
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-glusterfs-100613989-orig.jpg)
### GlusterFS ###
[GlusterFS][25] is a distributed file system. Gluster aggregates various storage servers into one large parallel network file system. You can [even use it in place of HDFS in a Hadoop cluster][26] or in place of an expensive SAN system -- or both. While HDFS is great for Hadoop, having a general-purpose distributed file system that doesn't require you to transfer data to another location to analyze it is a key advantage.
In an era of commoditized hardware, commoditized computing, and increased performance and latency requirements, buying a big, fat expensive EMC SAN and hoping it fits all of your needs (it won't) is no longer your sole viable option. GlusterFS was acquired by Red Hat in 2011.
-- Andrew C. Oliver
![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100613992-orig.jpg)
### Read about more open source winners ###
InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
[Bossie Awards 2015: The best open source applications][27]
[Bossie Awards 2015: The best open source application development tools][28]
[Bossie Awards 2015: The best open source big data tools][29]
[Bossie Awards 2015: The best open source data center and cloud software][30]
[Bossie Awards 2015: The best open source desktop and mobile software][31]
[Bossie Awards 2015: The best open source networking and security software][32]
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2982923/open-source-tools/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://www.docker.com/docker-machine
[2]:https://www.docker.com/docker-swarm
[3]:https://www.docker.com/docker-compose
[4]:https://coreos.com/
[5]:http://rancher.com/rancher-os/
[6]:http://kubernetes.io/
[7]:https://mesos.apache.org/
[8]:https://github.com/joyent/sdc
[9]:https://smartos.org/
[10]:https://github.com/joyent/sdc-docker
[11]:https://sensuapp.org/
[12]:http://prometheus.io/
[13]:https://www.elastic.co/products/elasticsearch
[14]:https://www.elastic.co/products/logstash
[15]:https://www.elastic.co/products/kibana
[16]:http://www.ansible.com/home
[17]:https://jenkins-ci.org/
[18]:https://nodejs.org/en/
[19]:https://iojs.org/en/
[20]:http://senecajs.org/
[21]:http://www.infoworld.com/article/2976422/application-development/how-to-use-actors-in-distributed-applications.html
[22]:http://www.infoworld.com/article/2846450/microsoft-net/microsoft-open-sources-server-side-net-launches-visual-studio-2015-preview.html
[23]:https://dotnet.github.io/core/
[24]:http://www.asp.net/vnext
[25]:http://www.gluster.org/
[26]:http://www.gluster.org/community/documentation/index.php/Hadoop
[27]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[28]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[29]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[30]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[31]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[32]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
@ -1,223 +0,0 @@
Bossie Awards 2015: The best open source desktop and mobile software
================================================================================
InfoWorld's top picks in open source productivity tools, desktop utilities, and mobile apps
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-desktop-mobile-100614439-orig.jpg)
### The best open source desktop and mobile software ###
Open source on the desktop has a long and distinguished history, and many of our Bossie winners in this category go back many years. Packed with features and still improving, some of these tools offer compelling alternatives to pricey commercial software. Others are utilities that we lean on daily for one reason or another -- the can openers and potato peelers of desktop productivity. One or two of them either plug holes in Windows, or they go the distance where Windows falls short.
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-libreoffice-100614436-orig.jpg)
### LibreOffice ###
With the major release of version 5 in August, the Document Foundation's [LibreOffice][1] offers a completely redesigned user interface, better compatibility with Microsoft Office (including good-but-not-great DOCX, XLSX, and PPTX file format support), and significant improvements to Calc, the spreadsheet application.
Set against a turbulent background, the LibreOffice effort split from OpenOffice.org in 2010. In 2011, Oracle announced it would no longer support OpenOffice.org, and handed the trademark to the Apache Software Foundation. Since then, it has become [increasingly clear][2] that LibreOffice is winning the race for developers, features, and users.
-- Woody Leonhard
![](http://images.techhive.com/images/article/2015/09/bossies-2015-firefox-100614426-orig.jpg)
### Firefox ###
In the battle of the big browsers, [Firefox][3] gets our vote over its longtime open source rival Chromium for two important reasons:
- **Memory use**. Chromium, like its commercial cousin Chrome, has a nasty propensity to glom onto massive amounts of memory.
- **Privacy**. Witness the [recent controversy][4] over Chromium automatically downloading a microphone snooping program to respond to “OK, Google.”
Firefox may not have the most features or the down-to-the-millisecond fastest rendering engine. But it's solid, stingy with resources, highly extensible, and most of all, it comes with no strings attached. There's no ulterior data-gathering motive.
-- Woody Leonhard
![](http://images.techhive.com/images/article/2015/09/bossies-2015-thunderbird-100614433-orig.jpg)
### Thunderbird ###
A longtime favorite email client, Mozilla's [Thunderbird][5] may be getting a bit long in the tooth, but it's still supported and showing signs of life. The latest version, 38.2, arrived in August, and there are plans for more development.
Mozilla officially pulled its people off the project back in July 2012, but a hardcore group of volunteers, led by Kent James and the all-volunteer Thunderbird Council, continues to toil away. While you won't find the latest email innovations in Thunderbird, you will find a solid core of basic functions based on local storage. If having mail in the cloud spooks you, it's a good, private alternative. And if James goes ahead with his idea of encrypting Thunderbird mail end-to-end, there may be significant new life in the old bird.
-- Woody Leonhard
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-notepad-100614432-orig.jpg)
### Notepad++ ###
If Windows Notepad handles all of your text editing (and source code editing and HTML editing) needs, more power to ya. For Windows users who yearn for a little bit more in a text editor, there's Don Ho's [Notepad++][6], which is the editor I turn to, over and over again.
With tabbed views, drag-and-drop, color-coded hints for completing HTML commands, bookmarks, macro recording, shortcut keys, and every text encoding format you're likely to encounter, Notepad++ takes text to a new level. We get frequent updates, too, with the latest in August.
-- Woody Leonhard
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-vlc-100614435-orig.jpg)
### VLC ###
The stalwart [VLC][7] (formerly known as VideoLAN Client) runs almost any kind of media file on almost any platform. Yes, it even works as a remote control on Apple Watch.
The tiled Universal app version for Windows 10, in the Windows Store, draws some criticism for instability and lack of control, but in most cases VLC works, and it works well -- without external codecs. It even supports Blu-ray formats with two new libraries.
The desktop version is a must-have for Windows 10, unless you're ready to run the advertising gauntlets that are the Universal Groove Music and Movies & TV apps from Microsoft. VLC received a major [feature update][8] in February and a comprehensive bug fix in April.
-- Woody Leonhard
![](http://images.techhive.com/images/article/2015/09/bossies-2015-7-zip-100614429-orig.jpg)
### 7-Zip ###
Long recognized as the preeminent open source ZIP archive manager for Windows, [7-Zip][9] works like a champ, even on the Windows 10 desktop. Full coverage for RAR files, which can be problematic in Windows, combines with password-protected file creation and support for self-extracting ZIPs. It's one of those programs that just works.
Yes, it would be nice to get a more modern file picker. Yes, it would be interesting to see a tiled Universal app version. But even without the fancy bells and whistles, 7-Zip deserves a place on every Windows desktop.
-- Woody Leonhard
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-handbrake-100614427-orig.jpg)
### Handbrake ###
If you want to convert your DVDs (or video files in any commonly used format) into a file in some other format, or simply scrape them off a silver coaster, [Handbrake][10] is the way to do it. If you're a Windows user, Handbrake is almost indispensable, since Microsoft doesn't believe in ripping DVDs.
Handbrake presents a number of handy presets for optimizing conversions for your target device (iPod, iPad, Android tablet, and so on). It's simple, and it's fast. With the latest round of bug fixes released in June, Handbrake's keeping up on maintenance -- and it works fine on the Windows 10 desktop.
-- Woody Leonhard
![](http://images.techhive.com/images/article/2015/09/bossies-2015-keepass-100614430-orig.jpg)
### KeePass ###
I'll confess that I almost gave up on [KeePass][11] because the primary download site goes to Sourceforge. That means you have to be extremely careful which boxes are checked and what you click on (and when) as you attempt to download and install the software. While KeePass itself is 100 percent clean open source (GNU GPL), Sourceforge doesn't feel so constrained, and its [installers reek of crapware][12].
One of many local-file password storage programs, KeePass distinguishes itself with broad scope, as well as its ability to run on all sorts of platforms, no installation required. KeePass will save not only passwords, but also credit card information and freely structured information. It provides a strong random password generator, and the database itself is locked with AES and Twofish, so nobody's going to crack it. And it's kept up to date, with a new stable release last month.
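The core idea behind a generator like KeePass's -- drawing every character from a cryptographically secure random source rather than a plain PRNG -- can be sketched in a few lines of Python. This is a minimal illustration of the concept, not KeePass's actual algorithm:

```python
import secrets
import string

def generate_password(length: int = 20,
                      alphabet: str = string.ascii_letters + string.digits + string.punctuation) -> str:
    """Build a password by drawing each character independently from a CSPRNG."""
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())       # e.g. 'k;R2v}Qe9...' -- different every run
print(generate_password(32))     # longer password, same alphabet
```

The `secrets` module (unlike `random`) is designed for security-sensitive use, which is the property that matters here.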
-- Woody Leonhard
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-virtualbox-100614434-orig.jpg)
### VirtualBox ###
With a major release published in July, Oracle's open source [VirtualBox][13] -- available for Windows, OS X, Linux, even Solaris -- continues to give commercial counterparts VMware Workstation, VMware Fusion, Parallels Desktop, and Microsoft's Hyper-V a hard run for their money. The Oracle team is still getting the final Windows 10 bugs ironed out, but come to think of it, so is Microsoft.
VirtualBox doesn't quite match the performance or polish of the VMware and Parallels products, but it's getting closer. Version 5 brought long-awaited drag-and-drop support, making it easier to move files between VMs and host.
I prefer VirtualBox over Hyper-V because it's easy to control external devices. In Hyper-V, for example, getting sound to work is a pain in the neck, but in VirtualBox it only takes a click in setup. The shared clipboard between VM and host works wonders. Running speed on both is roughly the same, with a slight advantage to Hyper-V. But managing VirtualBox machines is much easier.
-- Woody Leonhard
![](http://images.techhive.com/images/article/2015/09/bossies-2015-inkscape-100614428-orig.jpg)
### Inkscape ###
If you stand in awe of the designs created with Adobe Illustrator (or even CorelDraw), take a close look at [Inkscape][14]. Scalable vector images never looked so good.
Version 0.91, released in January, uses a new internal graphics rendering engine called Cairo, sponsored by Google, to make the app run faster and allow for more accurate rendering. Inkscape will read and write SVG, PNG, PDF, even EPS, and many other formats. It can export Flash XML Graphics, HTML5 Canvas, and XAML, among others.
There's a strong community around Inkscape, and it's built for easy extensibility. It's available for Windows, OS X, and Linux.
-- Woody Leonhard
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-keepassdroid-100614431-orig.jpg)
### KeePassDroid ###
Trying to remember all of the passwords we need today is impossible, and creating new ones to meet stringent password policy requirements can be agonizing. A port of KeePass for Android, [KeePassDroid][15] brings sanity-preserving password management to mobile devices.
Like KeePass, KeePassDroid makes creating and accessing passwords easy, requiring you to recall only a single master password. It supports both DES and Twofish algorithms for encrypting all passwords, and it goes a step further by encrypting the entire password database, not only the password fields. Notes and other password-pertinent information are encrypted too.
While KeePassDroid's interface is minimal -- dated, some would say -- it gets the job done with bare-bones efficiency. Need to generate passwords that have certain character sets and lengths? KeePassDroid can do that with ease. With more than a million downloads on the Google Play Store, you could say this app definitely fills a need.
-- Victor R. Garza
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-prey-100615300-orig.jpg)
### Prey ###
Loss or theft of mobile devices is all too common these days. While there are many tools in the enterprise to manage and erase data either misplaced or stolen from an organization, [Prey][16] facilitates the recovery of the phone, laptop, or tablet, and not just the wiping of potentially sensitive information from the device.
Prey is a Web service that works with an open source installed agent for Linux, OS X, Windows, Android, and iOS devices. Prey tracks your lost or stolen device by using either the device's GPS, the native geolocation provided by newer operating systems, or an associated Wi-Fi hotspot to home in on the location.
If your smartphone is lost or stolen, send a text message to the device to activate Prey. For stolen tablets or laptops, use the Prey Project's cloud-based control panel to select the device as missing. The Prey agent on any device can then take a screenshot of the active applications, turn on the camera to catch a thief's image, reset the device to the factory settings, or fully lock down the device.
Should you want to retrieve your lost items, the Prey Project strongly suggests you contact your local police to have them assist you.
-- Victor R. Garza
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-orbot-100615299-orig.jpg)
### Orbot ###
The premiere proxy application for Android, [Orbot][17] leverages the volunteer-operated network of virtual tunnels called Tor (The Onion Router) to keep all communications private. Orbot works with companion applications [Orweb][18] for secure Web browsing and [ChatSecure][19] for secure chat. In fact, any Android app that allows its proxy settings to be changed can be secured with Orbot.
One thing to remember about the Tor network is that it's designed for secure, lightweight communications, not for pulling down torrents or watching YouTube videos. Surfing media-rich sites like Facebook can be painfully slow. Your Orbot communications won't be blazing fast, but they will stay private and confidential.
-- Victor R. Garza
![](http://images.techhive.com/images/article/2015/09/bossies-2015-tails-100615301-orig.jpg)
### Tails ###
[Tails][20], or The Amnesic Incognito Live System, is a Linux Live OS that can be booted from a USB stick, DVD, or SD card. It's often used covertly in the Deep Web to secure traffic when purchasing illicit substances, but it can also be used to avoid tracking, support freedom of speech, circumvent censorship, and promote liberty.
Leveraging Tor (The Onion Router), Tails keeps all communications secure and private and promises to leave no trace on any computer after it's used. It performs disk encryption with LUKS, protects instant messages with OTR, encrypts Web traffic with the Tor Browser and HTTPS Everywhere, and securely deletes files via Nautilus Wipe. Tails even has an office suite, image editor, and the like.
Now, it's always possible to be traced while using any system if you're not careful, so be vigilant when using Tails and follow good privacy practices, like turning off JavaScript while using Tor. And be aware that Tails isn't necessarily going to be speedy, even over a fiber connection, but that's the price you pay for anonymity.
-- Victor R. Garza
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100614438-orig.jpg)
### Read about more open source winners ###
InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
[Bossie Awards 2015: The best open source applications][21]
[Bossie Awards 2015: The best open source application development tools][22]
[Bossie Awards 2015: The best open source big data tools][23]
[Bossie Awards 2015: The best open source data center and cloud software][24]
[Bossie Awards 2015: The best open source desktop and mobile software][25]
[Bossie Awards 2015: The best open source networking and security software][26]
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2982630/open-source-tools/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://www.libreoffice.org/download/libreoffice-fresh/
[2]:http://lwn.net/Articles/637735/
[3]:https://www.mozilla.org/en-US/firefox/new/
[4]:https://nakedsecurity.sophos.com/2015/06/24/not-ok-google-privacy-advocates-take-on-the-chromium-team-and-win/
[5]:https://www.mozilla.org/en-US/thunderbird/
[6]:https://notepad-plus-plus.org/
[7]:http://www.videolan.org/vlc/index.html
[8]:http://www.videolan.org/press/vlc-2.2.0.html
[9]:http://www.7-zip.org/
[10]:https://handbrake.fr/
[11]:http://keepass.info/
[12]:http://www.infoworld.com/article/2931753/open-source-software/sourceforge-the-end-cant-come-too-soon.html
[13]:https://www.virtualbox.org/
[14]:https://inkscape.org/en/download/windows/
[15]:http://www.keepassdroid.com/
[16]:http://preyproject.com/
[17]:https://www.torproject.org/docs/android.html.en
[18]:https://guardianproject.info/apps/orweb/
[19]:https://guardianproject.info/apps/chatsecure/
[20]:https://tails.boum.org/
[21]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[22]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[23]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[24]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[25]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[26]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html


@ -1,165 +0,0 @@
robot527 translating
Bossie Awards 2015: The best open source networking and security software
================================================================================
InfoWorld's top picks of the year among open source tools for building, operating, and securing networks
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-net-sec-100614459-orig.jpg)
### The best open source networking and security software ###
BIND, Sendmail, OpenSSH, Cacti, Nagios, Snort -- open source software seems to have been invented for networks, and many of the oldies and goodies are still going strong. Among our top picks in the category this year, you'll find a mix of stalwarts, mainstays, newcomers, and upstarts perfecting the arts of network management, security monitoring, vulnerability assessment, rootkit detection, and much more.
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-icinga-100614482-orig.jpg)
### Icinga 2 ###
Icinga began life as a fork of system monitoring application Nagios. [Icinga 2][1] was completely rewritten to give users a modern interface, support for multiple databases, and an API to integrate numerous extensions. With out-of-the-box load balancing, notifications, and configuration, Icinga 2 shortens the time to installation for complex environments. Icinga 2 supports Graphite natively, giving administrators real-time performance graphing without any fuss. But what puts Icinga back on the radar this year is its release of Icinga Web 2, a graphical front end with drag-and-drop customizable dashboards and streamlined monitoring tools.
Administrators can view, filter, and prioritize problems, while keeping track of which actions have already been taken. A new matrix view lets administrators view hosts and services on one page. You can view events over a particular time period or filter incidents to understand which ones need immediate attention. Icinga Web 2 may boast a new interface and zippier performance, but all the usual commands from Icinga Classic and Icinga Web are still available. That means there is no downtime trying to learn a new version of the tool.
-- Fahmida Rashid
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-zenoss-100614465-orig.jpg)
### Zenoss Core ###
Another open source stalwart, [Zenoss Core][2] gives network administrators a complete, one-stop solution for tracking and managing all of the applications, servers, storage, networking components, virtualization tools, and other elements of an enterprise infrastructure. Administrators can make sure the hardware is running efficiently and take advantage of the modular design to plug in ZenPacks for extended functionality.
Zenoss Core 5, released in February of this year, takes the already powerful tool and improves it further, with an enhanced user interface and expanded dashboard. The Web-based console and dashboards were already highly customizable and dynamic, and the new version now lets administrators mash up multiple component charts onto a single chart. Think of it as the tool for better root cause and cause/effect analysis.
Portlets give additional insights for network mapping, device issues, daemon processes, production states, watch lists, and event views, to name a few. And new HTML5 charts can be exported outside the tool. The Zenoss Control Center allows out-of-band management and monitoring of all Zenoss components. Zenoss Core has new tools for online backup and restore, snapshots and rollbacks, and multihost deployment. Even more important, deployments are faster with full Docker support.
-- Fahmida Rashid
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opennms-100614461-orig.jpg)
### OpenNMS ###
An extremely flexible network management solution, [OpenNMS][3] can handle any network management task, whether it's device management, application performance monitoring, inventory control, or events management. With IPv6 support, a robust alerts system, and the ability to record user scripts to test Web applications, OpenNMS has everything network administrators and testers need. And now a mobile dashboard, called OpenNMS Compass, lets networking pros keep an eye on their network even when they're out and about.
The iOS version of the app, which is available on the [iTunes App Store][4], displays outages, nodes, and alarms. The next version will offer additional event details, resource graphs, and information about IP and SNMP interfaces. The Android version, available on [Google Play][5], displays network availability, outages, and alarms on the dashboard, as well as the ability to acknowledge, escalate, or clear alarms. The mobile clients are compatible with OpenNMS Horizon 1.12 or greater and OpenNMS Meridian 2015.1.0 or greater.
-- Fahmida Rashid
![](http://images.techhive.com/images/article/2015/09/bossies-2015-onion-100614460-orig.jpg)
### Security Onion ###
Like an onion, network security monitoring is made of many layers. No single tool will give you visibility into every attack or show you every reconnaissance or foot-printing session on your company network. [Security Onion][6] bundles scores of proven tools into one handy Ubuntu distro that will allow you to see who's inside your network and help keep the bad guys out.
Whether you're taking a proactive approach to network security monitoring or following up on a potential attack, Security Onion can assist. Consisting of sensor, server, and display layers, the Onion combines full network packet capture with network-based and host-based intrusion detection, and it serves up all of the various logs for inspection and analysis.
The star-studded network security toolchain includes Netsniff-NG for packet capture, Snort and Suricata for rules-based network intrusion detection, Bro for analysis-based network monitoring, OSSEC for host intrusion detection, and Sguil, Squert, Snorby, and ELSA (Enterprise Log Search and Archive) for display, analysis, and log management. It's a carefully vetted collection of tools, all wrapped in a wizard-driven installer and backed by thorough documentation, that can help you get from zero to monitoring as fast as possible.
-- Victor R. Garza
![](http://images.techhive.com/images/article/2015/09/bossies-2015-kali-100614458-orig.jpg)
### Kali Linux ###
The team behind [Kali Linux][7] revamped the popular security Linux distribution this year to make it faster and even more versatile. Kali sports a new 4.0 kernel, improved hardware and wireless driver support, and a snappier interface. The most popular tools are easily accessible from a dock on the side of the screen. The biggest change? Kali Linux is now a rolling distribution, with a continuous stream of software updates. Kali's core system is based on Debian Jessie, and the team will pull packages continuously from Debian Testing, while continuing to add new Kali-flavored features on top.
The distribution still comes jam-packed with tools for penetration testing, vulnerability analysis, security forensics, Web application analysis, wireless networking and assessment, reverse engineering, and exploitation tools. Now the distribution has an upstream version checking system that will automatically notify users when updates are available for the individual tools. The distribution also features ARM images for a range of devices, including Raspberry Pi, Chromebook, and Odroids, as well as updates to the NetHunter penetration testing platform that runs on Android devices. There are other changes too: Metasploit Community/Pro is no longer included, because Kali 2.0 is not yet [officially supported by Rapid7][8].
-- Fahmida Rashid
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-openvas-100614462-orig.jpg)
### OpenVAS ###
[OpenVAS][9], the Open Vulnerability Assessment System, is a framework that combines multiple services and tools to offer vulnerability scanning and vulnerability management. The scanner is coupled with a weekly feed of network vulnerability tests, or you can use a feed from a commercial service. The framework includes a command-line interface (so it can be scripted) and an SSL-secured, browser-based interface via the [Greenbone Security Assistant][10]. OpenVAS accommodates various plug-ins for additional functionality. Scans can be scheduled or run on-demand.
Multiple OpenVAS installations can be controlled through a single master, which makes this a scalable vulnerability assessment tool for enterprises. The project is as compatible with standards as can be: Scan results and configurations are stored in a SQL database, where they can be accessed easily by external reporting tools. Client tools access the OpenVAS Manager via the XML-based stateless OpenVAS Management Protocol, so security administrators can extend the functionality of the framework. The software can be installed from packages or source code to run on Windows or Linux, or downloaded as a virtual appliance.
-- Matt Sarrel
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-owasp-100614463-orig.jpg)
### OWASP ###
[OWASP][11], the Open Web Application Security Project, is a nonprofit organization with worldwide chapters focused on improving software security. The community-driven organization provides test tools, documentation, training, and almost anything you could imagine that's related to assessing software security and best practices for developing secure software. Several OWASP projects have become valuable components of many a security practitioner's toolkit:
[ZAP][12], the Zed Attack Proxy Project, is a penetration test tool for finding vulnerabilities in Web applications. One of the design goals of ZAP was to make it easy to use so that developers and functional testers who aren't security experts can benefit from using it. ZAP provides automated scanners and a set of manual test tools.
The Xenotix XSS Exploit Framework is an advanced cross-site scripting vulnerability detection and exploitation framework that runs scans within browser engines to get real-world results. The Xenotix Scanner Module uses three intelligent fuzzers, and it can run through nearly 5,000 distinct XSS payloads. An API lets security administrators extend and customize the exploit toolkit.
[O-Saft][13], or the OWASP SSL advanced forensic tool, is an SSL auditing tool that shows detailed information about SSL certificates and tests SSL connections. This command-line tool can run online or offline to assess SSL security such as ciphers and configurations. O-Saft provides built-in checks for common vulnerabilities, and you can easily extend these through scripting. In May 2015 a simple GUI was added as an optional download.
[OWTF][14], the Offensive Web Testing Framework, is an automated test tool that follows OWASP testing guidelines and the NIST and PTES standards. The framework uses both a Web UI and a CLI, and it probes Web and application servers for common vulnerabilities such as improper configuration and unpatched software.
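The kind of information an SSL auditing tool reports -- which cipher suites a TLS stack will offer, and whether known-weak ones are among them -- can be probed from Python's standard library. This hedged sketch only inspects the local OpenSSL build rather than auditing a remote server, so it shows the category of check, not a replacement for a real auditing tool:

```python
import ssl

# Which cipher suites would a default, security-hardened client context offer?
ctx = ssl.create_default_context()
ciphers = [c["name"] for c in ctx.get_ciphers()]

print(ssl.OPENSSL_VERSION)
print(f"{len(ciphers)} cipher suites enabled")

# Known-weak suites should be absent from a modern default context.
weak = [name for name in ciphers if "RC4" in name or "DES" in name]
print("weak suites:", weak or "none found")
```

A full audit would go further (certificate chains, protocol-version negotiation against a live server), but the enumerate-then-flag pattern is the same.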
-- Matt Sarrel
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-beef-100614456-orig.jpg)
### BeEF ###
The Web browser has become the most common vector for attacks against clients. [BeEF][15], the Browser Exploitation Framework Project, is a widely used penetration tool to assess Web browser security. BeEF helps you expose the security weaknesses of client systems using client-side attacks launched through the browser. BeEF sets up a malicious website, which security administrators visit from the browser they want to test. BeEF then sends commands to attack the Web browser and use it to plant software on the client machine. Administrators can then launch attacks on the client machine as if they were zombies.
BeEF comes with commonly used modules like a key logger, a port scanner, and a Web proxy, plus you can write your own modules or send commands directly to the zombified test machine. BeEF comes with a handful of demo Web pages to help you get started and makes it very easy to write additional Web pages and attack modules so you can customize testing to your environment. BeEF is a valuable test tool for assessing browser and endpoint security and for learning how browser-based attacks are launched. Use it to put together a demo to show your users how malware typically infects client devices.
-- Matt Sarrel
![](http://images.techhive.com/images/article/2015/09/bossies-2015-unhide-100614464-orig.jpg)
### Unhide ###
[Unhide][16] is a forensic tool that locates open TCP/UDP ports and hidden processes on UNIX, Linux, and Windows. Hidden ports and processes can be the result of rootkit or LKM (loadable kernel module) activity. Rootkits can be difficult to find and remove because they are designed to be stealthy, hiding themselves from the OS and user. A rootkit can use LKMs to hide its processes or impersonate other processes, allowing it to run on machines undiscovered for a long time. Unhide can provide the assurance that administrators need to know their systems are clean.
Unhide is really two separate scripts: one for processes and one for ports. The tool interrogates running processes, threads, and open ports and compares this info to what's registered with the system as active, reporting discrepancies. Unhide and WinUnhide are extremely lightweight scripts that run from the command line to produce text output. They're not pretty, but they are extremely useful. Unhide is also included in the [Rootkit Hunter][17] project.
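The discrepancy check at the heart of this approach -- compare what the kernel answers at the syscall level against what the process listing admits to -- reduces to a set difference. A toy Python sketch (hypothetical helper names, not Unhide's code):

```python
import os

def find_hidden_pids(syscall_view: set, listing_view: set) -> set:
    """PIDs alive at the syscall level but absent from the process listing."""
    return syscall_view - listing_view

def pids_from_proc() -> set:
    """One 'syscall-level' view on Linux: numeric entries under /proc."""
    return {int(d) for d in os.listdir("/proc") if d.isdigit()}

# Simulated example: PID 1337 answers at the kernel level
# but is missing from the (rootkit-filtered) process listing.
print(find_hidden_pids({1, 42, 1337}, {1, 42}))  # {1337}
```

The real tools cross-check several independent views (brute-force kill(pid, 0) probes, /proc, ps output) precisely because a rootkit may subvert any single one.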
-- Matt Sarrel
![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100614457-orig.jpg)
### Read about more open source winners ###
InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
[Bossie Awards 2015: The best open source applications][18]
[Bossie Awards 2015: The best open source application development tools][19]
[Bossie Awards 2015: The best open source big data tools][20]
[Bossie Awards 2015: The best open source data center and cloud software][21]
[Bossie Awards 2015: The best open source desktop and mobile software][22]
[Bossie Awards 2015: The best open source networking and security software][23]
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2982962/open-source-tools/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://www.icinga.org/icinga/icinga-2/
[2]:http://www.zenoss.com/
[3]:http://www.opennms.org/
[4]:https://itunes.apple.com/us/app/opennms-compass/id968875097?mt=8
[5]:https://play.google.com/store/apps/details?id=com.opennms.compass&hl=en
[6]:http://blog.securityonion.net/p/securityonion.html
[7]:https://www.kali.org/
[8]:https://community.rapid7.com/community/metasploit/blog/2015/08/12/metasploit-on-kali-linux-20
[9]:http://www.openvas.org/
[10]:http://www.greenbone.net/
[11]:https://www.owasp.org/index.php/Main_Page
[12]:https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
[13]:https://www.owasp.org/index.php/O-Saft
[14]:https://www.owasp.org/index.php/OWASP_OWTF
[15]:http://www.beefproject.com/
[16]:http://www.unhide-forensics.info/
[17]:http://www.rootkit.nl/projects/rootkit_hunter.html
[18]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[19]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[20]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[21]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[22]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[23]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
[translating by ray]
Interviews: Linus Torvalds Answers Your Question
================================================================================
Last Thursday you had a chance to [ask Linus Torvalds][1] about programming, hardware, and all things Linux. You can read his answers to those questions below. If you'd like to see what he had to say the last time we sat down with him, [you can do so here][2].
**Productivity**
by DoofusOfDeath
> You've somehow managed to originate two insanely useful pieces of software: Linux, and Git. Do you think there's anything in your work habits, your approach to choosing projects, etc., that have helped you achieve that level of productivity? Or is it just the traditional combination of talent, effort, and luck?
**Linus**: I'm sure it's pretty much always that "talent, effort and luck". I'll leave it to others to debate how much of each...
I'd love to point out some magical work habit that makes it all happen, but I doubt there really is any. Especially as the work habits I had wrt the kernel and Git have been so different.
With Git, I think it was a lot about coming at a problem with fresh eyes (not having ever really bought into the traditional SCM mindset), and really trying to think about the issues, and spending a fair amount of time thinking about what the real problems were and what I wanted the design to be. And then the initial self-hosting code took about a day to write (ok, that was "self-hosting" in only the weakest sense, but still).
And with Linux, obviously, things were very different - the big designs came from the outside, and it took half a year to host itself, and it hadn't even started out as a kernel to begin with. Clearly not a lot of thinking ahead and planning involved ;). So very different circumstances indeed.
What both the kernel and Git have, and what I think is really important (and I guess that counts as a "work habit"), is a maintainer that stuck to it, and was responsive, responsible and sane. Too many projects falter because they don't have people that stick with them, or have people who have an agenda that doesn't match reality or the user expectations.
But it's very important to point out that for Git, that maintainer was not me. Junio Hamano really should get pretty much all the credit for Git. Credit where credit is due. I'll take credit for the initial implementation and design of Git - it may not be perfect, but ten years on it still is very solid and very clearly the same basic design. But I'll take even _more_ credit for recognizing that Junio had his head screwed on right, and was the person to drive the project. And all the rest of the credit goes to him.
Of course, that kind of segues into something else the kernel and Git do have in common: while I still maintain the kernel, I did end up finding a lot of smart people to maintain all the different parts of it. So while one important work habit is that "stick to it" persistence that you need to really take a project from a not-quite-usable prototype to something bigger and better, another important work-habit is probably to also "let go" and not try to own and control the project too much. Let other people really help you - guide the process but don't get in their way.
**init system**
by lorinc
> There wasn't a decent unix-like kernel, you wrote one which ultimately became the most used. There wasn't decent version control software, you wrote one which ultimately became the most loved. Do you think we already have a decent init system, or do you have plans to write one that will ultimately settle the world on that hot topic?
**Linus**: You can say the word "systemd". It's not a four-letter word. Seven letters. Count them.
I have to say, I don't really get the hatred of systemd. I think it improves a lot on the state of init, and no, I don't see myself getting into that whole area.
Yeah, it may have a few odd corners here and there, and I'm sure you'll find things to despise. That happens in every project. I'm not a huge fan of the binary logging, for example. But that's just an example. I much prefer systemd's infrastructure for starting services over traditional init, and I think that's a much bigger design decision.
Yeah, I've had some personality issues with some of the maintainers, but that's about how you handle bug reports and accept blame (or not) for when things go wrong. If people thought that meant that I dislike systemd, I will have to disappoint you guys.
**Can Valve change the Linux gaming market?**
by Anonymous Coward
> Do you think Valve is capable of making Linux a primary choice for gamers?
**Linus**: "Primary"? Probably not where it's even aiming. I think consoles (and all those handheld and various mobile platforms that "real gamers" seem to dismiss as toys) are likely much more primary, and will stay so.
I think Valve wants to make sure they can control their own future, and Linux and ValveOS is probably partly to explore a more "console-like" Valve experience (ie the whole "get a box set up for a single main purpose", as opposed to a more PC-like experience), and partly as a "second source" against Microsoft, who is a competitor in the console area. Keeping your infrastructure suppliers honest by making sure you have alternatives sounds like a good strategy, and particularly so when those suppliers may be competing with you directly elsewhere.
So I don't think the aim is really "primary". "Solid alternative" is I think the aim. Of course, let's see where it goes after that.
But I really have not been involved. People like Greg and the actual graphics driver guys have been in much more direct contact with Valve. I think it's great to see gaming on Linux, but at the same time, I'm personally not really much of a gamer.
**The future of RT-Linux?**
by nurhussein
> According to Thomas Gleixner, [the future of the realtime patchset to Linux is in doubt][2], as it is difficult to secure funding from interested parties on this functionality even though it is both useful and important: What are your thoughts on this, and what do you think we need to do to get more support behind the RT patchset, especially considering Linux's increasing use in embedded systems where realtime functionality is undoubtedly useful.
**Linus**: So I think this is one of those things where the markets decide how important rtLinux ends up being, and I suspect there are more than enough companies who end up wanting and using rtLinux that the project isn't really going anywhere. The complaints by Thomas were - I think - a wake-up call to the companies who end up wanting the extended hard realtime patches.
So I suspect there are companies and groups like OSADL that end up funding and helping with rtLinux, and that it isn't going away.
**Rigor and developments**
by hcs_$reboot
> The most complex program running on a machine is arguably its OS, especially the kernel. Linux (kernel) reached the top level in terms of performance, reliability and versatility. You have been criticized quite a few times for some virulent mails addressed to developers. Do you think Linux would be where it is without managing the project with an iron fist? To go further, do you think some other main OSS project would benefit from a more rigorous management approach?
**Linus**: One of the nice things about open source is how it allows people to really concentrate on what they are good at, and it has been a huge advantage for Linux that we've had people who are interested in the marketing side and selling Linux, as well as the legal side etc.
And that is all in addition, of course, to the original "we're motivated by the technology" people like me. And even within that "we're motivated by technology" group, you most certainly don't need to find _everything_ interesting, you can find the area you are passionate about and really care about and want to work on.
That's _fundamentally_ how open source works.
Now, if somebody is passionate about some "good management" thing, go wild, and try to get involved, and try to manage things. It's not what _I_ am interested in, but hey, the proof is in the pudding - anybody who thinks they have a new rigorous management approach that they think will help some part of the process, go wild.
Now, I personally suspect that it wouldn't work - not only are tech people an ornery lot to begin with (that whole "herding cats" thing), just look at all the crazy arguments on the internet. And ask yourself what actually holds an open source project like the kernel together? I think you need to be very oriented towards the purely technical solutions, simply because then you have tangible and real issues you can discuss (and argue about) with fairly clear-cut hard answers. It's the only thing people can really agree on in the big picture.
So the Linux approach to "management" has been to put technology first. That's rigorous enough for me. But as mentioned, it's a free-for-all. Anybody can come in and try to do better. Really.
And btw, it's worth noting that there are obviously specific smaller development teams where other management models work fine. Most of the individual developers are parts of teams inside particular companies, and within the confines of that company, there may well be a very strict rigorous management model. Similarly, within the confines of a particular productization effort there may be particular goals and models for that particular team that transcend that general "technical issues" thing.
Just to give a concrete example, the "development kernel" tree that I maintain works fundamentally differently and with very different rules from the "stable tree" that Greg does, which in turn is maintained very differently from what a distribution team within a Linux company does inside its maintenance kernel team.
So there's certainly room for different approaches to managing those very different groups. But do I think you can "rigorously manage" people on the internet? No.
**Functional languages?**
by EmeraldBot
> While historically you've been a C and Assembly guy (and the odd shell scripting and such), what do you think of functional languages such as Lisp, Clojure, Haskell, etc? Do you see any advantages to them, or do you view them as frivolous and impractical? If you decide to do so, thanks for taking the time to answer my question! You're a legend at what you do, and I think it's awesome that the significantly less interesting me can ask you a question like this.
**Linus**: I may be a fan of C (with a certain fondness for assembly, just because it's so close to the machine), but that's very much about a certain context. I work at a level where those languages make sense. I certainly don't think that tools like Haskell etc are "frivolous and impractical" in general, although on a kernel level (or in a source control management system) I suspect they kind of are.
Many moons ago I worked on sparse (the C parser and analyzer), and one of my coworkers was a Haskell fan, and did incredible example transformations in very simple (well, to him) code - stuff that is just nasty to write in C because it's pretty high-level, there's tons of memory management, and you're really talking about implementing fairly abstract and high-level rules with pattern matching etc.
So I'm definitely not a functional language kind of guy - it's not how I learnt programming, and it really isn't very relevant to what I do, and I wouldn't recognize Haskell code if it bit me in the ass and called me names. But no, I wouldn't call them frivolous.
**Critical software to the use of Linux**
by TWX
> Mr. Torvalds, For many uses of Linux such as on the desktop, other software beyond the kernel and the base GNU tools are required. What other projects would you like to see given priority, and what would you like to see implemented or improved? Admittedly I thought most about X-Windows when asking this question; but I don't doubt that other daemons or systems can be just as important to the user experience. Thank you for your efforts all these years.
**Linus**: Hey, I don't really have any particular project I would want to champion, largely because we all have so different requirements on the desktop. There's just no single thing that stands out as being hugely more important than others to me.
What I do wish particularly desktop developers cared about is "consistency of experience". And by that I don't mean some kind of enforced visual consistency between different applications to make things "look coherent". No, I'm just talking about the pain and uncertainty users go through with upgrades, and understanding that while your project may be the most important project to *you* (because it's what you do), to your users, your project is likely just a fairly small and irrelevant part of their experience, and it's not very central at all, and they've learnt the quirks about that thing they don't even care about, and you really shouldn't break their expectations. Because it turns out that that is how you really make people hate their desktop.
This is not at all Linux-specific, of course - just look at the less than enthusiastic reception that other operating system redesigns have received. But I really wish that we hadn't had *both* of the major Linux desktop environments have to learn this (well, I hope they learnt) the hard way, and both of them ending up blaming their users rather than themselves.
**"anykernel"-style portable drivers?**
by staalmannen
> What do you think about the "anykernel" concept (invented by another Finn btw) used in NetBSD? Basically, they have modularized the code so that a driver can be built either in a monolithic kernel or for user space without source code changes ( rumpkernel.org ). The drivers are highly portable and used in Genode os (L4 type kernels), minix etc... Would this be possible or desirable for Linux? Apparently there is one attempt called "libos"...
**Linus**: So I have bad experiences with "portable" drivers. Writing drivers to some common environment tends to force some ridiculously nasty impedance matching abstractions that just get in the way and make things really hard to read and modify. It gets particularly nasty when everybody ends up having complicated - and differently so - driver subsystems to handle a lot of commonalities for a certain class of drivers (say a network driver, or a USB driver), and the different operating systems really have very different approaches and locking rules etc.
I haven't seen anykernel drivers, but from past experience my reaction to "portable device drivers" is to run away, screaming like little girl. As they say in Swedish "Bränt barn luktar illa".
**Processor Architecture**
by swv3752
> Several years ago, you were employed by Transmeta designing the Crusoe processor. I understand you are quite knowledgeable about cpu architecture. What are your thoughts on the Current Intel and AMD x86 CPUs particularly in comparison with ARM and IBM's Power8 CPUs? Where do you see the advantages of each one?
**Linus**: I'm no CPU architect, I just play one on TV.
But yes, I've been close to the CPU both as part of my kernel work, and as part of a processor company, and working at that level for a long time just means that you end up having fairly strong opinions. One of the things that my experiences at Transmeta convinced me of, for example, was that there's definitely very much a limit to what software should care about. I loved working at Transmeta, I loved the whole startup company environment, I loved working with really smart people, but in the end I ended up absolutely *not* loving to work with overly simple hardware (I also didn't love the whole IPO process, and what that did to the company culture, but that's a different thing).
Because there's only so much that software can do to compensate.
Something similar happened with my kernel work on the alpha architecture, which also started out as being an overly simplified implementation in the name of being small and supposedly running really fast. While I really started out liking the alpha architecture for being so clean, I ended up detesting how fragile the architecture implementations were (and by the time that got fixed in the 21264, I had given up on alpha).
So I've come to absolutely detest CPU's that need a lot of compiler smarts or special tuning to go fast. Life is too short to waste on in-order CPU's, or on hardware designers who think software should take care of the pieces that they find to be too complicated to handle themselves, and as a result just left undone. "Weak memory ordering" is just another example.
Thankfully, most of the industry these days seems to agree. Yes, there are still in-order cores, but nobody tries to make excuses for them any more: they are for the truly cheap and low-end market.
I tend to really like the modern Intel cores in particular, which tend to take that "let's not be stupid" really to heart. With the kernel being so threaded, I end up caring a lot about things like memory ordering etc, and the Intel big-core CPU's tend to be in a class of their own there. As a software person who cares about performance and looks at instruction profiles etc, it's just so *nice* to see that the CPU doesn't have some crazy glass jaw where you have to be very careful.
**GPU kernels**
by maraist
> Is there any inspiration that a GPU-based kernel/scheduler has for you? How might Linux be improved to better take advantage of GPU-type batch execution models, given that you worked at Transmeta on JIT-compiled, host-targeted runtimes? GPUs' 1,000-thread schedulers seem like the next great paradigm for the exact type of machines that Linux does best on.
**Linus**: I don't think we'll see the kernel ever treat GPU threads the way we treat CPU threads. Not with the current model of GPU's (and that model doesn't really seem to be changing all that much any more).
Yes, GPU's are getting much better, and now generally have virtual memory and the ability to preempt execution, and you could run an OS on them. But the scheduling latencies are pretty high, and the threads are not really "independent" (ie they tend to share a lot of state - like the virtual address space and a large shared register set), so GPU "threads" don't tend to work like CPU threads. You'd schedule them all-or-nothing, so if you were to switch processes, you'd treat the GPU as one entity where you switch all the threads at once.
So it really wouldn't look like a thousand threads to the kernel. The GPU would still be scheduled as one single entity (or maybe a couple of entities depending on how the GPU is partitioned). The fact that that single entity works by doing a lot of things in massive parallelism is kind of immaterial for the kernel that doesn't end up seeing that parallelism as separate threads.
**alleged danger of Artificial Intelligence**
by peter303
> Some computer experts like Marvin Minsky, Larry Page, Ray Kuzweil think A.I. will be a great gift to Mankind. Others like Bill Joy and Elon Musk are fearful of potential danger. Where do you stand, Linus?
**Linus**: I just don't see the thing to be fearful of.
We'll get AI, and it will almost certainly be through something very much like recurrent neural networks. And the thing is, since that kind of AI will need training, it won't be "reliable" in the traditional computer sense. It's not the old rule-based prolog days, when people thought they'd *understand* what the actual decisions were in an AI.
And that all makes it very interesting, of course, but it also makes it hard to productize. Which will very much limit where you'll actually find those neural networks, and what kinds of network sizes and inputs and outputs they'll have.
So I'd expect just more of (and much fancier) rather targeted AI, rather than anything human-like at all. Language recognition, pattern recognition, things like that. I just don't see the situation where you suddenly have some existential crisis because your dishwasher is starting to discuss Sartre with you.
The whole "Singularity" kind of event? Yeah, it's science fiction, and not very good SciFi at that, in my opinion. Unending exponential growth? What drugs are those people on? I mean, really..
It's like Moore's law - yeah, it's very impressive when something can (almost) be plotted on an exponential curve for a long time. Very impressive indeed when it's over many decades. But it's _still_ just the beginning of the "S curve". Anybody who thinks any different is just deluding themselves. There are no unending exponentials.
**Is the kernel basically a finished project?**
by NaCh0
> Aside from adding drivers and refactoring algorithms when performance limits are discovered, is there anything left for the kernel? Maybe it's a failure of tech journalism but we never hear about the next big thing in kernel land anymore.
**Linus**: I don't think there's much of a "next big thing" in the kernel.
I wouldn't say that there is nothing but drivers (and architectures are kind of "CPU drivers") and improving scalability left, because I'm constantly amazed by how many new things people figure out are still good ideas. But they tend to still be pretty incremental improvements. An OS kernel doesn't look *that* radically different from what it was 40 years ago, and that's fine. I think radical new ideas are often overrated, and the thing that really matters in the end is that plodding detail work. That's how technology evolves.
And judging by how our kernel releases are going, there's no end in sight for that "plodding detail work". And it's still as interesting as it ever was.
--------------------------------------------------------------------------------
via: http://linux.slashdot.org/story/15/06/30/0058243/interviews-linus-torvalds-answers-your-question
作者:[samzenpus][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:samzenpus@slashdot.org
[1]:http://interviews.slashdot.org/story/15/06/24/1718247/interview-ask-linus-torvalds-a-question
[2]:http://meta.slashdot.org/story/12/10/11/0030249/linus-torvalds-answers-your-questions
[3]:https://lwn.net/Articles/604695/
Which Open Source Linux Distributions Would Presidential Hopefuls Run?
================================================================================
![Republican presidential candidate Donald Trump](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/08/donaldtrump.jpg)
Republican presidential candidate Donald Trump
If people running for president used Linux or another open source operating system, which distribution would it be? That's a key question that the rest of the press—distracted by issues of questionable relevance such as "policy platforms" and whether it's appropriate to add an exclamation point to one's Christian name—has been ignoring. But the ignorance ends here: Read on for this sometime-journalist's take on presidential elections and Linux distributions.
If this sounds like a familiar topic to those of you who have been reading my drivel for years (is anyone, other than my dear editor, unfortunate enough to have actually done that?), it's because I wrote a [similar post][1] during the last presidential election cycle. Some kind readers took that article more seriously than I intended, so I'll take a moment to point out that I don't actually believe that open source software and political campaigns have anything meaningful to do with one another. I am just trying to amuse myself at the start of a new week.
But you can make of this what you will. You're the reader, after all.
### Linux Distributions of Choice: Republicans ###
Today, I'll cover just the Republicans. And I won't even discuss all of them, since the candidates hoping for the Republican party's nomination are too numerous to cover fully here in one post. But for starters:
If **Jeb (Jeb!?) Bush** ran Linux, it would be [Debian][2]. It's a relatively boring distribution designed for serious, grown-up hackers—the kind who see it as their mission to be the adults in the pack and clean up the messes that less-experienced open source fans create. Of course, this also makes Debian relatively unexciting, and its user base remains perennially small as a result.
**Scott Walker**, for his part, would be a [Damn Small Linux][3] (DSL) user. Requiring merely 50MB of disk space and 16MB of RAM to run, DSL can breathe new life into 20-year-old 486 computers—which is exactly what a cost-cutting guru like Walker would want. Of course, the user experience you get from DSL is damn primitive; the platform barely runs a browser. But at least you won't be wasting money on new computer hardware when the stuff you bought in 1993 can still serve you perfectly well.
How about **Chris Christie**? He'd obviously be clinging to [Relax-and-Recover Linux][4], which bills itself as a "setup-and-forget Linux bare metal disaster recovery solution." "Setup-and-forget" has basically been Christie's political strategy ever since that unfortunate incident on the George Washington Bridge stymied his political momentum. Disaster recovery may or may not bring back everything for Christie in the end, but at least he might succeed in recovering a confidential email or two that accidentally disappeared when his computer crashed.
As for **Carly Fiorina**, she'd no doubt be using software developed for "[The Machine][5]" operating system from [Hewlett-Packard][6] (HPQ), the company she led from 1999 to 2005. The Machine actually may run several different operating systems, which may or may not be based on Linux—details remain unclear—and its development began well after Fiorina's tenure at HP came to a conclusion. Still, her roots as a successful executive in the IT world form an important part of her profile today, meaning that her ties to HP have hardly been severed fully.
Last but not least—and you knew this was coming—there's **Donald Trump**. He'd most likely pay a team of elite hackers millions of dollars to custom-build an operating system just for him—even though he could obtain a perfectly good, ready-made operating system for free—to show off how much money he has to waste. He'd then brag about it being the best operating system ever made, though it would of course not be compliant with POSIX or anything else, because that would mean catering to the establishment. The platform would also be totally undocumented, since, if Trump explained how his operating system actually worked, he'd risk giving away all his secrets to the Islamic State—obviously.
Alternatively, if Trump had to go with a Linux platform already out there, [Ubuntu][7] seems like the most obvious choice. Like Trump, the Ubuntu developers have taken a we-do-what-we-want approach to building open source software by implementing their own, sometimes proprietary applications and interfaces. Free-software purists hate Ubuntu for that, but plenty of ordinary people like it a lot. Of course, whether playing purely by your own rules—in the realms of either software or politics—is sustainable in the long run remains to be seen.
### Stay Tuned ###
If you're wondering why I haven't yet mentioned the Democratic candidates, worry not. I am not leaving them out of today's writing because I like them any more or less than the Republicans. (Personally, I think the peculiar American practice of having only two viable political parties—which virtually no other functioning democracy does—is ridiculous, and I am suspicious of all of these candidates as a result.)
On the contrary, there's plenty to say about the Linux distributions the Democrats might use, too. And I will, in a future post. Stay tuned.
--------------------------------------------------------------------------------
via: http://thevarguy.com/open-source-application-software-companies/081715/which-open-source-linux-distributions-would-presidential-
作者:[Christopher Tozzi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://thevarguy.com/author/christopher-tozzi
[1]:http://thevarguy.com/open-source-application-software-companies/aligning-linux-distributions-presidential-hopefuls
[2]:http://debian.org/
[3]:http://www.damnsmalllinux.org/
[4]:http://relax-and-recover.org/
[5]:http://thevarguy.com/open-source-application-software-companies/061614/hps-machine-open-source-os-truly-revolutionary
[6]:http://hp.com/
[7]:http://ubuntu.com/
While the event had a certain amount of drama surrounding it, the [announcement][1] of the end for the [Debian Live project][2] seems likely to have less of an impact than it first appeared. The loss of the lead developer will certainly be felt—and the treatment he and the project received seems rather baffling—but the project looks like it will continue in some form. So Debian will still have tools to create live CDs and other media going forward, but what appears to be a long-simmering dispute between project founder and leader Daniel Baumann and the Debian CD and installer teams has been "resolved", albeit in an unfortunate fashion.
The November 9 announcement from Baumann was titled "An abrupt End to Debian Live". In that message, he pointed to a number of different events over the nearly ten years since the [project was founded][3] that indicated to him that his efforts on Debian Live were not being valued, at least by some. The final straw, it seems, was an "intent to package" (ITP) bug [filed][4] by Iain R. Learmonth that impinged on the namespace used by Debian Live.
Given that one of the main Debian Live packages is called "live-build", the new package's name, "live-build-ng", was fairly confrontational in and of itself. Live-build-ng is meant to be a wrapper around the [vmdebootstrap][5] tool for creating live media (CDs and USB sticks), which is precisely the role Debian Live is filling. But when Baumann [asked][6] Learmonth to choose a different name for his package, he got an "interesting" [reply][7]:
```
It is worth noting that live-build is not a Debian project, it is an external project that claims to be an official Debian project. This is something that needs to be fixed.
There is no namespace issue, we are building on the existing live-config and live-boot packages that are maintained and bringing these into Debian as native projects. If necessary, these will be forks, but I'm hoping that won't have to happen and that we can integrate these packages into Debian and continue development in a collaborative manner.
live-build has been deprecated by debian-cd, and live-build-ng is replacing it. In a purely Debian context at least, live-build is deprecated. live-build-ng is being developed in collaboration with debian-cd and D-I [Debian Installer].
```
Whether or not Debian Live is an "official" Debian project (or even what "official" means in this context) has been disputed in the thread. Beyond that, though, Neil Williams (who is the maintainer of vmdebootstrap) [provided some][8] explanation for the switch away from Debian Live:
```
vmdebootstrap is being extended explicitly to provide support for a replacement for live-build. This work is happening within the debian-cd team to be able to solve the existing problems with live-build. These problems include reliability issues, lack of multiple architecture support and lack of UEFI support. vmdebootstrap has all of these, we do use support from live-boot and live-config as these are out of the scope for vmdebootstrap.
```
Those seem like legitimate complaints, but ones that could have been fixed within the existing project. Instead, though, something of a stealth project was evidently undertaken to replace live-build. As Baumann [pointed out][9], nothing was posted to the debian-live mailing list about the plans. The ITP was the first notice that anyone from the Debian Live project got about the plans, so it all looks like a "secret plan"—something that doesn't sit well in a project like Debian.
As might be guessed, there were multiple postings that supported Baumann's request to rename "live-build-ng", followed by many that expressed dismay at his decision to stop working on Debian Live. But Learmonth and Williams were adamant that replacing live-build is needed. Learmonth did [rename][10] live-build-ng to a perhaps less confrontational name: live-wrapper. He noted that his aim had been to add the new tool to the Debian Live project (and "bring the Debian Live project into Debian"), but things did not play out that way.
```
I apologise to everyone that has been upset by the ITP bug. The software is not yet ready for use as a full replacement for live-build, and it was filed to let people know that the work was ongoing and to collect feedback. This sort of worked, but the feedback wasn't the kind I was looking for.
```
The backlash could perhaps have been foreseen. Communication is a key aspect of free-software communities, so a plan to replace the guts of a project seems likely to be controversial—more so if it is kept under wraps. For his part, Baumann has certainly not been perfect—he delayed the "wheezy" release by [uploading an unsuitable syslinux package][11] and [dropped down][12] from a Debian Developer to a Debian Maintainer shortly thereafter—but that doesn't mean he deserves this kind of treatment. There are others involved in the project as well, of course, so it is not just Baumann who is affected.
One of those other people is Ben Armstrong, who has been something of a diplomat during the event and has tried to smooth the waters. He started with a [post][13] that celebrated the project and what Baumann and the team had accomplished over the years. As he noted, the [list of downstream projects][14] for Debian Live is quite impressive. In another post, he also [pointed out][15] that the project is not dead:
```
If the Debian CD team succeeds in their efforts and produces a replacement that is viable, reliable, well-tested, and a suitable candidate to replace live-build, this can only be good for Debian. If they are doing their job, they will not "[replace live-build with] an officially improved, unreliable, little-tested alternative". I've seen no evidence so far that they operate that way. And in the meantime, live-build remains in the archive -- there is no hurry to remove it, so long as it remains in good shape, and there is not yet an improved successor to replace it.
```
On November 24, Armstrong also [posted][16] an update on Debian Live (also published to [his blog][17]). It shows some good progress made in the two weeks since Baumann's exit; there are even signs of collaboration between the project and the live-wrapper developers. There is also a [to-do list][18], as well as the inevitable call for more help. That gives reason to believe that all of the drama surrounding the project was just a glitch—avoidable, perhaps, but not quite as dire as it might have seemed.
---------------------------------
via: https://lwn.net/Articles/665839/
作者Jake Edge
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]: https://lwn.net/Articles/666127/
[2]: http://live.debian.net/
[3]: https://www.debian.org/News/weekly/2006/08/
[4]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=804315
[5]: http://liw.fi/vmdebootstrap/
[6]: https://lwn.net/Articles/666173/
[7]: https://lwn.net/Articles/666176/
[8]: https://lwn.net/Articles/666181/
[9]: https://lwn.net/Articles/666208/
[10]: https://lwn.net/Articles/666321/
[11]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=699808
[12]: https://nm.debian.org/public/process/14450
[13]: https://lwn.net/Articles/666336/
[14]: http://live.debian.net/project/downstream/
[15]: https://lwn.net/Articles/666338/
[16]: https://lwn.net/Articles/666340/
[17]: http://syn.theti.ca/2015/11/24/debian-live-after-debian-live/
[18]: https://wiki.debian.org/DebianLive/TODO
vim-kakali translating
Confessions of a cross-platform developer
=============================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/business_clouds.png?itok=cucHuJnU)
[Andreia Gaita][1] is giving a talk at this year's OSCON, titled [Confessions of a cross-platform developer][2]. She's a long-time open source and [Mono][3] contributor, and develops primarily in C#/C++. Andreia works at GitHub, where she's focused on building the GitHub Extension manager for Visual Studio.
I caught up with Andreia ahead of her talk to ask about cross-platform development and what she's learned in her 16 years as a cross-platform developer.
![](https://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png)
**What languages have you found easiest and hardest to develop cross-platform code for?**
It's less about which languages are good and more about the libraries and tooling available for those languages. The compilers/interpreters/build systems available for languages determine how easy it is to do cross-platform work with them (or whether it's even possible), and the libraries available for UI and native system access determine how deep you can integrate with the OS. With that in mind, I found C# to be the best for cross-platform work. The language itself includes features that allow fast native calls and accurate memory mapping, which you really need if you want your code to talk to the OS and native libraries. When I need very specific OS integration, I switch to C or C++.
**What cross-platform toolkits/abstractions have you used?**
Most of my cross-platform work has been developing tools, libraries and bindings for other people to develop cross-platform applications with, mostly in Mono/C# and C/C++. I don't get to use a lot of abstractions at that level, beyond glib and friends. I mostly rely on Mono for any cross-platform app that includes a UI, and Unity3D for the occasional game development. I play with Electron every now and then.
**What has been your approach to build systems, and how does this vary by language or platform?**
I try to pick the build system that is most suited for the language(s) I'm using. That way, it'll (hopefully) give me fewer headaches. It needs to allow for platform and architecture selection, be smart about build artifact locations (for multiple parallel builds), and be decently configurable. Most of the time I have projects combining C/C++ and C# and I want to build all the different configurations at the same time from the same source tree (Debug, Release, Windows, OSX, Linux, Android, iOS, etc.), and that usually requires selecting and invoking different compilers with different flags per output build artifact. So the build system has to let me do all of this without getting (too much) in my way. I try out different build systems every now and then, just to see what's new, but in the end, I end up going back to makefiles and a combination of either shell and batch scripts or Perl scripts for driving them (because if I want users to build my things, I'd better pick a command line script language that is available everywhere).
**How do you balance the desire for native look and feel with the need for uniform user interfaces?**
Cross-platform UI is hard! I've implemented several cross-platform GUIs over the years, and it's the one thing for which I don't think there's an optimal solution. There's basically two options. You can pick a cross-platform GUI toolkit and do a UI that doesn't feel quite right in all the platforms you support, with a small codebase and low maintenance cost. Or you can choose to develop platform-specific UIs that will look and feel native and well integrated with a larger codebase and higher maintenance cost. The decision really depends on the type of app, how many features it has, how many resources you have, and how many platforms you're shipping to.
In the end, I think there's an increase in users' tolerance for "One UI To Rule Them All" with frameworks like Electron. I have a Chromium+C+C# framework side project that will one day hopefully allow me to build Electron-style apps in C#, giving me the best of both worlds.
**Has building/packaging dependencies been an issue for you?**
I'm very conservative about my use of dependencies, having been bitten so many times by breaking ABIs, clashing symbols, and missing packages. I decide which OS version(s) I'm targeting and pick the lowest common denominator release available of a dependency to minimize issues. That usually means having five different copies of Xcode and OSX Framework libraries, five different versions of Visual Studio installed side-by-side on the same machine, multiple clang and gcc versions, and a bunch of VMs running various other distros. If I'm unsure of the state of packages in the OS I'm targeting, I will sometimes link statically and sometimes submodule dependencies to make sure they're always available. And most of all, I avoid the bleeding edge unless I really, really need something there.
**Do you use continuous integration, code review, and related tools?**
All the time! It's the only way to keep sane. The first thing I do on a project is set up cross-platform build scripts to ensure everything is automatable as early as possible. When you're targeting multiple platforms, CI is essential. It's impossible for everyone to build all the different combinations of platforms on one machine, and as soon as you're not building all of them you're going to break something without being aware of it. In a shared multi-platform codebase, different people own different platforms and features, so the only way to guarantee quality is to have cross-team code reviews combined with CI and other analysis tools. It's no different than other software projects; there are just more points of failure.
**Do you rely on automated build testing, or do you tend to build on each platform and test locally?**
For tools and libraries that don't include UIs, I can usually get away with automated build testing. If there's a UI, then I need to do both—reliable, scriptable UI automation for existing GUI toolkits is rare to non-existent, so I would have to either invest in creating UI automation tools that work across all the platforms I want to support, or do it manually. If a project uses a custom UI toolkit (like, say, an OpenGL UI like Unity3D does), then it's fairly easy to develop scriptable automation tools and automate most of that stuff. Still, there's nothing like the human ability to break things with a couple of clicks!
**If you are developing cross-platform, do you support cross-editor build systems so that you can use Visual Studio on Windows, Qt Creator on Linux, and Xcode on Mac? Or do you tend toward supporting one platform such as Eclipse on all platforms?**
I favor cross-editor build systems. I prefer generating project files for different IDEs (preferably in a way that makes it easier to add more IDEs), with build scripts that can drive builds from the IDEs for the platform they're on. Editors are the most important tool for a developer. It takes time and effort to learn them, and they're not interchangeable. I have my favorite editors and tools, and everyone else should be able to use their favorite tool, too.
**What is your preferred editor/development environment/IDE for cross-platform development?**
The cross-platform developer is cursed with having to pick the lowest common denominator editor that works across the most platforms. I love Visual Studio, but I can't rely on it for anything except Windows work (and you really don't want to make Windows your primary cross-compiling platform), so I can't make it my primary IDE. Even if I could, an essential skill of cross-platform development is to know and use as many platforms as possible. That means really knowing them—using the platform's editors and libraries, getting to know the OS and its assumptions, behaviors, and limitations, etc. To do that and keep my sanity (and my shortcut muscle memory), I have to rely on cross-platform editors. So, I use Emacs and Sublime.
**What are some of your favorite past and current cross-platform projects?**
Mono is my all-time favorite, hands down, and most of the others revolve around it in some way. Gluezilla was a Mozilla binding I did years ago to allow C# apps to embed web browser views, and that one was a doozy. At one point I had a Winforms app, built on Linux, running on Windows with an embedded GTK view in it that was running a Mozilla browser view. The CppSharp project (formerly Cxxi, formerly CppInterop) is a project I started to generate C# bindings for C++ libraries so that you could call, create instances of, and subclass C++ classes from C#. It was done in such a way that it would detect at runtime what platform you'd be running on and what compiler was used to create the native library and generate the correct C# bindings for it. That was fun!
**Where do you see cross-platform development heading in the future?**
The way we build native applications is already changing, and I feel like the visual differences between the various desktop operating systems are going to become even more blurred so that it will become easier to build cross-platform apps that integrate reasonably well without being fully native. Unfortunately, that might mean applications will be worse in terms of accessibility and less innovative when it comes to using the OS to its full potential. Cross-platform development of tools, libraries, and runtimes is something that we know how to do well, but there's still a lot of work to do with cross-platform application development.
--------------------------------------------------------------------------------
via: https://opensource.com/business/16/5/oscon-interview-andreia-gaita
作者:[Marcus D. Hanwell ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mhanwell
[1]: https://twitter.com/sh4na
[2]: http://conferences.oreilly.com/oscon/open-source-us/public/schedule/detail/48702
[3]: http://www.mono-project.com/
The history of Android
================================================================================
![Another Play Store redesign! This one is very close to the current design and uses cards that make layout changes a piece of cake.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/get-em-Kirill.jpg)
Another Play Store redesign! This one is very close to the current design and uses cards that make layout changes a piece of cake.
Photo by Ron Amadeo
### Out-of-cycle updates—who needs a new OS? ###
In between Android 4.2 and 4.3, Google went on an out-of-cycle update tear and showed just how much Android could be improved without having to fire up the arduous OTA update process. Thanks to the [Google Play Store and Play Services][1], all of these updates could be delivered without updating any core system components.
In April 2013, Google released a major redesign to the Google Play Store. Like most redesigns from here on out, the new Play Store fully adopted the Google Now aesthetic, with white cards on a gray background. The action bar changed color based on the current content section, and since the first screen featured content from all sections of the store, the action bar was a neutral gray. Buttons to navigate to the content sections were now given top billing, and below that was usually a promotional block or rows of recommended apps.
![The individual content sections are beautifully color-coded.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/content-rainbow.jpg)
The individual content sections are beautifully color-coded.
Photo by Ron Amadeo
The new Play Store showed off the real power of Google's card design language, which enabled a fully responsive layout across all screen sizes. One large card could be stuck next to several little cards, larger-screened devices could show more cards, and rather than stretch things in horizontal mode, more cards could just be added to a row. The Play Store content editors were free to play with the layout of the cards, too; a big release that needed to be highlighted could get a larger card. This design would eventually trickle down to the other Google Play content apps, finally resulting in a unified design.
![Hangouts replaced Google Talk and is now continually developed by the Google+ team.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/talkvhangouts2.jpg)
Hangouts replaced Google Talk and is now continually developed by the Google+ team.
Photo by Ron Amadeo
Google I/O, the company's annual developer conference, was usually where a new Android version was announced. But at the 2013 edition, Google made just as many improvements without having to update the OS.
One of the biggest things announced at the show was an update to Google Talk, Google's instant messaging platform. For a long time, Google shipped four text communication apps for Android: Google Talk, Google+ Messenger, Messaging (the SMS app), and Google Voice. Having four apps that accomplished the same task—sending a text message to someone—was very confusing for users. At I/O, Google killed Google Talk and started their messaging product over from scratch, creating [Google Hangouts][2]. While initially it only replaced Google Talk, the plan for Hangouts was to unify all of Google's various messaging apps into a single interface.
The layout of the Hangouts UI really wasn't drastically different from Google Talk. The main page contained your open conversations, and tapping on one opened a chat page. The design was updated, the chat page now used a card-style display for each paragraph, and the chat list was now a "drawer"-style interface, meaning you could open it with a horizontal swipe. Hangouts had read receipts and a typing status indicator, and group chat was now a primary feature.
Google+ was the center of Hangouts now, so much so that the full name of the product was actually "Google+ Hangouts." Hangouts was completely integrated with the Google+ desktop site so that video and chats could be made from one to the other. Identity and avatars were pulled from Google+, and tapping on an avatar would open that person's Google+ profile. And much like the change from Browser to Google Chrome, core Android functionality was passed off to a separate team—the Google+ team—as opposed to being a side product of the very busy Android engineers. With the Google+ takeover, Android's main IM client now became a continually developed application. It was placed into the Play Store and received fairly regular updates.
![The new navigation drawer interface.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/navigation_drawer_overview1.png)
The new navigation drawer interface.
Photo by [developer.android.com][3]
Google also introduced a new design element for the action bar: the navigation drawer. This drawer was shown as a set of three lines next to the app icon in the top-right corner. By tapping on it or dragging from the edge of the screen to the right, a side-mounted menu would appear. As the name implies, this was used to navigate around the app, and it would show several top-level locations within the app. This allowed the first screen to show content, and it gave users a consistent, easy-to-access place for navigation elements. The nav drawer was basically a super-sized version of the normal menu, scrollable and docked to the right side.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/23/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://arstechnica.com/gadgets/2013/09/balky-carriers-and-slow-oems-step-aside-google-is-defragging-android/
[2]:http://arstechnica.com/information-technology/2013/05/hands-on-with-hangouts-googles-new-text-and-video-chat-architecture/
[3]:https://developer.android.com/design/patterns/navigation-drawer.html
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo
The history of Android
================================================================================
![The slick new Google Play Music app, which changed from Tron to a perfect match for the Play Store.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/Goooogleplaymusic.jpg)
The slick new Google Play Music app, which changed from Tron to a perfect match for the Play Store.
Photo by Ron Amadeo
Another app update pushed out at I/O was a new Google Music app. The app was completely redesigned, finally doing away with the blue-on-blue design introduced in Honeycomb. Play Music's design was unified with the new Play Store released a few months earlier, with a responsive white card layout. Music was also one of the first major apps to take advantage of the new navigation drawer style. Along with the new app, Google launched Google Play Music All Access, an all-you-can-eat subscription service for $10 a month. Google Music now had a subscription plan, à la carte purchasing, and a cloud music locker. This version also introduced "Instant Mix," a mode where Google would cloud-compute a playlist of similar songs.
![A game showing support for Google Play Games. This lineup shows the Play Store game feature descriptions, the permissions box triggered by signing into the game, a Play Games notification, and the achievements screen.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/gooooogleplaygames.jpg)
A game showing support for Google Play Games. This lineup shows the Play Store game feature descriptions, the permissions box triggered by signing into the game, a Play Games notification, and the achievements screen.
Photo by Ron Amadeo
Google also introduced "Google Play Games," a back-end service that developers could plug into their games. The service was basically an Android version of Xbox Live or Apple's Game Center. Developers could build Play Games support into their game, which would easily let them integrate achievements, leaderboards, multiplayer, matchmaking, user accounts, and cloud saves by using Google's back-end services.
Play Games was the start of Google's big push into gaming. Just like standalone GPS units, flip phones, and MP3 players, smartphone makers were hoping standalone gaming devices would be turned into nothing more than a smartphone feature bullet point. Why buy a Nintendo DS or PS Vita when you had a smartphone with you? An easy-to-use multiplayer service would be a big part of this, and we've still yet to see the final consequence of this move. Today, Google and Apple are both rumored to be planning living room gaming devices.
![Google Keep, Google's first note taking service since Google Notebook.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/goooglekeep.jpg)
Google Keep, Google's first note taking service since Google Notebook.
Photo by Ron Amadeo
It was clear some products were developed in time for presentation at Google I/O, [but the three-and-a-half hour keynote][1] was already so massive that some things were cut from the announcements. Once the smoke cleared three days after Google I/O, Google introduced Google Keep, a note taking app for Android and the Web. Keep was a fairly straightforward affair, applying the responsive Google Now-style design to a note taking app. Users could change the size of the cards from a multi-column layout to a single column view. Notes could consist of plain text, checklists, voice notes with automatic transcription, or pictures. Note cards could be dragged around and rearranged on the main screen, and you could even assign a color to a note.
![Gmail 4.5, which switched to the new navigation drawer design and merged the action bars, thanks to some clever button elimination.](http://cdn.arstechnica.net/wp-content/uploads/2014/05/gmail.png)
Gmail 4.5, which switched to the new navigation drawer design and merged the action bars, thanks to some clever button elimination.
Photo by Ron Amadeo
After I/O, not much was safe from Google's out-of-cycle updating. In June 2013, Google released a redesigned version of Gmail. The headline feature of the new design was the navigation drawer interface that had been introduced a month earlier at Google I/O. The most eye-catching change was the addition of Google+ profile pictures instead of checkboxes. While the checkboxes were visibly removed, they were still there: just tap on a picture.
![The new Google Maps, which switched to an all-white Google Now-style theme.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/newmaps11.png)
The new Google Maps, which switched to an all-white Google Now-style theme.
Photo by Ron Amadeo
One month later, Google released a completely overhauled version of Google Maps to the Play Store. It was the first ground-up redesign of Google Maps since Ice Cream Sandwich. The new version fully adopted the Google Now white card aesthetic, and it greatly reduced the amount of stuff on the screen. The new Google Maps seemed to have a design mandate to always show a map on the screen somewhere, as you'll be hard-pressed to find something other than the settings that fully covers the map.
This version of Google Maps seemed to live in its own little design world. The white search bar "floated" above the map, with maps showing on the sides and top of the bar. That didn't really make it seem like the traditional Action Bar design. The navigation drawer, in the top left on every other app, was in the bottom left. There was no up button, app icon, or overflow button on the main screen.
![The new Google Maps cut a lot of fat and displayed more information on a single screen.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/newmaps21.png)
The new Google Maps cut a lot of fat and displayed more information on a single screen.
Photo by Ron Amadeo
The left picture shows what popped up when you tapped on the search bar (along with the keyboard, which had been closed). In the past, Google would show an empty page below a blank search bar, but in Maps, Google used that space to link to the new "Local" page. The "blank" search results displayed links to common, browsable results like restaurant listings, gas stations, and attractions. At the bottom of the results page was a list of nearby results from your search history and an option to manually cache parts of the map.
The right set of images shows a location page. The map shown at the top of the Maps 7 screenshot isn't a thumbnail; that's the full map view. In the new version of Google Maps, a location was displayed as a card that "floats" on top of the main map, and the map was repositioned to center on the location. Scrolling up would move the card up and cover the map, and scrolling down would show the whole map with the result reduced to a small strip at the bottom. If the location was part of a list of search results, swiping left and right would move through the results.
The location pages were redesigned to be much more useful at a glance. On the first page, the new version added critical information, like the location on a map, the review score, and the number of reviews. Since this is a phone, and the software will be dialing for you, the phone number was deemed pointless and was removed. The old version showed the distance to the location in miles, while the new version of Google Maps showed the distance in terms of time, based on traffic and preferred mode of transportation—a much more useful metric. The new version also put a share button front and center, which made coordination over IM or text messaging a lot easier.
### Android 4.3, Jelly Bean—getting wearable support out early ###
Android 4.3 would have been an incredible update if Google had done the traditional thing and not released updates between 4.2 and 4.3 through the Play Store. If the new Play Store, Gmail, Maps, Books, Music, Hangouts, Keep, and Play Games had been bundled into a big brick as a new version of Android, it would have been hailed as the biggest release ever. Google didn't need to hold back features anymore, though. With very little left that required an OS update, at the end of July 2013, Google released the seemingly insignificant update called "Android 4.3."
![Android Wear plugging into Android 4.3's Notification access screen.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-28-12.231.jpg)
Android Wear plugging into Android 4.3's Notification access screen.
Photo by Ron Amadeo
Google made no secret of 4.3's low importance, calling the newest release "Jelly Bean" (the third one in a row). Android 4.3's feature list read like a laundry list of things Google couldn't update from the Play Store or through Google Play Services, mostly consisting of low-level framework changes for developers.
Many of the additions seemed to fit a singular purpose, though—Android 4.3 was Google's trojan horse for wearable computing support. 4.3 added support for Bluetooth Low Energy, a way to wirelessly connect Android to another device and pass data back and forth while using a very small amount of power—an integral feature to a wearable device. Android 4.3 also added a "Notification Access" API, which allowed apps to completely replicate and control the notification panel. Apps could display notification text and pictures and interact with the notification the same way users do—namely pressing action buttons and dismissing notifications. Doing this from an on-board app when you have the notification panel is useless, but on a device that is separate from your phone, replicating the information in the notification panel becomes much more useful. One of the few apps that plugged into this was "Android Wear Preview," which used the notification API to power most of the interface for Android Wear.
The "4.3 is for wearables" theory explained the relatively low number of features in 4.3: it was pushed out the door to give OEMs time to update devices in time for the launch of [Android Wear][2]. The plan seems to have worked. Android Wear requires Android 4.3 and up, which has been out for so long now that most major flagships have updated.
Android 4.3 was not all that exciting, but Android releases from here on out didn't need to be all that exciting. Everything became so modularized that Google could push updates out as soon as they were done through Google Play, rather than drop everything in one huge brick as an OS update.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/24/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://live.arstechnica.com/liveblog-google-io-2013-keynote/
[2]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo


@ -1,70 +0,0 @@
The history of Android
================================================================================
![The LG-made Nexus 5, the launch device for KitKat.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/nexus56.jpg)
The LG-made Nexus 5, the launch device for KitKat.
Android 4.4, KitKat—more polish; less memory usage
Google got really cute with the launch of Android 4.4. The company [teamed up with Nestlé][1] to name the OS "KitKat," and it launched on Halloween, October 31, 2013. Nestlé produced limited-edition Android-shaped KitKat bars, and KitKat packaging in stores promoted the new OS while offering a chance to win a Nexus 7.
KitKat launched with a new Nexus device, the Nexus 5. The new flagship had the biggest display yet: a five-inch, 1920x1080 LCD. Despite the bigger screen size, LG—again the manufacturer for the device—was able to fit the Nexus 5 into the same dimensions as a Galaxy Nexus or Nexus 4.
The Nexus 5 was specced comparably to the highest-end phones at the time, with a 2.3GHz Snapdragon 800 processor and 2GB of RAM. The phone was again sold unlocked on the Play Store, but while most phones with specs like this would go for $600-$700, Google sold the Nexus 5 for only $350.
One of the most important improvements in KitKat was one you couldn't see: significantly lower memory usage. For KitKat, Google started a concerted effort to lower memory usage across the OS and bundled apps, called "Project Svelte." After tons of optimization work and a "low memory" mode that disabled expensive graphical effects, Android could now run on as little as 340MB of RAM. Lower memory requirements were a big deal, because devices in the developing world—the biggest growth markets for smartphones—often ran on only 512MB of RAM. Ice Cream Sandwich's more advanced UI significantly raised the system requirements of Android devices, which left many low-end devices—even newly released low-end devices—stuck on Gingerbread. The lower system requirements of KitKat were meant to bring these cheap devices back into the fold. With KitKat, Google hoped to finally kill Gingerbread (which, at the time of writing, was around 20 percent of the market). Just in case the lower system requirements weren't enough, there were even reports that Google would [no longer license][2] the Google apps to Gingerbread devices.
Besides bringing low-end phones to a modern version of the OS, Project Svelte's lower memory requirements were to be a boon to wearable computers, too. Google Glass [announced][3] it was also switching to the slimmer OS, and [Android Wear][4] ran on KitKat, too. The lower memory requirements in Android 4.4 and the notification API and Bluetooth LE support in 4.3 came together nicely to support wearable computing.
KitKat also featured a lot of polish to the core OS interfaces that couldn't be updated via the Play Store. The System UI, Dialer, Clock, and Settings all saw updates.
![KitKat's transparent bars on the Google Now Launcher.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/1homescreenz.png)
KitKat's transparent bars on the Google Now Launcher.
Photo by Ron Amadeo
KitKat not only got rid of the unpopular lines to the left and right sides of the lock screen—it completely disabled lock screen widgets by default! Google obviously felt multiple lock screens and multiple home screens were a little too complicated for new users, so lock screen widgets now needed to be enabled in the settings. The lopsided time here and in the clock app was switched to a symmetrical weight, which looked a lot nicer.
In KitKat, apps had the ability to make the system and status bars transparent, which significantly changed the look of the OS. The bars now blended into the wallpaper and any other app that chose to enable transparent bars. The bars could also be completely hidden by any app via a new feature called "immersive" mode.
KitKat was the final nail in the "Tron" coffin, removing almost all traces of blue from the operating system. The status bar icons were changed from blue to a neutral white. The status and system bars on the home screen weren't completely transparent; a dark gradient was added to the top and bottom of the screen so that the white icons would still be visible on a light background.
![Tweaks to Google Now and the folders.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/nowfolders.png)
Tweaks to Google Now and the folders.
Photo by Ron Amadeo
The home screen that shipped with KitKat on the Nexus 5 was actually exclusive to the Nexus 5 for a few months, but it can now be installed on any Nexus device. The new home screen was called the "Google Now Launcher," and it was actually [the Google Search app][5]. Yes, Google Search grew from a simple search box to an entire home screen, and in KitKat, it drew the wallpaper, icons, app drawer, widgets, home screen settings, Google Now, and, of course, the search box. Thanks to Search now running the entire home screen, any time the home screen was open and the screen was on, voice commands could be activated by saying "OK Google." This was pointed out to the user with introductory "Say 'OK Google'" text in the search bar, which would fade away after a few uses.
Google Now was more integrated, too. Besides the usual swipe up from the system bar, Google Now was also the leftmost home screen. The new version brought some design tweaks as well. The Google logo was moved into the search bar, and the whole top area was compacted. A few card designs were cleaned up, and a new set of buttons at the bottom led to reminders, customization options, and an overflow button with settings, feedback, and help. Since Google Now was part of the home screen, it got transparent system and status bars, too.
Transparency and “brightening up" certain parts of the OS were design themes in KitKat. Black was removed in the status and system bars by switching to transparent, and the black background of the folders was switched to white.
![A screenshot showing the new, cleaner app screen layout, and a composite image of the app lineup.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/apps.png)
A screenshot showing the new, cleaner app screen layout, and a composite image of the app lineup.
Photo by Ron Amadeo
The KitKat icon lineup changed significantly from 4.3. To be more dramatic, it was a bloodbath, with Google removing seven icons compared with the 4.3 loadout. Google Hangouts could handle SMS now, so the Messaging app was removed. Hangouts also took over Google+ Messenger duties, so that app shortcut was cut. Google Currents was removed as a default app, as it would soon be killed—along with Google Play Magazines—in favor of Google Play Newsstand. Google Maps was beaten back into a single icon, which meant the Local and Navigation shortcuts were removed. The impossible-to-understand Movie Studio was cut, too—Google must have realized no one wants to edit movies on a phone. Thanks to the home screen "OK Google" hotword detection, the Voice Search icon was rendered redundant and removed. Depressingly, the long-abandoned News & Weather app remained.
There was a new app called "Photos"—really the Google+ app—which took over picture management duties. On the Nexus 5, the Gallery and Google+ Photos were pretty similar, but in newer builds of KitKat present on Google Play Edition devices, the Gallery was completely replaced by Google+ Photos. Play Games was an interface for Google's back-end multiplayer service—a Googly version of Xbox Live or Apple's Game Center. Google Drive, which existed for years as a Play Store app, was finally made a default app. Google bought Quickoffice back in June 2012, and now finally deemed the app acceptable for inclusion by default. While Drive opened Google Documents, Quickoffice opened Microsoft Office documents. If you're keeping track, that was two document editing apps and two photo editing apps included on most KitKat loadouts.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/25/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://arstechnica.com/gadgets/2013/09/official-the-next-edition-of-android-is-kitkat-version-4-4/
[2]:http://www.androidpolice.com/2014/02/10/rumor-google-to-begin-forcing-oems-to-certify-android-devices-with-a-recent-os-version-if-they-want-google-apps/
[3]:http://www.androidpolice.com/2014/03/01/glass-xe14-delayed-until-its-ready-promises-big-changes-and-a-move-to-kitkat/
[4]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
[5]:http://arstechnica.com/gadgets/2013/11/google-just-pulled-a-facebook-home-kitkats-primary-interface-is-google-search/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo


@ -1,87 +0,0 @@
The history of Android
================================================================================
![The new "add to home screen" interface was definitely inspired by Honeycomb.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/homesetupthrowback.png)
The new "add to home screen" interface was definitely inspired by Honeycomb.
Photo by Ron Amadeo
KitKat added a nice throwback to Honeycomb with the home screen configuration screen. On the massive 10-inch screen of a Honeycomb tablet (right picture, above), long pressing on the home screen background would present you with a zoomed-out view of all your home screens. Widgets could be dragged from the bottom widget drawer into any home screen—it was very handy. When it came time to bring the Honeycomb interface to phones, from Android 4.0 all the way to 4.3, Google skipped this design and left it to the larger screened devices, presenting only a list of options after a long press (center picture).
For KitKat though, Google finally came up with a solution. After a long press, 4.4 presented a slightly zoomed-out view—you could see the current home screen and the screens to the left and right of it. Tapping on the "widgets" button would open a full-screen list of widget thumbnails, but after long-pressing on a widget, you were thrown back into the zoomed-out view and could scroll through home screen pages and place the icon where you wanted. By dragging an icon or widget all the way past the rightmost home page, you could create a new home page.
![Contacts and the Keyboard both removed any trace of blue.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/RIP33B5E5.png)
Contacts and the Keyboard both removed any trace of blue.
Photo by Ron Amadeo
KitKat was the end of the line for the Tron design. In most parts of the OS, any remaining blue highlights were removed in favor of gray. In the People app, blue was sucked out of the header and the letter separators in the contact list. The pictures swapped sides, and the bottom bar was changed to a light gray to match the top. The Keyboard, which injected the color blue into nearly every app, was changed to gray-on-gray-on-gray. That wasn't a bad thing. Apps should be allowed to have their own color scheme—forcing a potentially clashing color on them via the keyboard wasn't good design.
![The first three screenshots show KitKat's dialer, and the last one is 4.3.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/phone.png)
The first three screenshots show KitKat's dialer, and the last one is 4.3.
Photo by Ron Amadeo
Google completely revamped the dialer in KitKat, creating a wild new design that changed the way users thought about a phone. Actual numbers in the new dialer were hidden as much as possible—there wasn't even a dial pad on the main screen. The primary interface for making a phone call was now a search bar! If you wanted to call someone in your contacts, you just typed their name in; if you wanted to call a business, you just typed the business name in and the dialer would search through Google Maps' extensive database of phone numbers. It worked incredibly well and was something only Google could pull off.
If searching for numbers wasn't your thing, the app also intelligently displayed a listing for the previous phone call, your most-contacted people, and a link to all contacts. At the bottom were links to your call history, the now-old-school number pad, and the usual overflow button containing a settings page.
![Office stuff: Google Drive, which was now packed in, and the printing support.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/googledrive-and-printing.png)
Office stuff: Google Drive, which was now packed in, and the printing support.
Photo by Ron Amadeo
It was amazing it took this long, but in KitKat, Google Drive was finally included as a default app. Drive allowed users to create and edit Google Docs spreadsheets and documents, scan documents with the camera and upload them as PDFs, or view (but not edit) presentations. Drive, by this point, had a great, modern design with a slide-out navigation drawer and a Google Now-style card design.
For even more mobile office fun, KitKat included an OS-level printing framework. At the bottom of the settings was a "Printing" screen, and any printer OEM could make a plugin for it. Google Cloud Print was, of course, one of the first supporters. Once your printer was hooked up to Cloud Print, either natively or through a computer with Chrome installed, you could print to it over the Internet. Apps needed to support the printing framework, too. Pressing the little "i" button on Google Drive would show information about the document and give you the option to print it. Just like a desktop OS, a print dialog would pop up with settings like copies, paper size, and page selection.
![The "Photos" section of the Google+ app, which replaced the Gallery.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/that-is-one-dead-gallery.png)
The "Photos" section of the Google+ app, which replaced the Gallery.
Photo by Ron Amadeo
Google+ Photos and the Gallery initially shipped together on the Nexus 5, but in a later build of KitKat on Google Play devices, the Gallery was axed and Google+ completely took over photo duties. The new app changed the photo app from a light theme to a dark theme, and Google+ Photos brought a modern navigation drawer design.
Android had long included an instant upload feature, which would automatically back up all pictures to Google's cloud storage, first on Picasa and later on Google+. The big benefit of G+ Photos over the Gallery was that it could finally manage those cloud-stored photos. A little cloud icon in the lower right of a photo indicated backup status, and it would fill from right to left to indicate an upload in progress. G+ Photos brought its own photo editor along with support for a million other Google+ photo features, like highlights, auto awesome, and, of course, sharing to Google+.
![Tweaks to the Clock app, which added an alarms tab and changed the time input dialog.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/clocks.png)
Tweaks to the Clock app, which added an alarms tab and changed the time input dialog.
Photo by Ron Amadeo
Google changed the excellent time picker that was introduced in 4.2 to this strange clock interface, which was both slower and less precise than the old interface. First you were presented with a one-handed clock which you used to choose the hour, then that clock went away and another one-handed clock allowed you to choose the minute. Having to spin the minute hand or tap a spot on the clock face made it very difficult to pick times in non-five-minute increments. Unlike the old time picker, which required you to pick a time period, this just defaulted to AM (again making it possible to accidentally be off by 12 hours).
### Today—Android everywhere ###
![](http://cdn.arstechnica.net/wp-content/uploads/2014/05/android-everywhere2.png)
Photo by Google/Sony/Motorola/Ron Amadeo
What started out as a curious BlackBerry clone from a search engine company became the most popular OS in the world, backed by one of the biggest titans in the tech industry. Android has become Google's de facto consumer operating system, and it powers phones, tablets, Google Glass, Google TV, and more. [Parts of it][1] are even used in the Chromecast. In the future, Google will be bringing Android to watches and wearables with [Android Wear][2], and the [Open Automotive Alliance][3] will be bringing Android to cars. Google will be making a renewed commitment to the living room soon, too, with [Android TV][4]. The OS is such a core pillar of Google that events that are supposed to cover company-wide products, like Google I/O, end up becoming Android launch parties.
![Top row: the Google Play content stores. Bottom row: the Google Play Apps.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-30-03.08.jpg)
Top row: the Google Play content stores. Bottom row: the Google Play Apps.
Photo by Ron Amadeo
What was once the ugly duckling of the mobile industry has transformed so much it now [wins design awards][5] for its user interface. The design of things like Google Now have affected everything the company produces, with even the desktop sites like Search, Google+, YouTube, and Maps getting in on the card design unity. The design keeps evolving as well. Google's next plan is to [unify design][6] across not just Android, but all of its products. The goal is to take something like Gmail and make it feel the same, whether you're using it on Android, a desktop browser, or a watch.
Google outsourced so many pieces of Android to the Play Store that version releases are becoming less and less necessary. Google decided the best way to beat carrier and OEM update issues was to sidestep those roadblocks completely. From here on out, there isn't much left to include in an Android update other than core under-the-hood changes—but even many APIs have been pushed to Google Play Services. If you just look at version releases, it seems like Android development has slowed down from the peak 2.5-month release cycle. But the reality is that Google can now continually push out improvements through the Play Store in a never-ending, somewhat subtler stream of updates.
With 1.5 million activations per day, Android has nowhere to go but up. In the future, Android will be headed from phones and tablets to cars and watches, and the lower system requirements of KitKat will drive phones to even lower prices in the developing world. The bottom line? More and more people will get online. And for many of those people, Android will be not just their phone but their primary computing device. With Android leading the charge for Google in so many areas, the OS that started off as a tiny acquisition has become one of Google's most important products.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/26/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://blog.gtvhacker.com/2013/chromecast-exploiting-the-newest-device-by-google/
[2]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
[3]:http://arstechnica.com/information-technology/2014/01/open-automotive-alliance-aims-to-bring-android-inside-the-car/
[4]:http://arstechnica.com/gadgets/2014/04/documents-point-to-android-tv-googles-latest-bid-for-the-living-room/
[5]:http://userexperienceawards.com/uxa2012/
[6]:http://arstechnica.com/gadgets/2014/04/googles-next-design-challenge-unify-app-design-across-platforms/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo


@ -0,0 +1,75 @@
hkurj translating
A four year, action-packed experience with Wikipedia
=======================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/wikipedia_checkuser_lead.jpg?itok=4lVDjSSM)
I consider myself to be an Odia Wikimedian. I contribute [Odia][1] knowledge (the predominant language of the Indian state of [Odisha][2]) to many Wikimedia projects, like Wikipedia and Wikisource, by writing articles and correcting mistakes in articles. I also contribute to Hindi and English Wikipedia articles.
![](https://opensource.com/sites/default/files/resize/1st_day_at_odia_wikipedia_workshop_image_source_facebook-200x133.jpg)
My love for Wikimedia started while I was reading an article about the [Bangladesh Liberation war][3] on the English Wikipedia after my 10th board exam (like an annual exam for 10th grade students in America). By mistake I clicked on a link that took me to the Wikipedia article about India, and I started reading. Something was written in Odia on the lefthand side of the article, so I clicked on that, and reached the [ଭାରତ/Bhārat][4] article on the Odia Wikipedia. I was excited to find a Wikipedia article in my native language!
![](https://opensource.com/sites/default/files/resize/introducing_wikipedia_at_google_io_image_by_gdg_bhubaneswar-251x166.png)
A banner inviting readers to be part of the 2nd Bhubaneswar workshop on April 1, 2012 sparked my curiosity. I had never contributed to Wikipedia before, only used it for research, and I wasn't familiar with open source and the community contribution process. Plus, I was only 15 years old. I registered. There were many language enthusiasts at the workshop, all older than me. My father encouraged me to participate despite my fear; he has played an important role—he's not a Wikimedian, like me, but his encouragement has helped me change Odia Wikipedia and participate in community activities.
I believe that knowledge about Odia language and literature needs to improve—there are many misconceptions and knowledge gaps—so I help organize events and workshops for Odia Wikipedia. On my list of accomplishments at this point, I have:
* initiated three major edit-a-thons in Odia Wikipedia: Women's Day 2015, Women's Day 2016, and the [Nabakalebara edit-a-thon 2015][5]
* initiated a photograph contest to get more [Rathyatra][6] images from all over India
* represented Odia Wikipedia during two events by Google ([Google I/O extended][7] and Google Dev Fest)
* spoke at [Perception][8] 2015 and the first [Open Access India][9] meetup
![](https://opensource.com/sites/default/files/resize/bengali_wikipedia_10th_anniversary_cc-by-sa4.0_biswaroop_ganguly-251x166.jpg)
I was just an editor on Wikipedia projects until last year, in January 2015, when I attended [Bengali Wikipedia's 10th anniversary conference][10] and [Vishnu][11], the director of the [Center for Internet and Society][12] at the time, invited me to attend the [Train the Trainer][13] Program. I was inspired to start doing outreach for Odia Wikipedia, hosting meetups for [GLAM][14] activities, and training new Wikimedians. These experiences taught me how to work with a community of contributors.
[Ravi][15], the director of Wikimedia India at the time, also played an important role in my journey. He trusted me and made me a part of [Wiki Loves Food][16], a public photo competition on Wikimedia Commons, and the organizing committee of [Wikiconference India 2016][17]. During Wiki Loves Food 2015, my team helped add 10,000+ CC BY-SA images on Wikimedia Commons. Ravi further solidified my commitment by sharing a lot of information with me about the Wikimedia movement, and his own journey, during [Odia Wikipedia's 13th anniversary][18].
Less than a year later, in December 2015, I became a Program Associate at the Center for Internet and Society's [Access to Knowledge program][19] (CIS-A2K). One of my proud moments was at a workshop in Puri, India where we helped bring 20 new Wikimedian editors to the Odia Wikimedia community. Now, I mentor Wikimedians during an informal meetup called [WikiTungi][20] Puri. I am working with this group to make Odia Wikiquotes a live project. I am also dedicated to bridging the gender gap in Odia Wikipedia. [Eight female editors][21] are now helping to organize meetups and workshops, and participate in the [Women's History month edit-a-thon][22].
In my brief but action-packed journey over the four years since, I have also been involved in the [Wikipedia Education Program][23], the [newsletter team][24], and two global edit-a-thons: [Art and Feminism][25] and the [Menu Challenge][26]. I look forward to many more to come!
I would also like to thank [Sameer][27] and [Anna][28] (both previous members of the Wikipedia Education Program).
------------------------------------------------------------------------------
via: https://opensource.com/life/16/4/my-open-source-story-sailesh-patnaik
作者:[Sailesh Patnaik][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/saileshpat
[1]: https://en.wikipedia.org/wiki/Odia_language
[2]: https://en.wikipedia.org/wiki/Odisha
[3]: https://en.wikipedia.org/wiki/Bangladesh_Liberation_War
[4]: https://or.wikipedia.org/s/d2
[5]: https://or.wikipedia.org/s/toq
[6]: https://commons.wikimedia.org/wiki/Commons:The_Rathyatra_Challenge
[7]: http://cis-india.org/openness/blog-old/odia-wikipedia-meets-google-developer-group
[8]: http://perception.cetb.in/events/odia-wikipedia-event/
[9]: https://opencon2015kolkata.sched.org/speaker/sailesh.patnaik007
[10]: https://meta.wikimedia.org/wiki/Bengali_Wikipedia_10th_Anniversary_Celebration_Kolkata
[11]: https://www.facebook.com/vishnu.vardhan.50746?fref=ts
[12]: http://cis-india.org/
[13]: https://meta.wikimedia.org/wiki/CIS-A2K/Events/Train_the_Trainer_Program/2015
[14]: https://en.wikipedia.org/wiki/Wikipedia:GLAM
[15]: https://www.facebook.com/ravidreams?fref=ts
[16]: https://commons.wikimedia.org/wiki/Commons:Wiki_Loves_Food
[17]: https://meta.wikimedia.org/wiki/WikiConference_India_2016
[18]: https://or.wikipedia.org/s/sml
[19]: https://meta.wikimedia.org/wiki/CIS-A2K
[20]: https://or.wikipedia.org/s/xgx
[21]: https://or.wikipedia.org/s/ysg
[22]: https://or.wikipedia.org/s/ynj
[23]: https://outreach.wikimedia.org/wiki/Education
[24]: https://outreach.wikimedia.org/wiki/Talk:Education/News#Call_for_volunteers
[25]: https://en.wikipedia.org/wiki/User_talk:Saileshpat#Barnstar_for_Art_.26_Feminism_Challenge
[26]: https://opensource.com/life/15/11/tasty-translations-the-open-source-way
[27]: https://www.facebook.com/samirsharbaty?fref=ts
[28]: https://www.facebook.com/anna.koval.737?fref=ts


@ -0,0 +1,103 @@
translating by martin2011qi
Why and how I became a software engineer
==========================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/myopensourcestory.png?itok=6TXlAkFi)
The year was 1989. The city was Kampala, Uganda.
In their infinite wisdom, my parents decided that instead of all the troublemaking I was getting into at home, they would send me off to my uncle's office to learn how to use a computer. A few days later, I found myself on the 21st floor in a cramped room with six or seven other teens and a brand new computer on a desk perpendicular to the teacher's desk. It was made abundantly clear that we were not skilled enough to touch it. After three frustrating weeks of writing and perfecting DOS commands, the magic moment happened. It was my turn to type **copy doc.txt d:**.
The alien scratching noises that etched a simple text file onto the five-inch floppy sounded like beautiful music. For a while, that floppy disk was my most prized possession. I copied everything I could onto it. However, in 1989, Ugandans tended to take life pretty seriously, and messing around with computers, copying files, and formatting disks did not count as serious. I had to focus on my education, which led me away from computer science and into architectural engineering.
Like any young person of my generation, a multitude of job titles and skills acquisition filled the years in between. I taught kindergarten, taught adults how to use software, worked in a clothing store, and served as a paid usher in a church. While I earned my degree at the University of Kansas, I worked as a tech assistant to the technical administrator, which is really just a fancy title for someone who messes around with the student database.
By the time I graduated in 2007, technology had become inescapable. Every aspect of architectural engineering was deeply intertwined with computer science, so we all inadvertently learned simple programming skills. For me, that part was always more fascinating. But because I had to be a serious engineer, I developed a secret hobby: writing science fiction.
In my stories, I lived vicariously through the lives of my heroines. They were scientists with amazing programming skills who were always getting embroiled in adventures and fighting tech scallywags with technology they invented, sometimes inventing them on the spot. Sometimes the new tech I came up with was based on real-world inventions. Other times it was the stuff I read about or saw in the science fiction I consumed. This meant that I had to understand how the tech worked and my research led me to some interesting subreddits and e-zines.
### Open source: The ultimate goldmine
Throughout my experiences, the fascinating weeks I'd spent writing out DOS commands remained a prominent influence, bleeding into little side projects and occupying valuable study time. As soon as Geocities became available to all Yahoo! users, I created a website where I published blurry pictures that I'd taken on a tiny digital camera. I created websites for free, helped friends and family fix issues they had with their computers, and created a library database for a church.
This meant that I was always researching and trying to find more information about how things could be made better. The Internet gods blessed me and open source fell into my lap. Suddenly, 30-day trials and restrictive licenses became a ghost of computing past. I could continue to create using GIMP, Inkscape, and OpenOffice.
### Time to get serious
I was fortunate to have a business partner who saw the magic in my stories. She too is a dreamer and visionary who imagines a better connected world that functions efficiently and conveniently. Together, we came up with several solutions to pain points we experienced in the journey to success, but implementation had been a problem. We both lacked the skills to make our products come to life, something that was made evident every time we approached investors with our ideas.
We needed to learn to program. So, at the end of the summer in 2015, we embarked on a journey that would lead us right to the front steps of Holberton School, a community-driven, project-based school in San Francisco.
My business partner came to me one morning and started a conversation the way she does when she has a new crazy idea that I'm about to get sucked into.
**Zee**: Gloria, I'm going to tell you something and I want you to listen first before you say no.
**Me**: No.
**Zee**: We're going to be applying to go to a school for full-stack engineers.
**Me**: What?
**Zee**: Here, look! We're going to learn how to program by applying to this school.
**Me**: I don't understand. We're doing online courses in Python and...
**Zee**: This is different. Trust me.
**Me**: What about the...
**Zee**: That's not trusting me.
**Me**: Fine. Show me.
### Removing the bias
What I read sounded similar to something we had seen online. It was too good to be true, but we decided to give it a try, jump in with both feet, and see what would come out of it.
To become students, we had to go through a four-step selection process based solely on talent and motivation, not on the basis of educational degree or programming experience. The selection process is the beginning of the curriculum, so we started learning and collaborating through it.
It has been my experience—and that of my business partner—that the process of applying for anything was an utter bore compared to the application process Holberton School created. It was like a game. If you completed a challenge, you got to go to the next level, where another fascinating challenge awaited. We created Twitter accounts, blogged on Medium, learned HTML and CSS in order to create a website, and created a vibrant community online even before we knew who was going to get to go.
The most striking thing about the online community was how varied our experience with computers was, and how our background and gender did not factor into the choices that were being made by the founders (who we secretly called "The Trinity"). We just enjoyed being together and talking to each other. We were all smart people on a journey to increasing our nerd cred by learning how to code.
For much of the application process, our identities were not very evident. For example, my business partner's name does not indicate her gender or race. It was during the final step, a video chat, that The Trinity even knew she was a woman of color. Thus far, only her enthusiasm and talent had propelled her through the levels. The color of her skin and her gender did not hinder nor help her. How cool is that?
The night we got our acceptance letters, we knew our lives were about to change in ways we had only dreamt of. On the 22nd of January 2016, we walked into 98 Battery Street to meet our fellow [Hippokampoiers][2] for the first time. It was evident then, as it had been before, that the Trinity had started something amazing. They had assembled a truly diverse collection of passionate and enthusiastic people who had dedicated themselves to become full-stack engineers.
The school is an experience like no other. Every day is an intense foray into some facet of programming. We're handed a project and, with a little guidance, we use every resource available to us to find the solution. The premise that [Holberton School][1] is built upon is that information is available to us in more places than we've ever had before. MOOCs, tutorials, the availability of open source software and projects, and online communities are all bursting at the seams with knowledge that shakes up some of the projects we have to complete. And with the support of the invaluable team of mentors to guide us to solutions, the school becomes more than just a school; we've become a community of learners. I would highly recommend this school for anyone who is interested in software engineering and is also interested in the learning style. The next class is in October 2016 and is accepting new applications. It's both terrifying and exhilarating, but so worth it.
### Open source matters
My earliest experience with an open source operating system was [Fedora][3], a [Red Hat][4]-sponsored project. During a panicked conversation with an IRC member, she recommended this free OS. I had never installed my own OS before, but it sparked my interest in open source and my dependence on open source software for my computing needs. We are advocates for open source contribution, creation, and use. Our projects are on GitHub where anyone can use or contribute to them. We also have the opportunity to access existing open source projects to use or contribute to in our own way. Many of the tools that we use at school are open source, such as Fedora, [Vagrant][5], [VirtualBox][6], [GCC][7], and [Discourse][8], to name a few.
As I continue on my journey to becoming a software engineer, I still dream of a time when I will be able to contribute to the open source community and be able to share my knowledge with others.
### Diversity matters
Standing in the room and talking to 29 other bright-eyed learners was intoxicating. 40% of the people there were women and 44% were people of color. These numbers become very important when you are a woman of color in a field that has been famously known for its lack of diversity. It was an oasis in the tech Mecca of the world. I knew I had arrived.
The notion of becoming a full-stack engineer is daunting, and you may even struggle to know what that means. It is a challenging road to travel with immeasurable rewards to reap. The future is run by technology, and you are an important part of that bright future. While the media continues to trip over handling the issue of diversity in tech companies, know that whoever you are, whatever your background is, whatever your reasons might be for becoming a full-stack engineer, you can find a place to thrive.
But perhaps most importantly, a strong reminder of the role of women in the history of computing can help more women return to the tech world, and they can be fully engaged without hesitation due to their gender or their capabilities as women. Their talents will help shape the future not just of tech, but of the world.
------------------------------------------------------------------------------
via: https://opensource.com/life/16/4/my-open-source-story-gloria-bwandungi
作者:[Gloria Bwandungi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/nappybrain
[1]: https://www.holbertonschool.com/
[2]: https://twitter.com/hippokampoiers
[3]: https://en.wikipedia.org/wiki/Fedora_(operating_system)
[4]: https://www.redhat.com/
[5]: https://www.vagrantup.com/
[6]: https://www.virtualbox.org/
[7]: https://gcc.gnu.org/
[8]: https://www.discourse.org/
translated by bestony
How to Install OsTicket Ticketing System in Fedora 22 / CentOS 7
================================================================================
In this article, we'll learn how to set up a help desk ticketing system with osTicket on a machine or server running Fedora 22 or CentOS 7. osTicket is a popular free and open source customer support ticketing system developed and maintained by [Enhancesoft][1] and its contributors. It is an excellent solution for help and support ticketing and management, enabling better communication and support assistance with clients and customers. It can easily integrate inquiries created via email, phone, and web-based forms into a beautiful multi-user web interface, making it easy to manage, organize, and log all support requests and responses in one place. osTicket is a simple, lightweight, reliable, open source, web-based help desk ticketing system that is easy to set up and use.
translating by chisper
How to Install Pure-FTPd with TLS on FreeBSD 10.2
================================================================================
FTP (File Transfer Protocol) is a standard application-layer network protocol used to transfer files between a client and a server over a TCP network such as the Internet, after the user logs in to the FTP server. FTP has been around far longer than P2P programs or the World Wide Web, and to this day it remains a popular method for sharing files over the Internet. With SSL/TLS, FTP can provide secure transmission that protects the username and password and encrypts the content.
Pure-FTPd is a free FTP server with a strong focus on software security. It is a great choice if you want to provide fast, secure, lightweight, and feature-rich FTP services. Pure-FTPd can be installed on a variety of Unix-like operating systems, including Linux and FreeBSD. It was created by Frank Denis in 2001, based on Troll-FTPd, and is still actively developed by a team led by Denis.
In this tutorial, we will cover the installation and configuration of **Pure-FTPd** on the Unix-like operating system FreeBSD 10.2.
### Step 1 - Update system ###
The first thing to do is update the FreeBSD system. Connect to your server with SSH and run the commands below as root (or via sudo):
freebsd-update fetch
freebsd-update install
### Step 2 - Install Pure-FTPd ###
You can install Pure-FTPd from the ports tree, but in this tutorial we will install it from the FreeBSD package repository with the **pkg** command:
pkg install pure-ftpd
Once the installation is finished, enable pure-ftpd at boot time with the sysrc command below:
sysrc pureftpd_enable=yes
### Step 3 - Configure Pure-FTPd ###
The Pure-FTPd configuration file is located in the "/usr/local/etc/" directory. Go to that directory and copy the sample configuration to "**pure-ftpd.conf**":
cd /usr/local/etc/
cp pure-ftpd.conf.sample pure-ftpd.conf
Now edit the configuration file with the nano editor:
nano -c pure-ftpd.conf
Note: the -c option shows line numbers in nano.
Go to line 59 and change the value of "VerboseLog" to "**yes**". This option allows you, as administrator, to log every command used by the users:
VerboseLog yes
Now look at line 126, "PureDB", for the virtual-user configuration. Virtual users are a simple mechanism to store a list of users, with their password, name, uid, home directory, and so on. It's just like /etc/passwd, but it's a separate file used only for FTP. In this tutorial, we will store the user list in the files "**/usr/local/etc/pureftpd.passwd**" and "**/usr/local/etc/pureftpd.pdb**". Uncomment that line and change the file path to "/usr/local/etc/pureftpd.pdb":
PureDB /usr/local/etc/pureftpd.pdb
Next, uncomment line 336, "**CreateHomeDir**". This option makes it easier to add virtual users by automatically creating their home directories if they are missing:
CreateHomeDir yes
Save and exit.
Next, start pure-ftpd with the service command:
service pure-ftpd start
### Step 4 - Adding New Users ###
At this point the FTP server starts without errors, but you cannot log in yet, because anonymous users are disabled in the default pure-ftpd configuration. We need to create new users with a home directory and a password for login.
One thing you must do before adding a new pure-ftpd virtual user is to create a system user for it. Let's create a new system user "**vftp**", with a default group of the same name and the home directory "**/home/vftp/**":
pw useradd vftp -s /sbin/nologin -w no -d /home/vftp \
-c "Virtual User Pure-FTPd" -m
Now you can add a new user for the FTP server with the "**pure-pw**" command. As an example, we will create a new user named "**akari**":
pure-pw useradd akari -u vftp -g vftp -d /home/vftp/akari
Password: TYPE YOUR PASSWORD
That command creates the user "**akari**", with its data stored in the file "**/usr/local/etc/pureftpd.passwd**", not in /etc/passwd. This means you can easily create FTP-only accounts without messing up your system accounts.
Next, generate the PureDB user database with this command:
pure-pw mkdb
Now restart the pure-ftpd service and try to connect as user "akari":
service pure-ftpd restart
Trying to connect as user akari:
ftp SERVERIP
![FTP Connect user akari](http://blog.linoxide.com/wp-content/uploads/2015/10/FTP-Connect-user-akari.png)
**NOTE:**
To add more users, use the "**pure-pw**" command again. To delete an existing user, run:
pure-pw userdel useryouwanttodelete
pure-pw mkdb
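Besides adding and deleting users, **pure-pw** can also inspect the virtual-user database, which is handy when auditing accounts. A quick sketch, assuming the user **akari** created above and a live pure-ftpd installation:

```shell
# List all virtual users stored in the PureDB database
pure-pw list

# Show the full record (uid, home directory, quotas, etc.) for one user
pure-pw show akari

# Remember: after any change, regenerate the binary database
pure-pw mkdb
```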
### Step 5 - Add SSL/TLS to Pure-FTPd ###
Pure-FTPd supports encryption using TLS. To enable TLS/SSL support, make sure the OpenSSL library is installed on your FreeBSD system.
Now generate a new "**self-signed certificate**" in the directory "**/etc/ssl/private**". Before you generate the certificate, create the "private" directory there:
cd /etc/ssl/
mkdir private
cd private/
Then generate the self-signed certificate with the openssl command below:
openssl req -x509 -nodes -newkey rsa:2048 -sha256 -keyout \
/etc/ssl/private/pure-ftpd.pem \
-out /etc/ssl/private/pure-ftpd.pem
Fill in all the prompts with your personal information.
![Generate Certificate pem](http://blog.linoxide.com/wp-content/uploads/2015/10/Generate-Certificate-pem.png)
Next, restrict the certificate's permissions:
chmod 600 /etc/ssl/private/*.pem
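If you need to script the certificate step, for example in a provisioning script, openssl can also run non-interactively with the `-subj` option. A minimal sketch (the subject values and the temporary output path are placeholders; in production, write to /etc/ssl/private/pure-ftpd.pem as above):

```shell
# Generate a self-signed certificate without interactive prompts.
# The subject fields below are placeholders; replace them with your own.
CERT=$(mktemp /tmp/pure-ftpd.XXXXXX)
openssl req -x509 -nodes -newkey rsa:2048 -sha256 -days 365 \
    -subj "/C=US/ST=California/L=San Francisco/O=Example/CN=ftp.example.com" \
    -keyout "$CERT" -out "$CERT" 2>/dev/null

# Lock down the permissions, as with the interactive variant
chmod 600 "$CERT"

# Print the subject to confirm the certificate was created
openssl x509 -in "$CERT" -noout -subject
```

Writing the key and the certificate to the same .pem file is exactly what pure-ftpd's CertFile setting expects.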
Once the certificate is generated, edit the pure-ftpd configuration file:
nano -c /usr/local/etc/pure-ftpd.conf
Uncomment line **423** to enable TLS:
TLS 1
And uncomment line **439** to set the certificate file path:
CertFile /etc/ssl/private/pure-ftpd.pem
Save and exit, then restart the pure-ftpd service:
service pure-ftpd restart
Now let's test that Pure-FTPd works with TLS/SSL. Here I use "**FileZilla**" to connect to the FTP server as the user "**akari**" created earlier.
![Pure-FTPd with TLS SUpport](http://blog.linoxide.com/wp-content/uploads/2015/10/Pure-FTPd-with-TLS-SUpport.png)
Pure-FTPd with TLS is working successfully on FreeBSD 10.2.
### Conclusion ###
FTP (File Transfer Protocol) is a standard protocol used to transfer files between users and a server. Pure-FTPd is one of the best lightweight and secure FTP server programs, with support for the TLS/SSL encryption mechanism. Pure-FTPd is easy to install and configure, and its virtual-user support makes user management simple for a sysadmin, even on an FTP server with many users.
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-pure-ftpd-tls-freebsd-10-2/
作者:[Arul][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arulm/
8 things to do after installing openSUSE Leap 42.1
================================================================================
![Credit: Metropolitan Transportation/Flickr](http://images.techhive.com/images/article/2015/11/things-to-do-100626947-primary.idge.jpg)
Credit: [Metropolitan Transportation/Flickr][1]
> You've installed openSUSE on your PC. Here's what to do next.
[openSUSE Leap is indeed a huge leap][2], allowing users to run a distro that has the same DNA of SUSE Linux Enterprise. Like any other operating system, some work is needed to get it set up for optimal use.
Following are some of the things that I did after installing openSUSE Leap on my PC (these are not applicable for server installations). None of them are mandatory, and you may be fine with the basic install. But if you need more out of your openSUSE Leap, follow me.
### 1. Adding Packman repository ###
Due to software patents and licences, openSUSE, like many Linux distributions, doesn't offer many applications, codecs, and drivers through official repositories (repos). Instead, these are made available through third-party or community repos. The first and most important repository is 'Packman'. Since these repos are not enabled by default, we have to add them. You can do so either using YaST (one of the gems of openSUSE) or by command line (instructions below).
![o42 yast repo](http://images.techhive.com/images/article/2015/11/o42-yast-repo-100626952-large970.idge.png)
Adding Packman repositories.
Using YaST, go to the Software Repositories section. Click on the 'Add' button and select 'Community Repositories.' Click 'Next.' Once the repos are loaded, select the Packman Repository. Click 'OK,' then import the trusted GnuPG key by clicking on the 'Trust' button.
Or, using the terminal you can add and enable the Packman repo using the following command:
zypper ar -f -n packman http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_42.1/ packman
Once the repo is added, you have access to many more packages. To install any application or package, open YaST Software Manager, search for the package and install it.
### 2. Install VLC ###
VLC is the Swiss Army knife of media players and can play virtually any media file. You can install VLC from YaST Software Manager or from software.opensuse.org. You will need to install two packages: vlc and vlc-codecs.
If using terminal, run the following command:
sudo zypper install vlc vlc-codecs
### 3. Install Handbrake ###
If you need to transcode or convert your video files from one format to another, [Handbrake is the tool for you][3]. Handbrake is available through the repositories we enabled, so just search for it in YaST and install it.
If you are using the terminal, run the following command:
sudo zypper install handbrake-cli handbrake-gtk
(Pro tip: VLC can also transcode audio and video files.)
### 4. Install Chrome ###
OpenSUSE comes with Firefox as the default browser. But since Firefox isn't capable of playing restricted media such as Netflix, I recommend installing Chrome. This takes some extra work. First you need to import the trusted key from Google. Open the terminal app and run the 'wget' command to download the key:
wget https://dl.google.com/linux/linux_signing_key.pub
Then import the key:
sudo rpm --import linux_signing_key.pub
Now head over to the [Google Chrome website][4] and download the 64-bit .rpm file. Once downloaded, run the following command to install the browser:
sudo zypper install /PATH_OF_GOOGLE_CHROME.rpm
### 5. Install Nvidia drivers ###
OpenSUSE Leap will work out of the box even if you have Nvidia or ATI graphics cards. However, if you do need the proprietary drivers for gaming or any other purpose, you can install such drivers, but some extra work is needed.
First you need to add the Nvidia repositories; it's the same procedure we used to add Packman repositories using YaST. The only difference is that you will choose Nvidia from the Community Repositories section. Once it's added, go to **Software Management > Extras** and select 'Extras/Install All Matching Recommended Packages'.
![o42 nvidia](http://images.techhive.com/images/article/2015/11/o42-nvidia-100626950-large.idge.png)
It will open a dialogue box showing all the packages it's going to install, click OK and follow the instructions. You can also run the following command after adding the Nvidia repository to install the needed Nvidia drivers:
sudo zypper inr
(Note: I have never used AMD/ATI cards so I have no experience with them.)
### 6. Install media codecs ###
Once you have VLC installed you won't need to install media codecs, but if you are using other apps for media playback you will need to install such codecs. Some developers have written scripts/tools which make this a much easier process. Just go to [this page][5] and install the entire pack by clicking on the appropriate button. It will open YaST and install the packages automatically (of course, you will have to provide the root password and trust the GnuPG key, as usual).
### 7. Install your preferred email client ###
OpenSUSE comes with Kmail or Evolution, depending on the Desktop Environment you installed on the system. I run Plasma, which comes with Kmail, and this email client leaves a lot to be desired. I suggest trying Thunderbird or Evolution mail. All major email clients are available through official repositories. You can also check my [handpicked list of the best email clients for Linux][6].
### 8. Enable Samba services from Firewall ###
OpenSUSE offers a much more secure system out of the box, compared to other distributions. But it also requires a little bit more work for a new user. If you are using Samba protocol to share files within your local network then you will have to allow that service from the Firewall.
![o42 firewall](http://images.techhive.com/images/article/2015/11/o42-firewall-100626948-large970.idge.png)
Allow Samba Client and Server from Firewall settings.
Open YaST and search for Firewall. Once in Firewall settings, go to 'Allowed Services' where you will see a drop down list under 'Service to allow.' Select 'Samba Client,' then click 'Add.' Do the same with the 'Samba Server' option. Once both are added, click 'Next,' then click 'Finish,' and now you will be able to share folders from your openSUSE system and also access other machines over the local network.
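If you prefer the command line, the same change can be made in the SuSEfirewall2 configuration (a sketch, assuming the default SuSEfirewall2 setup shipped with openSUSE Leap 42.1). In /etc/sysconfig/SuSEfirewall2, add the Samba service definitions to the external zone:

```
FW_CONFIGURATIONS_EXT="samba-client samba-server"
```

Then reload the firewall, for example with `sudo systemctl restart SuSEfirewall2`, for the change to take effect.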
That's pretty much all that I did on my new openSUSE system to set it up just the way I like it. If you have any questions, please feel free to ask in the comments below.
--------------------------------------------------------------------------------
via: http://www.itworld.com/article/3003865/open-source-tools/8-things-to-do-after-installing-opensuse-leap-421.html
作者:[Swapnil Bhartiya][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.itworld.com/author/Swapnil-Bhartiya/
[1]:https://www.flickr.com/photos/mtaphotos/11200079265/
[2]:https://www.linux.com/news/software/applications/865760-opensuse-leap-421-review-the-most-mature-linux-distribution
[3]:https://www.linux.com/learn/tutorials/857788-how-to-convert-videos-in-linux-using-the-command-line
[4]:https://www.google.com/intl/en/chrome/browser/desktop/index.html#brand=CHMB&utm_campaign=en&utm_source=en-ha-na-us-sk&utm_medium=ha
[5]:http://opensuse-community.org/
[6]:http://www.itworld.com/article/2875981/the-5-best-open-source-email-clients-for-linux.html
A new Mindcraft moment?
=======================
Credit: Jonathan Corbet
It is not often that Linux kernel development attracts the attention of a mainstream newspaper like The Washington Post; lengthy features on the kernel community's approach to security are even more uncommon. So when just such a feature hit the net, it attracted a lot of attention. This article has gotten mixed reactions, with many seeing it as a direct attack on Linux. The motivations behind the article are hard to know, but history suggests that we may look back on it as having given us a much-needed push in a direction we should have been going for some time.
Think back, a moment, to the dim and distant past — April 1999, to be specific. An analyst company named Mindcraft issued a report showing that Windows NT greatly outperformed Red Hat Linux 5.2 and Apache for web-server workloads. The outcry from the Linux community, including from a very young LWN, was swift and strong. The report was a piece of Microsoft-funded FUD trying to cut off an emerging threat to its world-domination plans. The Linux system had been deliberately configured for poor performance. The hardware chosen was not well supported by Linux at the time. And so on.
Once people calmed down a bit, though, one other fact became clear: the Mindcraft folks, whatever their motivations, had a point. Linux did, indeed, have performance problems that were reasonably well understood even at the time. The community then did what it does best: we sat down and fixed the problems. The scheduler got exclusive wakeups, for example, to put an end to the thundering-herd problem in the acceptance of connection requests. Numerous other little problems were fixed. Within a year or so, the kernel's performance on this kind of workload had improved considerably.
The Mindcraft report, in other words, was a much-needed kick in the rear that got the community to deal with issues that had been neglected until then.
The Washington Post article seems clearly slanted toward a negative view of the Linux kernel and its contributors. It freely mixes kernel problems with other issues (the AshleyMadison.com break-in, for example) that were not kernel vulnerabilities at all. The fact that vendors seem to have little interest in getting security fixes to their customers is danced around like a huge elephant in the room. There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true, but it should not be allowed to overshadow the simple fact that the article has a valid point.
We do a reasonable job of finding and fixing bugs. Problems, whether they are security-related or not, are patched quickly, and the stable-update mechanism makes those patches available to kernel users. Compared to a lot of programs out there (free and proprietary alike), the kernel is quite well supported. But pointing at our ability to fix bugs is missing a crucial point: fixing security bugs is, in the end, a game of whack-a-mole. There will always be more moles, some of which we will not know about (and will thus be unable to whack) for a long time after they are discovered and exploited by attackers. These bugs leave our users vulnerable, even if the commercial side of Linux did a perfect job of getting fixes to users — which it decidedly does not.
The point that developers concerned about security have been trying to make for a while is that fixing bugs is not enough. We must instead realize that we will never fix them all and focus on making bugs harder to exploit. That means restricting access to information about the kernel, making it impossible for the kernel to execute code in user-space memory, instrumenting the kernel to detect integer overflows, and all the other things laid out in Kees Cook's Kernel Summit talk at the end of October. Many of these techniques are well understood and have been adopted by other operating systems; others will require innovation on our part. But, if we want to adequately defend our users from attackers, these changes need to be made.
Why hasn't the kernel adopted these technologies already? The Washington Post article puts the blame firmly on the development community, and on Linus Torvalds in particular. The culture of the kernel community prioritizes performance and functionality over security and is unwilling to make compromises if they are needed to improve the security of the kernel. There is some truth to this claim; the good news is that attitudes appear to be shifting as the scope of the problem becomes clear. Kees's talk was well received, and it clearly got developers thinking and talking about the issues.
The point that has been missed is that we do not just have a case of Linus fending off useful security patches. There simply are not many such patches circulating in the kernel community. In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream. Getting any large, intrusive patch set merged requires working with the kernel community, making the case for the changes, splitting the changes into reviewable pieces, dealing with review comments, and so on. It can be tiresome and frustrating, but it's how the kernel works, and it clearly results in a more generally useful, more maintainable kernel in the long run.
Almost nobody is doing that work to get new security technologies into the kernel. One might cite a "chilling effect" from the hostile reaction such patches can receive, but that is an inadequate answer: developers have managed to merge many changes over the years despite a difficult initial reaction. Few security developers are even trying.
Why aren't they trying? One fairly obvious answer is that almost nobody is being paid to try. Almost all of the work going into the kernel is done by paid developers and has been for many years. The areas that companies see fit to support get a lot of work and are well advanced in the kernel. The areas that companies think are not their problem are rather less so. The difficulties in getting support for realtime development are a clear case in point. Other areas, such as documentation, tend to languish as well. Security is clearly one of those areas. There are a lot of reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.
There are signs that things might be changing a bit. More developers are showing interest in security-related issues, though commercial support for their work is still less than it should be. The reaction against security-related changes might be less knee-jerk negative than it used to be. Efforts like the Kernel Self Protection Project are starting to work on integrating existing security technologies into the kernel.
We have a long way to go, but, with some support and the right mindset, a lot of progress can be made in a short time. The kernel community can do amazing things when it sets its mind to it. With luck, the Washington Post article will help to provide the needed impetus for that sort of setting of mind. History suggests that we will eventually see this moment as a turning point, when we were finally embarrassed into doing work that has clearly needed doing for a while. Linux should not have a substandard security story for much longer.
---------------------------
via: https://lwn.net/Articles/663474/
作者:Jonathan Corbet
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出