diff --git a/README.md b/README.md index e68f260b71..bbc3753a39 100644 --- a/README.md +++ b/README.md @@ -1,16 +1,16 @@ 简介 ------------------------------- -LCTT是“Linux中国”([https://linux.cn/](https://linux.cn/))的翻译组,负责从国外优秀媒体翻译Linux相关的技术、资讯、杂文等内容。 +LCTT 是“Linux中国”([https://linux.cn/](https://linux.cn/))的翻译组,负责从国外优秀媒体翻译 Linux 相关的技术、资讯、杂文等内容。 -LCTT已经拥有几百名活跃成员,并欢迎更多的Linux志愿者加入我们的团队。 +LCTT 已经拥有几百名活跃成员,并欢迎更多的Linux志愿者加入我们的团队。 ![logo](http://img.linux.net.cn/static/image/common/lctt_logo.png) -LCTT的组成 +LCTT 的组成 ------------------------------- -**选题**,负责选择合适的内容,并将原文转换为markdown格式,提交到LCTT的[TranslateProject](https://github.com/LCTT/TranslateProject) 库中。 +**选题**,负责选择合适的内容,并将原文转换为 markdown 格式,提交到 LCTT 的 [TranslateProject](https://github.com/LCTT/TranslateProject) 库中。 **译者**,负责从选题中选择内容进行翻译。 @@ -21,38 +21,38 @@ LCTT的组成 加入我们 ------------------------------- -请首先加入翻译组的QQ群,群号是:198889102,加群时请说明是“志愿者”。加入后记得修改您的群名片为您的github的ID。 +请首先加入翻译组的 QQ 群,群号是:198889102,加群时请说明是“志愿者”。加入后记得修改您的群名片为您的 GitHub 的 ID。 -加入的成员,请先阅读[WIKI 如何开始](https://github.com/LCTT/TranslateProject/wiki/01-如何开始)。 +加入的成员,请先阅读 [WIKI 如何开始](https://github.com/LCTT/TranslateProject/wiki/01-如何开始)。 如何开始 ------------------------------- -请阅读[WIKI](https://github.com/LCTT/TranslateProject/wiki)。 +请阅读 [WIKI](https://github.com/LCTT/TranslateProject/wiki)。 历史 ------------------------------- * 2013/09/10 倡议并得到了大家的积极响应,成立翻译组。 -* 2013/09/11 采用github进行翻译协作,并开始进行选题翻译。 +* 2013/09/11 采用 GitHub 进行翻译协作,并开始进行选题翻译。 * 2013/09/16 公开发布了翻译组成立消息后,又有新的成员申请加入了。并从此建立见习成员制度。 -* 2013/09/24 鉴于大家使用Github的水平不一,容易导致主仓库的一些错误,因此换成了常规的fork+PR的模式来进行翻译流程。 -* 2013/10/11 根据对LCTT的贡献,划分了Core Translators组,最先的加入成员是vito-L和tinyeyeser。 -* 2013/10/12 取消对LINUX.CN注册用户的依赖,在QQ群内、文章内都采用github的注册ID。 -* 2013/10/18 正式启动man翻译计划。 +* 2013/09/24 鉴于大家使用 GitHub 的水平不一,容易导致主仓库的一些错误,因此换成了常规的 fork+PR 的模式来进行翻译流程。 +* 2013/10/11 根据对 LCTT 的贡献,划分了 Core Translators 组,最先的加入成员是 vito-L 和 tinyeyeser。 +* 2013/10/12 取消对 LINUX.CN 注册用户的依赖,在 QQ 群内、文章内都采用 GitHub 的注册 ID。 +* 2013/10/18 正式启动 man 翻译计划。 * 2013/11/10 举行第一次北京线下聚会。 -* 2014/01/02 增加了Core Translators 成员: geekpi。 -* 2014/05/04 更换了新的QQ群:198889102 -* 2014/05/16 增加了Core Translators 成员: will.qian、vizv。 -* 2014/06/18 由于GOLinux令人惊叹的翻译速度和不错的翻译质量,升级为Core Translators成员。 +* 2014/01/02 增加了 Core Translators 成员: geekpi。 +* 2014/05/04 更换了新的 QQ 群:198889102 +* 2014/05/16 增加了 Core Translators 成员: will.qian、vizv。 +* 2014/06/18 由于 GOLinux 令人惊叹的翻译速度和不错的翻译质量,升级为 Core Translators 成员。 * 2014/09/09 LCTT 一周年,做一年[总结](http://linux.cn/article-3784-1.html)。并将曾任 CORE 的成员分组为 Senior,以表彰他们的贡献。 -* 2014/10/08 提升bazz2为Core Translators成员。 -* 2014/11/04 提升zpl1025为Core Translators成员。 -* 2014/12/25 提升runningwater为Core Translators成员。 +* 2014/10/08 提升 bazz2 为 Core Translators 成员。 +* 2014/11/04 提升 zpl1025 为 Core Translators 成员。 +* 2014/12/25 提升 runningwater 为 Core Translators 成员。 * 2015/04/19 发起 LFS-BOOK-7.7-systemd 项目。 -* 2015/06/09 提升ictlyh和dongfengweixiao为Core Translators成员。 -* 2015/11/10 提升strugglingyouth、FSSlc、Vic020、alim0x为Core Translators成员。 -* 2016/05/09 提升PurlingNayuki为校对。 +* 2015/06/09 提升 ictlyh 和 dongfengweixiao 为 Core Translators 成员。 +* 2015/11/10 提升 strugglingyouth、FSSlc、Vic020、alim0x 为 Core Translators 成员。 +* 2016/05/09 提升 PurlingNayuki 为校对。 活跃成员 ------------------------------- @@ -74,16 +74,16 @@ LCTT的组成 - CORE @dongfengweixiao, - CORE @alim0x, - Senior @DeadFire, -- Senior @reinoir, +- Senior @reinoir222, - Senior @tinyeyeser, - Senior @vito-L, - Senior @jasminepeng, - Senior @willqian, - Senior @vizv, - ZTinoZ, -- theo-l, -- luoxcat, - martin2011qi, +- theo-l, +- Luoxcat, - wi-cuckoo, - disylee, - 
haimingfg, @@ -91,8 +91,8 @@ LCTT的组成 - wwy-hust, - felixonmars, - su-kaiyao, -- ivo-wang, - GHLandy, +- ivo-wang, - cvsher, - wyangsun, - DongShuaike, @@ -119,6 +119,7 @@ LCTT的组成 - blueabysm, - boredivan, - name1e5s, +- StdioA, - yechunxiao19, - l3b2w1, - XLCYun, @@ -134,49 +135,34 @@ LCTT的组成 - 1w2b3l, - JonathanKang, - crowner, -- mtunique, - dingdongnigetou, +- mtunique, - CNprober, - hyaocuk, - szrlee, - KnightJoker, - Xuanwo, - nd0104, -- jerryling315, +- Moelf, - xiaoyu33, - guodongxiaren, - ynmlml, -- kylepeng93, +- vim-kakali, - ggaaooppeenngg, - Ricky-Gong, - zky001, -- Flowsnow, - lfzark, - 213edu, -- Tanete, -- liuaiping, - bestony, +- mudongliang, +- liuaiping, - Timeszoro, - rogetfan, -- itsang, - JeffDing, - Yuking-net, -- MikeCoder, -- zhangboyue, -- liaoishere, -- yupmoon, -- Medusar, -- zzlyzq, -- yujianxuechuan, -- ailurus1991, -- tomatoKiller, -- stduolc, -- shaohaolin, -- FineFan, -- kingname, -- CHINAANSHE, -(按提交行数排名前百) + +(按增加行数排名前百) LFS 项目活跃成员有: @@ -188,7 +174,7 @@ LFS 项目活跃成员有: - @KevinSJ - @Yuking-net -(更新于2016/05/09) +(更新于2016/06/20) 谢谢大家的支持! diff --git a/published/201502/20150205 LinuxQuestions Survey Results Surface Top Open Source Projects.md b/published/201502/20150205 LinuxQuestions Survey Results Surface Top Open Source Projects.md index 9430f11ada..4cfe259dd0 100644 --- a/published/201502/20150205 LinuxQuestions Survey Results Surface Top Open Source Projects.md +++ b/published/201502/20150205 LinuxQuestions Survey Results Surface Top Open Source Projects.md @@ -19,7 +19,7 @@ LinuxQuestions 问卷调查揭晓最佳开源项目 via: http://ostatic.com/blog/linuxquestions-survey-results-surface-top-open-source-projects 作者:[Sam Dean][a] -译者:[jerryling315](https://github.com/jerryling315) +译者:[Moelf](https://github.com/Moelf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 @@ -29,4 +29,4 @@ via: http://ostatic.com/blog/linuxquestions-survey-results-surface-top-open-sour [2]:http://www.linuxquestions.org/questions/linux-news-59/2014-linuxquestions-org-members-choice-award-winners-4175532948/ [3]:http://www.linuxquestions.org/questions/2014mca.php [4]:http://ostatic.com/blog/lq-members-choice-award-winners-announced -[5]:http://www.linuxquestions.org/questions/2014mca.php \ No newline at end of file +[5]:http://www.linuxquestions.org/questions/2014mca.php diff --git a/published/201509/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md b/published/201509/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md index e14e0ba320..f82f43ccf1 100644 --- a/published/201509/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md +++ b/published/201509/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md @@ -98,7 +98,7 @@ Debian 在 Linux 生态环境中的贡献是难以用语言描述的。 如果 D via: http://www.tecmint.com/happy-birthday-to-debian-gnu-linux/ 作者:[Avishek Kumar][a] -译者:[jerryling315](http://moelf.xyz) +译者:[Moelf](https://github.com/Moelf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201509/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md b/published/201509/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md index e7ee7d760d..af5c90e7f7 100644 --- a/published/201509/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md +++ 
b/published/201509/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md @@ -69,7 +69,7 @@ b、 一旦你保存了这个文件,你应该能在 Wifi 菜单里看到你刚 via: http://www.linuxveda.com/2015/08/23/how-to-create-an-ap-in-ubuntu-15-04-to-connect-to-androidiphone/ 作者:[Sayantan Das][a] -译者:[jerryling315](https://github.com/jerryling315) +译者:[Moelf](https://github.com/Moelf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20151117 How bad a boss is Linus Torvalds.md b/published/20151117 How bad a boss is Linus Torvalds.md new file mode 100644 index 0000000000..b35ba67827 --- /dev/null +++ b/published/20151117 How bad a boss is Linus Torvalds.md @@ -0,0 +1,80 @@ +Linus Torvalds 是一个糟糕的老板吗? +================================================================================ + +![linus torvalds](http://images.techhive.com/images/article/2015/08/linus_torvalds-100600260-primary.idge.jpg) + +*1999 年 8 月 10 日,加利福尼亚州圣何塞市,在 LinuxWorld Show 上 Linus Torvalds 在一个坐满 Linux 爱好者的礼堂中发表了一篇演讲。图片来自:James Niccolai* + +**这取决于所处的领域。在软件开发的世界中,他也是个普通人。问题是,这种情况是否应该继续下去?** + +Linus Torvalds 是 Linux 的发明者,我认识他超过 20 年了。我们不是密友,但是我们欣赏彼此。 + +最近,因为 Linus Torvalds 的管理风格,他正遭到严厉的炮轰。Linus 无法忍受胡来的人。“代码的质量有多好?”是他在 Linux 内核的开发过程中评判人的一种方式。 + +没有什么比这个更重要了。正如 Linus 今年(2015年)早些时候在 Linux.conf.au 会议上说的那样,“我不是一个友好的人,我也不在意你。对我重要的是『[我所关心的技术和内核][1]』。” + +现在我也可以和这种只关心技术的人打交道了。如果你不能,你应当避免参加 Linux 内核会议,因为在那里你会遇到许多有这种精英思想的人。这不代表我认为在 Linux 领域所有东西都是极好的,并且不应该受到其他影响而带来改变。我能够和一个精英待在一起;而在一个男性做主导的大城堡中遇到的问题是,女性经常受到蔑视和无礼的对待。 + +这就是我看到的最近关于 Linus 管理风格所引发争论的原因 -- 或者更准确的说,他对于个人管理方面是完全冷漠的 -- 就像是在软件开发世界的标准操作流程一样。与此同时,我看到了揭示了这个事情需要改变的另外一个证据。 + +第一次是在 [Linux 4.3 发布][2]的时候出现的这个情况,Linus 使用 Linux 内核邮件列表来狠狠的数落了一个插入了一些网络方面的代码的开发者——这些代码很“烂”,“[生成了如此烂的代码][3]。这看起来太糟糕了,并且完全没有理由这样做。”他继续咆哮了半天。这里使用“烂”这个词,相对他早期使用的“愚蠢的”这个同义词来说还算好的。 + +但是,事情就是这样。Linus 是对的。我读了代码后,发现代码确实很烂,并且开发者只是为了用新的“overflow_usub()” 函数而用的。 + +现在,一些人把 Linus 的这种谩骂的行为看作他脾气不好而且恃强凌弱的证据。我见过一个完美主义者,在他的领域中,他无法忍受这种糟糕。 + +许多人告诉我,这不是一个专业的程序员应当有的行为。群众们,你曾经和最优秀的开发者一起工作过吗?据我所知道的,在 Apple,Microsoft,Oracle 这就是他们的行为。 + +我曾经听过 Steve Jobs 攻击一个开发者,就像要把他撕成碎片那样。我也被一个 Oracle 的高级开发者攻击一屋子的新开发者吓到过,就像食人鱼穿过一群金鱼那样。 + +在 Robert X. 
Cringely 关于 PC 崛起的经典书籍《[意外帝国(Accidental Empires)][5]》,中,他这样描述了微软的软件管理风格,比尔·盖茨像计算机系统一样管理他们,“比尔·盖茨的是最高等级,从他开始每一个等级依次递减,上级会向下级叫嚷,刺激他们,甚至羞辱他们。” + +Linus 和所有大型的商业软件公司的领导人不同的是,Linus 说在这里所有的东西是向全世界公开的。而其他人是在自己的会议室中做东西的。我听有人说 Linus 在那种公司中可能会被开除。这是不可能的。他会处于他现在所处的地位,他在编程世界的最顶端。 + +但是,这里有另外一个不同。如果 Larry Ellison (Oracle 的首席执行官)向你发火,你就别想在这里干了。如果 Linus 向你发火,你会在邮件中收到他的责骂。这就是差别。 + +你知道的,Linus 不是任何人的老板。他完全没有雇佣和解聘的权利,他只是负责着有 10000 个贡献者的一个项目而已。他仅仅能做的就是从心理上伤害你。 + +这说明,在开源软件开发圈和商业软件开发圈中同时存在一个非常严重的问题。不管你是一个多么好的编程者,如果你是一个女性,你的这个身份就是对你不利的。 + +这种情况并没有在 Sarah Sharp 的身上有任何好转,她现在是一个 Intel 的开发者,以前是一个顶尖的 Linux 程序员。[在她博客上10月份的一个帖子中][4],她解释道:“我最终发现,我不能够再为 Linux 社区做出贡献了。因为在那里,我虽然能够得到技术上的尊重,却得不到个人的尊重……我不想专职于同那些有着轻微的性别歧视或开同性恋玩笑的人一起工作。” + +谁会责怪她呢?我不会。很抱歉,我必须说,Linus 就像所有我见过的软件经理一样,是他造成了这种不利的工作环境。 + +他可能会说,确保 Linux 的贡献者都表现出专业精神和相互尊重不应该是他的工作。除了代码以外,他不关心任何其他事情。 + +就像 Sarah Sharp 写的那样: + + +> 我对于 Linux 内核社区做出的技术努力表示最大尊重。他们在那维护一些最高标准的代码,以此来平衡并且发展一个项目。他们专注于优秀的技术,以及超过负荷的维护人员,他们有不同的文化背景和社会规范,这些意味着这些 Linux 内核维护者说话非常直率、粗鲁,或者为了完成他们的任务而不讲道理。顶尖的 Linux 内核开发者经常为了使别人改正行为而向他们大喊大叫。 +> +> 这种事情发生在我身上,但它不是一种有效的沟通方式。 +> +> 许多高级的 Linux 内核开发者支持那些技术上和人性上不讲道理的维护者的权利。即使他们自己是非常友好的人,他们不想看到 Linux 内核交流方式改变。 + +她是对的。 + +我和其他观察者不同的是,我不认为这个问题对于 Linux 或开源社区在任何方面有特殊之处。作为一个从事技术商业工作超过五年和有着 25 年技术工作经历的记者,我见多了这种不成熟的小孩子行为。 + +这不是 Linus 的错误。他不是一个经理,他是一个有想象力的技术领导者。看起来真正的问题是,在软件开发领域没有人能够用一种支持的语气来对待团队和社区。 + +展望未来,我希望像 Linux 基金会这样的公司和组织,能够找到一种方式去授权社区经理或其他经理来鼓励并且强制实施民主的行为。 + +非常遗憾的是,我们不能够在我们这种纯技术或纯商业的领导人中找到这种管理策略。它不存在于这些人的基因中。 + +-------------------------------------------------------------------------------- + +via: http://www.computerworld.com/article/3004387/it-management/how-bad-a-boss-is-linus-torvalds.html + +作者:[Steven J. Vaughan-Nichols][a] +译者:[FrankXinqi](https://github.com/FrankXinqi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ +[1]:http://www.computerworld.com/article/2874475/linus-torvalds-diversity-gaffe-brings-out-the-best-and-worst-of-the-open-source-world.html +[2]:http://www.zdnet.com/article/linux-4-3-released-after-linus-torvalds-scraps-brain-damage-code/ +[3]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html +[4]:http://sarah.thesharps.us/2015/10/05/closing-a-door/ +[5]:https://www.amazon.cn/Accidental-Empires-Cringely-Robert-X/dp/0887308554/479-5308016-9671450?ie=UTF8&qid=1447101469&ref_=sr_1_1&tag=geo-23 \ No newline at end of file diff --git a/published/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md b/published/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md new file mode 100644 index 0000000000..7038dfd87a --- /dev/null +++ b/published/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md @@ -0,0 +1,462 @@ +Securi-Pi:使用树莓派作为安全跳板 +================================================================================ + +像很多 LinuxJournal 的读者一样,我也过上了当今非常普遍的“科技游牧”生活,在网络之间,从一个接入点到另一个接入点,我们身处现实世界的不同地方却始终保持连接到互联网和日常使用的其它网络上。近来我发现越来越多的网络环境开始屏蔽对外的常用端口比如 SMTP(端口25),SSH(端口22)之类的。当你走进一家咖啡馆然后想 SSH 到你的一台服务器上做点事情的时候发现端口 22 被屏蔽了是一件很烦的事情。 + +不过,我到目前为止还没发现有什么网络环境会把 HTTPS 给墙了(端口443)。在稍微配置了一下家中的树莓派 2 之后,我成功地让自己通过接入树莓派的 443 端口充当跳板,从而让我在各种网络环境下都能连上想要的目标端口。简而言之,我把家中的树莓派设置成了一个 OpenVPN 的端点和 SSH 端点,同时也是一个 Apache 服务器,所有这些服务都监听在 443 端口上,以便可以限制我不想暴露的网络服务。 + + +### 备注 + +此解决方案能搞定大多数有限制的网络环境,但有些防火墙会对外部流量调用深度包检查(Deep packet inspection),它们时常能屏蔽掉用本篇文章里的方式传输的信息。不过我到目前为止还没在这样的防火墙后测试过。同时,尽管我使用了很多基于密码学的工具(OpenVPN,HTTPS,SSH),我并没有非常严格地审计过这套配置方案(LCTT 
译注:作者的意思是指这套方案能帮你绕过端口限制,但不代表你的活动就是完全安全的)。有时候甚至 DNS 服务都会泄露你的信息,很可能在我没有考虑周到的角落里会有遗漏。我强烈不推荐把此跳板配置方案当作是万无一失的隐藏网络流量的办法,此配置只是希望能绕过一些端口限制连上网络,而不是做一些危险的事情。 + +### 起步 + +让我们先从你需要什么说起,我用的是树莓派 2,装载了最新版本的 Raspbian,不过这个配置也应该能在树莓派 Model B 上运行;512MB 的内存对我们来说绰绰有余了,虽然性能可能没有树莓派 2这么好,毕竟相比于四核心的树莓派 2, Model B 只有一颗单核心 CPU。我的树莓派放置在家里的防火墙和路由器的后面,所以我还能用这个树莓派作为跳板访问家里的其他电子设备。同时这也意味着我的流量在互联网上看起来仿佛来自我家的 ip 地址,所以这也算某种意义上保护了我的匿名性。如果你没有树莓派,或者不想从家里运行这个服务,那你完全可以把这个配置放在一台小型云服务器上(LCTT 译注:比如 IPS )。你只要确保服务器运行着基于 Debian 的 Linux 发行版即可,这份指南依然可用。 + +![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1002061/11913f1.jpg) + +*图 1 树莓派,即将成为我们的加密网络端点* + + +### 安装并配置 BIND + +无论你是用树莓派还是一台服务器,当你成功启动之后你就可以安装 BIND 了,这是一个驱动了互联网相当一部分的域名服务软件。你将会把 BIND 仅仅作为缓存域名服务使用,而不用把它配置为用来处理来自互联网的域名请求。安装 BIND 会让你拥有一个可以被 OpenVPN 使用的 DNS 服务器。安装 BIND 十分简单,`apt-get` 就可以直接搞定: + +``` +root@test:~# apt-get install bind9 +Reading package lists... Done +Building dependency tree +Reading state information... Done +The following extra packages will be installed: + bind9utils +Suggested packages: + bind9-doc resolvconf ufw +The following NEW packages will be installed: + bind9 bind9utils +0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded. +Need to get 490 kB of archives. +After this operation, 1,128 kB of additional disk space will be used. +Do you want to continue [Y/n]? y +``` + +在我们把 BIND 作为缓存域名服务器之前,还有一些小细节需要配置。两个修改都在`/etc/bind/named.conf.options`里完成。首先你要取消注释掉 forwarders 这一节内容,同时你还要增加一个可以转发域名请求的目标服务器。作为例子我会用 Google 的 DNS 服务器(8.8.8.8)(LCTT 译注:国内的话需要找一个替代品);文件的 forwarders 节看上去大致是这样的: + + +``` +forwarders { + 8.8.8.8; +}; +``` + +第二点你需要做的更改是允许来自内网和本机的查询请求,直接把这一行加入配置文件的后面,记得放在最后一个`};`之前就可以了: + + +``` +allow-query { 192.168.1.0/24; 127.0.0.0/16; }; +``` + +上面那行配置会允许此 DNS 服务器接收来自其所在的网络(在本例中,我的网络就在我的防火墙之后)和本机的请求。下一步,你需要重启一下 BIND 的服务: + +``` +root@test:~# /etc/init.d/bind9 restart +[....] Stopping domain name service...: bind9waiting for pid 13209 to die +. ok +[ ok ] Starting domain name service...: bind9. +``` + +现在你可以测试一下 `nslookup` 来确保你的服务正常运行了: + + +``` +root@test:~# nslookup +> server localhost +Default server: localhost +Address: 127.0.0.1#53 +> www.google.com +Server: localhost +Address: 127.0.0.1#53 + +Non-authoritative answer: +Name: www.google.com +Address: 173.194.33.176 +Name: www.google.com +Address: 173.194.33.177 +Name: www.google.com +Address: 173.194.33.178 +Name: www.google.com +Address: 173.194.33.179 +Name: www.google.com +Address: 173.194.33.180 +``` + +完美!现在你的系统里已经有一个正常的域名服务在工作了,下一步我们来配置一下OpenVPN。 + + +### 安装并配置 OpenVPN + +OpenVPN 是一个运用 SSL/TLS 作为密钥交换的开源 VPN 解决方案。同时它也非常便于在 Linux 环境下部署。配置 OpenVPN 可能有一点点难,不过其实你也不需要在默认的配置文件里做太多修改。首先你需要运行一下 `apt-get` 来安装 OpenVPN: + + +``` +root@test:~# apt-get install openvpn +Reading package lists... Done +Building dependency tree +Reading state information... Done +The following extra packages will be installed: + liblzo2-2 libpkcs11-helper1 +Suggested packages: + resolvconf +The following NEW packages will be installed: + liblzo2-2 libpkcs11-helper1 openvpn +0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded. +Need to get 621 kB of archives. +After this operation, 1,489 kB of additional disk space will be used. +Do you want to continue [Y/n]? 
y +``` + +现在 OpenVPN 已经安装好了,你需要去配置它了。OpenVPN 是基于 SSL 的,并且它同时依赖于服务端和客户端两方的证书来工作。为了生成这些证书,你需要在机器上配置一个证书签发(CA)。幸运地,OpenVPN 在安装中自带了一些用于生成证书的脚本比如 “easy-rsa” 来帮助你加快这个过程。你将要创建一个文件目录用于放置 easy-rsa 脚本,从模板目录复制过来: + + +``` +root@test:~# mkdir /etc/openvpn/easy-rsa +root@test:~# cp -rpv /usr/share/doc/openvpn/examples/easy-rsa/2.0/* /etc/openvpn/easy-rsa/ + ``` + +下一步,把 vars 文件复制一个备份: + + +``` +root@test:/etc/openvpn/easy-rsa# cp vars vars.bak +``` + +接下来,编辑一下 vars 以让其中的信息符合你的状态。我将以我需要编辑的信息作为例子: + +``` +KEY_SIZE=4096 +KEY_COUNTRY="US" +KEY_PROVINCE="CA" +KEY_CITY="Silicon Valley" +KEY_ORG="Linux Journal" +KEY_EMAIL="bill.childers@linuxjournal.com" +``` + +下一步是导入(source)一下 vars 中的环境变量,这样系统就能把其中的信息当作环境变量处理了: + + +``` +root@test:/etc/openvpn/easy-rsa# source ./vars +NOTE: If you run ./clean-all, I will be doing a rm -rf on /etc/openvpn/easy-rsa/keys + ``` + +### 搭建 CA(证书签发) + +接下来你要运行一下 `clean-all` 来确保有一个清理干净的系统工作环境,紧接着你就要做证书签发了。注意一下我修改了一些 changeme 的所提示修改的内容以符合我需要的安装情况: + + +``` +root@test:/etc/openvpn/easy-rsa# ./clean-all +root@test:/etc/openvpn/easy-rsa# ./build-ca +Generating a 4096 bit RSA private key +...................................................++ +...................................................++ +writing new private key to 'ca.key' +----- +You are about to be asked to enter information that +will be incorporated into your certificate request. +What you are about to enter is what is called a +Distinguished Name or a DN. +There are quite a few fields but you can leave some +blank. For some fields there will be a default value, +If you enter '.', the field will be left blank. +----- +Country Name (2 letter code) [US]: +State or Province Name (full name) [CA]: +Locality Name (eg, city) [Silicon Valley]: +Organization Name (eg, company) [Linux Journal]: +Organizational Unit Name (eg, section) [changeme]:SecTeam +Common Name (eg, your name or your server's hostname) [changeme]:test.linuxjournal.com +Name [changeme]:test.linuxjournal.com +Email Address [bill.childers@linuxjournal.com]: +``` + + +### 生成服务端证书 + +一旦 CA 创建好了,你接着就可以生成客户端的 OpenVPN 证书了: + + +``` +root@test:/etc/openvpn/easy-rsa# ./build-key-server test.linuxjournal.com +Generating a 4096 bit RSA private key +...................................................++ +writing new private key to 'test.linuxjournal.com.key' +----- +You are about to be asked to enter information that +will be incorporated into your certificate request. +What you are about to enter is what is called a +Distinguished Name or a DN. +There are quite a few fields but you can leave some +blank. For some fields there will be a default value, +If you enter '.', the field will be left blank. 
+----- +Country Name (2 letter code) [US]: +State or Province Name (full name) [CA]: +Locality Name (eg, city) [Silicon Valley]: +Organization Name (eg, company) [Linux Journal]: +Organizational Unit Name (eg, section) [changeme]:SecTeam +Common Name (eg, your name or your server's hostname) [test.linuxjournal.com]: +Name [changeme]:test.linuxjournal.com +Email Address [bill.childers@linuxjournal.com]: + +Please enter the following 'extra' attributes +to be sent with your certificate request +A challenge password []: +An optional company name []: +Using configuration from /etc/openvpn/easy-rsa/openssl-1.0.0.cnf +Check that the request matches the signature +Signature ok +The Subject's Distinguished Name is as follows +countryName :PRINTABLE:'US' +stateOrProvinceName :PRINTABLE:'CA' +localityName :PRINTABLE:'Silicon Valley' +organizationName :PRINTABLE:'Linux Journal' +organizationalUnitName:PRINTABLE:'SecTeam' +commonName :PRINTABLE:'test.linuxjournal.com' +name :PRINTABLE:'test.linuxjournal.com' +emailAddress :IA5STRING:'bill.childers@linuxjournal.com' +Certificate is to be certified until Sep 1 06:23:59 2025 GMT (3650 days) +Sign the certificate? [y/n]:y + +1 out of 1 certificate requests certified, commit? [y/n]y +Write out database with 1 new entries +Data Base Updated +``` + +下一步需要用掉一些时间来生成 OpenVPN 服务器需要的 Diffie-Hellman 密钥。这个步骤在一般的桌面级 CPU 上会需要几分钟的时间,但在 ARM 构架的树莓派上,会用掉超级超级长的时间。耐心点,只要终端上的点还在跳,那么一切就在按部就班运行(下面的示例省略了不少的点): + + +``` +root@test:/etc/openvpn/easy-rsa# ./build-dh +Generating DH parameters, 4096 bit long safe prime, generator 2 +This is going to take a long time +....................................................+ + +``` + +### 生成客户端证书 + +现在你要生成一下客户端用于登录 OpenVPN 的密钥。通常来说 OpenVPN 都会被配置成使用证书验证的加密方式,在这个配置下客户端需要持有由服务端签发的一份证书: + +``` +root@test:/etc/openvpn/easy-rsa# ./build-key bills-computer +Generating a 4096 bit RSA private key +...................................................++ +...................................................++ +writing new private key to 'bills-computer.key' +----- +You are about to be asked to enter information that +will be incorporated into your certificate request. +What you are about to enter is what is called a +Distinguished Name or a DN. There are quite a few +fields but you can leave some blank. +For some fields there will be a default value, +If you enter '.', the field will be left blank. +----- +Country Name (2 letter code) [US]: +State or Province Name (full name) [CA]: +Locality Name (eg, city) [Silicon Valley]: +Organization Name (eg, company) [Linux Journal]: +Organizational Unit Name (eg, section) [changeme]:SecTeam +Common Name (eg, your name or your server's hostname) [bills-computer]: +Name [changeme]:bills-computer +Email Address [bill.childers@linuxjournal.com]: + +Please enter the following 'extra' attributes +to be sent with your certificate request +A challenge password []: +An optional company name []: +Using configuration from /etc/openvpn/easy-rsa/openssl-1.0.0.cnf +Check that the request matches the signature +Signature ok +The Subject's Distinguished Name is as follows +countryName :PRINTABLE:'US' +stateOrProvinceName :PRINTABLE:'CA' +localityName :PRINTABLE:'Silicon Valley' +organizationName :PRINTABLE:'Linux Journal' +organizationalUnitName:PRINTABLE:'SecTeam' +commonName :PRINTABLE:'bills-computer' +name :PRINTABLE:'bills-computer' +emailAddress :IA5STRING:'bill.childers@linuxjournal.com' +Certificate is to be certified until Sep 1 07:35:07 2025 GMT (3650 days) +Sign the certificate? 
[y/n]:y + +1 out of 1 certificate requests certified, commit? [y/n]y +Write out database with 1 new entries +Data Base Updated +root@test:/etc/openvpn/easy-rsa# +``` + +现在你需要再生成一个 HMAC 码作为共享密钥来进一步增加整个加密提供的安全性: + + +``` +root@test:~# openvpn --genkey --secret /etc/openvpn/easy-rsa/keys/ta.key +``` + +### 配置服务器 + +最后,我们到了配置 OpenVPN 服务的时候了。你需要创建一个 `/etc/openvpn/server.conf` 文件;这个配置文件的大多数地方都可以套用模板解决。设置 OpenVPN 服务的主要修改在于让它只用 TCP 而不是 UDP 链接。这是下一步所必需的---如果不是 TCP 连接那么你的服务将不能工作在端口 443 上。创建 `/etc/openvpn/server.conf` 然后把下述配置丢进去: + + +``` +port 1194 +proto tcp +dev tun +ca easy-rsa/keys/ca.crt +cert easy-rsa/keys/test.linuxjournal.com.crt ## or whatever your hostname was +key easy-rsa/keys/test.linuxjournal.com.key ## Hostname key - This file should be kept secret +management localhost 7505 +dh easy-rsa/keys/dh4096.pem +tls-auth /etc/openvpn/certs/ta.key 0 +server 10.8.0.0 255.255.255.0 # The server will use this subnet for clients connecting to it +ifconfig-pool-persist ipp.txt +push "redirect-gateway def1 bypass-dhcp" # Forces clients to redirect all traffic through the VPN +push "dhcp-option DNS 192.168.1.1" # Tells the client to use the DNS server at 192.168.1.1 for DNS - replace with the IP address of the OpenVPN machine and clients will use the BIND server setup earlier +keepalive 30 240 +comp-lzo # Enable compression +persist-key +persist-tun +status openvpn-status.log +verb 3 +``` + +最后,你将需要在服务器上启用 IP 转发,配置 OpenVPN 为开机启动,并立刻启动 OpenVPN 服务: + + +``` +root@test:/etc/openvpn/easy-rsa/keys# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf +root@test:/etc/openvpn/easy-rsa/keys# sysctl -p /etc/sysctl.conf +net.core.wmem_max = 12582912 +net.core.rmem_max = 12582912 +net.ipv4.tcp_rmem = 10240 87380 12582912 +net.ipv4.tcp_wmem = 10240 87380 12582912 +net.core.wmem_max = 12582912 +net.core.rmem_max = 12582912 +net.ipv4.tcp_rmem = 10240 87380 12582912 +net.ipv4.tcp_wmem = 10240 87380 12582912 +net.core.wmem_max = 12582912 +net.core.rmem_max = 12582912 +net.ipv4.tcp_rmem = 10240 87380 12582912 +net.ipv4.tcp_wmem = 10240 87380 12582912 +net.ipv4.ip_forward = 0 +net.ipv4.ip_forward = 1 + +root@test:/etc/openvpn/easy-rsa/keys# update-rc.d openvpn defaults +update-rc.d: using dependency based boot sequencing + +root@test:/etc/openvpn/easy-rsa/keys# /etc/init.d/openvpn start +[ ok ] Starting virtual private network daemon:. +``` + +### 配置 OpenVPN 客户端 + +客户端的安装取决于客户端的操作系统,但你需要将之前生成的证书和密钥复制到你的客户端上,并导入你的 OpenVPN 客户端并新建一个配置文件。每种操作系统下的 OpenVPN 客户端在操作上会有些稍许不同,这也不在这篇文章的覆盖范围内,所以你最好去看看特定操作系统下的 OpenVPN 文档来获取更多信息。请参考本文档里的资源那一节。 + +### 安装 SSLH —— "魔法"多协议切换工具 + +本文章介绍的解决方案最有趣的部分就是运用 SSLH 了。SSLH 是一个多重协议工具——它可以监听 443 端口的流量,然后分析他们是 SSH,HTTPS 还是 OpenVPN 的通讯包,并把它们分别转发给正确的系统服务。这就是为何本解决方案可以让你绕过大多数端口封杀——你可以一直使用 HTTPS 通讯,因为它几乎从来不会被封杀。 + +同样,直接 `apt-get` 安装: + + +``` +root@test:/etc/openvpn/easy-rsa/keys# apt-get install sslh +Reading package lists... Done +Building dependency tree +Reading state information... Done +The following extra packages will be installed: + apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common + libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libconfig9 +Suggested packages: + apache2-doc apache2-suexec apache2-suexec-custom openbsd-inetd inet-superserver +The following NEW packages will be installed: + apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common + libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libconfig9 sslh +0 upgraded, 11 newly installed, 0 to remove and 0 not upgraded. +Need to get 1,568 kB of archives. 
+After this operation, 5,822 kB of additional disk space will be used.
+Do you want to continue [Y/n]? y
+```
+
+在 SSLH 被安装之后,包管理器会询问要在 inetd 还是 standalone 模式下运行。选择 standalone 模式,因为你希望 SSLH 在它自己的进程里运行。如果你没有安装 Apache,apt 包管理器会自动帮你下载并安装的,尽管它并不是必不可少的。如果你已经有 Apache 了,那你需要确保它只监听 localhost 端口而不是所有的端口(不然的话 SSLH 会无法运行,因为 443 端口已经被 Apache 监听占用)。安装后,你会看到一个如下所示的错误信息:
+
+```
+[....] Starting ssl/ssh multiplexer: sslhsslh disabled, please adjust the configuration to your needs
+[FAIL] and then set RUN to 'yes' in /etc/default/sslh to enable it. ... failed!
+failed!
+```
+
+这其实并不是错误信息,只是 SSLH 在提醒你它还未被配置所以无法启动,这很正常。配置 SSLH 相对来说比较简单。它的配置文件放置在 `/etc/default/sslh`,你只需要修改 `RUN` 和 `DAEMON_OPTS` 变量就可以了。我的 SSLH 配置文件如下所示:
+
+```
+# Default options for sslh initscript
+# sourced by /etc/init.d/sslh
+
+# Disabled by default, to force yourself
+# to read the configuration:
+# - /usr/share/doc/sslh/README.Debian (quick start)
+# - /usr/share/doc/sslh/README, at "Configuration" section
+# - sslh(8) via "man sslh" for more configuration details.
+# Once configuration ready, you *must* set RUN to yes here
+# and try to start sslh (standalone mode only)
+
+RUN=yes
+
+# binary to use: forked (sslh) or single-thread (sslh-select) version
+DAEMON=/usr/sbin/sslh
+
+DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --openvpn 127.0.0.1:1194 --pidfile /var/run/sslh/sslh.pid"
+```
+
+保存编辑并启动 SSLH:
+
+```
+root@test:/etc/openvpn/easy-rsa/keys# /etc/init.d/sslh start
+[ ok ] Starting ssl/ssh multiplexer: sslh.
+```
+
+现在你应该可以从 443 端口 ssh 到你的树莓派了,它会正确地使用 SSLH 转发:
+
+```
+$ ssh -p 443 root@test.linuxjournal.com
+root@test:~#
+```
+
+SSLH 现在开始监听端口 443 并且可以转发流量信息到 SSH、Apache 或者 OpenVPN,这取决于抵达流量包的类型。这套系统现已整装待发了!
+
+### 结论
+
+现在你可以启动 OpenVPN 并且配置你的客户端连接到服务器的 443 端口了,然后 SSLH 会从那里把流量转发到服务器的 1194 端口。但鉴于你正在和服务器的 443 端口通信,你的 VPN 流量不会被封锁。现在你可以舒服地坐在陌生小镇的咖啡店里,畅通无阻地通过你的树莓派上的 OpenVPN 浏览互联网。你顺便还给你的链接增加了一些安全性,这个额外作用也会让你的链接更安全和私密一些。享受通过安全跳板浏览互联网吧!
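+
+最后再补充一个小技巧:你可以把 443 端口的连接参数固定在客户端的 `~/.ssh/config` 里,免得每次都手动输入 `-p 443`。下面是一个最小化的示例(其中 `jumpbox` 这个别名是为演示而假设的,主机名请换成你自己的):
+
+```
+Host jumpbox
+    HostName test.linuxjournal.com
+    Port 443
+    User root
+```
+
+保存之后,运行 `ssh jumpbox` 就等价于前面的 `ssh -p 443 root@test.linuxjournal.com` 了。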
+ + +### 参考资源 + +- 安装与配置 OpenVPN: [https://wiki.debian.org/OpenVPN](https://wiki.debian.org/OpenVPN) 和 [http://cryptotap.com/articles/openvpn](http://cryptotap.com/articles/openvpn) +- OpenVPN 客户端下载: [https://openvpn.net/index.php/open-source/downloads.html](https://openvpn.net/index.php/open-source/downloads.html) +- OpenVPN iOS 客户端: [https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8](https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8) +- OpenVPN Android 客户端: [https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en](https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en) +- Tunnelblick for Mac OS X (OpenVPN 客户端): [https://tunnelblick.net](https://tunnelblick.net) +- SSLH 介绍: [http://www.rutschle.net/tech/sslh.shtml](http://www.rutschle.net/tech/sslh.shtml) 和 [https://github.com/yrutschle/sslh](https://github.com/yrutschle/sslh) + + +---------- +via: http://www.linuxjournal.com/content/securi-pi-using-raspberry-pi-secure-landing-point?page=0,0 + +作者:[Bill Childers][a] +译者:[Moelf](https://github.com/Moelf) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxjournal.com/users/bill-childers diff --git a/published/201601/20101020 19 Years of KDE History--Step by Step.md b/published/201601/20101020 19 Years of KDE History--Step by Step.md index 50cfff73fb..a13d468b7d 100644 --- a/published/201601/20101020 19 Years of KDE History--Step by Step.md +++ b/published/201601/20101020 19 Years of KDE History--Step by Step.md @@ -222,7 +222,7 @@ KDE Plasma 5 – 第五代 KDE。大幅改进了设计和系统,新的默认 via: [https://tlhp.cf/kde-history/](https://tlhp.cf/kde-history/) 作者:[Pavlo Rudyi][a] -译者:[jerryling315](https://github.com/jerryling315) +译者:[Moelf](https://github.com/Moelf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20160218 How to Best Manage Encryption Keys on Linux.md b/published/20160218 How to Best Manage Encryption Keys on Linux.md new file mode 100644 index 0000000000..fa0e559ff9 --- /dev/null +++ b/published/20160218 How to Best Manage Encryption Keys on Linux.md @@ -0,0 +1,104 @@ +在 Linux 上管理加密密钥的最佳体验 +============================================= + +存储 SSH 的加密秘钥和记住密码一直是一个让人头疼的问题。但是不幸的是,在当前这个充满了恶意黑客和攻击的世界中,基本的安全预防是必不可少的。对于许多普通用户来说,大多数人只能是记住密码,也可能寻找到一个好程序去存储密码,正如我们提醒这些用户不要在每个网站采用相同的密码。但是对于在各个 IT 领域的人们,我们需要将这个事情提高一个层面。我们需要使用像 SSH 密钥这样的加密秘钥,而不只是密码。 + +设想一个场景:我有一个运行在云上的服务器,用作我的主 git 库。我有很多台工作电脑,所有这些电脑都需要登录到这个中央服务器去做 push 与 pull 操作。这里我设置 git 使用 SSH。当 git 使用 SSH 时,git 实际上是以 SSH 的方式登录到服务器,就好像你通过 SSH 命令打开一个服务器的命令行一样。为了把这些配置好,我在我的 .ssh 目录下创建一个配置文件,其中包含一个有服务器名字、主机名、登录用户、密钥文件路径等信息的主机项。之后我可以通过输入如下命令来测试这个配置是否正确。 + + ssh gitserver + +很快我就可以访问到服务器的 bash shell。现在我可以配置 git 使用相同配置项以及存储的密钥来登录服务器。这很简单,只是有一个问题:对于每一个我要用它登录服务器的电脑,我都需要有一个密钥文件,那意味着需要密钥文件会放在很多地方。我会在当前这台电脑上存储这些密钥文件,我的其他电脑也都需要存储这些。就像那些有特别多的密码的用户一样,我们这些 IT 人员也被这些特别多的密钥文件淹没。怎么办呢? + +### 清理 + +在我们开始帮助你管理密钥之前,你需要有一些密钥应该怎么使用的基础知识,以及明白我们下面的提问的意义所在。同时,有个前提,也是最重要的,你应该知道你的公钥和私钥该放在哪里。然后假设你应该知道: + +1. 公钥和私钥之间的差异; +2. 为什么你不可以从公钥生成私钥,但是反之则可以? +3. `authorized_keys` 文件的目的以及里面包含什么内容; +4. 
如何使用私钥去登录一个你的对应公钥存储在其上的 `authorized_keys` 文件中的服务器。 + +![](http://www.linux.com/images/stories/41373/key-management-diagram.png) + +这里有一个例子。当你在亚马逊的网络服务上创建一个云服务器,你必须提供一个用于连接你的服务器的 SSH 密钥。每个密钥都有一个公开的部分(公钥)和私密的部分(私钥)。你要想让你的服务器安全,乍看之下你可能应该将你的私钥放到服务器上,同时你自己带着公钥。毕竟,你不想你的服务器被公开访问,对吗?但是实际上的做法正好是相反的。 + +你应该把自己的公钥放到 AWS 服务器,同时你持有用于登录服务器的私钥。你需要保护好私钥,并让它处于你的控制之中,而不是放在一些远程服务器上,正如上图中所示。 + +原因如下:如果公钥被其他人知道了,它们不能用于登录服务器,因为他们没有私钥。进一步说,如果有人成功攻入你的服务器,他们所能找到的只是公钥,他们不可以从公钥生成私钥。同时,如果你在其他的服务器上使用了相同的公钥,他们不可以使用它去登录别的电脑。 + +这就是为什么你要把你自己的公钥放到你的服务器上以便通过 SSH 登录这些服务器。你持有这些私钥,不要让这些私钥脱离你的控制。 + +但是还有一点麻烦。试想一下我 git 服务器的例子。我需要做一些抉择。有时我登录架设在别的地方的开发服务器,而在开发服务器上,我需要连接我的 git 服务器。如何使我的开发服务器连接 git 服务器?显然是通过使用私钥,但这样就会有问题。在该场景中,需要我把私钥放置到一个架设在别的地方的服务器上,这相当危险。 + +一个进一步的场景:如果我要使用一个密钥去登录许多的服务器,怎么办?如果一个入侵者得到这个私钥,这个人就能用这个私钥得到整个服务器网络的权限,这可能带来一些严重的破坏,这非常糟糕。 + +同时,这也带来了另外一个问题,我真的应该在这些其他服务器上使用相同的密钥吗?因为我刚才描述的,那会非常危险的。 + +最后,这听起来有些混乱,但是确实有一些简单的解决方案。让我们有条理地组织一下。 + +(注意,除了登录服务器,还有很多地方需要私钥密钥,但是我提出的这个场景可以向你展示当你使用密钥时你所面对的问题。) + +### 常规口令 + +当你创建你的密钥时,你可以选择是否包含一个密钥使用时的口令。有了这个口令,私钥文件本身就会被口令所加密。例如,如果你有一个公钥存储在服务器上,同时你使用私钥去登录服务器的时候,你会被提示输入该口令。没有口令,这个密钥是无法使用的。或者你也可以配置你的密钥不需要口令,然后只需要密钥文件就可以登录服务器了。 + +一般来说,不使用口令对于用户来说是更方便的,但是在很多情况下我强烈建议使用口令,原因是,如果私钥文件被偷了,偷密钥的人仍然不可以使用它,除非他或者她可以找到口令。在理论上,这个将节省你很多时间,因为你可以在攻击者发现口令之前,从服务器上删除公钥文件,从而保护你的系统。当然还有一些使用口令的其它原因,但是在很多场合这个原因对我来说更有价值。(举一个例子,我的 Android 平板上有 VNC 软件。平板上有我的密钥。如果我的平板被偷了之后,我会马上从服务器上删除公钥,使得它的私钥没有作用,无论有没有口令。)但是在一些情况下我不使用口令,是因为我正在登录的服务器上没有什么有价值的数据,这取决于情境。 + +### 服务器基础设施 + +你如何设计自己服务器的基础设施将会影响到你如何管理你的密钥。例如,如果你有很多用户登录,你将需要决定每个用户是否需要一个单独的密钥。(一般来说,应该如此;你不会想在用户之间共享私钥。那样当一个用户离开组织或者失去信任时,你可以删除那个用户的公钥,而不需要必须给其他人生成新的密钥。相似地,通过共享密钥,他们能以其他人的身份登录,这就更糟糕了。)但是另外一个问题是你如何配置你的服务器。举例来说,你是否使用像 Puppet 这样工具配置大量的服务器?你是否基于你自己的镜像创建大量的服务器?当你复制你的服务器,是否每一个的密钥都一样?不同的云服务器软件允许你配置如何选择;你可以让这些服务器使用相同的密钥,也可以给每一个服务器生成一个新的密钥。 + +如果你在操作这些复制的服务器,如果用户需要使用不同的密钥登录两个不同但是大部分都一样的系统,它可能导致混淆。但是另一方面,服务器共享相同的密钥会有安全风险。或者,第三,如果你的密钥有除了登录之外的需要(比如挂载加密的驱动),那么你会在很多地方需要相同的密钥。正如你所看到的,你是否需要在不同的服务器上使用相同的密钥不是我能为你做的决定;这其中有权衡,你需要自己去决定什么是最好的。 + +最终,你可能会有: + +- 需要登录的多个服务器 +- 多个用户登录到不同的服务器,每个都有自己的密钥 +- 每个用户使用多个密钥登录到不同的服务器 + +(如果你正在别的情况下使用密钥,这个同样的普适理论也能应用于如何使用密钥,需要多少密钥,它们是否共享,你如何处理公私钥等方面。) + +### 安全方法 + +了解你的基础设施和特有的情况,你需要组合一个密钥管理方案,它会指导你如何去分发和存储你的密钥。比如,正如我之前提到的,如果我的平板被偷了,我会从我服务器上删除公钥,我希望这在平板在用于访问服务器之前完成。同样的,我会在我的整体计划中考虑以下内容: + +1. 私钥可以放在移动设备上,但是必须包含口令; +2. 必须有一个可以快速地从服务器上删除公钥的方法。 + +在你的情况中,你可能决定你不想在自己经常登录的系统上使用口令;比如,这个系统可能是一个开发者一天登录多次的测试机器。这没有问题,但是你需要调整一点你的规则。你可以添加一条规则:不可以通过移动设备登录该机器。换句话说,你需要根据自己的状况构建你的准则,不要假设某个方案放之四海而皆准。 + +### 软件 + +至于软件,令人吃惊的是,现实世界中并没有很多好的、可靠的存储和管理私钥的软件解决方案。但是应该有吗?考虑下这个,如果你有一个程序存储你所有服务器的全部密钥,并且这个程序被一个快捷的密钥锁住,那么你的密钥就真的安全了吗?或者类似的,如果你的密钥被放置在你的硬盘上,用于 SSH 程序快速访问,密钥管理软件是否真正提供了任何保护吗? 
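+
+顺带一提,即使不借助任何管理软件,前文“必须有一个可以快速地从服务器上删除公钥的方法”这条准则也能用最普通的命令落实。下面是一个假设性的草图,假定生成密钥时用 `-C` 给公钥加上了可识别的设备注释,其中 `my-tablet` 纯属演示用的假设名字:
+
+```
+# 生成密钥时带上设备注释,便于日后按注释定位(会交互式提示设置口令)
+ssh-keygen -t rsa -b 4096 -C "my-tablet"
+
+# 设备丢失时,在服务器上按注释删掉对应公钥,该设备的私钥随即失效
+sed -i.bak '/my-tablet/d' ~/.ssh/authorized_keys
+```
+
+这当然不是唯一的做法,只是用来说明:吊销一把密钥可以有多快。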
+ +但是对于整体基础设施和创建/管理公钥来说,有许多的解决方案。我已经提到了 Puppet,在 Puppet 的世界中,你可以创建模块以不同的方式管理你的服务器。这个想法是服务器是动态的,而且不需要精确地复制彼此。[这里有一个聪明的方法](http://manuel.kiessling.net/2014/03/26/building-manageable-server-infrastructures-with-puppet-part-4/),在不同的服务器上使用相同的密钥,但是对于每一个用户使用不同的 Puppet 模块。这个方案可能适合你,也可能不适合你。 + +或者,另一个选择就是完全换个不同的档位。在 Docker 的世界中,你可以采取一个不同的方式,正如[关于 SSH 和 Docker 博客](http://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/)所描述的那样。 + +但是怎么样管理私钥?如果你搜索过的话,你无法找到很多可以选择的软件,原因我之前提到过;私钥存放在你的硬盘上,一个管理软件可能无法提到更多额外的安全。但是我使用这种方法来管理我的密钥: + +首先,我的 `.ssh/config` 文件中有很多的主机项。我要登录的都有一个主机项,但是有时我对于一个单独的主机有不止一项。如果我有很多登录方式,就会出现这种情况。对于放置我的 git 库的服务器来说,我有两个不同的登录项;一个限制于 git,另一个用于一般用途的 bash 访问。这个为 git 设置的登录选项在机器上有极大的限制。还记得我之前说的我存储在远程开发机器上的 git 密钥吗?好了。虽然这些密钥可以登录到我其中一个服务器,但是使用的账号是被严格限制的。 + +其次,大部分的私钥都包含口令。(对于需要多次输入口令的情况,考虑使用 [ssh-agent](http://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/)。) + +再次,我有一些我想要更加小心地保护的服务器,我不会把这些主机项放在我的 host 文件中。这更加接近于社会工程方面,密钥文件还在,但是可能需要攻击者花费更长的时间去找到这个密钥文件,分析出来它们对应的机器。在这种情况下,我就需要手动打出来一条长长的 SSH 命令。(没那么可怕。) + +同时你可以看出来我没有使用任何特别的软件去管理这些私钥。 + +## 无放之四海而皆准的方案 + +我们偶尔会在 linux.com 收到一些问题,询问管理密钥的好软件的建议。但是退一步看,这个问题事实上需要重新思考,因为没有一个普适的解决方案。你问的问题应该基于你自己的状况。你是否简单地尝试找到一个位置去存储你的密钥文件?你是否寻找一个方法去管理多用户问题,其中每个人都需要将他们自己的公钥插入到 `authorized_keys` 文件中? + +通过这篇文章,我已经囊括了这方面的基础知识,希望到此你明白如何管理你的密钥,并且,只有当你问出了正确的问题,无论你寻找任何软件(甚至你需要另外的软件),它都会出现。 + +------------------------------------------------------------------------------ + +via: http://www.linux.com/learn/tutorials/838235-how-to-best-manage-encryption-keys-on-linux + +作者:[Jeff Cogswell][a] +译者:[mudongliang](https://github.com/mudongliang) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linux.com/community/forums/person/62256 diff --git a/published/20160218 What do Linux developers think of Git and GitHub.md b/published/20160218 What do Linux developers think of Git and GitHub.md new file mode 100644 index 0000000000..1824c6e19e --- /dev/null +++ b/published/20160218 What do Linux developers think of Git and GitHub.md @@ -0,0 +1,110 @@ +Linux 开发者如何看待 Git 和 Github? +===================================================== + +### Linux 开发者如何看待 Git 和 Github? + +Git 和 Github 在 Linux 开发者中有很高的知名度。但是开发者如何看待它们呢?另外,Github 是不是真的和 Git 是一个意思?一个 Linux reddit 用户最近问到了这个问题,并且得到了很有意思的答案。 + +Dontwakemeup46 提问: + +> 我正在学习 Git 和 Github。我感兴趣社区如何看待两者?据我所知,Git 和 Github 应用十分广泛。但是 Git 或 Github 有没有严重的不足?社区喜欢去改变些什么呢? + +[更多见 Reddit](https://www.reddit.com/r/linux/comments/45jy59/the_popularity_of_git_and_github/) + +与他志同道合的 Linux reddit 用户回答了他们对于 Git 和 Github的观点: + +>**Derenir**: “Github 并不附属于 Git。 + +> Git 是由 Linus Torvalds 开发的。 + +> Github 几乎不支持 Linux。 + +> Github 是一家企图借助 Git 赚钱的公司。 + +> https://desktop.github.com/ 并没有支持 Linux。” + +--- +>**Bilog78**: “一个小的补充: Linus Torvalds 已经不再维护 Git了。维护者是 Junio C Hamano,以及 在他之后的主要贡献者是 Jeff King 和 Shawn O. 
Pearce。” + +--- + +>**Fearthefuture**: “我喜欢 Git,但是不明白人们为什么还要使用 Github。从我的角度,Github 比 Bitbucket 好的一点是用户统计和更大的用户基础。Bitbucket 有无限的免费私有库,更好的 UI,以及更好地集成了其他服务,比如说 Jenkins。” + +--- + +>**Thunger**: “Gitlab.com 也很不错,特别是你可以在自己的服务器上架设自己的实例。” + +--- + +>**Takluyver**: “很多人熟悉 Github 的 UI 以及相关联的服务,比如说 Travis 。并且很多人都有 Github 账号,所以它是存储项目的一个很好的地方。人们也使用他们的 Github 个人信息页作为一种求职用的作品选辑,所以他们很积极地将更多的项目放在这里。Github 是一个存放开源项目的事实标准。” + +--- + +>**Tdammers**: “Git 严重问题在于 UI,它有些违反直觉,以至于很多用户只能达到使用一些容易记住的咒语的程度。” + +> Github:最严重的问题在于它是商业托管的解决方案;你买了方便,但是代价是你的代码在别人的服务器上面,已经不在你的掌控范围之内了。另一个对于 Github 的普遍批判是它的工作流和 Git 本身的精神不符,特别是 pull requests 工作的方式。最后, Github 垄断了代码的托管环境,同时对于多样性是很不好的,这反过来对于旺盛的免费软件社区很重要。” + +--- + +>**Dies**: “更重要的是,如果一旦是这样,按照现状来说,我猜我们会被 Github 所困,因为它们控制如此多的项目。” + +--- + +>**Tdammers**: “代码托管在别人的服务器上,这里"别人"指的是 Github。这对于开源项目来说,并不是什么太大的问题,但是尽管如此,你无法控制它。如果你在 Github 上有私有项目,“它将保持私有”的唯一的保险只是 Github 的承诺而已。如果你决定删除东西,你不能确定东西是否被删除了,或者只是隐藏了。 + +Github 并不自己控制这些项目(你总是可以拿走你的代码,然后托管到别的地方,声明新位置是“官方”的),它只是有比开发者本身有更深的使用权。” + +--- + +>**Drelos**: “我已经读了大量的关于 Github 的赞美与批评。(这里有一个[例子](http://www.wired.com/2015/06/problem-putting-worlds-code-github/)),但是我的幼稚问题是为什么不向一个免费开源的版本努力呢?” + +--- + +>**Twizmwazin**: “Gitlab 的源码就存在这里” + +--- + +[更多见 Reddit](https://www.reddit.com/r/linux/comments/45jy59/the_popularity_of_git_and_github/) + +### DistroWatch 评估 XStream 桌面 153 版本 + +XStreamOS 是一个由 Sonicle 创建的 Solaris 的一个版本。XStream 桌面将 Solaris 的强大带给了桌面用户,同时新手用户很可能有兴趣体验一下。DistroWatch 对于 XStream 桌面 153 版本做了一个很全面的评估,并且发现它运行相当好。 + +Jesse Smith 为 DistroWatch 报道: + +> 我认为 XStream 桌面做好了很多事情。诚然,当操作系统无法在我的硬件上启动,同时当运行在 VirtualBox 中时我无法使得桌面使用我显示器的完整分辨率,我的开端并不很成功。不过,除此之外,XStream 表现的很好。安装器工作的很好,该系统自动设置和使用了引导环境(boot environments),这让我们可以在发生错误时恢复该系统。包管理器有工作的不错, XStream 带了一套有用的软件。 + +> 我确实在播放多媒体文件时遇见一些问题,特别是使声卡工作。我不确定这是不是又一个硬件兼容问题,或者是该操作系统自带的多媒体软件的问题。另一方面,像 Web 浏览器,电子邮件,生产工具套件以及配置工具这样的工作的很好。 + +> 我最欣赏 XStream 的地方是这个操作系统是 OpenSolaris 家族的一个使用保持最新的分支。OpenSolaris 的其他衍生系统有落后的倾向,但是至少在桌面软件上,XStream 搭载最新版本的火狐和 LibreOffice。 + +> 对我个人来说,XStream 缺少一些组件,比如打印机管理器,多媒体支持和我的特定硬件的驱动。这个操作系统的其他方面也是相当吸引人的。我喜欢开发者搭配了 LXDE,也喜欢它的默认软件集,以及我最喜欢文件系统快照和启动环境开箱即用的方式。大多数的 Linux 发行版,openSUSE 除外,并没有利用好引导环境(boot environments)的用途。我希望它是一个被更多项目采用的技术。 + +[更多见 DistroWatch](http://distrowatch.com/weekly.php?issue=20160215#xstreamos) + +### 街头霸王 V 和 SteamOS + +街头霸王是最出名的游戏之一,并且 [Capcom 已经宣布](http://steamcommunity.com/games/310950/announcements/detail/857177755595160250) 街头霸王 V 将会在这个春天进入 Linux 和 StreamOS。这对于 Linux 游戏者是非常好的消息。 + +Joe Parlock 为 Destructoid 报道: + +> 你是不足 1% 的那些在 Linux 系统上玩游戏的 Stream 用户吗?你是更少数的那些在 Linux 平台上玩游戏,同时也很喜欢街头霸王 V 的人之一吗?是的话,我有一些好消息要告诉你。 + +> Capcom 已经宣布,这个春天街头霸王 V 通过 Stream 进入 StreamOS 以及其他 Linux 发行版。它无需任何额外的花费,所以那些已经在自己的个人电脑上安装了该游戏的人可以很容易在 Linux 上安装它并玩了。 + +[更多 Destructoid](https://www.destructoid.com/street-fighter-v-is-coming-to-linux-and-steamos-this-spring-341531.phtml) + +你是否错过了摘要?检查 [开源之眼的主页](http://www.infoworld.com/blog/eye-on-open/) 来获得关于 Linux 和开源的最新的新闻。 + +------------------------------------------------------------------------------ + +via: http://www.infoworld.com/article/3033059/linux/what-do-linux-developers-think-of-git-and-github.html + +作者:[Jim Lynch][a] +译者:[mudongliang](https://github.com/mudongliang) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.infoworld.com/author/Jim-Lynch/ + diff --git a/published/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md b/published/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md new file 
mode 100644 index 0000000000..3db9318138 --- /dev/null +++ b/published/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md @@ -0,0 +1,213 @@ +如何在 Ubuntu 15.04/CentOS 7 中安装 Lighttpd Web 服务器 +================================================================================= + +Lighttpd 是一款开源 Web 服务器软件。Lighttpd 安全快速,符合行业标准,适配性强并且针对高配置环境进行了优化。相对于其它的 Web 服务器而言,Lighttpd 占用内存更少;因其对 CPU 占用小和对处理速度的优化而在效率和速度方面从众多 Web 服务器中脱颖而出。而 Lighttpd 诸如 FastCGI、CGI、认证、输出压缩、URL 重写等高级功能更是那些面临性能压力的服务器的福音。 + +以下便是我们在运行 Ubuntu 15.04 或 CentOS 7 Linux 发行版的机器上安装 Lighttpd Web 服务器的简要流程。 + +### 安装Lighttpd + +#### 使用包管理器安装 + +这里我们通过使用包管理器这种最简单的方法来安装 Lighttpd。只需以 sudo 模式在终端或控制台中输入下面的指令即可。 + +**CentOS 7** + +由于 CentOS 7.0 官方仓库中并没有提供 Lighttpd,所以我们需要在系统中安装额外的软件源 epel 仓库。使用下面的 yum 指令来安装 epel。 + + # yum install epel-release + +然后,我们需要更新系统及为 Lighttpd 的安装做前置准备。 + + # yum update + # yum install lighttpd + +![Install Lighttpd Centos](http://blog.linoxide.com/wp-content/uploads/2016/02/install-lighttpd-centos.png) + +**Ubuntu 15.04** + +Ubuntu 15.04 官方仓库中包含了 Lighttpd,所以只需更新本地仓库索引并使用 apt-get 指令即可安装 Lighttpd。 + + # apt-get update + # apt-get install lighttpd + +![Install lighttpd ubuntu](http://blog.linoxide.com/wp-content/uploads/2016/02/install-lighttpd-ubuntu.png) + +#### 从源代码安装 Lighttpd + +如果想从 Lighttpd 源码安装最新版本(例如 1.4.39),我们需要在本地编译源码并进行安装。首先我们要安装编译源码所需的依赖包。 + + # cd /tmp/ + # wget http://download.lighttpd.net/lighttpd/releases-1.4.x/lighttpd-1.4.39.tar.gz + +下载完成后,执行下面的指令解压缩。 + + # tar -zxvf lighttpd-1.4.39.tar.gz + +然后使用下面的指令进行编译。 + + # cd lighttpd-1.4.39 + # ./configure + # make + +**注:**在这份教程中,我们安装的是默认配置的 Lighttpd。其他拓展功能,如对 SSL 的支持,mod_rewrite,mod_redirect 等,需自行配置。 + +当编译完成后,我们就可以把它安装到系统中了。 + + # make install + +### 设置 Lighttpd + +如果有更高的需求,我们可以通过修改默认设置文件,如`/etc/lighttpd/lighttpd.conf`,来对 Lighttpd 进行进一步设置。 而在这份教程中我们将使用默认设置,不对设置文件进行修改。如果你曾做过修改并想检查设置文件是否出错,可以执行下面的指令。 + + # lighttpd -t -f /etc/lighttpd/lighttpd.conf + +#### 使用 CentOS 7 + +在 CentOS 7 中,我们需创建一个在 Lighttpd 默认配置文件中设置的 webroot 文件夹,例如`/src/www/htdocs`。 + + # mkdir -p /srv/www/htdocs/ + +而后将默认欢迎页面从`/var/www/lighttpd`复制至刚刚新建的目录中: + + # cp -r /var/www/lighttpd/* /srv/www/htdocs/ + +### 开启服务 + +现在,通过执行 systemctl 指令来重启 Web 服务。 + + # systemctl start lighttpd + +然后我们将它设置为伴随系统启动自动运行。 + + # systemctl enable lighttpd + +### 设置防火墙 + +如要让我们运行在 Lighttpd 上的网页或网站能在 Internet 或同一个网络内被访问,我们需要在防火墙程序中设置打开 80 端口。由于 CentOS 7 和 Ubuntu15.04 都附带 Systemd 作为默认初始化系统,所以我们默认用的都是 firewalld。如果要打开 80 端口或 http 服务,我们只需执行下面的命令: + + # firewall-cmd --permanent --add-service=http + success + # firewall-cmd --reload + success + +### 连接至 Web 服务器 + +在将 80 端口设置为默认端口后,我们就可以直接访问 Lighttpd 的默认欢迎页了。我们需要根据运行 Lighttpd 的设备来设置浏览器的 IP 地址和域名。在本教程中,我们令浏览器访问 [http://lighttpd.linoxide.com/](http://lighttpd.linoxide.com/) ,同时将该子域名指向上述 IP 地址。如此一来,我们就可以在浏览器中看到如下的欢迎页面了。 + +![Lighttpd Welcome Page](http://blog.linoxide.com/wp-content/uploads/2016/02/lighttpd-welcome-page.png) + +此外,我们可以将网站的文件添加到 webroot 目录下,并删除 Lighttpd 的默认索引文件,使我们的静态网站可以在互联网上访问。 + +如果想在 Lighttpd Web 服务器中运行 PHP 应用,请参考下面的步骤: + +### 安装 PHP5 模块 + +在 Lighttpd 成功安装后,我们需要安装 PHP 及相关模块,以在 Lighttpd 中运行 PHP5 脚本。 + +#### 使用 Ubuntu 15.04 + + # apt-get install php5 php5-cgi php5-fpm php5-mysql php5-curl php5-gd php5-intl php5-imagick php5-mcrypt php5-memcache php-pear + +#### 使用 CentOS 7 + + # yum install php php-cgi php-fpm php-mysql php-curl php-gd php-intl php-pecl-imagick php-mcrypt php-memcache php-pear lighttpd-fastcgi + +### 设置 Lighttpd 的 PHP 服务 + +如要让 PHP 与 Lighttpd 协同工作,我们只要根据所使用的发行版执行如下对应的指令即可。 + +#### 使用 CentOS 7 + +首先要做的便是使用文件编辑器编辑 php 
设置文件(例如`/etc/php.ini`)并取消掉对**cgi.fix_pathinfo=1**这一行的注释。
+
+    # nano /etc/php.ini
+
+完成上面的步骤之后,我们需要把 PHP-FPM 进程的所有权从 Apache 转移至 Lighttpd。要完成这些,首先用文件编辑器打开`/etc/php-fpm.d/www.conf`文件。
+
+    # nano /etc/php-fpm.d/www.conf
+
+然后在文件中增加下面的语句:
+
+    user = lighttpd
+    group = lighttpd
+
+做完这些,我们保存并退出文本编辑器。然后从`/etc/lighttpd/modules.conf`设置文件中添加 FastCGI 模块。
+
+    # nano /etc/lighttpd/modules.conf
+
+然后,去掉下面语句前面的`#`来取消对它的注释。
+
+    include "conf.d/fastcgi.conf"
+
+最后我们还需在文本编辑器中设置 FastCGI 的设置文件。
+
+    # nano /etc/lighttpd/conf.d/fastcgi.conf
+
+在文件尾部添加以下代码:
+
+    fastcgi.server += ( ".php" =>
+        ((
+            "host" => "127.0.0.1",
+            "port" => "9000",
+            "broken-scriptfilename" => "enable"
+        ))
+    )
+
+在编辑完成后保存并退出文本编辑器即可。
+
+#### 使用 Ubuntu 15.04
+
+如需启用 Lighttpd 的 FastCGI,只需执行下列代码:
+
+    # lighttpd-enable-mod fastcgi
+
+    Enabling fastcgi: ok
+    Run /etc/init.d/lighttpd force-reload to enable changes
+
+    # lighttpd-enable-mod fastcgi-php
+
+    Enabling fastcgi-php: ok
+    Run /etc/init.d/lighttpd force-reload to enable changes
+
+然后,执行下列命令来重启 Lighttpd。
+
+    # systemctl force-reload lighttpd
+
+### 检测 PHP 工作状态
+
+如需检测 PHP 是否按预期工作,我们需在 Lighttpd 的 webroot 目录下新建一个 php 文件。本教程中,也就是在 CentOS 下的 /srv/www/htdocs 目录、Ubuntu 下的 /var/www/html 目录中,使用文本编辑器创建并打开 info.php。
+
+**使用 CentOS 7**
+
+    # nano /srv/www/htdocs/info.php
+
+**使用 Ubuntu 15.04**
+
+    # nano /var/www/html/info.php
+
+然后只需将下面的语句添加到文件里即可。
+
+    <?php phpinfo(); ?>
+
+在编辑完成后保存并退出文本编辑器即可。
+
+现在,在网页浏览器中按运行 Lighttpd 的系统的 IP 地址或域名访问这个 info.php 文件,例如 [http://lighttpd.linoxide.com/info.php](http://lighttpd.linoxide.com/info.php)。如果一切都按照以上说明进行,我们将看到如下图所示的 PHP 页面信息。
+
+![phpinfo lighttpd](http://blog.linoxide.com/wp-content/uploads/2016/02/phpinfo-lighttpd.png)
+
+### 总结
+
+至此,我们已经在 CentOS 7 和 Ubuntu 15.04 Linux 发行版上成功安装了轻巧快捷并且安全的 Lighttpd Web 服务器。现在,我们已经可以上传网站文件到网站根目录、配置虚拟主机、启用 SSL、连接数据库,在我们的 Lighttpd Web 服务器上运行 Web 应用了。如果你有任何疑问、建议或反馈,请在下面的评论区中写下来,以让我们更好地改良 Lighttpd 的使用体验。谢谢!
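+
+附带一提,总结中说到的虚拟主机配置起来也很简单。下面是一个最小化的示例草图(域名 example.com 与目录均为演示而假设,请按需替换),把它追加到 /etc/lighttpd/lighttpd.conf 末尾并重启 Lighttpd 即可生效:
+
+    $HTTP["host"] == "example.com" {
+        server.document-root = "/srv/www/example.com/htdocs"
+    }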
+ +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/setup-lighttpd-web-server-ubuntu-15-04-centos-7/ + +作者:[Arun Pyasi][a] +译者:[HaohongWANG](https://github.com/HaohongWANG) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ diff --git a/translated/tech/20160220 Best Cloud Services For Linux To Replace Copy.md b/published/20160220 Best Cloud Services For Linux To Replace Copy.md similarity index 76% rename from translated/tech/20160220 Best Cloud Services For Linux To Replace Copy.md rename to published/20160220 Best Cloud Services For Linux To Replace Copy.md index ce9380b600..a43c5776e8 100644 --- a/translated/tech/20160220 Best Cloud Services For Linux To Replace Copy.md +++ b/published/20160220 Best Cloud Services For Linux To Replace Copy.md @@ -3,9 +3,9 @@ ![](http://itsfoss.com/wp-content/uploads/2016/02/Linux-cloud-services.jpg) -云存储服务 Copy 即将关闭,我们 Linux 用户是时候该寻找其他优秀的** Copy 之外的 Linux 云存储服务**。 +云存储服务 Copy 已经关闭,我们 Linux 用户是时候该寻找其他优秀的** Copy 之外的 Linux 云存储服务**。 -全部文件将会在 2016年5月1号 被删除。如果你是 Copy 的用户,你应该保存你的文件并将它们移至其他地方。 +全部文件会在 2016年5月1号 被删除。如果你是 Copy 的用户,你应该保存你的文件并将它们移至其他地方。 在过去的两年里,Copy 已经成为了我最喜爱的云存储。它为我提供了大量的免费空间并且带有桌面平台的原生应用程序,包括 Linux 和移动平台如 iOS 和 Android。 @@ -13,16 +13,16 @@ 当我从 Copy.com 看到它即将关闭的消息,我的担忧成真了。事实上,Copy 并不孤独。它的母公司 [Barracuda Networks](https://www.barracuda.com/)正经历一段困难时期并且已经[雇佣 Morgan Stanely 寻找 合适的卖家](http://www.bloomberg.com/news/articles/2016-02-01/barracuda-networks-said-to-work-with-morgan-stanley-to-seek-sale)(s) -无论什么理由,我们所知道的是 Copy 将会成为历史,我们需要寻找相似的**优秀的 Linux 云服务**。我之所以强调 Linux 是因为其他流行的云存储服务,如[微软的OneDrive](https://onedrive.live.com/about/en-us/) 和 [Google Drive](https://www.google.com/drive/) 都没有提供本地 Linux 客户端。这是微软预计的事情,但是谷歌对 Linux 的冷漠令人震惊。 +无论什么理由,我们所知道的是 Copy 将会成为历史,我们需要寻找相似的**优秀的 Linux 云服务**。我之所以强调 Linux 是因为其他流行的云存储服务,如[微软的 OneDrive](https://onedrive.live.com/about/en-us/) 和 [Google Drive](https://www.google.com/drive/) 都没有提供本地 Linux 客户端。微软并没有出乎我们的预料,但是[谷歌对 Linux 的冷漠][1]令人震惊。 ## Linux 下 Copy 的最佳替代者 -现在,作为一个 Linux 存储,在云存储中你需要什么?让我们猜猜: +什么样的云服务才适合作为 Linux 下的存储服务?让我们猜猜: -- 大量的免费空间。毕竟,个人用户无法每月支付巨额款项。 -- 原生的 Linux 客户端。因此你能够使用提供的服务,方便地同步文件,而不用做一些特殊的调整或者定时执行脚本。 -- 其他桌面系统的客户端,比如 Windows 和 OS X。便携性是必要的,并且同步设备间的文件是一种很好的缓解。 -- Android 和 iOS 的移动应用程序。在今天的现代世界里,你需要连接所有设备。 +- 大量的免费空间。毕竟,个人用户无法支付每月的巨额款项。 +- 原生的 Linux 客户端。以便你能够方便的在服务器之间同步文件,而不用做一些特殊的调整或者定时执行脚本。 +- 其他桌面系统的客户端,比如 Windows 和 OS X。移动性是必要的,并且同步设备间的文件也很有必要。 +- 基于 Android 和 iOS 的移动应用程序。在今天的现代世界里,你需要连接所有设备。 我不将自托管的云服务计算在内,比如 OwnCloud 或 [Seafile](https://www.seafile.com/en/home/) ,因为它们需要自己建立和运行一个服务器。这不适合所有想要类似 Copy 的云服务的家庭用户。 @@ -32,7 +32,7 @@ ![](http://itsfoss.com/wp-content/uploads/2016/02/Mega-Linux.jpg) -如果你是一个 It’s FOSS 的普通读者,你可能已经看过我之前的一篇有关[Mega on Linux](http://itsfoss.com/install-mega-cloud-storage-linux/)的文章。这种云服务由[Megaupload scandal](https://en.wikipedia.org/wiki/Megaupload) 公司下臭名昭著的[Kim Dotcom](https://en.wikipedia.org/wiki/Kim_Dotcom)提供。这也使一些用户怀疑它,因为 Kim Dotcom 已经很长一段时间成为美国当局的目标。 +如果你是一个 It’s FOSS 的普通读者,你可能已经看过我之前的一篇有关 [Mega on Linux](http://itsfoss.com/install-mega-cloud-storage-linux/)的文章。这种云服务由 [Megaupload scandal](https://en.wikipedia.org/wiki/Megaupload) 公司下臭名昭著的 [Kim Dotcom](https://en.wikipedia.org/wiki/Kim_Dotcom) 提供。这也使一些用户怀疑它,因为 Kim Dotcom 已经很长一段时间成为美国当局的目标。 Mega 拥有方便免费云服务下你所期望的一切。它给每个个人用户提供 50 GB 的免费存储空间。提供Linux 和其他平台下的原生客户端,并带有端到端的加密。原生的 Linux 客户端运行良好,可以无缝地跨平台同步。你也能在浏览器上查看操作你的文件。 @@ -74,7 +74,7 @@ Hubic 
拥有一些不错的功能。除了简单的用户界面、文件共享 ![](http://itsfoss.com/wp-content/uploads/2016/02/pCloud-Linux.jpeg) -pCloud 是另一款欧洲的发行软件,但这一次从瑞士横跨法国边境。专注于加密和安全,pCloud 为每一个注册者提供 10 GB 的免费存储空间。你可以通过邀请好友、在社交媒体上分享链接等方式将空间增加至 20 GB。 +pCloud 是另一款欧洲的发行软件,但这一次跨过了法国边境,它来自瑞士。专注于加密和安全,pCloud 为每一个注册者提供 10 GB 的免费存储空间。你可以通过邀请好友、在社交媒体上分享链接等方式将空间增加至 20 GB。 它拥有云服务的所有标准特性,例如文件共享、同步、选择性同步等等。pCloud 也有跨平台原生客户端,当然包括 Linux。 @@ -128,8 +128,9 @@ via: http://itsfoss.com/cloud-services-linux/ 作者:[ABHISHEK][a] 译者:[cposture](https://github.com/cposture) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://itsfoss.com/author/abhishek/ +[1]:https://itsfoss.com/google-hates-desktop-linux/ \ No newline at end of file diff --git a/published/20160301 The Evolving Market for Commercial Software Built On Open Source.md b/published/20160301 The Evolving Market for Commercial Software Built On Open Source.md new file mode 100644 index 0000000000..fa3448b1bb --- /dev/null +++ b/published/20160301 The Evolving Market for Commercial Software Built On Open Source.md @@ -0,0 +1,37 @@ +构建在开源之上的商业软件市场持续成长 +===================================================================== + +![](https://www.linux.com/images/stories/41373/Structure-event-photo.jpg) + +*与会者在 Structure 上听取演讲,Structure Data 2016 也将在 UCSF Mission Bay 会议中心举办。图片来源:Structure Events。* + +如今真的很难低估开源项目对于企业软件市场的影响;开源软件的集成如此快速地形成了业界常态,我们没能捕捉到转折点也情有可原。 + +举个例子,Hadoop,改变的不止是数据分析界,它引领了新一代数据公司,它们围绕开源项目创造自己的软件,按需调整和支持那些代码,更像红帽在上世纪 90 年代和本世纪早期拥抱 Linux 那样。软件越来越多地通过公有云交付,而不是运行在购买者自己的服务器,拥有了令人惊奇的操作灵活性,但同时也带来了一些关于授权、支持以及价格之类的新问题。 + +我们多年来持续追踪这个趋势,这些话题充斥了我们的 Structure Data 会议,而今年的 Structure Data 2016 也不例外。三家围绕 Hadoop 最重要的大数据公司——Hortonworks、Cloudera 和 MapR ——的 CEO 们将会共同讨论它们是如何销售他们围绕开源项目的企业软件和服务,获利的同时回报社区项目。 + +以前在企业软件上获利是很容易的事情。一个客户购买了之后,企业供应商的一系列软件就变成了收银机,从维护合同和阶段性升级中获得近乎终生的收入,软件也越来越难以被替代,因为它已经成为了客户的业务核心。客户抱怨这种绑定,但如果它们想提高工作队伍的生产力也确实没有多少选择。 + +而现在的情况不再是这样了。尽管无数的公司还陷于在他们的基础设施上运行至关重要的巨型软件包,新的项目被部署到使用开源技术的云服务器上。这让升级功能不再需要去掉大量软件包再重新安装别的,同时也让公司按需付费,而不是为一堆永远用不到的特性买单。 + +有很多客户想要利用开源项目的优势,而又不想建立和支持一支工程师队伍来调整那些开源项目以满足自己的需求。这些客户愿意为开源项目和在这之上的专有特性之间的差异付费。 + +这对于基础设施相关的软件来说格外正确。当然,你的客户们可以自己对项目进行调整,比如 Hadoop,Spark 或 Node.js,但付费可以帮助他们自定义地打包部署如今这些重要的开源技术,而不用自己干这些活儿。只需看看 Structure Data 2016 的发言者就明白了,比如 Confluent(Kafka),Databricks(Spark),以及 Cloudera-Hortonworks-MapR(Hadoop)三人组。 + +当然还有一个值得提到的是在出错的时候有个供应商给你背锅。如果你的工程师弄糟了开源项目的实现,那你只能怪你自己了。但是如果你和一个愿意提供服务级品质、能确保性能和正常运行时间指标的公司签订了合同,你实际上就是为得到支持、指导,以及有人背锅而买单。 + +构建在开源之上的商业软件市场的持续成长是我们在 Structure Data 上追踪多年的内容,如果这个话题正合你意,我们鼓励你加入我们,在旧金山,3 月 9 日和 10 日。 + + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/news/enterprise/cloud-computing/889564-the-evolving-market-for-commercial-software-built-on-open-source- + +作者:[Tom Krazit][a] +译者:[alim0x](https://github.com/alim0x) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/community/forums/person/70513 diff --git a/published/20160303 Top 5 open source command shells for Linux.md b/published/20160303 Top 5 open source command shells for Linux.md new file mode 100644 index 0000000000..ec6aa3517a --- /dev/null +++ b/published/20160303 Top 5 open source command shells for Linux.md @@ -0,0 +1,86 @@ +Linux 下五个顶级的开源命令行 Shell +=============================================== + 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/terminal_blue_smoke_command_line_0.jpg?itok=u2mRRqOa) + +这个世界上有两种 Linux 用户:敢于冒险的和态度谨慎的。 + +其中一类用户总是本能的去尝试任何能够戳中其痛点的新选择。他们尝试过不计其数的窗口管理器、系统发行版和几乎所有能找到的桌面插件。 + +另一类用户找到他们喜欢的东西后,会一直使用下去。他们往往喜欢所使用的系统发行版的默认配置。最先熟练掌握的文本编辑器会成为他们最钟爱的那一个。 + +作为一个使用桌面版和服务器版十五年之久的 Linux 用户,比起第一类来,我无疑属于第二类用户。我更倾向于使用现成的东西,如此一来,很多时候我就可以通过文档和示例方便地找到我所需要的使用案例。如果我决定选择使用非费标准的东西,这个切换过程一定会基于细致的研究,并且前提是来自好基友的大力推荐。 + +但这并不意味着我不喜欢尝试新事物并且查漏补失。所以最近一段时间,在我不假思索的使用了 bash shell 多年之后,决定尝试一下另外四个 shell 工具:ksh、tcsh、zsh 和 fish。这四个 shell 都可以通过我所用的 Fedora 系统的默认库轻松安装,并且他们可能已经内置在你所使用的系统发行版当中了。 + +这里对它们每个选择都稍作介绍,并且阐述下它适合做为你的下一个 Linux 命令行解释器的原因所在。 + +### bash + +首先,我们回顾一下最为熟悉的一个。 [GNU Bash][1],又名 Bourne Again Shell,它是我这些年使用过的众多 Linux 发行版的默认选择。它最初发布于 1989 年,并且轻松成长为 Linux 世界中使用最广泛的 shell,甚至常见于其他一些类 Unix 系统当中。 + +Bash 是一个广受赞誉的 shell,当你通过互联网寻找各种事情解决方法所需的文档时,总能够无一例外的发现这些文档都默认你使用的是 bash shell。但 bash 也有一些缺点存在,如果你写过 Bash 脚本就会发现我们写的代码总是得比真正所需要的多那么几行。这并不是说有什么事情是它做不到的,而是说它读写起来并不总是那么直观,至少是不够优雅。 + +如上所述,基于其巨大的安装量,并且考虑到各类专业和非专业系统管理员已经适应了它的使用方式和独特之处,至少在将来一段时间内,bash 或许会一直存在。 + +### ksh + +[KornShell][4],或许你对这个名字并不熟悉,但是你一定知道它的调用命令 ksh。这个替代性的 shell 于 80 年代起源于贝尔实验室,由 David Korn 所写。虽然最初是一个专有软件,但是后期版本是在 [Eclipse Public 许可][5]下发布的。 + +ksh 的拥趸们列出了他们觉得其优越的诸多理由,包括更好的循环语法,清晰的管道退出代码,处理重复命令和关联数组的更简单的方式。它能够模拟 vi 和 emacs 的许多行为,所以如果你是一个重度文本编辑器患者,它值得你一试。最后,我发现它虽然在高级脚本方面拥有不同的体验,但在基本输入方面与 bash 如出一辙。 + +### tcsh + +[tcsh][6] 衍生于 csh(Berkely Unix C shell),并且可以追溯到早期的 Unix 和计算机时代开始。 + +tcsh 最大的卖点在于它的脚本语言,对于熟悉 C 语言编程的人来说,看起来会非常亲切。tcsh 的脚本编写有人喜欢,有人憎恶。但是它也有其他的技术特色,包括可以为 aliases 添加参数,各种可能迎合你偏好的默认行为,包括 tab 自动完成和将 tab 完成的工作记录下来以备后查。 + +tcsh 以 [BSD 许可][7]发布。 + +### zsh + +[zsh][8] 是另外一个与 bash 和 ksh 有着相似之处的 shell。诞生于 90 年代初,zsh 支持众多有用的新技术,包括拼写纠正、主题化、可命名的目录快捷键,在多个终端中共享同一个命令历史信息和各种相对于原来的 bash 的轻微调整。 + +虽然部分需要遵照 GPL 许可,但 zsh 的代码和二进制文件可以在一个类似 MIT 许可证的许可下进行分发; 你可以在 [actual license][9] 中查看细节。 + +### fish + +之前我访问了 [fish][10] 的主页,当看到 “好了,这是一个为 90 后而生的命令行 shell” 这条略带调侃的介绍时(fish 完成于 2005 年),我就意识到我会爱上这个交互友好的 shell 的。 + +fish 的作者提供了若干切换过来的理由,这些理由有点小幽默并且能戳中笑点,不过还真是那么回事。这些特性包括自动建议(“注意, Netscape Navigator 4.0 来了”,LCTT 译注:NN4 是一个重要版本。),支持“惊人”的 256 色 VGA 调色,不过也有真正有用的特性,包括根据你机器上的 man 页面自动补全命令,清除脚本和基于 web 界面的配置方式。 + +fish 的许可主要基于 GPLv2,但有些部分是在其他许可下的。你可以查看资源库来了解[完整信息][11]。 + +*** + +如果你想要寻找关于每个选择确切不同之处的详尽纲要,[这个网站][12]应该可以帮到你。 + +我的立场到底是怎样的呢?好吧,最终我应该还是会重新投入 bash 的怀抱,因为对于大多数时间都在使用命令行交互的人来说,切换过程对于编写高级的脚本能带来的好处微乎其微,并且我已经习惯于使用 bash 了。 + +但是我很庆幸做出了敞开大门并且尝试新选择的决定。我知道门外还有许许多多其他的东西。你尝试过哪些 shell,更中意哪一个?请在评论里告诉我们。 + +--- + +via: https://opensource.com/business/16/3/top-linux-shells + +作者:[Jason Baker][a] +译者:[mr-ping](https://github.com/mr-ping) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jason-baker + +[1]: https://www.gnu.org/software/bash/ +[2]: http://mywiki.wooledge.org/BashPitfalls +[3]: http://www.gnu.org/licenses/gpl.html +[4]: http://www.kornshell.org/ +[5]: https://www.eclipse.org/legal/epl-v10.html +[6]: http://www.tcsh.org/Welcome +[7]: https://en.wikipedia.org/wiki/BSD_licenses +[8]: http://www.zsh.org/ +[9]: https://sourceforge.net/p/zsh/code/ci/master/tree/LICENCE +[10]: https://fishshell.com/ +[11]: https://github.com/fish-shell/fish-shell/blob/master/COPYING +[12]: http://hyperpolyglot.org/unix-shells + diff --git a/published/20160304 Microservices with Python RabbitMQ and Nameko.md b/published/20160304 Microservices with Python RabbitMQ and Nameko.md new file mode 100644 index 
0000000000..d70afff960
--- /dev/null
+++ b/published/20160304 Microservices with Python RabbitMQ and Nameko.md
@@ -0,0 +1,284 @@
+用 Python、 RabbitMQ 和 Nameko 实现微服务
+==============================================
+
+>"微服务是一股新浪潮" - 现如今,将项目拆分成多个独立的、可扩展的服务是保障代码演变的最好选择。在 Python 的世界里,有个叫做 “Nameko” 的框架,它将微服务的实现变得简单并且强大。
+
+
+### 微服务
+
+> 在最近的几年里,“微服务架构”如雨后春笋般涌现。它用于描述一种特定的软件应用设计方式,这种方式使得应用可以由多个独立部署的服务以服务套件的形式组成。 - M. Fowler
+
+推荐各位读一下 [Fowler 的文章][1] 以理解它背后的原理。
+
+#### 好吧,那它究竟意味着什么呢?
+
+简单来说,**微服务架构**可以将你的系统拆分成多个负责不同任务的小的(单一上下文内)功能块(responsibilities blocks),它们彼此互无感知,各自只提供用于通讯的通用指向(common point)。这个指向通常是已经将通讯协议和接口定义好的消息队列。
+
+#### 这里给大家提供一个真实案例
+
+> 案例的代码可以在 GitHub 上获取,查看 service 和 api 文件夹可以了解更多信息。
+
+想象一下,你有一个 REST API ,这个 API 有一个端点(LCTT 译注:REST 风格的 API 可以有多个端点用于处理对同一资源的不同类型的请求)用来接受数据,并且你需要将接收到的数据进行一些运算工作。那么相比阻塞接口调用者的请求来说,异步实现此接口是一个更好的选择。你可以先给用户返回一个 "OK - 你的请求稍后会处理" 的状态,然后在后台任务中完成运算。
+
+同样,如果你想要在不阻塞主进程的前提下,在计算完成后发送一封提醒邮件,那么将“邮件发送”委托给其他服务去做会更好一些。
+
+
+#### 场景描述
+
+![](http://brunorocha.org/static/media/microservices/micro_services.png)
+
+
+### 用代码说话
+
+让我们将系统创建起来,在实践中理解它:
+
+#### 环境
+
+我们需要的环境:
+
+- 运行良好的 RabbitMQ(LCTT 译注:[RabbitMQ][2] 是一个流行的消息队列实现)
+- 由 VirtualEnv 提供的 Services 虚拟环境
+- 由 VirtualEnv 提供的 API 虚拟环境
+
+#### Rabbit
+
+在开发环境中使用 RabbitMQ 最简单的方式就是运行其官方的 docker 容器。在你已经拥有 Docker 的情况下,运行:
+
+```
+docker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management
+```
+
+在浏览器中访问 http://localhost:15672 ,如果能够使用 guest:guest 验证信息登录 RabbitMQ 的控制面板,说明它已经在你的开发环境中运行起来了。
+
+![](http://brunorocha.org/static/media/microservices/RabbitMQManagement.png)
+
+#### 服务环境
+
+现在让我们创建微服务来满足我们的任务需要。其中一个服务用来执行计算任务,另一个用来发送邮件。按以下步骤执行:
+
+在 Shell 中创建项目的根目录
+
+```
+$ mkdir myproject
+$ cd myproject
+```
+
+用 virtualenv 工具创建并且激活一个虚拟环境(你也可以使用 virtualenv-wrapper)
+
+```
+$ virtualenv service_env
+$ source service_env/bin/activate
+```
+
+安装 nameko 框架和 yagmail
+
+```
+(service_env)$ pip install nameko
+(service_env)$ pip install yagmail
+```
+
+#### 服务的代码
+
+现在我们已经准备好了 virtualenv 所提供的虚拟环境(可以想象成我们的服务是运行在一个独立服务器上的,而我们的 API 运行在另一个服务器上),接下来让我们编码,实现 nameko 的 RPC 服务。
+
+我们会将这两个服务放在同一个 python 模块中,当然如果你乐意,也可以把它们放在单独的模块里并且当成不同的服务运行:
+
+在名为 `service.py` 的文件中
+
+```python
+import yagmail
+from nameko.rpc import rpc, RpcProxy
+
+
+class Mail(object):
+    name = "mail"
+
+    @rpc
+    def send(self, to, subject, contents):
+        yag = yagmail.SMTP('myname@gmail.com', 'mypassword')
+        # 以上的验证信息请从安全的地方进行读取
+        # 贴士: 可以去看看 Dynaconf 设置模块
+        yag.send(to=to.encode('utf-8'),
+                 subject=subject.encode('utf-8'),
+                 contents=[contents.encode('utf-8')])
+
+
+class Compute(object):
+    name = "compute"
+    mail = RpcProxy('mail')
+
+    @rpc
+    def compute(self, operation, value, other, email):
+        operations = {'sum': lambda x, y: int(x) + int(y),
+                      'mul': lambda x, y: int(x) * int(y),
+                      'div': lambda x, y: int(x) / int(y),
+                      'sub': lambda x, y: int(x) - int(y)}
+        try:
+            result = operations[operation](value, other)
+        except Exception as e:
+            self.mail.send.async(email, "An error occurred", str(e))
+            raise
+        else:
+            self.mail.send.async(
+                email,
+                "Your operation is complete!",
+                "The result is: %s" % result
+            )
+            return result
+```
+
+现在我们已经用以上代码定义好了两个服务,下面让我们将 Nameko RPC service 运行起来。
+
+> 注意:我们会在控制台中启动并运行它。但在生产环境中,建议大家使用 supervisord 替代控制台命令。
+
+在 Shell 中启动并运行服务
+
+```
+(service_env)$ nameko run service --broker amqp://guest:guest@localhost
+starting services: mail, compute
+Connected to amqp://guest:**@127.0.0.1:5672//
+Connected to amqp://guest:**@127.0.0.1:5672//
+```
+
+
+#### 测试
+
+在另外一个 Shell 中(使用相同的虚拟环境),用 nameko shell 进行测试:
+
+```
+(service_env)$ nameko shell --broker amqp://guest:guest@localhost
+Nameko Python 2.7.9 (default, Apr  2 2015, 15:33:21)
+[GCC 4.9.2] shell on linux2
+Broker: amqp://guest:guest@localhost
+>>>
+```
+
+现在你已经处在 RPC 客户端中了,Shell 的测试工作是通过 n.rpc 对象来进行的,它的使用方法如下:
+
+```
+>>> n.rpc.mail.send("name@email.com", "testing", "Just testing")
+```
+
+上边的代码会发送一封邮件,我们同样可以调用计算服务对其进行测试。需要注意的是,此测试还会附带进行异步的邮件发送。
+
+```
+>>> n.rpc.compute.compute('sum', 30, 10, "name@email.com")
+40
+>>> n.rpc.compute.compute('sub', 30, 10, "name@email.com")
+20
+>>> n.rpc.compute.compute('mul', 30, 10, "name@email.com")
+300
+>>> n.rpc.compute.compute('div', 30, 10, "name@email.com")
+3
+```
+
+### 在 API 中调用微服务
+
+在另外一个 Shell 中(甚至可以是另外一台服务器上),准备好 API 环境。
+
+用 virtualenv 工具创建并且激活一个虚拟环境(你也可以使用 virtualenv-wrapper)
+
+```
+$ virtualenv api_env
+$ source api_env/bin/activate
+```
+
+安装 Nameko、 Flask 和 Flasgger
+
+```
+(api_env)$ pip install nameko
+(api_env)$ pip install flask
+(api_env)$ pip install flasgger
+```
+
+>注意: 在 API 中并不需要 yagmail,因为在这里,处理邮件是服务的职责
+
+创建含有以下内容的 `api.py` 文件:
+
+```python
+from flask import Flask, request
+from flasgger import Swagger
+from nameko.standalone.rpc import ClusterRpcProxy
+
+app = Flask(__name__)
+Swagger(app)
+CONFIG = {'AMQP_URI': "amqp://guest:guest@localhost"}
+
+
+@app.route('/compute', methods=['POST'])
+def compute():
+    """
+    Micro Service Based Compute and Mail API
+    This API is made with Flask, Flasgger and Nameko
+    ---
+    parameters:
+      - name: body
+        in: body
+        required: true
+        schema:
+          id: data
+          properties:
+            operation:
+              type: string
+              enum:
+                - sum
+                - mul
+                - sub
+                - div
+            email:
+              type: string
+            value:
+              type: integer
+            other:
+              type: integer
+    responses:
+      200:
+        description: Please wait the calculation, you'll receive an email with results
+    """
+    operation = request.json.get('operation')
+    value = request.json.get('value')
+    other = request.json.get('other')
+    email = request.json.get('email')
+    msg = "Please wait the calculation, you'll receive an email with results"
+    subject = "API Notification"
+    with ClusterRpcProxy(CONFIG) as rpc:
+        # asynchronously spawning an email notification
+        rpc.mail.send.async(email, subject, msg)
+        # asynchronously spawning the compute task
+        result = rpc.compute.compute.async(operation, value, other, email)
+        return msg, 200
+
+app.run(debug=True)
+```
+
+在其他的 shell 或者服务器上运行此文件
+
+```
+(api_env) $ python api.py
+ * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
+```
+
+然后访问 http://localhost:5000/apidocs/index.html 这个 URL,就可以看到 Flasgger 的界面了,利用它可以进行 API 的交互并可以发布任务到队列以供服务进行消费。
+
+![](http://brunorocha.org/static/media/microservices/Flasgger_API_documentation.png)
+
+> 注意: 你可以在 shell 中查看到服务的运行日志,打印信息和错误信息。也可以访问 RabbitMQ 控制面板来查看消息在队列中的处理情况。
+
+Nameko 框架还为我们提供了很多高级特性,你可以从 http://nameko.readthedocs.org 获取更多的信息。
+
+别光看了,撸起袖子来,实现微服务!
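+
+动手之前,这里再补充一个直接在命令行中调用上述 API 的最简 curl 示例(仅作演示,假设 API 如前文输出所示运行在本地的 5000 端口):
+
+```
+curl -X POST http://127.0.0.1:5000/compute \
+     -H "Content-Type: application/json" \
+     -d '{"operation": "sum", "value": 30, "other": 10, "email": "name@email.com"}'
+```
+
+该请求会立刻得到 200 和提示信息,计算结果则由服务在后台算好后通过邮件异步送达。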
+
+
+--------------------------------------------------------------------------------
+
+via: http://brunorocha.org/python/microservices-with-python-rabbitmq-and-nameko.html
+
+作者: [Bruno Rocha][a]
+译者: [mr-ping](http://www.mr-ping.com)
+校对: [wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://facebook.com/rochacbruno
+[1]:http://martinfowler.com/articles/microservices.html
+[2]:http://rabbitmq.mr-ping.com/description.html
diff --git a/published/20160425 How to Use Awk to Print Fields and Columns in File.md b/published/20160425 How to Use Awk to Print Fields and Columns in File.md
new file mode 100644
index 0000000000..94b914cd3d
--- /dev/null
+++ b/published/20160425 How to Use Awk to Print Fields and Columns in File.md
@@ -0,0 +1,107 @@
+如何使用 Awk 打印文件中的字段和列
+===================================================
+
+在 [Linux Awk 命令系列介绍][1] 的这部分,我们来看一下 awk 最重要的功能之一,字段编辑。
+
+首先我们要知道 Awk 会自动把输入的行切分为字段,字段可以定义为是一些字符集,这些字符集和其它字段被内部字段分隔符分离。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Awk-Print-Fields-and-Columns.png)
+>Awk 输出字段和列
+
+如果你熟悉 Unix/Linux 或者懂得 [bash shell 编程][2],那么你也应该知道内部字段分隔符(IFS)变量。Awk 默认的 IFS 是 tab 和空格。
+
+Awk 字段切分的工作原理如下:当获得一行输入时,根据定义的 IFS,第一个字符集是字段一,用 $1 表示,第二个字符集是字段二,用 $2 表示,第三个字符集是字段三,用 $3 表示,以此类推直到最后一个字符集。
+
+为了更好的理解 Awk 的字段编辑,让我们来看看下面的例子:
+
+**事例 1:** 我创建了一个名为 tecmintinfo.txt 的文件。
+
+```
+# vi tecmintinfo.txt
+# cat tecmintinfo.txt
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Create-File-in-Linux.png)
+>在 Linux 中创建文件
+
+然后在命令行中使用以下命令打印 tecmintinfo.txt 文件中的第一、第二和第三个字段。
+
+```
+$ awk '//{print $1 $2 $3 }' tecmintinfo.txt
+TecMint.comisthe
+```
+
+从上面的输出中你可以看到,输出的前三个字段是按照定义的 IFS(也就是空格)切分出来的:
+
+- 字段一 “TecMint.com” 使用 $1 访问。
+- 字段二 “is” 通过 $2 访问。
+- 字段三 “the” 通过 $3 访问。
+
+如果你注意打印的输出,可以看到字段值之间并没有分隔开,这是 print 默认的方式。
+
+为了在字段值之间加入空格,你需要像下面这样添加(,)分隔符:
+
+```
+$ awk '//{print $1, $2, $3; }' tecmintinfo.txt
+
+TecMint.com is the
+```
+
+很重要而且必须牢记的一点是,Awk 中 ($) 的使用和在 shell 脚本中不一样。
+
+在 shell 脚本中 ($) 用于获取变量的值,而在 Awk 中 ($) 只用于获取一个字段的内容,而不能用于获取变量的值。
+
+**事例 2:** 让我们再看一个使用多行文件 my_shopping.txt 的例子。
+
+```
+No Item_Name Unit_Price Quantity Price
+1 Mouse #20,000 1 #20,000
+2 Monitor #500,000 1 #500,000
+3 RAM_Chips #150,000 2 #300,000
+4 Ethernet_Cables #30,000 4 #120,000
+```
+
+假设你只想打印购物清单中每个物品的 Unit_Price,你需要运行下面的命令:
+
+```
+$ awk '//{print $2, $3 }' my_shopping.txt
+
+Item_Name Unit_Price
+Mouse #20,000
+Monitor #500,000
+RAM_Chips #150,000
+Ethernet_Cables #30,000
+```
+
+Awk 也有一个 printf 命令,它能帮助你用更好的方式格式化输出,正如你可以看到上面的输出并不清晰。
+
+使用 printf 格式化输出 Item_Name 和 Unit_Price:
+
+```
+$ awk '//{printf "%-10s %s\n",$2, $3 }' my_shopping.txt
+
+Item_Name  Unit_Price
+Mouse      #20,000
+Monitor    #500,000
+RAM_Chips  #150,000
+Ethernet_Cables #30,000
+```
+
+### 总结
+
+使用 Awk 进行文本和字符串过滤时字段编辑功能非常重要,它能帮助你从列表中获取列的特定数据。同时需要记住 Awk 中 ($) 操作符和 shell 脚本中的不一样。
+
+我希望这篇文章能对你有所帮助,如果你需要获取其它信息或者有任何疑问,都可以在下面的评论框中告诉我们。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/awk-print-fields-columns-with-space-separator/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
+
+作者:[Aaron Kili][a]
+译者:[ictlyh](https://github.com/ictlyh)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/aaronkili/
+[1]: http://www.tecmint.com/tag/awk-command/
+[2]: http://www.tecmint.com/category/bash-shell/
diff --git a/published/20150820 Which Open Source Linux Distributions Would Presidential Hopefuls Run.md b/published/201605/20150820 Which Open Source Linux Distributions Would Presidential Hopefuls Run.md similarity index 100% rename from published/20150820 Which Open Source Linux Distributions Would Presidential Hopefuls Run.md rename to published/201605/20150820 Which Open Source Linux Distributions Would Presidential Hopefuls Run.md diff --git a/published/20151019 Gaming On Linux--All You Need To Know.md b/published/201605/20151019 Gaming On Linux--All You Need To Know.md similarity index 100% rename from published/20151019 Gaming On Linux--All You Need To Know.md rename to published/201605/20151019 Gaming On Linux--All You Need To Know.md diff --git a/published/20151122 Doubly linked list in the Linux Kernel.md b/published/201605/20151122 Doubly linked list in the Linux Kernel.md similarity index 100% rename from published/20151122 Doubly linked list in the Linux Kernel.md rename to published/201605/20151122 Doubly linked list in the Linux Kernel.md diff --git a/published/20151123 Data Structures in the Linux Kernel.md b/published/201605/20151123 Data Structures in the Linux Kernel.md similarity index 100% rename from published/20151123 Data Structures in the Linux Kernel.md rename to published/201605/20151123 Data Structures in the Linux Kernel.md diff --git a/published/20151124 Review--5 memory debuggers for Linux coding.md b/published/201605/20151124 Review--5 memory debuggers for Linux coding.md similarity index 100% rename from published/20151124 Review--5 memory debuggers for Linux coding.md rename to published/201605/20151124 Review--5 memory debuggers for Linux coding.md diff --git a/published/20151130 Useful Linux and Unix Tape Managements Commands For Sysadmins.md b/published/201605/20151130 Useful Linux and Unix Tape Managements Commands For Sysadmins.md similarity index 100% rename from published/20151130 Useful Linux and Unix Tape Managements Commands For Sysadmins.md rename to published/201605/20151130 Useful Linux and Unix Tape Managements Commands For Sysadmins.md diff --git a/published/20151201 Backup (System Restore Point) your Ubuntu or Linux Mint with SystemBack.md b/published/201605/20151201 Backup (System Restore Point) your Ubuntu or Linux Mint with SystemBack.md similarity index 100% rename from published/20151201 Backup (System Restore Point) your Ubuntu or Linux Mint with SystemBack.md rename to published/201605/20151201 Backup (System Restore Point) your Ubuntu or Linux Mint with SystemBack.md diff --git a/published/20151202 8 things to do after installing openSUSE Leap 42.1.md b/published/201605/20151202 8 things to do after installing openSUSE Leap 42.1.md similarity index 100% rename from published/20151202 8 things to do after installing openSUSE Leap 42.1.md rename to published/201605/20151202 8 things to do after installing openSUSE Leap 42.1.md diff --git a/published/20151202 A new Mindcraft moment.md b/published/201605/20151202 A new Mindcraft moment.md similarity index 100% rename from published/20151202 A new Mindcraft moment.md rename to published/201605/20151202 A new Mindcraft moment.md diff --git a/published/20151202 KDE vs GNOME vs XFCE Desktop.md b/published/201605/20151202 KDE vs GNOME vs XFCE Desktop.md similarity index 100% rename from published/20151202 KDE vs GNOME vs XFCE Desktop.md rename to published/201605/20151202 KDE vs GNOME vs XFCE Desktop.md diff --git a/published/20151207 5 great Raspberry Pi projects for the classroom.md 
b/published/201605/20151207 5 great Raspberry Pi projects for the classroom.md similarity index 100% rename from published/20151207 5 great Raspberry Pi projects for the classroom.md rename to published/201605/20151207 5 great Raspberry Pi projects for the classroom.md diff --git a/published/20151215 Linux or Unix Desktop Fun--Text Mode ASCII-art Box and Comment Drawing.md b/published/201605/20151215 Linux or Unix Desktop Fun--Text Mode ASCII-art Box and Comment Drawing.md similarity index 100% rename from published/20151215 Linux or Unix Desktop Fun--Text Mode ASCII-art Box and Comment Drawing.md rename to published/201605/20151215 Linux or Unix Desktop Fun--Text Mode ASCII-art Box and Comment Drawing.md diff --git a/published/20151227 Upheaval in the Debian Live project.md b/published/201605/20151227 Upheaval in the Debian Live project.md similarity index 100% rename from published/20151227 Upheaval in the Debian Live project.md rename to published/201605/20151227 Upheaval in the Debian Live project.md diff --git a/published/20160204 An Introduction to SELinux.md b/published/201605/20160204 An Introduction to SELinux.md similarity index 100% rename from published/20160204 An Introduction to SELinux.md rename to published/201605/20160204 An Introduction to SELinux.md diff --git a/published/20160223 Intel shows budget Android phone powering big-screen Linux.md b/published/201605/20160223 Intel shows budget Android phone powering big-screen Linux.md similarity index 100% rename from published/20160223 Intel shows budget Android phone powering big-screen Linux.md rename to published/201605/20160223 Intel shows budget Android phone powering big-screen Linux.md diff --git a/published/20160225 How to add open source experience to your resume.md b/published/201605/20160225 How to add open source experience to your resume.md similarity index 99% rename from published/20160225 How to add open source experience to your resume.md rename to published/201605/20160225 How to add open source experience to your resume.md index cff9f4c6f1..6c36651976 100644 --- a/published/20160225 How to add open source experience to your resume.md +++ b/published/201605/20160225 How to add open source experience to your resume.md @@ -68,7 +68,7 @@ via: https://opensource.com/business/16/2/add-open-source-to-your-resume 作者:[edunham][a] -译者:[pengkai](https://github.com/pengkai) +译者:[kylepeng93](https://github.com/kylepeng93) 校对:[mudongliang](https://github.com/mudongliang),[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20160225 The Tao of project management.md b/published/201605/20160225 The Tao of project management.md similarity index 100% rename from published/20160225 The Tao of project management.md rename to published/201605/20160225 The Tao of project management.md diff --git a/published/20160301 Linux gives me all the tools I need.md b/published/201605/20160301 Linux gives me all the tools I need.md similarity index 100% rename from published/20160301 Linux gives me all the tools I need.md rename to published/201605/20160301 Linux gives me all the tools I need.md diff --git a/published/20160415 Docker 1.11 Adopts Open Container Project Components.md b/published/201605/20160415 Docker 1.11 Adopts Open Container Project Components.md similarity index 100% rename from published/20160415 Docker 1.11 Adopts Open Container Project Components.md rename to published/201605/20160415 Docker 1.11 Adopts Open Container Project 
Components.md
diff --git a/published/20160426 IS UBUNTU’S SNAP PACKAGING REALLY SECURE.md b/published/201605/20160426 IS UBUNTU’S SNAP PACKAGING REALLY SECURE.md
similarity index 100%
rename from published/20160426 IS UBUNTU’S SNAP PACKAGING REALLY SECURE.md
rename to published/201605/20160426 IS UBUNTU’S SNAP PACKAGING REALLY SECURE.md
diff --git a/published/20160429 Master OpenStack with 5 new tutorials.md b/published/201605/20160429 Master OpenStack with 5 new tutorials.md
similarity index 100%
rename from published/20160429 Master OpenStack with 5 new tutorials.md
rename to published/201605/20160429 Master OpenStack with 5 new tutorials.md
diff --git a/published/20160516 Open source from a recruiter's perspective.md b/published/201605/20160516 Open source from a recruiter's perspective.md
similarity index 100%
rename from published/20160516 Open source from a recruiter's perspective.md
rename to published/201605/20160516 Open source from a recruiter's perspective.md
diff --git a/published/20160511 An introduction to data processing with Cassandra and Spark.md b/published/20160511 An introduction to data processing with Cassandra and Spark.md
new file mode 100644
index 0000000000..bec55c2e7c
--- /dev/null
+++ b/published/20160511 An introduction to data processing with Cassandra and Spark.md
@@ -0,0 +1,45 @@
+Cassandra 和 Spark 数据处理一窥
+==============================================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc_520x292_opendata_0613mm.png?itok=mzC0Tb28)
+
+Apache Cassandra 数据库近来引起了很多的兴趣,这主要源于现代云端软件对于可用性及性能方面的要求。
+
+那么,Apache Cassandra 是什么?它是一种为高可用性及线性可扩展性优化的分布式的联机交易处理 (OLTP) 数据库。具体说到 Cassandra 的用途时,可以想想你希望贴近用户的系统,比如说让我们的用户进行交互的系统、需要保证实时可用的程序等等,如:产品目录,物联网,医疗系统,以及移动应用。对这些程序而言,下线时间意味着利润降低甚至导致其他更坏的结果。Netflix 是这个在 2008 年开源的项目的早期使用者,他们对此项目的贡献以及带来的成功让这个项目名声大噪。
+
+Cassandra 于 2010 年成为了 Apache 软件基金会的顶级项目,并从此之后就流行起来。现在,只要你有 Cassandra 的相关知识,找工作时就能轻松不少。想想看,NoSQL 语言和开源技术能达到企业级 SQL 技术的高度,真让人觉得十分疯狂而又不可思议。这引出了一个问题。是什么让它如此的流行?
+ +因为采用了[亚马逊发表的 Dynamo 论文][1]中率先提出的设计,Cassandra 有能力在大规模的硬件及网络故障时保持实时在线。由于采用了点对点模式,在没有单点故障的情况下,我们能幸免于机架故障甚至全网中断。我们能在不影响用户体验的前提下处理数据中心故障。一个能考虑到故障的分布式系统才是一个没有后顾之忧的分布式系统,因为老实说,故障是迟早会发生的。有了 Cassandra, 我们可以直面残酷的生活并将之融入数据库的结构和功能中。 + +我们能猜到你现在在想什么,“但我只有关系数据库相关背景,难道这样的转变不会很困难吗?”这问题的答案介于是和不是之间。使用 Cassandra 建立数据模型对有关系数据库背景的开发者而言是轻车熟路。我们使用表格来建立数据模型,并使用 CQL ( Cassandra 查询语言)来查询数据库。然而,与 SQL 不同的是,Cassandra 支持更加复杂的数据结构,例如嵌套和用户自定义类型。举个例子,当要储存对一个小猫照片的点赞数目时,我们可以将整个数据储存在一个包含照片本身的集合之中从而获得更快的顺序查找而不是建立一个独立的表。这样的表述在 CQL 中十分的自然。在我们照片表中,我们需要记录名字,URL以及给此照片点赞过的人。 + +![](https://opensource.com/sites/default/files/resize/screen_shot_2016-05-06_at_7.17.33_am-350x198.png) + +在一个高性能系统中,毫秒级处理都能对用户体验和客户维系产生影响。昂贵的 JOIN 操作制约了我们通过增加不可预见的网络调用而扩容的能力。当我们将数据反范式化使其能通过尽可能少的请求就可获取时,我们即可从磁盘空间成本的降低中获益并获得可预期的、高性能应用。我们将反范式化同 Cassandra 一同介绍是因为它提供了很有吸引力的的折衷方案。 + +很明显,我们不会局限于对于小猫照片的点赞数量。Canssandra 是一款为高并发写入优化的方案。这使其成为需要时常吞吐数据的大数据应用的理想解决方案。实时应用和物联网方面的应用正在稳步增长,无论是需求还是市场表现,我们也会不断的利用我们收集到的数据来寻求改进技术应用的方式。 + +这就引出了我们的下一步,我们已经提到了如何以一种现代的、性价比高的方式储存数据,但我们应该如何获得更多的动力呢?具体而言,当我们收集到了所需的数据,我们应该怎样处理呢?如何才能有效的分析几百 TB 的数据呢?如何才能实时的对我们所收集到的信息进行反馈,并在几秒而不是几小时的时间利作出决策呢?Apache Spark 将给我们答案。 + +Spark 是大数据变革中的下一步。 Hadoop 和 MapReduce 都是革命性的产品,它们让大数据界获得了分析所有我们所取得的数据的机会。Spark 对性能的大幅提升及对代码复杂度的大幅降低则将大数据分析提升到了另一个高度。通过 Spark,我们能大批量的处理计算,对流处理进行快速反应,通过机器学习作出决策,并通过图遍历来理解复杂的递归关系。这并非只是为你的客户提供与快捷可靠的应用程序连接(Cassandra 已经提供了这样的功能),这更是能洞悉 Canssandra 所储存的数据,作出更加合理的商业决策并同时更好地满足客户需求。 + +你可以看看 [Spark-Cassandra Connector][2] (开源) 并动手试试。若想了解更多关于这两种技术的信息,我们强烈推荐名为 [DataStax Academy][3] 的自学课程 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/5/basics-cassandra-and-spark-data-processing + +作者:[Jon Haddad][a],[Dani Traphagen][b] +译者:[KevinSJ](https://github.com/KevinSJ) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twitter.com/rustyrazorblade +[b]: https://opensource.com/users/dtrapezoid +[1]: http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf +[2]: https://github.com/datastax/spark-cassandra-connector +[3]: https://academy.datastax.com/ +[4]: http://conferences.oreilly.com/oscon/open-source-us/public/schedule/detail/49162 +[5]: https://twitter.com/dtrapezoid +[6]: https://twitter.com/rustyrazorblade diff --git a/published/20160527 Turn Your Old Laptop into a Chromebook.md b/published/20160527 Turn Your Old Laptop into a Chromebook.md new file mode 100644 index 0000000000..df776e73b5 --- /dev/null +++ b/published/20160527 Turn Your Old Laptop into a Chromebook.md @@ -0,0 +1,128 @@ +把你的旧笔记本变成 Chromebook +======================================== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloud-ready-main.jpg?itok=gtzJVSq0) + +*学习如何用 CloudReady 在你的旧电脑上安装 Chrome OS* + +Linux 之年就在眼前。根据[报道][1],Google 在 2016 年第一季度卖出了比苹果卖出的 Macbook 更多的 Chromebook。并且,Chromebook 即将变得更加激动人心。在 Google I/O 大会上,Google 宣布安卓 Google Play 商店将在 6 月中旬来到 Chromebook,这让用户能够在他们的 Chrome OS 设备上运行安卓应用。 + +但是,你不需要购买一台全新的使用 Chrome OS 的笔记本,你可以轻松地将你的旧笔记本或电脑转换成强大的 Chromebook。我在一台 Dell Mini 和一台 2009 年购买的 Dell 笔记本上进行了尝试。那两台设备都在吃灰,而且本来注定是要被回收的,因为现代的操作系统和桌面环境,比如 Unity,Plasma 以及 Gnome 它们跑不动。 + +如果你手边有旧设备,你可以轻松地将它变成 Chromebook。你还可以在你的笔记本上安装 Chrome OS 双系统,这样你就可以同时享受不同系统的优点了。 + +多亏了 Chrome OS 的开源基础,有很多方案可以让你在你的设备上安装 Chrome OS。我试过几个,但我最喜欢的方案是 [Neverware][2] 的 CloudReady。这家公司提供一个免费的,社区支持版的系统,还有一个商业支持版,每台设备每年 49 美元。好消息是所有的授权都是可转移的,所以如果你卖掉或捐掉了设备,你也可以将 Neverware 授权转让给新用户。 + +### 你需要什么 + +在你开始在笔记本上安装 CloudReady 之前,你需要一些准备: + +- 一个容量不小于 4GB 
的 USB 存储设备 +- 打开 Chrome 浏览器,到 Google Chrome Store 去安装 [Chromebook Recovery Utility(Chrome 恢复工具)][3] +- 更改目标机器的 BIOS 设置以便能从 USB 启动 + +### 开始 + +Neverware 提供两个版本的 CloudReady 镜像:32 位和 64 位。从下载页面[下载][4]合适你硬件的系统版本。 + +解压下载的 zip 文件,你会得到一个 chromiumos_image.bin 文件。现在插入 U 盘并打开 Chromebook Recovery Utility。点击工具右上角的齿轮,选择 erase recovery media(擦除恢复媒介,如图 1)。 + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloudready-erase.png?itok=1si1QrCL) + +*图 1:选择 erase recovery media。[image:cloudready-erase]* + +接下来,选择目标 USB 驱动器并把它格式化。格式化完成后,再次打开右上齿轮,这次选择 use local image(使用本地镜像)。浏览解压的 bin 文件并选中,选好 USB 驱动器,点击继续,然后点击创建按钮(图 2)。它会开始将镜像写入驱动器。 + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloudready-create.png?itok=S1FGzRp-) + +*图 2:创建 CloudReady 镜像。[Image:cloudready-create]* + +驱动器写好可启动的 CloudReady 之后,插到目标 PC 上并启动。系统启动进 Chromium OS 需要一小段时间。启动之后,你会看到图 3 中的界面。 + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloud-ready-install-1.jpg?itok=D6SjlIQ4) + +*图 3:准备好安装 CloudReady。* + +![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/cloud-ready-install-single_crop.jpg?itok=My2rUjYC) + +*图 4:单系统选项。* + +到任务栏选择 Install CloudReady(安装 CloudReady)。 + +你可以安装 Chromium OS 和其它系统的双系统启动,但另一个系统这时应该已经安装好了。 + +在下一个窗口选择单系统(图 4)或是双系统(图 5)。 + +按照下一步按钮说明选择安装。 + +![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/cloud-ready-install-dual_crop.jpg?itok=Daywck_s) + +*图 5:双系统选项。* + +整个过程最多 20 分钟左右,这取决于存储媒介和处理能力。安装完成后,电脑会关闭并重启。 + +重启之后,你会看到网络设置页面(图 6)。让人激动的是,虽然我在相同硬件上要给 Linux 发行版安装无线驱动,到了 Chromium OS 这里是开箱即用的。 + +你连上无线网络之后,系统会自动查找更新并提供 Adobe Flash 安装。安装完成后,你会看到 Chromium OS 登录界面。现在你只需登录你的 Gmail 账户,开始使用你的“Chromebook”即可。 + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloud-ready-post-install-network.jpg?itok=gSX2fQZS) + +*图 6:网络设置。* + +### 让 Netflix 正常工作 + +如果你想要播放 Netflix 或其它 DRM 保护流媒体站点,你需要做一些额外的工作。转到设置并点击安装 Widevine 插件(图 7)。 + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/install_widevine.png?itok=bUJaRmyx0) + +*图 7:安装 Widevine。* + +![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/user-agent-changer.jpg?itok=5QDCLrZk) + +*图 8:安装 User Agent Switcher。* + +现在你需要使用 user agent switcher 这个伎俩(图 8)。 + +到 Chrome Webstore 去安装 [User Agent Switcher][5]。插件安装完成后,它会自动添加到浏览器的书签栏。 + +右键点击 agent switcher 图标并创建一个新条目(图 9): + +``` +Name: "CloudReady Widevine" + +String: "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.11 (KHTML, like Gecko) Ubuntu/16.10 Chrome/49.0.1453.93" + +Group: "Chrome" (应该被自动填上了) + +Append: "Replace" + +Indicator Flag: "IE" + +``` + +点击“添加(Add)”。 + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/spoof-netflix.png?itok=8DEZK4Pl) + +*图 9:为 CloudReady 创建条目。* + +然后,到“permanent spoof list(永久欺骗列表)”选项中将 CloudReady Widevine 添加为 [www.netflix.com](http://www.netflix.com) 的永久 UA 串。 + +现在,重启机器,你就可以观看 Netflix 和其它一些服务了。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/turn-your-old-laptop-chromebook + +作者:[SWAPNIL BHARTIYA][a] +译者:[alim0x](https://github.com/alim0x) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/arnieswap +[1]: https://chrome.googleblog.com/2016/05/the-google-play-store-coming-to.html +[2]: http://www.neverware.com/#introtext-3 +[3]: https://chrome.google.com/webstore/detail/chromebook-recovery-utili/jndclpdbaamdhonoechobihbbiimdgai?hl=en 
+[4]: http://www.neverware.com/freedownload
+[5]: https://chrome.google.com/webstore/detail/user-agent-switcher-for-c/djflhoibgkdhkhhcedjiklpkjnoahfmg
diff --git a/published/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md b/published/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md
new file mode 100644
index 0000000000..0ff5f7a412
--- /dev/null
+++ b/published/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md
@@ -0,0 +1,322 @@
+在 Ubuntu 16.04 为 Nginx 服务器安装 LEMP 环境(MariaDB,PHP 7 并支持 HTTP 2.0)
+=====================
+
+LEMP 是个缩写,代表一组软件包(L:Linux OS,E:Nginx 网络服务器,M:MySQL/MariaDB 数据库和 P:PHP 服务端动态编程语言),它被用来搭建动态的网络应用和网页。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-Nginx-with-FastCGI-on-Ubuntu-16.04.png)
+
+*在 Ubuntu 16.04 安装 Nginx 以及 MariaDB,PHP7 并且支持 HTTP 2.0*
+
+这篇教程会教你怎么在 Ubuntu 16.04 的服务器上安装 LEMP (Nginx 和 MariaDB 以及 PHP7)。
+
+**前置准备**
+
+- [安装 Ubuntu 16.04 服务器版本][1]
+
+### 步骤 1:安装 Nginx 服务器
+
+1、 Nginx 是一个先进的、资源优化的 Web 服务器程序,用来向因特网上的访客展示网页。我们从 Nginx 服务器的安装开始介绍,使用 [apt 命令][2] 从 Ubuntu 的官方软件仓库中获取 Nginx 程序。
+
+```
+$ sudo apt-get install nginx
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-Nginx-on-Ubuntu-16.04.png)
+
+*在 Ubuntu 16.04 安装 Nginx*
+
+2、 然后输入 [netstat][3] 和 [systemctl][4] 命令,确认 Nginx 进程已经启动并且绑定在 80 端口。
+
+```
+$ netstat -tlpn
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Network-Port-Connection.png)
+
+*检查 Nginx 网络端口连接*
+
+```
+$ sudo systemctl status nginx.service
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Service-Status.png)
+
+*检查 Nginx 服务状态*
+
+当你确认服务进程已经启动了,你可以打开一个浏览器,使用 HTTP 协议访问你的服务器 IP 地址或者域名,浏览 Nginx 的默认网页。
+
+```
+http://IP-Address
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Verify-Nginx-Webpage.png)
+
+*验证 Nginx 网页*
+
+### 步骤 2:启用 Nginx HTTP/2.0 协议
+
+3、 对 HTTP/2.0 协议的支持默认包含在 Ubuntu 16.04 最新发行版的 Nginx 二进制文件中了,它只能通过 SSL 连接使用,并且能使加载网页的速度有巨大提升。
+
+要启用 Nginx 的这个协议,首先找到 Nginx 提供的网站配置文件,输入下面这个命令备份配置文件。
+
+```
+$ cd /etc/nginx/sites-available/
+$ sudo mv default default.backup
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Backup-Nginx-Sites-Configuration-File.png)
+
+*备份 Nginx 的网站配置文件*
+
+4、 然后,用文本编辑器新建一个默认文件,输入以下内容:
+
+```
+server {
+    listen 443 ssl http2 default_server;
+    listen [::]:443 ssl http2 default_server;
+
+    root /var/www/html;
+
+    index index.html index.htm index.php;
+
+    server_name 192.168.1.13;
+
+    location / {
+        try_files $uri $uri/ =404;
+    }
+
+    ssl_certificate /etc/nginx/ssl/nginx.crt;
+    ssl_certificate_key /etc/nginx/ssl/nginx.key;
+
+    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
+    ssl_prefer_server_ciphers on;
+    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
+    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
+    ssl_session_cache shared:SSL:20m;
+    ssl_session_timeout 180m;
+    resolver 8.8.8.8 8.8.4.4;
+    add_header Strict-Transport-Security "max-age=31536000" always; #includeSubDomains
+
+    location ~ \.php$ {
+        include snippets/fastcgi-php.conf;
+        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
+    }
+
+    location ~ /\.ht {
+        deny all;
+    }
+
+}
+
+server {
+    listen 80;
+    listen [::]:80;
+    server_name 192.168.1.13;
+    return 301 https://$server_name$request_uri;
+}
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Enable-Nginx-HTTP-2-Protocol.png)
+
+*启用 Nginx HTTP 2 协议*
+
+上面的配置片段向所有的 SSL 监听指令中添加 http2 参数来启用 `HTTP/2.0`。
+
+上述添加到服务器配置的最后一段,是用来将所有非 SSL 的流量重定向到 SSL/TLS 默认主机。然后用你主机的 IP 地址或者 DNS 记录(最好用 FQDN 名称)替换掉 `server_name` 选项的参数。
+
+5、 当你按照以上步骤编辑完 Nginx 的默认配置文件之后,用下面这些命令来生成、查看 SSL 证书和密钥。
+
+用你自定义的设置完成证书的制作,注意 Common Name 设置成和你的 DNS FQDN 记录或者服务器 IP 地址相匹配。
+
+```
+$ sudo mkdir /etc/nginx/ssl
+$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt
+$ ls /etc/nginx/ssl/
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Generate-SSL-Certificate-and-Key.png)
+
+*生成 Nginx 的 SSL 证书和密钥*
+
+6、 通过输入以下命令使用一个强 DH 加密算法,这会修改之前的配置文件 `ssl_dhparam` 所配置的文件。
+
+```
+$ sudo openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Create-Diffie-Hellman-Key.png)
+
+*创建 Diffie-Hellman 密钥*
+
+7、 当 `Diffie-Hellman` 密钥生成之后,验证 Nginx 的配置文件是否正确、能否被 Nginx 网络服务程序应用。然后运行以下命令重启守护进程来观察有什么变化。
+
+```
+$ sudo nginx -t
+$ sudo systemctl restart nginx.service
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Configuration.png)
+
+*检查 Nginx 的配置*
+
+8、 键入下面的命令来测试 Nginx 使用的是 HTTP/2.0 协议。看到协议中有 `h2` 的话,表明 Nginx 已经成功配置使用 HTTP/2.0 协议。所有最新的浏览器默认都能够支持这个协议。
+
+```
+$ openssl s_client -connect localhost:443 -nextprotoneg ''
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Test-Nginx-HTTP-2-Protocol.png)
+
+*测试 Nginx HTTP 2.0 协议*
+
+### 步骤 3:安装 PHP 7 解释器
+
+通过 FastCGI 进程管理程序的协助,Nginx 能够使用 PHP 动态语言解释器生成动态网络内容。FastCGI 能够从 Ubuntu 官方仓库中安装 php-fpm 二进制包来获取。
+
+9、 在你的服务器控制台里输入下面的命令来获取 PHP7.0 和扩展包,这能够让 PHP 与 Nginx 网络服务进程通信。
+
+```
+$ sudo apt install php7.0 php7.0-fpm
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-PHP-7-PHP-FPM-for-Ngin.png)
+
+*安装 PHP 7 以及 PHP-FPM*
+
+10、 当 PHP7.0 解释器安装成功后,输入以下命令启动或者检查 php7.0-fpm 守护进程:
+
+```
+$ sudo systemctl start php7.0-fpm
+$ sudo systemctl status php7.0-fpm
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Start-Verify-php-fpm-Service.png)
+
+*开启、验证 php-fpm 服务*
+
+11、 当前的 Nginx 配置文件已经配置了使用 PHP FPM 来提供动态内容。
+
+下面给出的这部分服务器配置让 Nginx 能够使用 PHP 解释器,所以不需要对 Nginx 配置文件作别的修改。
+
+```
+location ~ \.php$ {
+    include snippets/fastcgi-php.conf;
+    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
+}
+```
+
+下面的截图是 Nginx 默认配置文件的内容。你可能需要对其中的代码进行修改或者取消注释。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Enable-PHP-FastCGI-for-Nginx.png)
+
+*启用 PHP FastCGI*
+
+12、 要测试启用了 PHP-FPM 的 Nginx 服务器,用下面的命令创建一个 PHP 测试配置文件 `info.php`。接着用 `http://IP_or_domain/info.php` 这个网址来查看配置。
+
+```
+$ sudo su -c 'echo "<?php phpinfo(); ?>" | tee /var/www/html/info.php'
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Create-PHP-Info-File.png)
+
+*创建 PHP Info 文件*
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Verify-PHP-FastCGI-Info.png)
+
+*检查 PHP FastCGI 的信息*
+
+检查服务器是否宣告支持 HTTP/2.0 协议,定位到 PHP 变量区域中的 `$_SERVER['SERVER_PROTOCOL']`,就像下面这张截图一样。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-HTTP-2.0-Protocol-Info.png)
+
+*检查 HTTP2.0 协议信息*
+
+13、 为了安装其它的 PHP7.0 模块,使用 `apt search php7.0` 命令查找 php 的模块然后安装。
+
+如果你想要 [安装 WordPress][5] 或者别的 CMS,需要安装以下的 PHP 模块,这些模块迟早有用。
+
+```
+$ sudo apt install php7.0-mcrypt php7.0-mbstring
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-PHP-7-Modules.png)
+
+*安装 PHP 7 模块*
+
+14、 要注册这些额外的 PHP 模块,输入下面的命令重启 PHP-FPM 守护进程。
+
+```
+$ sudo systemctl restart php7.0-fpm.service
+```
+
+### 步骤 4:安装 MariaDB 数据库
+
+15、 最后,我们需要 MariaDB 数据库来存储、管理网站数据,才算完成 LEMP 的搭建。
+
+运行下面的命令安装 MariaDB 数据库管理系统,重启 PHP-FPM 服务以便使用 MySQL 模块与数据库通信。
+
+```
+$ sudo apt install mariadb-server mariadb-client php7.0-mysql
+$ sudo systemctl restart php7.0-fpm.service
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-MariaDB-for-Nginx.png)
+
+*安装 MariaDB*
+
+16、 为了安全加固 MariaDB,运行来自 Ubuntu 软件仓库中的二进制包提供的安全脚本,这会询问你设置一个 root 密码,移除匿名用户,禁用 root 用户远程登录,移除测试数据库。
+
+输入下面的命令运行脚本,并且确认所有的选择。参照下面的截图。
+
+```
+$ sudo mysql_secure_installation
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Secure-MariaDB-Installation-for-Nginx.png)
+
+*MariaDB 的安全安装*
+
+17、 配置 MariaDB 以便普通用户能够不使用系统的 sudo 权限来访问数据库。用 root 用户权限打开 MySQL 命令行界面,运行下面的命令:
+
+```
+$ sudo mysql
+MariaDB> use mysql;
+MariaDB> update user set plugin='' where User='root';
+MariaDB> flush privileges;
+MariaDB> exit
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/MariaDB-User-Permissions.png)
+
+*MariaDB 的用户权限*
+
+最后通过执行以下命令登录到 MariaDB 数据库,就可以不需要 root 权限而执行任意数据库内的命令:
+
+```
+$ mysql -u root -p -e 'show databases'
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-MariaDB-Databases.png)
+
+*查看 MariaDB 数据库*
+
+好了!现在你拥有了配置在 **Ubuntu 16.04** 服务器上的 **LEMP** 环境,可以用来部署能够与数据库交互的复杂动态网络应用。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/install-nginx-mariadb-php7-http2-on-ubuntu-16-04/
+
+作者:[Matei Cezar][a]
+译者:[GitFuture](https://github.com/GitFuture)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/cezarmatei/
+[1]: http://www.tecmint.com/installation-of-ubuntu-16-04-server-edition/
+[2]: http://www.tecmint.com/apt-advanced-package-command-examples-in-ubuntu/
+[3]: http://www.tecmint.com/20-netstat-commands-for-linux-network-management/
+[4]: http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux/
+[5]: http://www.tecmint.com/install-wordpress-using-lamp-or-lemp-on-rhel-centos-fedora/
diff --git a/published/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md b/published/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md
new file mode 100644
index 0000000000..e6287d62e9
--- /dev/null
+++ b/published/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md
@@ -0,0 +1,186 @@
+在 Ubuntu Linux 中使用 WebP 图片
+=========================================
+
+![](http://itsfoss.com/wp-content/uploads/2016/05/support-webp-ubuntu-linux.jpg)
+
+> 简介:这篇指南会向你展示如何在 Linux 下查看 WebP 图片以及将 WebP 图片转换为 JPEG 或 PNG 格式。
+
+### 什么是 WebP?
+ +自从 Google 推出 [WebP 图片格式][0],已经过去五年了。Google 说,WebP 提供有损和无损压缩,相比 JPEG 压缩,WebP 压缩文件大小,能更小约 25%。 + +Google 的目标是让 WebP 成为 web 图片的新标准,但是并没有成为现实。已经五年过去了,除了谷歌的生态系统以外它仍未被接受成为一个标准。但正如我们所知的,Google 对它的技术很有进取心。几个月前 Google 将 Google Plus 的所有图片改为了 WebP 格式。 + +如果你用 Google Chrome 从 Google Plus 上下载那些图片,你会得到 WebP 图片,不论你之前上传的是 PNG 还是 JPEG。这都不是重点。真正的问题在于当你尝试着在 Ubuntu 中使用默认的 GNOME 图片查看器打开它时你会看到如下错误: + +> **Could not find XYZ.webp(无法找到 XYZ.webp)** + +> **Unrecognized image file format(未识别文件格式)** + +![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-1.png) + +*GNOME 图片查看器不支持 WebP 图片* + +在这个教程里,我们会看到 + +- 如何在 Linux 中添加 WebP 支持 +- 支持 WebP 图片的程序列表 +- 如何将 WebP 图片转换到 PNG 或 JPEG +- 如何将 WebP 图片直接下载为 PNG 格式 + +### 如何在 Ubuntu 以及其它 Linux 发行版中查看 WebP 图片 + +[GNOME 图片查看器][3]是许多 Linux 发行版的默认图片查看器,包括 Ubuntu,它不支持 WebP 图片。目前也没有可用的插件给 GNOME 图片查看器添加 WebP 支持。 + +这无非是意味着我们不能在 Linux 上用 GNOME 图片查看器打开 WebP 文件而已。一个更好的替代品,[gThumb][4],默认就支持 WebP 图片。 + +要在 Ubuntu 以及其它基于 Ubuntu 的发行版上安装 gThumb 的话,使用以下命令: + +``` +sudo apt-get install gthumb +``` + +一旦安装完成,你就可以简单地右键点击 WebP 图片,选择 gThumb 来打开它。你现在应该可以看到如下画面: + +![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-2.jpeg) + +*gThumb 中显示的 WebP 图片* + +### 让 gThumb 成为 Ubuntu 中 WebP 图片的默认应用 + +对 Ubuntu 新手而言,如果你想要让 gThumb 成为打开 WebP 文件的默认应用,跟着以下步骤操作: + +#### 步骤 1:右键点击 WebP 文件选择属性。 + +![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-3.png) + +*从右键菜单中选择属性* + +#### 步骤 2:转到打开方式标签,选择 gThumb 并点击设置为默认。 + +![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-4.png) + +*让 gThumb 成为 Ubuntu 中 WebP 图片的默认应用* + +### 让 gThumb 成为所有图片的默认应用 + +gThumb 的功能比图片查看器更多。举个例子,你可以做一些简单的图片编辑,给图片添加滤镜等。添加滤镜的效率没有 XnRetro(在[ Linux 下添加类似 Instagram 滤镜效果][5]的专用工具)那么高,但它还是有一些基础的滤镜可以用。 + +我非常喜欢 gThumb 并且决定让它成为默认的图片查看器。如果你也想在 Ubuntu 中让 gThumb 成为所有图片的默认应用,遵照以下步骤操作: + +步骤1:打开系统设置 + +![](http://itsfoss.com/wp-content/uploads/2014/04/System_Settings_ubuntu_1404.jpeg) + +步骤2:转到详情(Details) + +![](http://itsfoss.com/wp-content/uploads/2013/11/System_settings_Ubuntu_1.jpeg) + +步骤3:在这里将 gThumb 设置为图片的默认应用 + +![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-5.png) + +### Linux 上打开 WebP 文件的替代程序 + +可能你不喜欢 gThumb。如果这样的话,你可以选择下列应用来在 Linux 中查看 WebP 图片: + +- [XnView][6](非开源) +- GIMP 加上非官方 WebP 插件,可以从这个 [PPA][7] 安装,支持到 Ubuntu 15.10。我会在另一篇文章里提到。 +- [Gwenview][8] + +### 在 Linux 中将 WebP 图片转换为 PNG 和 JPEG + +在 Linux 上转换 WebP 图片有两种途径: + +- 命令行 +- 图形界面 + +#### 1.在 Linux 使用命令行转换 WebP 图片 + +你需要先安装 WebP 工具。打开终端并使用下列命令: + +``` +sudo apt-get install webp +``` + +##### 将 JPEG/PNG 转换为 WebP + +我们将使用 cwebp 命令(它代表转换为 WebP 的意思吗?)来将 JPEG 或 PNG 文件转换为 WebP。命令格式是这样的: + +``` +cwebp -q [图片质量] [JPEG/PNG_文件名] -o [WebP_文件名] +``` + +举个例子,你可以使用下列命令: + +``` +cwebp -q 90 example.jpeg -o example.webp +``` + +##### 将 WebP 转换为 JPEG/PNG + +要将 WebP 图片转换为 JPEG 或 PNG,我们将使用 dwebp 命令。命令格式是: + +``` +dwebp [WebP_文件名] -o [PNG_文件名] +``` + +该命令的一个例子: + +``` +dwebp example.webp -o example.png +``` + +#### 2.使用图形工具将 WebP 转换为 JPEG/PNG + +要实现这个目标,我们要使用 XnConvert,它是免费的应用但不是开源的。你可以从他们的网站上下载安装文件: + +[下载 XnConvert][1] + +XnConvert 是个强大的工具,你可以用它来批量修改图片尺寸。但在这个教程里,我们只介绍如何将单个 WebP 图片转换为 PNG/JPEG。 + +打开 XnConvert 并选择输入文件: + +![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-6.jpeg) + +在输出标签,选择你想要的输出格式。选择完后点击转换。 + +![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-7.jpeg) + +要将 WebP 图片转换为 PNG,JPEG 或其它你选择的图片格式,这就是你所需要做的一切了。 + +### 在 Chrome 浏览器中直接将 WebP 图片下载为 PNG + +也许你一点都不喜欢 WebP 图片格式,也不想在 Linux 仅仅为了查看 WebP 图片而安装一个新软件。如果你不得不将 WebP 
文件转换以备将来使用,这会是件更痛苦的事情。
+
+解决这个问题的一个更简单、不那么痛苦的途径是安装一个 Chrome 扩展 Save Image as PNG。有了这个插件,你可以右键点击 WebP 图片并直接存储为 PNG 格式。
+
+![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-8.png)
+
+*在 Google Chrome 中将 WebP 图片保存为 PNG 格式*
+
+- [获取 Save Image as PNG 扩展][2]
+
+### 你的选择是?
+
+我希望这个详细的教程能够帮你在 Linux 上支持 WebP 并帮你转换 WebP 图片。你在 Linux 怎么处理 WebP 图片?你使用哪个工具?以上描述的方法中,你最喜欢哪一个?
+
+----------------------
+via: http://itsfoss.com/webp-ubuntu-linux/
+
+作者:[Abhishek Prakash][a]
+译者:[alim0x](https://github.com/alim0x)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://itsfoss.com/author/abhishek/
+[0]: https://developers.google.com/speed/webp/
+[1]: http://www.xnview.com/en/xnconvert/#downloads
+[2]: https://chrome.google.com/webstore/detail/save-image-as-png/nkokmeaibnajheohncaamjggkanfbphi?utm_source=chrome-ntp-icon
+[3]: https://wiki.gnome.org/Apps/EyeOfGnome
+[4]: https://wiki.gnome.org/Apps/gthumb
+[5]: http://itsfoss.com/add-instagram-effects-xnretro-ubuntu-linux/
+[6]: http://www.xnview.com/en/xnviewmp/#downloads
+[7]: https://launchpad.net/~george-edison55/+archive/ubuntu/webp
+[8]: https://userbase.kde.org/Gwenview
diff --git a/published/20160531 Why Ubuntu-based Distros Are Leaders.md b/published/20160531 Why Ubuntu-based Distros Are Leaders.md
new file mode 100644
index 0000000000..c909e753f7
--- /dev/null
+++ b/published/20160531 Why Ubuntu-based Distros Are Leaders.md
@@ -0,0 +1,63 @@
+为什么 Ubuntu 家族会占据 Linux 发行版的主导地位?
+=========================================
+
+在过去的数年中,我体验了一些优秀的 Linux 发行版。给我印象深刻的是那些由强大的社区维护的发行版,而流行的发行版给我的印象比强大的社区更深。流行的 Linux 发行版往往能吸引新用户,这通常是由于其流行而使得使用该发行版会更加容易。并非绝对如此,但一般来说是这样的。
+
+说到这里,首先映入我脑海的一个发行版是 [Ubuntu][1]。其基于健壮的 [Debian][2] 发行版构建,它不仅成为了一个非常受欢迎的 Linux 发行版,而且它也衍生出了不可计数的其他分支,比如 Linux Mint 就是一个例子。在本文中,我会探讨我认为 Ubuntu 会赢得 Linux 发行版之战的原因,以及它是怎样影响到了整个 Linux 桌面领域。
+
+### Ubuntu 易于使用
+
+在我几年前首次尝试使用 Ubuntu 前,我更喜欢使用 KDE 桌面。在那个时期,我接触的大多是这种 KDE 桌面环境。主要原因还是,在大多数新手容易入手的 Linux 发行版中,KDE 是最受欢迎的桌面。这些新手友好的发行版有 Knoppix、Simply Mepis、Xandros、Linspire 以及其它的发行版等等,这些发行版都推荐他们的用户去使用广受欢迎的 KDE。
+
+那时 KDE 能满足我的需求,我也没有什么理由去折腾其他的桌面环境。有一天我的 Debian 安装失败了(由于我个人的操作不当),我决定尝试开发代号为 Dapper Drake 的 Ubuntu 版本(LCTT 译注:Ubuntu 6.06 - Dapper Drake,发布日期:2006 年 6 月 1 日),每个人都对它赞不绝口。那个时候,我对于它的印象仅限于屏幕截图,但是我想试试也挺有趣的。
+
+Ubuntu Dapper Drake 给我的最大的印象是它让我很清楚地知道每个东西都在哪儿。记住,我是来自于 KDE 世界的用户,在 KDE 上要想改变菜单的设置就有 15 种方法!而 Ubuntu 上的 GNOME 实现则是极简主义的。
+
+时间来到 2016 年,最新的版本号是 16.04:我们有了好几种 Ubuntu 特色版本,也有一大堆基于 Ubuntu 的发行版。所有的 Ubuntu 特色版和衍生发行版共同具有的核心就是为易用而设计。发行版想要增大用户基数时,这就是最重要的原因。
+
+### Ubuntu LTS
+
+过去,我几乎一直坚持使用 LTS(Long Term Support)发行版作为我的主要桌面系统。10 月份的发行版很适合我测试硬盘驱动器,甚至把它用在一个老旧的手提电脑上。我这样做的原因很简单——我没有兴趣在一个正式使用的电脑上折腾短期发行版。我是个很忙的家伙,我觉得这样会浪费我的时间。
+
+对于我来说,我认为 Ubuntu 提供 LTS 发行版是 Ubuntu 能够变得流行的最大的原因。这样说吧——给普罗大众提供一个桌面 Linux 发行版,这个发行版能够得到长期的有效支持就是它的优势。事实上,不只 Ubuntu 是这样,其他的分支在这一点上也做的很好。长期支持策略以及对新手的友好环境,我认为这就为 Ubuntu 的普及带来了莫大的好处。
+
+### Ubuntu Snap 软件包
+
+以前,用户会夸赞可以在他们的系统上使用 PPA(personal package archive 个人软件包档案)获得新的软件。不好的是,这种技术也有缺点。在为各种软件寻找对应的 PPA 时,经常会找不到,这种情况很常见。
+
+现在有了 [Snap 软件包][3]。当然这不是一个全新的概念,过去已经进行了类似的尝试。用户可以在一个长期支持版本上运行最新的软件,而不必去使用最新的 Ubuntu 发行版。虽然我认为目前还处于 Snap 软件包的早期,但是我很期待可以在一个稳定的发行版上运行崭新的软件。
+
+最明显的问题是,如果你要运行很多软件,那么 Snap 包实际会占用很多硬盘空间。不仅如此,大多数 Ubuntu 软件仍然需要由官方从 deb 包进行转换。第一个问题可以通过使用更大的硬盘空间得到解决,而后一个问题的解决则需要等待。
+
+### Ubuntu 社区
+
+首先,我承认大多数主要的 Linux 发行版都有强大的社区。然而,我坚信 Ubuntu 社区的成员是最多样化的,他们来自各行各业。例如,我们的论坛包括从苹果硬件支持到游戏等不同分类。特别是这些专业的讨论话题还非常广泛。
+
+除了论坛,Ubuntu 也提供了一个很正式的社区组织。这个组织包括一个理事会、技术委员会、[本地社区团队][4]和开发者成员委员会。还有很多,但是这些都是我知道的社区组织部分。
+
+我们还有一个 [Ubuntu 问答][5]版块。我认为,这种功能可以代替人们从论坛寻求帮助的方式,我发现在这个网站你得到有用信息的可能性更大。不仅如此,那些提供的解决方案中被选出的最精准的答案也会被写入到官方文档中。
+
+### Ubuntu 的未来
+
+我认为 Ubuntu 的 Unity 界面(LCTT 译注:Unity 是 Canonical 公司为 Ubuntu 操作系统的 GNOME 桌面环境开发的图形化界面)在提升桌面占有率上少有作为。我能理解其中的缘由,现在它主要做一些诸如可以使开发团队的工作更轻松的事情。但是最终,我还是认为 Unity 为 Ubuntu MATE 和 Linux Mint 的普及铺平了道路。
+
+我最好奇的一点是 Ubuntu 的 IRC 和邮件列表的发展(LCTT 译注:可以在 Ubuntu LoCo Teams 的 IRC Chat 上提问关于地方团队和计划的事件的问题,也可以和一些不同团队的成员进行交流)。事实是,他们都不能像 Ubuntu 问答板块那样文档化。至于邮件列表,我一直认为这对于合作是一种很痛苦的过时方法,但这仅仅是我的个人看法——其他人可能有不同的看法,也可能会认为它很好。
+
+你怎么看?你认为 Ubuntu 将来会占据主要的份额吗?也许你会认为 Arch 和 Linux Mint 或者其他的发行版会在普及度上打败 Ubuntu?既然这样,那请大声说出你最喜爱的发行版。如果这个发行版是 Ubuntu 衍生版,说说你为什么更喜欢它而不是 Ubuntu 本身。如果不出意外,Ubuntu 会成为构建其他发行版的基础,我想很多人都是这样认为的。
+
+--------------------------------------------------------------------------------
+
+via: http://www.datamation.com/open-source/why-ubuntu-based-distros-are-leaders.html
+
+作者:[Matt Hartley][a]
+译者:[vim-kakali](https://github.com/vim-kakali)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.datamation.com/author/Matt-Hartley-3080.html
+[1]: http://www.ubuntu.com/
+[2]: https://www.debian.org/
+[3]: http://www.datamation.com/open-source/ubuntu-snap-packages-the-good-the-bad-the-ugly.html
+[4]: http://loco.ubuntu.com/
+[5]: http://askubuntu.com/
diff --git a/published/201606/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md b/published/201606/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md
new file mode 100644
index 0000000000..b5923ca86a
--- /dev/null
+++ b/published/201606/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md
@@ -0,0 +1,129 @@
+马克·沙特尔沃思 – Ubuntu 背后的那个男人
+================================================================================
+
+![](http://www.unixmen.com/wp-content/uploads/2015/10/Mark-Shuttleworth-652x445.jpg)
+
+**马克·理查德·沙特尔沃思(Mark Richard Shuttleworth)** 是 Ubuntu 的创始人,也被称作 [Debian 背后的人][1]([之一][2])。他于 1973 年出生在南非的韦尔科姆(Welkom)。他不仅是个企业家,还是个太空游客——他是第一个前往太空旅行的非洲独立国家的公民。
+
+马克曾在 1995 年成立了一家名为 **Thawte** 的互联网商务安全公司,那时他还在开普敦大学(University of Cape Town)学习金融和信息技术。
+
+2000 年,马克创立了 HBD
风险资本公司,成为了商业投资人和项目孵化器。2004 年,他创立了 Canonical Ltd. 以支持和鼓励自由软件开发项目的商业化,特别是 Ubuntu 操作系统的项目。直到 2009 年,马克才从 Canonical CEO 的位置上退下。 + +> “在 [DDC](https://en.wikipedia.org/wiki/DCC_Alliance) (LCTT 译注:一个 Debian GNU/Linux 开发者联盟) 的早期,我更倾向于让拥护者们放手去做,看看能发展出什么。” +> +> — 马克·沙特尔沃思 + +### Linux、自由开源软件与马克·沙特尔沃思 ### + +在 90 年代后期,马克曾作为一名开发者参与 Debian 操作系统项目。 + +2001 年,马克创立了沙特尔沃思基金会,这是个扎根南非的、非赢利性的基金会,专注于赞助社会创新、免费/教育用途开源软件,曾赞助过[自由烤面包机][3](Freedom Toaster)(LCTT 译注:自由烤面包机是一个可以给用户带来的 CD/DVD 上刻录自由软件的公共信息亭)。 + +2004 年,马克通过出资开发基于 Debian 的 Ubuntu 操作系统返回了自由软件界,这一切也经由他的 Canonical 公司完成。 + +2005 年,马克出资建立了 Ubuntu 基金会并投入了一千万美元作为启动资金。在 Ubuntu 项目内,人们经常用一个朗朗上口的名字称呼他——“**SABDFL :自封的生命之仁慈独裁者(Self-Appointed Benevolent Dictator for Life)**”。为了能够找到足够多的高手开发这个巨大的项目,马克花费了 6 个月的时间从 Debian 邮件列表里寻找,这一切都是在他乘坐在南极洲的一艘破冰船——赫列布尼科夫船长号(Kapitan Khlebnikov)——上完成的。同年,马克买下了 Impi Linux 65% 的股份。 + + +> “我呼吁电信公司的掌权者们尽快开发出跨洲际的高效信息传输服务。” +> +> — 马克·沙特尔沃思 + +2006 年,KDE 宣布沙特尔沃思成为 KDE 的**第一赞助人(first patron)**——彼时 KDE 最高级别的赞助。这一赞助协议在 2012 年终止,取而代之的是对 Kubuntu 的资金支持,这是一个使用 KDE 作为默认桌面环境的 Ubuntu 变种。 + +![](http://www.unixmen.com/wp-content/uploads/2015/10/shuttleworth-kde.jpg) + +2009 年,Shuttleworth 宣布他会从 Canonical 的 CEO 上退位以更好地关注合作关系、产品设计和客户。从 2004 年起担任公司 COO 的珍妮·希比尔(Jane Silber)晋升为 CEO。 + +2010 年,马克由于其贡献而被开放大学(Open University)授予了荣誉学位。 + +2012 年,马克和肯尼斯·罗格夫(Kenneth Rogoff)一同在牛津大学与彼得·蒂尔(Peter Thiel)和加里·卡斯帕罗夫(Garry Kasparov)就**创新悖论**(The Innovation Enigma)展开辩论。 + +2013 年,马克和 Ubuntu 一同被授予**澳大利亚反个人隐私大哥奖**(Austrian anti-privacy Big Brother Award),理由是默认情况下, Ubuntu 会把 Unity 桌面的搜索框的搜索结果发往 Canonical 服务器(LCTT 译注:因此侵犯了个人隐私)。而一年前,马克曾经申明过这一过程进行了匿名化处理。 + +> “所有主流 PC 厂家现在都提供 Ubuntu 预安装选项,所以我们和业界的合作已经相当紧密了。但那些 PC 厂家对于给买家推广新东西这件事都很紧张。如果我们可以让 PC 买家习惯 Ubuntu 的平板/手机操作系统的体验,那他们也应该更愿意买预装 Ubuntu 的 PC。没有哪个操作系统是通过抄袭模仿获得成功的,Android 很棒,但如果我们想成功的话我们必须给市场带去更新更好的东西(LCTT 译注:而不是改进或者模仿 Android)。如果我们中没有人追寻未来的话,我们将陷入停滞不前的危险。但如果你尝试去追寻未来了,那你必须接受不是所有人对未来的预见都和你一样这一事实。” +> +> — 马克·沙特尔沃思 + +### 马克·沙特尔沃思的太空之旅 ### + +马克在 2002 年作为世界第二名自费太空游客而闻名世界,同时他也是南非第一个旅行太空的人。这趟旅行中,马克作为俄罗斯联盟号 TM-34 任务的一名乘员加入,并为此支付了约两千万美元。2 天后,联盟号宇宙飞船抵达了国际空间站,在那里马克呆了 8 天并参与了艾滋病和基因组研究的相关实验。同年晚些时候,马克随联盟号 TM-33 任务返回了地球。为了参与这趟旅行,马克花了一年时间准备与训练,其中有 7 个月居住在俄罗斯的星城。 + +![](http://www.unixmen.com/wp-content/uploads/2015/10/Mark-Shuttleworth1.jpg) + +在太空中,马克与纳尔逊·曼德拉(Nelson Mandela)和另一个 14 岁的南非女孩米歇尔·福斯特(Michelle Foster) (她问马克要不要娶她)通过无线电进行了交谈。马克礼貌地回避了这个结婚问题,但在巧妙地改换话题之前他说他感到很荣幸。身患绝症的女孩福斯特通过梦想基金会( Dream foundation)的赞助获得了与马克和纳尔逊·曼德拉交谈的机会。 + +归来后,马克在世界各地做了旅行,并和各地的学生就太空之旅发表了感言。 + +>“粗略的统计数据表明 Ubuntu 的实际用户依然在增长。而我们的合作方——戴尔、惠普、联想和其他硬件生产商,以及游戏厂商 EA、Valve 都在加入我们——这让我觉得我们在关键的领域继续领先。” +> +> — 马克·沙特尔沃思 + +### 马克·沙特尔沃思的交通工具 ### + +马克有他自己的私人客机庞巴迪全球特快(Bombardier Global Express),虽然它经常被称为 Canonical 一号,但事实上此飞机是通过 HBD 风险投资公司注册拥有的。涂画在飞机侧面的龙图案是 HBD 风投公司的吉祥物 ,名叫 Norman。 + +![](http://www.leader.co.za/leadership/logos/logomarkshuttleworthdirectory_31ce.gif) + +### 与南非储备银行的法律冲突 ### + +在从南非转移 25 亿南非兰特去往 Isle of Man 的过程中,南非储备银行征收了 2.5 亿南非兰特的税金。马克上诉了,经过冗长的法庭唇枪舌战,南非储备银行被勒令返还 2.5 亿征税,以及其利息。马克宣布他会把这 2.5 亿存入信托基金,以用于帮助那些上诉到宪法法院的案子。 + + +> “离境征税倒也不和宪法冲突。但离境征税的主要目的不是提高税收,而是通过监管资金流出来保护本国经济。” +> +> — Dikgang Moseneke 法官 + +2015 年,南非宪法法院修正了低级法院的判决结果,并宣布了上述对于离岸征税的理解。 + +### 马克·沙特尔沃思喜欢的东西 ### + +Cesária Évora、mp3、春天、切尔西(Chelsea)、“恍然大悟”(finally seeing something obvious for first time)、回家、辛纳屈(Sinatra)、白日梦、暮后小酌、挑逗、苔丝(d’Urberville)、弦理论、Linux、粒子物理、Python、转世、米格-29、雪、旅行、Mozilla、酸橙果酱、激情代价(body shots)、非洲丛林、豹、拉贾斯坦邦、俄罗斯桑拿、单板滑雪、失重、Iain m 银行、宽度、阿拉斯泰尔·雷诺兹(Alastair Reynolds)、化装舞会服装、裸泳、灵机一动、肾上腺素激情消退、莫名(the inexplicable)、活动顶篷式汽车、Clifton、国家公路、国际空间站、机器学习、人工智能、维基百科、Slashdot、风筝冲浪(kitesurfing)和 Manx lanes。 + 
+
+
+### 马克·沙特尔沃思不喜欢的东西 ###
+
+行政、涨工资、法律术语和公众演讲。
+
+--------------------------------------------------------------------------------
+
+via: http://www.unixmen.com/mark-shuttleworth-man-behind-ubuntu-operating-system/
+
+作者:[M.el Khamlichi][a]
+译者:[Moelf](https://github.com/Moelf)
+校对:[PurlingNayuki](https://github.com/PurlingNayuki), [wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.unixmen.com/author/pirat9/
+[1]:https://wiki.debian.org/PeopleBehindDebian
+[2]:https://raphaelhertzog.com/2011/11/17/people-behind-debian-mark-shuttleworth-ubuntus-founder/
+[3]:https://en.wikipedia.org/wiki/Freedom_Toaster
\ No newline at end of file
diff --git a/published/201606/20151210 Getting started with Docker by Dockerizing this Blog.md.md b/published/201606/20151210 Getting started with Docker by Dockerizing this Blog.md.md
new file mode 100644
index 0000000000..7dd950d4a9
--- /dev/null
+++ b/published/201606/20151210 Getting started with Docker by Dockerizing this Blog.md.md
@@ -0,0 +1,464 @@
+通过 Docker 化一个博客网站来开启我们的 Docker 之旅
+===
+
+![](http://bencane.com/static/img/post-bg.jpg)
+
+> 这篇文章包含 Docker 的基本概念,以及如何通过创建一个定制的 Dockerfile 来 Docker 化(Dockerize)一个应用。
+
+Docker 是一项有趣的技术,在过去两年中从一个想法孕育而生,如今世界各地的公司和组织都在用它部署应用。在今天的文章中,我将讲述如何通过“Docker 化(Dockerize)”一个现有的应用,来开始我们的 Docker 之旅。这里提到的应用指的就是这个博客!
+
+## 什么是 Docker?
+
+在开始学习 Docker 的基本概念之前,让我们先搞清楚什么是 Docker 以及它为什么这么流行。Docker 是一个操作系统容器管理工具,它通过将应用打包在操作系统容器中,来方便我们管理和部署应用。
+
+### 容器 vs. 虚拟机
+
+容器和虚拟机并不完全相似,它是另外一种提供**操作系统虚拟化**的方式。它和标准的虚拟机还是有所不同。
+
+标准的虚拟机一般会包括一个完整的操作系统、操作系统软件包、最后还有一至两个应用。这都得益于为虚拟机提供硬件虚拟化的管理程序。这样一来,一个单一的服务器就可以将许多独立的操作系统作为虚拟客户机运行了。
+
+容器和虚拟机很相似,它们都支持在单一的服务器上运行多个操作环境,只是,在容器中,这些环境并不是一个个完整的操作系统。容器一般只包含必要的操作系统软件包和一些应用。它们通常不会包含一个完整的操作系统或者硬件的虚拟化。这也意味着容器比传统的虚拟机开销更少。
+
+容器和虚拟机常被误认为是两种对立的技术。虚拟机采用一个物理服务器来提供全功能的操作环境,该环境会和其余虚拟机一起共享这些物理资源。容器一般用来隔离一个单一主机上运行的应用进程,以保证隔离后的进程之间不能相互影响。事实上,容器和 **BSD Jails** 以及 `chroot` 进程的相似度,超过了和完整虚拟机的相似度。
+
+### Docker 在容器之上提供了什么
+
+Docker 本身并不是一个容器运行环境,事实上,它只是一个与具体实现无关的容器管理技术,Docker 正在努力支持 [Solaris Zones](https://blog.docker.com/2015/08/docker-oracle-solaris-zones/) 和 [BSD Jails](https://wiki.freebsd.org/Docker)。Docker 提供了一种管理、打包和部署容器的方式。虽然一定程度上,虚拟机多多少少拥有这些类似的功能,但虚拟机并没有完整拥有绝大多数的容器功能,即使拥有,这些功能用起来都并没有 Docker 来的方便或那么完整。
+
+现在,我们应该知道 Docker 是什么了,然后,我们将从安装 Docker,并部署一个公开的预构建好的容器开始,学习 Docker 是如何工作的。
+
+## 从安装开始
+
+默认情况下,Docker 并不会自动被安装在您的计算机中,所以,第一步就是安装 Docker 软件包;我们的教学机器系统是 Ubuntu 14.04,所以,我们将使用 Apt 软件包管理器,来执行安装操作。
+
+```
+# apt-get install docker.io
+Reading package lists... Done
+Building dependency tree
+Reading state information... Done
+The following extra packages will be installed:
+  aufs-tools cgroup-lite git git-man liberror-perl
+Suggested packages:
+  btrfs-tools debootstrap lxc rinse git-daemon-run git-daemon-sysvinit git-doc
+  git-el git-email git-gui gitk gitweb git-arch git-bzr git-cvs git-mediawiki
+  git-svn
+The following NEW packages will be installed:
+  aufs-tools cgroup-lite docker.io git git-man liberror-perl
+0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
+Need to get 7,553 kB of archives.
+After this operation, 46.6 MB of additional disk space will be used.
+Do you want to continue?
[Y/n] y +``` + +为了检查当前是否有容器运行,我们可以执行`docker`命令,加上`ps`选项 + +``` +# docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +``` + +`docker`命令中的`ps`功能类似于 Linux 的`ps`命令。它将显示可找到的 Docker 容器及其状态。由于我们并没有启动任何 Docker 容器,所以命令没有显示任何正在运行的容器。 + +## 部署一个预构建好的 nginx Docker 容器 + +我比较喜欢的 Docker 特性之一就是 Docker 部署预先构建好的容器的方式,就像`yum`和`apt-get`部署包一样。为了更好地解释,我们来部署一个运行着 nginx web 服务器的预构建容器。我们可以继续使用`docker`命令,这次选择`run`选项。 + +``` +# docker run -d nginx +Unable to find image 'nginx' locally +Pulling repository nginx +5c82215b03d1: Download complete +e2a4fb18da48: Download complete +58016a5acc80: Download complete +657abfa43d82: Download complete +dcb2fe003d16: Download complete +c79a417d7c6f: Download complete +abb90243122c: Download complete +d6137c9e2964: Download complete +85e566ddc7ef: Download complete +69f100eb42b5: Download complete +cd720b803060: Download complete +7cc81e9a118a: Download complete +``` + +`docker`命令的`run`选项,用来通知 Docker 去寻找一个指定的 Docker 镜像,然后启动运行着该镜像的容器。默认情况下,Docker 容器运行在前台,这意味着当你运行`docker run`命令的时候,你的 shell 会被绑定到容器的控制台以及运行在容器中的进程。为了能在后台运行该 Docker 容器,我们使用了`-d` (**detach**)标志。 + +再次运行`docker ps`命令,可以看到 nginx 容器正在运行。 + +``` +# docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +f6d31ab01fc9 nginx:latest nginx -g 'daemon off 4 seconds ago Up 3 seconds 443/tcp, 80/tcp desperate_lalande +``` + +从上面的输出信息中,我们可以看到正在运行的名为`desperate_lalande`的容器,它是由`nginx:latest image`(LCTT 译注: nginx 最新版本的镜像)构建而来得。 + +### Docker 镜像 + +镜像是 Docker 的核心特征之一,类似于虚拟机镜像。和虚拟机镜像一样,Docker 镜像是一个被保存并打包的容器。当然,Docker 不只是创建镜像,它还可以通过 Docker 仓库发布这些镜像,Docker 仓库和软件包仓库的概念差不多,它让 Docker 能够模仿`yum`部署软件包的方式来部署镜像。为了更好地理解这是怎么工作的,我们来回顾`docker run`执行后的输出。 + +``` +# docker run -d nginx +Unable to find image 'nginx' locally +``` + +我们可以看到第一条信息是,Docker 不能在本地找到名叫 nginx 的镜像。这是因为当我们执行`docker run`命令时,告诉 Docker 运行一个基于 nginx 镜像的容器。既然 Docker 要启动一个基于特定镜像的容器,那么 Docker 首先需要找到那个指定镜像。在检查远程仓库之前,Docker 首先检查本地是否存在指定名称的本地镜像。 + +因为系统是崭新的,不存在 nginx 镜像,Docker 将选择从 Docker 仓库下载之。 + +``` +Pulling repository nginx +5c82215b03d1: Download complete +e2a4fb18da48: Download complete +58016a5acc80: Download complete +657abfa43d82: Download complete +dcb2fe003d16: Download complete +c79a417d7c6f: Download complete +abb90243122c: Download complete +d6137c9e2964: Download complete +85e566ddc7ef: Download complete +69f100eb42b5: Download complete +cd720b803060: Download complete +7cc81e9a118a: Download complete +``` + +这就是第二部分输出信息显示给我们的内容。默认情况下,Docker 会使用 [Docker Hub](https://hub.docker.com/) 仓库,该仓库由 Docker 公司维护。 + +和 Github 一样,在 Docker Hub 创建公共仓库是免费的,私人仓库就需要缴纳费用了。当然,部署你自己的 Docker 仓库也是可以的,事实上只需要简单地运行`docker run registry`命令就行了。但在这篇文章中,我们的重点将不是讲解如何部署一个定制的注册服务。 + +### 关闭并移除容器 + +在我们继续构建定制容器之前,我们先清理一下 Docker 环境,我们将关闭先前的容器,并移除它。 + +我们利用`docker`命令和`run`选项运行一个容器,所以,为了停止同一个容器,我们简单地在执行`docker`命令时,使用`kill`选项,并指定容器名。 + +``` +# docker kill desperate_lalande +desperate_lalande +``` + +当我们再次执行`docker ps`,就不再有容器运行了 + +``` +# docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +``` + +但是,此时,我们这是停止了容器;虽然它不再运行,但仍然存在。默认情况下,`docker ps`只会显示正在运行的容器,如果我们附加`-a` (all) 标识,它会显示所有运行和未运行的容器。 + +``` +# docker ps -a +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +f6d31ab01fc9 5c82215b03d1 nginx -g 'daemon off 4 weeks ago Exited (-1) About a minute ago desperate_lalande +``` + +为了能完整地移除容器,我们在用`docker`命令时,附加`rm`选项。 + +``` +# docker rm desperate_lalande +desperate_lalande +``` + +虽然容器被移除了;但是我们仍拥有可用的**nginx**镜像(LCTT 译注:镜像缓存)。如果我们重新运行`docker run -d nginx`,Docker 就无需再次拉取 nginx 镜像即可启动容器。这是因为我们本地系统中已经保存了一个副本。 + +为了列出系统中所有的本地镜像,我们运行`docker`命令,附加`images`选项。 + +``` +# docker images 
+REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
+nginx               latest              9fab4090484a        5 days ago          132.8 MB
+```
+
+## 构建我们自己的镜像
+
+截至目前,我们已经使用了一些基础的 Docker 命令来启动、停止和移除一个预构建好的普通镜像。为了“Docker 化(Dockerize)”这篇博客,我们需要构建我们自己的镜像,也就是创建一个 **Dockerfile**。
+
+在大多数虚拟机环境中,如果你想创建一个机器镜像,首先,你需要建立一个新的虚拟机、安装操作系统、安装应用,最后将其转换为一个模板或者镜像。但在 Docker 中,所有这些步骤都可以通过 Dockerfile 实现全自动。Dockerfile 是向 Docker 提供构建指令去构建定制镜像的方式。在这一章节,我们将编写能用来部署这个博客的定制 Dockerfile。
+
+### 理解应用
+
+我们开始构建 Dockerfile 之前,第一步要搞明白,我们需要哪些东西来部署这个博客。
+
+这个博客本质上是由一个静态站点生成器生成的静态 HTML 页面,这个生成器是我编写的,名为 **hamerkop**。这个生成器很简单,它所做的就是生成该博客站点。所有的代码和源文件都被我放在了一个公共的 [Github 仓库](https://github.com/madflojo/blog)。为了部署这篇博客,我们要先从 Github 仓库把这些内容拉取下来,然后安装 **Python** 和一些 **Python** 模块,最后执行 `hamerkop` 应用。我们还需要安装 **nginx**,来运行生成后的内容。
+
+截止目前,这些还是一个简单的 Dockerfile,但它却给我们展示了相当多的 [Dockerfile 语法](https://docs.docker.com/v1.8/reference/builder/)。我们需要克隆 Github 仓库,然后使用你最喜欢的编辑器编写 Dockerfile,我选择 `vi`
+
+```
+# git clone https://github.com/madflojo/blog.git
+Cloning into 'blog'...
+remote: Counting objects: 622, done.
+remote: Total 622 (delta 0), reused 0 (delta 0), pack-reused 622
+Receiving objects: 100% (622/622), 14.80 MiB | 1.06 MiB/s, done.
+Resolving deltas: 100% (242/242), done.
+Checking connectivity... done.
+# cd blog/
+# vi Dockerfile
+```
+
+### FROM - 继承一个 Docker 镜像
+
+第一条 Dockerfile 指令是`FROM`指令。这将指定一个现存的镜像作为我们的基础镜像。这也从根本上给我们提供了继承其他 Docker 镜像的途径。在本例中,我们还是从刚刚我们使用的 **nginx** 开始,如果我们想从头开始,我们可以通过指定 `ubuntu:latest` 来使用 **Ubuntu** Docker 镜像。
+
+```
+## Dockerfile that generates an instance of http://bencane.com
+
+FROM nginx:latest
+MAINTAINER Benjamin Cane
+```
+
+除了`FROM`指令,我还使用了`MAINTAINER`,它用来显示 Dockerfile 的作者。
+
+Docker 支持使用`#`作为注释,我将经常使用该语法,来解释 Dockerfile 的部分内容。
+
+### 运行一次测试构建
+
+因为我们继承了 **nginx** Docker 镜像,我们现在的 Dockerfile 也就包括了用来构建 **nginx** 镜像的 [Dockerfile](https://github.com/nginxinc/docker-nginx/blob/08eeb0e3f0a5ee40cbc2bc01f0004c2aa5b78c15/Dockerfile) 中所有指令。这意味着,此时我们可以从该 Dockerfile 中构建出一个 Docker 镜像,然后以该镜像运行一个容器。虽然最终的镜像和 **nginx** 镜像本质上是一样的,但是我们这次是通过构建 Dockerfile 的形式,然后我们将讲解 Docker 构建镜像的过程。
+
+想要从 Dockerfile 构建镜像,我们只需要在运行 `docker` 命令的时候,加上 `build` 选项。
+
+```
+# docker build -t blog /root/blog
+Sending build context to Docker daemon 23.6 MB
+Sending build context to Docker daemon
+Step 0 : FROM nginx:latest
+ ---> 9fab4090484a
+Step 1 : MAINTAINER Benjamin Cane
+ ---> Running in c97f36450343
+ ---> 60a44f78d194
+Removing intermediate container c97f36450343
+Successfully built 60a44f78d194
+```
+
+上面的例子,我们使用了`-t` (**tag**)标识给镜像添加“blog”的标签。实质上我们就是在给镜像命名,如果我们不指定标签,就只能通过 Docker 分配的 **Image ID** 来访问镜像了。本例中,从 Docker 构建成功的信息可以看出,**Image ID** 值为 `60a44f78d194`。
+
+除了`-t`标识外,我还指定了目录`/root/blog`。该目录被称作“构建目录”,它将包含 Dockerfile,以及其它需要构建该容器的文件。
+
+现在我们构建成功了,下面我们开始定制该镜像。
+
+### 使用 RUN 来执行 apt-get
+
+用来生成 HTML 页面的静态站点生成器是用 **Python** 语言编写的,所以,在 Dockerfile 中需要做的第一件定制任务是安装 Python。我们将使用 Apt 软件包管理器来安装 Python 软件包,这意味着在 Dockerfile 中我们要指定运行`apt-get update`和`apt-get install python-dev`;为了完成这一点,我们可以使用`RUN`指令。
+
+```
+## Dockerfile that generates an instance of http://bencane.com
+
+FROM nginx:latest
+MAINTAINER Benjamin Cane
+
+## Install python and pip
+RUN apt-get update
+RUN apt-get install -y python-dev python-pip
+```
+
+如上所示,我们只是简单地告知 Docker 构建镜像的时候,要去执行指定的`apt-get`命令。比较有趣的是,这些命令只会在该容器的上下文中执行。这意味着,即使在容器中安装了`python-dev`和`python-pip`,但主机本身并没有安装这些。说的更简单点,`pip`命令将只在容器中执行,出了容器,`pip`命令不存在。
+
+还有一点比较重要的是,Docker 构建过程中不接受用户输入。这说明任何被`RUN`指令执行的命令必须在没有用户输入的时候完成。由于很多应用在安装的过程中需要用户的输入信息,所以这增加了一点难度。不过我们例子中,`RUN`命令执行的命令都不需要用户输入。
+
+### 安装 Python 模块
+
+**Python** 安装完毕后,我们现在需要安装 Python 模块。如果在 Docker
外做这些事,我们通常使用`pip`命令,然后参考我的博客 Git 仓库中名叫`requirements.txt`的文件。在之前的步骤中,我们已经使用`git`命令成功地将 Github 仓库“克隆”到了`/root/blog`目录;这个目录碰巧也是我们创建`Dockerfile`的目录。这很重要,因为这意味着 Docker 在构建过程中可以访问这个 Git 仓库中的内容。 + +当我们执行构建后,Docker 将构建的上下文环境设置为指定的“构建目录”。这意味着目录中的所有文件都可以在构建过程中被使用,目录之外的文件(构建环境之外)是不能访问的。 + +为了能安装所需的 Python 模块,我们需要将`requirements.txt`从构建目录拷贝到容器中。我们可以在`Dockerfile`中使用`COPY`指令完成这一需求。 + +``` +## Dockerfile that generates an instance of http://bencane.com + +FROM nginx:latest +MAINTAINER Benjamin Cane + +## Install python and pip +RUN apt-get update +RUN apt-get install -y python-dev python-pip + +## Create a directory for required files +RUN mkdir -p /build/ + +## Add requirements file and run pip +COPY requirements.txt /build/ +RUN pip install -r /build/requirements.txt +``` + +在`Dockerfile`中,我们增加了3条指令。第一条指令使用`RUN`在容器中创建了`/build/`目录。该目录用来拷贝生成静态 HTML 页面所需的一切应用文件。第二条指令是`COPY`指令,它将`requirements.txt`从“构建目录”(`/root/blog`)拷贝到容器中的`/build/`目录。第三条使用`RUN`指令来执行`pip`命令;安装`requirements.txt`文件中指定的所有模块。 + +当构建定制镜像时,`COPY`是条重要的指令。如果在 Dockerfile 中不指定拷贝文件,Docker 镜像将不会包含requirements.txt 这个文件。在 Docker 容器中,所有东西都是隔离的,除非在 Dockerfile 中指定执行,否则容器中不会包括所需的依赖。 + +### 重新运行构建 + +现在,我们让 Docker 执行了一些定制任务,现在我们尝试另一次 blog 镜像的构建。 + +``` +# docker build -t blog /root/blog +Sending build context to Docker daemon 19.52 MB +Sending build context to Docker daemon +Step 0 : FROM nginx:latest + ---> 9fab4090484a +Step 1 : MAINTAINER Benjamin Cane + ---> Using cache + ---> 8e0f1899d1eb +Step 2 : RUN apt-get update + ---> Using cache + ---> 78b36ef1a1a2 +Step 3 : RUN apt-get install -y python-dev python-pip + ---> Using cache + ---> ef4f9382658a +Step 4 : RUN mkdir -p /build/ + ---> Running in bde05cf1e8fe + ---> f4b66e09fa61 +Removing intermediate container bde05cf1e8fe +Step 5 : COPY requirements.txt /build/ + ---> cef11c3fb97c +Removing intermediate container 9aa8ff43f4b0 +Step 6 : RUN pip install -r /build/requirements.txt + ---> Running in c50b15ddd8b1 +Downloading/unpacking jinja2 (from -r /build/requirements.txt (line 1)) +Downloading/unpacking PyYaml (from -r /build/requirements.txt (line 2)) + +Successfully installed jinja2 PyYaml mistune markdown MarkupSafe +Cleaning up... 
+ ---> abab55c20962 +Removing intermediate container c50b15ddd8b1 +Successfully built abab55c20962 +``` + +上述输出所示,我们可以看到构建成功了,我们还可以看到另外一个有趣的信息` ---> Using cache`。这条信息告诉我们,Docker 在构建该镜像时使用了它的构建缓存。 + +### Docker 构建缓存 + +当 Docker 构建镜像时,它不仅仅构建一个单独的镜像;事实上,在构建过程中,它会构建许多镜像。从上面的输出信息可以看出,在每一“步”执行后,Docker 都在创建新的镜像。 + +``` + Step 5 : COPY requirements.txt /build/ + ---> cef11c3fb97c +``` + +上面片段的最后一行可以看出,Docker 在告诉我们它在创建一个新镜像,因为它打印了**Image ID** : `cef11c3fb97c`。这种方式有用之处在于,Docker能在随后构建这个 **blog** 镜像时将这些镜像作为缓存使用。这很有用处,因为这样, Docker 就能加速同一个容器中新构建任务的构建流程。从上面的例子中,我们可以看出,Docker 没有重新安装`python-dev`和`python-pip`包,Docker 则使用了缓存镜像。但是由于 Docker 并没有找到执行`mkdir`命令的构建缓存,随后的步骤就被一一执行了。 + +Docker 构建缓存一定程度上是福音,但有时也是噩梦。这是因为决定使用缓存或者重新运行指令的因素很少。比如,如果`requirements.txt`文件发生了修改,Docker 会在构建时检测到该变化,然后 Docker 会重新执行该执行那个点往后的所有指令。这得益于 Docker 能查看`requirements.txt`的文件内容。但是,`apt-get`命令的执行就是另一回事了。如果提供 Python 软件包的 **Apt** 仓库包含了一个更新的 python-pip 包;Docker 不会检测到这个变化,转而去使用构建缓存。这会导致之前旧版本的包将被安装。虽然对`python-pip`来说,这不是主要的问题,但对使用了存在某个致命攻击缺陷的软件包缓存来说,这是个大问题。 + +出于这个原因,抛弃 Docker 缓存,定期地重新构建镜像是有好处的。这时,当我们执行 Docker 构建时,我简单地指定`--no-cache=True`即可。 + +## 部署博客的剩余部分 + +Python 软件包和模块安装后,接下来我们将拷贝需要用到的应用文件,然后运行`hamerkop`应用。我们只需要使用更多的`COPY` 和 `RUN`指令就可完成。 + +``` +## Dockerfile that generates an instance of http://bencane.com + +FROM nginx:latest +MAINTAINER Benjamin Cane + +## Install python and pip +RUN apt-get update +RUN apt-get install -y python-dev python-pip + +## Create a directory for required files +RUN mkdir -p /build/ + +## Add requirements file and run pip +COPY requirements.txt /build/ +RUN pip install -r /build/requirements.txt + +## Add blog code nd required files +COPY static /build/static +COPY templates /build/templates +COPY hamerkop /build/ +COPY config.yml /build/ +COPY articles /build/articles + +## Run Generator +RUN /build/hamerkop -c /build/config.yml +``` + +现在我们已经写出了剩余的构建指令,我们再次运行另一次构建,并确保镜像构建成功。 + +``` +# docker build -t blog /root/blog/ +Sending build context to Docker daemon 19.52 MB +Sending build context to Docker daemon +Step 0 : FROM nginx:latest + ---> 9fab4090484a +Step 1 : MAINTAINER Benjamin Cane + ---> Using cache + ---> 8e0f1899d1eb +Step 2 : RUN apt-get update + ---> Using cache + ---> 78b36ef1a1a2 +Step 3 : RUN apt-get install -y python-dev python-pip + ---> Using cache + ---> ef4f9382658a +Step 4 : RUN mkdir -p /build/ + ---> Using cache + ---> f4b66e09fa61 +Step 5 : COPY requirements.txt /build/ + ---> Using cache + ---> cef11c3fb97c +Step 6 : RUN pip install -r /build/requirements.txt + ---> Using cache + ---> abab55c20962 +Step 7 : COPY static /build/static + ---> 15cb91531038 +Removing intermediate container d478b42b7906 +Step 8 : COPY templates /build/templates + ---> ecded5d1a52e +Removing intermediate container ac2390607e9f +Step 9 : COPY hamerkop /build/ + ---> 59efd1ca1771 +Removing intermediate container b5fbf7e817b7 +Step 10 : COPY config.yml /build/ + ---> bfa3db6c05b7 +Removing intermediate container 1aebef300933 +Step 11 : COPY articles /build/articles + ---> 6b61cc9dde27 +Removing intermediate container be78d0eb1213 +Step 12 : RUN /build/hamerkop -c /build/config.yml + ---> Running in fbc0b5e574c5 +Successfully created file /usr/share/nginx/html//2011/06/25/checking-the-number-of-lwp-threads-in-linux +Successfully created file /usr/share/nginx/html//2011/06/checking-the-number-of-lwp-threads-in-linux + +Successfully created file /usr/share/nginx/html//archive.html +Successfully created file /usr/share/nginx/html//sitemap.xml + ---> 3b25263113e1 +Removing intermediate container fbc0b5e574c5 +Successfully 
built 3b25263113e1 +``` + +### 运行定制的容器 + +成功的一次构建后,我们现在就可以通过运行`docker`命令和`run`选项来运行我们定制的容器,和之前我们启动 nginx 容器一样。 + +``` +# docker run -d -p 80:80 --name=blog blog +5f6c7a2217dcdc0da8af05225c4d1294e3e6bb28a41ea898a1c63fb821989ba1 +``` + +我们这次又使用了`-d` (**detach**)标识来让Docker在后台运行。但是,我们也可以看到两个新标识。第一个新标识是`--name`,这用来给容器指定一个用户名称。之前的例子,我们没有指定名称,因为 Docker 随机帮我们生成了一个。第二个新标识是`-p`,这个标识允许用户从主机映射一个端口到容器中的一个端口。 + +之前我们使用的基础 **nginx** 镜像分配了80端口给 HTTP 服务。默认情况下,容器内的端口通道并没有绑定到主机系统。为了让外部系统能访问容器内部端口,我们必须使用`-p`标识将主机端口映射到容器内部端口。上面的命令,我们通过`-p 8080:80`语法将主机80端口映射到容器内部的80端口。 + +经过上面的命令,我们的容器看起来成功启动了,我们可以通过执行`docker ps`核实。 + +``` +# docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +d264c7ef92bd blog:latest nginx -g 'daemon off 3 seconds ago Up 3 seconds 443/tcp, 0.0.0.0:80->80/tcp blog +``` + +## 总结 + +截止目前,我们拥有了一个运行中的定制 Docker 容器。虽然在这篇文章中,我们只接触了一些 Dockerfile 指令用法,但是我们还是要学习所有的指令。我们可以检查 [Docker's reference page](https://docs.docker.com/v1.8/reference/builder/) 来获取所有的 Dockerfile 指令用法,那里对指令的用法说明得很详细。 + +另一个比较好的资源是 [Dockerfile Best Practices page](https://docs.docker.com/engine/articles/dockerfile_best-practices/),它有许多构建定制 Dockerfile 的最佳练习。有些技巧非常有用,比如战略性地组织好 Dockerfile 中的命令。上面的例子中,我们将`articles`目录的`COPY`指令作为 Dockerfile 中最后的`COPY`指令。这是因为`articles`目录会经常变动。所以,将那些经常变化的指令尽可能地放在最后面的位置,来最优化那些可以被缓存的步骤。 + +通过这篇文章,我们涉及了如何运行一个预构建的容器,以及如何构建,然后部署定制容器。虽然关于 Docker 你还有许多需要继续学习的地方,但我想这篇文章给了你如何继续开始的好建议。当然,如果你认为还有一些需要继续补充的内容,在下面评论即可。 + +-------------------------------------- +via: http://bencane.com/2015/12/01/getting-started-with-docker-by-dockerizing-this-blog/ + +作者:Benjamin Cane +译者:[su-kaiyao](https://github.com/su-kaiyao) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + diff --git a/translated/tech/20160214 How to Install MariaDB 10 on CentOS 7 CPanel Server.md b/published/201606/20160214 How to Install MariaDB 10 on CentOS 7 CPanel Server.md similarity index 86% rename from translated/tech/20160214 How to Install MariaDB 10 on CentOS 7 CPanel Server.md rename to published/201606/20160214 How to Install MariaDB 10 on CentOS 7 CPanel Server.md index 8d452af0ad..7bb0c3d00f 100644 --- a/translated/tech/20160214 How to Install MariaDB 10 on CentOS 7 CPanel Server.md +++ b/published/201606/20160214 How to Install MariaDB 10 on CentOS 7 CPanel Server.md @@ -1,8 +1,7 @@ - 在 CentOS 7 CPanel 服务器上安装 MariaDB 10 ================================================================================ -MariaDB 是一个增强版的,开源的并且可以直接替代 MySQL。它主要由 MariaDB 社区在维护,采用 GPL v2 授权许可。软件的安全性是 MariaDB 开发者的主要焦点。他们保持为 MariaDB 的每个版本发布安全补丁。当有任何安全问题被发现时,开发者会尽快修复并推出 MariaDB 的新版本。 +MariaDB 是一个增强版的、开源的 MySQL 替代品。它主要由 MariaDB 社区在维护,采用 GPL v2 授权许可。软件的安全性是 MariaDB 开发者的主要焦点。他们保持为 MariaDB 的每个版本发布安全补丁。当有任何安全问题被发现时,开发者会尽快修复并推出 MariaDB 的新版本。 ### MariaDB 的优势 ### @@ -12,7 +11,7 @@ MariaDB 是一个增强版的,开源的并且可以直接替代 MySQL。它主 - 性能更好 - 比 MySQL 的存储引擎多 -在这篇文章中,我将谈论关于如何升级 MySQL5.5 到最新的 MariaDB 在CentOS7 CPanel 服务器上。在安装前先完成以下步骤。 +在这篇文章中,我将谈论关于如何在 CentOS7 CPanel 服务器上升级 MySQL5.5 到最新的 MariaDB 。在安装前先完成以下步骤。 ### 先决条件: ### @@ -62,7 +61,7 @@ MariaDB 是一个增强版的,开源的并且可以直接替代 MySQL。它主 #### 3. 
从服务器上删除和卸载 MySQL 所有的 RPM 包 #### -运行以下命令来禁用 MySQL 的 RPM 的目标。通过运行此命令,cPanel 将不再处理 MySQL 的更新,并在系统上将卸载的标记为 rpm.versions。 +运行以下命令来禁用 MySQL RPM 的目标(target)。通过运行此命令,cPanel 将不再处理 MySQL 的更新,并在系统上将这些 RPM 版本标记为已卸载。 /scripts/update_local_rpm_versions --edit target_settings.MySQL50 uninstalled /scripts/update_local_rpm_versions --edit target_settings.MySQL51 uninstalled @@ -72,7 +71,8 @@ MariaDB 是一个增强版的,开源的并且可以直接替代 MySQL。它主 现在运行以下命令: /scripts/check_cpanel_rpms --fix --targets=MySQL50,MySQL51,MySQL55,MySQL56 -移除服务器上所有已存在的 MySQL rpms 来为 MariaDB 的安装清理环境。请看下面的输出: + +移除服务器上所有已有的 MySQL RPM 来为 MariaDB 的安装清理环境。请看下面的输出: root@server1 [/var/lib/mysql]# /scripts/check_cpanel_rpms --fix --targets=MySQL50,MySQL51,MySQL55,MySQL56 [2016-01-31 09:53:59 +0000] @@ -97,9 +97,9 @@ MariaDB 是一个增强版的,开源的并且可以直接替代 MySQL。它主 [2016-01-31 09:54:04 +0000] Removed symlink /etc/systemd/system/multi-user.target.wants/mysql.service. [2016-01-31 09:54:04 +0000] Restoring service monitoring. -通过这些步骤,我们已经卸载了现有的 MySQL RPMs,并做了标记来防止 MySQL的更新,服务器的环境已经清理然后准备安装 MariaDB。 +通过这些步骤,我们已经卸载了现有的 MySQL RPM,并做了标记来防止 MySQL的更新,服务器的环境已经清理,然后准备安装 MariaDB。 -开始安装吧,我们需要在 CentOS 为 MariaDB 创建一个 yum 软件库。下面是我的做法! +开始安装吧,我们需要根据 CentOS 和 MariaDB 的版本为 MariaDB 创建一个 yum 软件库。下面是我的做法! ### 安装步骤: ### @@ -120,18 +120,20 @@ MariaDB 是一个增强版的,开源的并且可以直接替代 MySQL。它主 #### 第2步:打开 /etc/yum.conf 并修改如下行: #### -**Remove this line** exclude=courier* dovecot* exim* filesystem httpd* mod_ssl* mydns* mysql* nsd* php* proftpd* pure-ftpd* spamassassin* squirrelmail* +**删除这一行:** -**And replace with this line** exclude=courier* dovecot* exim* filesystem httpd* mod_ssl* mydns* nsd* proftpd* pure-ftpd* spamassassin* squirrelmail* + exclude=courier* dovecot* exim* filesystem httpd* mod_ssl* mydns* mysql* nsd* php* proftpd* pure-ftpd* spamassassin* squirrelmail* -**\*\*\* IMPORTANT \*\*\*** +**替换为:** + + exclude=courier* dovecot* exim* filesystem httpd* mod_ssl* mydns* nsd* proftpd* pure-ftpd* spamassassin* squirrelmail* + +**重要** 需要确保我们已经从 exclude 列表中移除了 MySQL 和 PHP。 #### 第3步:运行以下命令来安装 MariaDB 和相关的包。 #### -**yum install MariaDB-server MariaDB-client MariaDB-devel php-mysql** - root@server1 [~]#yum install MariaDB-server MariaDB-client MariaDB-devel php-mysql Dependencies Resolved @@ -174,7 +176,7 @@ MariaDB 是一个增强版的,开源的并且可以直接替代 MySQL。它主 #### 第5步:运行 mysql_upgrade 命令。 #### -它将检查所有数据库中的所有表与当前安装的版本是否兼容并在必要时会更新系统表采取新的特权或功能,可能会增加当前版本的性能。 +它将检查所有数据库中的所有表与当前安装的版本是否兼容,并在必要时会更新系统表,以赋予当前版本新增加的权限或能力。 root@server1 [~]# mysql_upgrade @@ -254,7 +256,7 @@ MariaDB 是一个增强版的,开源的并且可以直接替代 MySQL。它主 Phase 6/6: Running 'FLUSH PRIVILEGES' OK -#### 第6步:再次重新启动MySQL的服务,以确保一切都运行完好。 #### +#### 第6步:再次重新启动 MySQL 的服务,以确保一切都运行完好。 #### root@server1 [~]# systemctl restart mysql root@server1 [~]# @@ -274,17 +276,18 @@ MariaDB 是一个增强版的,开源的并且可以直接替代 MySQL。它主 Jan 31 10:04:11 server1.centos7-test.com mysql[23854]: Starting MySQL. SUCCESS! Jan 31 10:04:11 server1.centos7-test.com systemd[1]: Started LSB: start and stop MySQL. 
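+
+此时也可以顺手检查一下客户端报告的版本,确认正在运行的已经是 MariaDB。下面用的是 MySQL/MariaDB 客户端自带的版本开关,输出中的具体版本号只是示例,以你实际安装的版本为准:
+
+    root@server1 [~]# mysql -V
+    mysql  Ver 15.1 Distrib 10.1.x-MariaDB, for Linux (x86_64) using readline 5.1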
-#### 第7步:运行 EasyApache 用 MariaDB 重建 Apache/PHP,并确保所有 PHP 的模块保持不变。#### +#### 第7步:运行 EasyApache,重建 Apache/PHP 以支持 MariaDB,并确保所有 PHP 的模块保持不变。#### root@server1 [~]#/scripts/easyapache --build - ****IMPORTANT ***** - If you forget to rebuild Apache/PHP after the MariaDB installation, it will report the library error as below: +**重要** + +如果你在安装 MariaDB 之后忘记重建 Apache/PHP,将会报如下库错误: root@server1 [/etc/my.cnf.d]# php -v php: error while loading shared libraries: libmysqlclient.so.18: cannot open shared object file: No such file or directory -#### 第8步:现在验证安装的数据库。 #### +#### 第8步:现在验证安装的程序和数据库。 #### root@server1 [/var/lib/mysql]# mysql Welcome to the MariaDB monitor. Commands end with ; or \g. @@ -313,13 +316,14 @@ MariaDB 是一个增强版的,开源的并且可以直接替代 MySQL。它主 10 rows in set (0.00 sec) 就这样 :)。现在,我们该去欣赏 MariaDB 完善和高效的特点了。希望你喜欢阅读本文。希望留下您宝贵的建议和反馈! + -------------------------------------------------------------------------------- via: http://linoxide.com/how-tos/install-mariadb-10-centos-7-cpanel/ 作者:[Saheetha Shameer][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201606/20160217 How to Enable Multiple PHP-FPM Instances with Nginx or Apache.md b/published/201606/20160217 How to Enable Multiple PHP-FPM Instances with Nginx or Apache.md new file mode 100644 index 0000000000..c9809b68e1 --- /dev/null +++ b/published/201606/20160217 How to Enable Multiple PHP-FPM Instances with Nginx or Apache.md @@ -0,0 +1,121 @@ +如何启用 Nginx/Apache 的 PHP-FPM 多实例 +================================================================================ + +PHP-FPM 作为 FastCGI 进程管理器而广为熟知,它是 PHP FastCGI 实现的改进,带有更为有用的功能,用于处理高负载的服务器和网站。下面列出其中一些功能: + +### 新功能 ### + + - 拥有具有优雅(graceful)启动/停止选项的高级进程管理能力。 + - 可以通过不同的用户身份/组身份来以监听多个端口以及使用多个PHP配置。 + - 错误日志记录。 + - 支持上传加速。 + - 特别用于在处理一些耗时任务时结束请求和清空所有数据的功能。 + - 同时支持动态和静态的子进程重生。 + - 支持IP地址限制。 + +在本文中,我将要讨论的是,在运行 CPanel 11.52 及 EA3 (EasyApache)的 CentOS 7 服务器上,于 Nginx 和 Apache 之上安装 PHP-FPM,以及如何来通过 CPanel 管理这些安装好的多个 PHP-FPM 实例。 + +在我们开始安装前, 先看看安装的先决条件。 + +### 先决条件 ### + + 1. 启用 Mod_proxy_fcgi 模块 + 2. 
启用 MPM_Event + +由于我们要将 PHP-FPM 安装到一台 EA3 服务器,我们需要运行 EasyApache 来编译 Apache 以启用这些模块。 + +你们可以参考我以前写的,关于如何在 Apache 服务器上安装 Nginx 作为反向代理的文档来了解 Nginx 的安装。 + +这里,我将再次简述那些安装步骤。具体细节,你可以参考我之前写的**(如何在 CentOS 7/CPanel 服务器上配置 Nginx 反向代理)**一文。 + +- 步骤 1:安装 Epel 仓库 +- 步骤 2:安装 nDeploy RPM 仓库,这是此次安装中最为**重要**的步骤。 +- 步骤 3:使用 yum 从 nDeploy 仓库安装 nDeploy 和 Nginx 插件。 +- 步骤 4:启用/配置 Nginx 为反向代理。 + +完成这些步骤后,下面为服务器中所有可用 PHP 版本安装 PHP-FPM 包,EA3 使用 remi 仓库来安装这些包。你可以运行这个 nDeploy 脚本来下载所有的包。 + + root@server1 [~]# /opt/nDeploy/scripts/easy_php_setup.sh + Loaded plugins: fastestmirror, tsflags, universal-hooks + EA4 | 2.9 kB 00:00:00 + base | 3.6 kB 00:00:00 + epel/x86_64/metalink | 9.7 kB 00:00:00 + epel | 4.3 kB 00:00:00 + extras | 3.4 kB 00:00:00 + updates | 3.4 kB 00:00:00 + (1/2): epel/x86_64/updateinfo | 460 kB 00:00:00 + (2/2): epel/x86_64/primary_db + +运行该脚本将为 PHP 54,PHP 55,PHP 56 和 PHP 70 安装所有这些 FPM 包。 + + Installed Packages + php54-php-fpm.x86_64 5.4.45-3.el7.remi @remi + php55-php-fpm.x86_64 5.5.31-1.el7.remi @remi + php56-php-fpm.x86_64 5.6.17-1.el7.remi @remi + php70-php-fpm.x86_64 7.0.2-1.el7.remi @remi + +在以上安装完成后,你需要为 Apache 启用 PHP-FPM SAPI。你可以运行下面这个脚本来启用 PHP-FPM 实例。 + + root@server1 [~]# /opt/nDeploy/scripts/apache_php-fpm_setup.sh enable + mod_proxy_fcgi.c + Please choose one default PHP version from the list below + PHP70 + PHP56 + PHP54 + PHP55 + Provide the exact desired version string here and press ENTER: PHP54 + ConfGen:: lxblogger + ConfGen:: blogr + ConfGen:: saheetha + ConfGen:: satest + which: no cagefsctl in (/usr/local/jdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/local/bin:/usr/X11R6/bin:/root/bin) + info [rebuildhttpdconf] Missing owner for domain server1.centos7-test.com, force lookup to root + Built /usr/local/apache/conf/httpd.conf OK + Waiting for “httpd” to restart gracefully …waiting for “httpd” to initialize …… + …finished. 
+ +它会问你需要运行哪个 PHP 版本作为服务器默认版本,你可以输入那些细节内容,然后继续配置并为现存的域名生成虚拟主机文件。 + +我选择了 PHP 54 作为我服务器上的默认 PHP-FPM 版本。 + +![confirm-php-fpm](http://blog.linoxide.com/wp-content/uploads/2016/01/confirm-php-fpm-1024x525.png) + +虽然服务器配置了 PHP-FPM 54,但是我们可以通过 CPanel 为各个独立的域名修改 PHP-FPM 实例。 + +下面我将通过一些截图来为你们说明一下,怎样通过 CPanel 为各个独立域修改 PHP-FPM 实例。 + +安装了 Nginx 插件后,你的域名的 CPanel 就会有一个 Nginx Webstack 图标,你可以点击该图标来配置你的 Web 服务器。我已经登录进了我其中的一个 CPanel 来配置相应的 Web 服务器。 + +请看这些截图。 + +![nginx webstack](http://blog.linoxide.com/wp-content/uploads/2016/01/nginx-webstack.png) + +![nginxicon1](http://blog.linoxide.com/wp-content/uploads/2016/01/nginxicon1-1024x253.png) + +现在,你可以根据需要为选中的主域配置 web 服务器(这里,我已经选择了主域 saheetha.com)。我已经继续通过自动化配置选项来进行了,因为我不需要添加任何手动设置。 + +![nginx_auto_proxy](http://blog.linoxide.com/wp-content/uploads/2016/01/nginx_auto_proxy-1024x408.png) + +当 Nginx 配置完后,你可以在这里为你的域名选择 PHP-FPM 实例。 + +![php-fpm1](http://blog.linoxide.com/wp-content/uploads/2016/01/php-fpm1-1024x408.png) + +![php54](http://blog.linoxide.com/wp-content/uploads/2016/01/php54-1024x169.png) + +![php55](http://blog.linoxide.com/wp-content/uploads/2016/01/php55.png) + +就像你在截图中所看到的,我服务器上的默认 PHP-FPM 是**PHP 54**,而我正要将我的域名的 PHP-FPM 实例单独修改成 **PHP 55**。当你为你的域修改 PHP-FPM 后,你可以通过访问 **phpinfo** 页面来确认。 + +谢谢你们参考本文,我相信这篇文章会给你提供不少信息和帮助。我会为你们推荐关于这个内容的有价值的评论 :)。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/enable-multiple-php-fpm-instances-nginx-apache/ + +作者:[Saheetha Shameer][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/saheethas/ diff --git a/published/201606/20160218 7 Steps to Start Your Linux SysAdmin Career.md b/published/201606/20160218 7 Steps to Start Your Linux SysAdmin Career.md new file mode 100644 index 0000000000..0f46f091d3 --- /dev/null +++ b/published/201606/20160218 7 Steps to Start Your Linux SysAdmin Career.md @@ -0,0 +1,59 @@ +七步开始你的 Linux 系统管理员生涯 +=============================================== + +Linux 现在是个大热门。每个人都在寻求 Linux 才能。招聘人员对有 Linux 经验的人求贤若渴,还有无数的职位虚位以待。但是如果你是 Linux 新手,又想要赶上这波热潮,该从何开始下手呢? 
+ +###1、安装 Linux### + +这应该是不言而喻的,但学习 Linux 的第一关键就是安装 Linux。LFS101x 和 LFS201 课程都包含第一次安装和配置 Linux 的详细内容。 + +###2、 完成 LFS101x 课程### + +如果你是完完全全的 Linux 新手,最佳的起点是我们的免费 Linux 课程 [LFS101x Introduction to Linux](https://www.edx.org/course/introduction-linux-linuxfoundationx-lfs101x-2)。这个在线课程放在 edX.org,探索 Linux 系统管理员和终端用户常用的各种工具和技能以及日常的 Linux 工作环境。该课程是为有一定经验,但较少或没有接触过 Linux 的电脑用户设计的,不论他们是在个人还是企业环境中工作。这个课程会从图形界面和命令行两个方面教会你有用的 Linux 知识,让你能够了解主流的 Linux 发行版。 + +###3、 看看 LFS201 课程### + +在你完成 LFS101x 之后,你就可以开始挑战 Linux 中更加复杂的任务了,这是成为一名专业的系统管理员所必须的。为了掌握这些技能,你应该看看 [LFS201 Essentials of Linux System Administration](http://training.linuxfoundation.org/linux-courses/system-administration-training/essentials-of-system-administration) 这个课程。该课程对每个话题进行了深度的解释和介绍,还有大量的练习和实验,帮助你获得相关主题实际的上手经验。 + +如果你更愿意有个教练,或者你的雇主想将你培养成 Linux 系统管理员的话,你可能会对 LFS220 Linux System Administration 感兴趣。这个课程有 LFS201 中所有的主题,但是它是由专家专人教授的,帮助你进行实验以及解答你在课程主题中的问题。 + +###4、 练习!### + +熟能生巧,和对任何乐器或运动适用一样,这对 Linux 来说也一样适用。在你安装 Linux 之后,经常使用它。一遍遍地练习关键任务,直到你不需要参考材料也能轻而易举地完成。练习命令行的输入输出以及图形界面。这些练习能够保证你掌握成为成功的 Linux 系统管理员所必需的知识和技能。 + +###5、 获得认证### + +在你完成 LFS201 或 LFS220 并且充分练习之后,你现在已经准备好获得系统管理员的认证了。你需要这个证书,因为你需要向雇主证明你拥有一名专业 Linux 系统管理员必需的技能。 + +现在有一些不同的 Linux 证书,它们每个都有其独到之处。但是,它们里大部分不是在特定发行版(如红帽)上认证,就是纯粹的知识测试,没有演示 Linux 的实际技能。Linux 基金会认证系统管理员(Linux Foundation Certified System Administrator)证书对想要一个灵活的,有意义的初级证书的人来说是个不错的选择。 + +###6、 参与进来### + +如果你所在的地方有本地 Linux 用户组(Linux Users Group,LUG)的话,这时候你可以考虑加入他们。这些组织通常由各种年龄和经验水平的人组成,所以不管你的 Linux 经验水平如何,你都能找到和你类似技能水平的人互助,或是更高水平的 Linux 用户来解答你的问题以及介绍有用的资源。要想知道你附近有没有 LUG,上 meet.com 看看,或是附近的大学,又或是上网搜索一下。 + +还有不少在线社区可以在你学习 Linux 的时候帮助你。这些站点和社区向 Linux 新手和有经验的管理员都能够提供帮助和支持: + + - [Linux Admin subreddit](https://www.reddit.com/r/linuxadmin) + - [Linux.com](http://www.linux.com/) + - [training.linuxfoundation.org](http://training.linuxfoundation.org/) + - [http://community.ubuntu.com/help-information/](http://community.ubuntu.com/help-information/) + - [https://forums.opensuse.org/forum.php](https://forums.opensuse.org/forum.php) + - [http://wiki.centos.org/Documentation](http://wiki.centos.org/Documentation) + +###7、 学会热爱文档### + +最后但同样重要的是,如果你困在 Linux 的某些地方,别忘了 Linux 包含的文档。使用命令 man(manual,手册),info 和 help,你从系统内就可以找到 Linux 几乎所有方面的信息。这些内置资源的用处再夸大也不为过,你会发现你在生涯中始终会用到,所以你可能最好早点掌握使用它们。 + +想要了解更多开始你 Linux IT 生涯的信息?查看我们免费的电子书“[开始你 Linux IT 生涯的简短指南](http://training.linuxfoundation.org/sysadmin-it-career-guide)”。 + +------------------------------------------------------------------------------ + +via: http://www.linux.com/news/featured-blogs/191-linux-training/834644-7-steps-to-start-your-linux-sysadmin-career + +作者:[linux.com][a] +译者:[alim0x](https://github.com/alim0x) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:linux.com diff --git a/published/201606/20160218 9 Key Trends in Hybrid Cloud Computing.md b/published/201606/20160218 9 Key Trends in Hybrid Cloud Computing.md new file mode 100644 index 0000000000..92fba66440 --- /dev/null +++ b/published/201606/20160218 9 Key Trends in Hybrid Cloud Computing.md @@ -0,0 +1,74 @@ +混合云计算的 9 大关键趋势 +======================================== + +自从几年前云计算的概念受到IT界的关注以来,公有云、私有云和混合云这三种云计算方式都有了可观的演进。其中混合云计算方式是最热门的云计算方式,在接受调查的公司中,有[88%的公司](https://www.greenhousedata.com/blog/hybrid-continues-to-be-most-popular-cloud-option-adoption-accelerating)将混合云计算摆在至关重要的地位。 + +混合云计算的疾速演进意味着一两年前的传统观念已经过时了。为此,我们询问了几个行业分析师,混合云在2016年的走势将会如何,我们得到了几个比较有意思的答案。 + +1. 
**2016年可能是我们将混合云投入使用的一年。** + +混合云从本质上来说依赖于私有云,这对企业来说是比较难实现的。事实上,亚马逊,谷歌和微软的公有云已经进行了大量的投资,并且起步也比较早。私有云拖了混合云发展和使用的后腿。 + +私有云没有得到这么多的投资,这是有私有云的性质决定的。私有云意味着维护和投资你自己的数据中心。而许多公有云提供商正在推动企业减少或者消除他们的数据中心。 + +然而,得益于 OpenStack 的发展和微软的 Azure Stack ,这两者基本上就是封装在一个盒子里的私有云,我们将会看到私有云慢慢追上公有云的发展步伐。支持混合云的工具、基础设施和架构也变得更加健壮。 + +2. **容器,微服务和 unikernels 将会促进混合云的发展。** + +分析师预言,到2016年底,这些原生云技术会或多或少成为主流的。这些云技术正在快速成熟,将会成为虚拟机的一个替代品,而虚拟机需要更多的资源。 + +更重要的是,他们既能工作在在线场景,也能工作在离线场景。容器化和编排允许快速的扩大规模,进行公有云和私有云之间的服务迁移,使你能够更容易移动你的服务。 + +3. **数据和相关性占据核心舞台。** + +所有的云计算方式都处在发展模式。这使得云计算变成了一个技术类的故事。咨询公司 [Avoa](http://avoa.com/2016/01/01/2016-is-the-year-of-data-and-relevance/)称,随着云趋于成熟,数据和相关性变得越来越重要。起初,云计算和大数据都是关于怎么得到尽可能多的数据,然后他们担心如何处理这海量的数据。 + +2016年,相关组织将会继续锤炼如何进行数据收集和使用的相关技术。在必须处理的技术和文化方面仍然有待提高。但是2016年应该重新将关注点放在从各个方面考虑的数据重要性上,发现最相关的信息,而不只是数据的数量。 + +4. **云服务将超越按需工作负载。** + +AWS(Amazon Web Services) 起初是提供给程序员或者是开发人员能够快速启动虚拟机、做一些工作然后离线的一个地方。本质上是按需使用,要花费更多的钱才能让这些服务持续运行、全天候工作。 + +然而,IT 公司正开始作为服务代理,为内部用户提供各种 IT 服务。可以是内部 IT 服务,公有云基础架构提供商,平台服务和软件服务。 + +他们将越来越多的认识到像云管理平台这样的工具的价值。云管理平台可以提供针对不同服务的基于策略的一致性管理。他们也将看到像提高可移植性的容器等技术的价值。然而,云服务代理,在不同云之间快速移动工作负载从而进行价格套利或者类似的原因,仍然是行不通的。 + +5. **服务提供商转变成了云服务提供商。** + +到目前为止,购买云服务成了直销模式。AWS EC2 服务的使用者通常变成了购买者,要么通过官方认证渠道,要么通过影子 IT。但是随着云服务越来越全面,提供的服务菜单越来越复杂,越来越多的人转向了经销商,服务提供商转变成了他们 IT 服务的购买者。 + +2nd Watch (2nd Watch 是为企业提供云管理的 AWS 的首选合作伙伴)最近的一项调查发现,在美国将近85%的 IT 高管愿意支付一个小的溢价从渠道商那里购买公有云服务,如果购买过程变得不再那么复杂。根据调查,这85%的高管有五分之四的愿意支付额外的15%或者更多。三分之一的受访高管表示,这些有助于他们购买、使用和管理公有云服务。 + +6. **物联网和云对于2016年的意义好比移动和云对2012年的意义。** + +物联网获得了广泛的关注,更重要的是,物联网已经从测试场景进行了实际应用。云的分布式特性使得云成为了物联网非常重要的一部分,对于工业物联网,与后端系统交互的机械和重型设备,混合云将会成为最自然的驱动者,连接,数据采集和处理将会发生在混合云环境中,这得益于私有云在安全和隐私方面的好处。 + +7. **NIST 对云的定义开始瓦解。** + +2011年,美国国家标准与技术研究院发布了“[ NIST 对于云计算的定义](http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf)”(PDF),这个定义成为了私有云、公有云、混合云和 aaS 模板的标准定义。 + +然而随着时间的推移,定义开始改变。IaaS 变得更加复杂,开始支持 OpenStack,[Swift](https://wiki.openstack.org/wiki/Swift) 对象存储和神经网络这样的项目。PaaS 似乎正在消退,因为 PaaS 和传统的中间件开发几乎无异。SaaS,只是通过浏览器进行访问的应用,也正在失去发展动力,因为许多 app 和服务提供了许多云接口,你可以通过各种手段调用接口,不仅仅通过浏览器。 + +8. **分析变得更加重要** + +对于混合云计算来说,分析将会成为一个巨大的增长机遇,云计算具有规模大、灵活性高的优势,使得云计算非常适合需要海量数据的分析工作。对于某些分析方式,比如高度敏感的数据,私有云仍然是主导地位,但是私有云也是混合云的一部分。因此,无论如何,混合云计算胜出。 + +9. 
**安全仍然是一个非常紧迫的问题。** + +随着混合云计算在2016年的发展,以及对物联网和容器等新技术的引进,这同时也增加了更多的脆弱可攻破的地方,从而导致数据泄露。先增加使用新技术的趋势,然后再去考虑安全性,这种问题经常发生,同时还有缺少经验的工程师不去考虑系统的安全问题,总有一天你会尝到灾难的后果的。 + +当一项新技术出来,管理规范总是落后于安全问题产生后,然后我们才考虑去保护技术。容器就是一个很鲜明的例子。你可以从 Docker 下载各种示例容器,但是你知道你下载的东西来自哪里么?在人们在对容器内容不知情的情况下下载并运行了容器之后,Docker 不得不重新加上安全验证。 + +像 Path 和 Snapchat 这样的移动技术在智能手机市场火起来之后也出现了重大的安全问题。一项新技术被恶意利用无可避免。所以安全研究人员需要通过各种手段来保证新技术的安全性,很有可能在部署之后才会发现安全问题。 + +------------------------------------------------------------------------------ + +via: http://www.datamation.com/cloud-computing/9-key-trends-in-hybrid-cloud-computing.html + +作者:[Andy Patrizio][a] +译者:[棣琦](https://github.com/sonofelice) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.datamation.com/author/Andy-Patrizio-90720.html diff --git a/translated/tech/20160218 A Linux-powered microwave oven.md b/published/201606/20160218 A Linux-powered microwave oven.md similarity index 54% rename from translated/tech/20160218 A Linux-powered microwave oven.md rename to published/201606/20160218 A Linux-powered microwave oven.md index 57aaace3a7..4b0c132694 100644 --- a/translated/tech/20160218 A Linux-powered microwave oven.md +++ b/published/201606/20160218 A Linux-powered microwave oven.md @@ -1,9 +1,9 @@ -一个Linux驱动的微波炉 +一个 Linux 驱动的微波炉 ================================================================================ -[linux.conf.au](http://linux.conf.au/)里的人们都有一种想到什么就动手去实现的想法。随着硬件开源运动不断地发展壮大,这种想法越来越多,与现实世界联系的越来越紧密,而不仅仅存在于数字世界中。David Tulloh用他制作的[Linux驱动的微波炉 [WebM]](http://mirror.linux.org.au/linux.conf.au/2016/04_Thursday/D4.303_Costa_Theatre/Linux_driven_microwave.webm)来展示一个差劲的微波炉会多么难用以及说明他的项目可以改造这些微波炉使得它们不那么讨人厌。 +[linux.conf.au](http://linux.conf.au/)里的人们都有一种想到什么就动手去实现的想法。随着硬件开源运动不断地发展壮大,这种想法越来越多,也与现实世界联系的越来越紧密,而不仅仅存在于数字世界中。David Tulloh用他制作的[Linux驱动的微波炉 [WebM]](http://mirror.linux.org.au/linux.conf.au/2016/04_Thursday/D4.303_Costa_Theatre/Linux_driven_microwave.webm)来展示一个差劲的微波炉会多么难用,以及说明他的项目可以改造这些微波炉使得它们不那么讨人厌。 -Tulloh的故事要从他买到了一个公认很便宜的微波炉开始说起,它的用户界面比其它微波炉默认的还要糟糕。设定时间时必须使劲按按钮以至于把微波炉都向后推了一段距离——而事实上必须要用力拉仓门把手才能把微波炉拖回原来的位置,这形成了一个“优雅”的平衡。当然这只是极端情况。Tulloh很郁闷因为这个微波炉近十年来都没有一丁点明显的改善。他可能买到了一个又小又便宜的微波炉,而且特点是大部分人不研究使用手册就不会使用它——和智能手机的对比更加明显:智能手机只需知道一点点的操作指南并且被广泛使用。 +Tulloh的故事要从他买到了一个公认很便宜的微波炉开始说起,它的用户界面比其它微波炉默认的还要糟糕。设定时间时必须使劲按按钮以至于把微波炉都向后推了一段距离——而事实上必须要用力拉仓门把手才能把微波炉拖回原来的位置,这形成了一个“优雅”的平衡。当然这只是极端情况。Tulloh郁闷的是因为这个微波炉近十年来都没有一丁点明显的改善。他可能买到了一个又小又便宜的微波炉,而且特点是大部分人不研究使用手册就不会使用它——和智能手机的对比更加明显:智能手机只需知道一点点的操作指南并且被广泛使用。 改造这个微波炉不一定没有前途,“让微波炉重获新生”——这个想法成为了一个原型,如果Tulloh可以再平衡一下想做的功能和需求之间的关系的话他希望这变成一个众筹项目:一个Linux驱动的微波炉。 @@ -11,35 +11,29 @@ Tulloh的故事要从他买到了一个公认很便宜的微波炉开始说起 ## 加一点新奇的小玩意 +如果把“Linux”和“微波炉”联系在一起的话,你可能会想到给微波炉加上一个智能手机式的触摸屏和网络链接,然后再通过社区做一款微波炉的“革命性”的手机应用,想到这些就像做菜想到分享食谱一样显而易见。但Tulloh的目标和他的原型远远超过这些,他做了两个新奇的功能——热感相机和称量物体质量的称重装置。 +这个热感相机提供一个可以精确到两度的八乘八像素的图像,这足够发现一杯牛奶是否加热到沸腾或者牛排是否解冻到快不能用来烹饪。不论发生哪种情况,都可以减小功率或者关掉它。而且在必要的时候会发出警报。这可能不是第一个可以检测温度的微波炉——GE在十年前就开始卖带温度探针的微波炉了——但是一个一直工作的内置传感器比一个手工探针有用多了,尤其是有一个可用的API支持的时候。 -如果你把“Linux”和“微波炉”联系在一起的话就可能想到给微波炉加上一个智能手机式的触摸屏和网络链接,然后再通过社区做一款微波炉的“革命性”的手机应用,想到这些就像做菜想到分享食谱一样显而易见。但Tulloh的目标和他的原型远远超过这些,他做了两个新奇的功能——热感相机和称量物体质量的称重装置。 +第二个新发明是一个嵌入的称重装置,它可以在加热之前称量食物(和容器)。很多食谱根据质量大小给出指导的烹饪时间,很多微波炉支持你手动输入质量以便它帮你计算。利用内置的称重装置,这一过程可以变成自动化的。在许多微波炉的转盘下面稳固地放置一个称重装置是一个机械方面的挑战,不过Tulloh觉得这个问题不难处理。相反,他对微波炉的设计是基于“平板”或者“平板挂车”的风格——在四角各放置一个传感器,这不仅在机械实现上很简单而且很好的达到了要求。 
-这个热感相机提供一个精确度两自由度的八乘八像素的图片,这足够发现一杯牛奶是否加热到沸腾或者牛排是否解冻到快不能用来烹饪。不论发生哪种情况,功率都可以减小或者关掉。而且在必要的时候会发出警报。这可能不是第一个可以检测温度的微波炉——GE在十年前就开始卖带温度探针的微波炉了——但是一个一直工作的内置传感器比一个手工探针有用多了尤其是有一个可用的API支持的时候。 - -第二个新发明是一个嵌入的称重装置,它可以在加热之前称量食物(和容器)。很多食谱根据质量给出指导的烹饪时间,很多微波炉支持你手动输入质量以便它帮你计算。利用内置的称重装置,这一过程可以变成自动化的。在许多微波炉的转盘下面稳固地放置一个称重装置是一个机械方面的挑战不过Tulloh觉得这个问题不难处理。反而他对微波炉的设计是基于“平板”或者“平板挂车”的风格——在四角各放置一个传感器,这不仅在机械实现上很简单而且很好的达到了要求。 - - -[用户界面] -一旦你有了这些额外添加的并与逻辑引擎相连的质量温度传感器,你可以去尝试更多好玩的可能。一杯刚从冰箱里拿出来的冰牛奶的质量温度分布可能会有适度误差。Tulloh发现这种情况可以被检测到而且提供一些有关的像“煮沸”或者“加热”的选项也是容易做到的(下面有一个模拟的界面,可点击操作的版本请点击右边链接 [here](http://mwgui.tulloh.id.au/)) +一旦你有了这些额外添加的并与逻辑引擎相连的质量温度传感器,你可以去尝试更多好玩的可能。一杯刚从冰箱里拿出来的冰牛奶的质量温度分布可能会有适度误差。Tulloh发现可以监检测到这种情况,而且提供一些有关的像“煮沸”或者“加热”的选项也是容易做到的(下面有一个模拟的界面,可点击操作的版本请点击右边链接 [here](http://mwgui.tulloh.id.au/)) ![](https://static.lwn.net/images/2016/lca-ovengui-sm.png) ## 改造陈旧的东西 -除了才开发出来的新功能,Tulloh还想要提升那些原本就提供的功能。可能不是所有微波炉的门把手都像Tulloh那个廉价的一样僵硬,但是很少有微波炉将把手设计的让残疾人也能轻松使用。这些缺陷都是可调整的,尤其是在美国,微波炉应该在仓门关闭的时候给出一个确定关闭的提示。这种确认必须是可靠的以预防那些伪劣产品,所以在仓门闭合时固定的槽位里添加一个短杆以确认仓门开闭状态,不误使微波炉在仓门开着的时候工作。事实上,必须要两个相互联系的机关,如果他们提供的结果不一致, -保险丝必须断开以便启动一个呼叫服务。Tulloh认为提供一个磁力门闩有更大的灵活性(包含简单的软件控制)并且像磁控也同样用于[磁性钥匙锁](https://en.wikipedia.org/wiki/Magnetic_keyed_lock),它可以让磁力门闩确认微波炉门是否关闭。 +除了才开发出来的新功能,Tulloh还想要提升那些原本就提供的功能。可能不是所有微波炉的门把手都像Tulloh那个廉价的一样僵硬,但是很少有微波炉将把手设计的让残疾人也能轻松使用。这些缺陷都是可调整的,尤其是在美国,微波炉应该在仓门关闭的时候给出一个确定关闭的提示。这种确认必须是可靠的以预防那些伪劣产品,所以在仓门闭合时固定的槽位里添加一个短杆以确认仓门开闭状态,不至于误使微波炉在仓门开着的时候工作。事实上,必须要有两个相互联系的机关,如果他们提供的结果不一致,保险丝必须断开以便启动一个呼叫服务。Tulloh认为提供一个磁力门闩有更大的灵活性(包含简单的软件控制)并且像磁控也同样用于[磁性钥匙锁](https://en.wikipedia.org/wiki/Magnetic_keyed_lock),它可以让磁力门闩确认微波炉门是否关闭。 微波炉的另一个痛点是它会发出令人厌烦的声音。Tulloh去掉了蜂鸣器并且使用香蕉派(类似于树莓派的单片机开发板)控制他的微波炉。这可以通过一个把文本转换成语音的系统来用令人愉悦而且可配置的警报来提示和引导使用者。显然,下一步就是装上一个用来控制声音的扩音器。 - -许多微波炉除了定时和设置功率档位之外还可以做更多的事情——它们为烹饪,加热,化冻等提供一系列的功率谱。加上一个精确的温度测量装置感觉会为这个图表大大扩展它们的序列。Andrew Tridgell对一个问题很好奇,加热巧克力——一个需要非常精确的温度控制的过程——是否是可能的。Tulloh没有过这方面的经验,他不敢保证这个一定可以,但是这个实验结果的确值得期待。即使没做成这件事,它也显出了潜在价值——社区接下来可以更进一步去做这件事。 +许多微波炉除了定时和设置功率档位之外还可以做更多的事情——它们为烹饪,加热,化冻等提供一系列的功率档位。加上一个精确的温度测量装置感觉会为这个图表大大扩展这个列表。Andrew Tridgell对一个问题很好奇,加热巧克力——一个需要非常精确的温度控制的过程——是否是可能的。Tulloh没有过这方面的经验,他不敢保证这个一定可以,但是这个实验结果的确值得期待。即使没做成这件事,它也显出了潜在价值——社区接下来可以更进一步去做这件事。 ## 实用性怎么样? 
Tulloh十分乐意向全世界分享这个linux驱动的微波炉,他希望看到(因为这件事)形成一个社区并且想看到它接下来的走势。买一个现成的微波炉并且替换掉里面的电子元件看起来不是一个可行的点子。最后的结果可能会很糟,而买一个小巧智能的微波炉必然要花掉(比自己改造)更多的钱,但是潜在的顾客不想在他们的厨房里看到乱七八糟又不协调的东西。 -许多零件都是现成的可以买到的(磁电管,处理器板,热传感器等等),像USB接口的热传感器,而且都很容易安装。软件原型当然也开源在[GitHub](https://github.com/lod?tab=repositories)。这个样例和微波炉门有不小的挑战性并且很可能要定制。Tulloh想要通过提供左侧开仓门的微波炉和颜色多样化的选项来转逆境为机遇。 +许多零件都是现成的可以买到的(磁电管,处理器板,热传感器等等),像USB接口的热传感器,而且都很容易安装。软件原型当然也开源在[GitHub](https://github.com/lod?tab=repositories)。微波炉的舱室和门有不小的挑战性并且很可能要定制。Tulloh想要通过提供左侧开仓门的微波炉和颜色多样化的选项来转逆境为机遇。 一个对读者的快速调查:很少有人会贸然承诺他会为了一个全新的升级过的烤箱付出1000澳大利亚元。当然,很难知道是否会有充足的时间和足够多的读者来完成这个调查。这整个项目看起来很有趣。所以Tulloh的[博客](http://david.tulloh.id.au/category/microwave/) (点击这里)也很值得一看。 @@ -48,8 +42,6 @@ Tulloh十分乐意向全世界分享这个linux驱动的微波炉,他希望看 via: https://lwn.net/Articles/674877/ 作者:Neil Brown -译者:yuba0604(https://github.com/yuba0604) - -译者水平有限,敬请指正。(lizhengyu@gmail.com) +译者:[yuba0604](https://github.com/yuba0604) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20160218 Getting to Know Linux File Permissions.md b/published/201606/20160218 Getting to Know Linux File Permissions.md similarity index 62% rename from translated/tech/20160218 Getting to Know Linux File Permissions.md rename to published/201606/20160218 Getting to Know Linux File Permissions.md index 2730806d61..99390b69ae 100644 --- a/translated/tech/20160218 Getting to Know Linux File Permissions.md +++ b/published/201606/20160218 Getting to Know Linux File Permissions.md @@ -1,34 +1,29 @@ -初识Linux文件权限 +初识 Linux 文件权限 ================================================================================ -在 Linux 中最基本的任务之一就是设置文件权限。理解如何实现是你进入 LInux 世界的第一步。如您所料,这一基本操作在类 UNIX 操作系统中大同小异。实际上,Linux 文件权限系统就直接取自于 UNIX 文件权限(甚至使用许多相同的工具)。 +在 Linux 中最基本的任务之一就是设置文件权限。理解如何实现是你进入 Linux 世界的第一步。如您所料,这一基本操作在类 UNIX 操作系统中大同小异。实际上,Linux 文件权限系统就直接取自于 UNIX 文件权限(甚至使用许多相同的工具)。 ![](http://www.linux.com/images/stories/66866/files_a.png) 但不要以为理解文件权限需要长时间的学习。事实上会很简单,让我们一起来看看你需要了解哪些内容以及如何使用它们。 -##基础概念 +###基础概念 -你要明白的第一件事是文件权限适用于什么。你做的更有效的就是设置一个分组的权限。当你将其分解,那这个概念就真的简单多了。那到底什么是权限,什么是分组呢? +你要明白的第一件事是文件权限可以用来干什么。当你设置一个分组的权限时发生了什么?让我们将其展开来说,这个概念就真的简单多了。那到底什么是权限,什么是分组呢? 你可以设置的3种权限: -- 读 — 允许该组读文件(用`r`表示) +- 读 — 允许该分组读文件(用`r`表示) +- 写 — 允许该分组写文件(用`w`表示) +- 执行 — 允许该分组执行(运行)文件(用`x`表示) -- 写 — 允许该组写文件(用`w`表示) - -- 执行 — 允许该组执行(运行)文件(用`x`表示) - -为了更好地解释这如何应用于一个分组,例如,你允许一个分组读和写一个文件,但不能执行。或者,你可以允许一个组读和执行一个文件,但不能写。甚至你可以允许一组有读、写、执行全部的权限,也可以删除全部权限来剥夺组权限。 +为了更好地解释这如何应用于一个分组,例如,你允许一个分组可以读写一个文件,但不能执行。或者,你可以允许一个分组读和执行一个文件,但不能写。甚至你可以允许一个分组有读、写、执行全部的权限,也可以删除全部权限来去除该组的权限。 现在,什么是分组呢,有以下4个: - user — 文件实际的拥有者 - -- group — 用户所在的组 - +- group — 用户所在的用户组 - others — 用户组外的其他用户 - - all — 所有用户 大多数情况,你只会对前3组进行操作,`all` 这一组只是作为快捷方式(稍后我会解释)。 @@ -37,103 +32,98 @@ 如果你打开一个终端并运行命令 `ls -l`,你将会看到逐行列出当前工作目录下所有的文件和文件夹的列表(如图1). 
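+
+例如,一个典型的输出可能像下面这样(其中的用户名 jack、文件名和日期都只是虚构的示例,你机器上列出的内容自然会不同):
+
+    drwxrwxr-x 2 jack jack 4096 Feb 17 16:35 Documents
+    -rw-rw-r-- 1 jack jack  855 Feb 17 16:36 notes.txt
+    -rw-rw-r-- 1 jack jack  124 Feb 17 16:40 script.sh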
-你会留意到最左边那列是像 `-rw-rw-r--` 这样的。 +你会留意到最左边那列是像是 `-rw-rw-r--` 这样的。 -实际上这列表该这样看的: +实际上这列表应该这样看: ->rw- rw- r-- +> rw- rw- r-- 正如你所见,列表将其分为如下3部分: - rw- - - rw- - - r-- -权限和组的顺序都很重要,顺序总是: +权限和组的顺序都很重要,顺序总是: - 所属者 所属组 其他人 — 分组 - - 读 写 执行 — 权限 -在我们上面示例的权限列表中,所属者拥有读/写权限,所属组拥有读/写权限,其他人用户仅拥有读权限。这些分组中赋予执行权限的话,就用一个x表示。 +在我们上面示例的权限列表中,所属者拥有读/写权限,所属组拥有读/写权限,其他人用户仅拥有读权限。这些分组中赋予执行权限的话,就用一个 x 表示。 -## 等效数值 +### 等效数值 接下来我们让它更复杂一些,每个权限都可以用一个数字表示。这些数字是: - 读 — 4 - - 写 — 2 - - 执行— 1 数值代替不是一个一个的替换,你不能像这样: ->-42-42-4-- +> -42-42-4-- 你该把每个分组的数值相加,给用户读和写权限,你该用 4 + 2 得到 6。给用户组相同的权限,也是使用相同的数值。假如你只想给其他用户读的权限,那就设置它为4。现在用数值表示为: ->664 +> 664 -如果你想给一个文件664权限,你可以使用chmod命令,如: +如果你想给一个文件664权限,你可以使用 `chmod` 命令,如: ->chmod 664 FILENAME + chmod 664 FILENAME FILENAME 处为文件名。 -## 更改权限 +### 更改权限 -既然你已经理解了文件权限,那是时候学习如何更改这些权限了。就是使用chmod命令来实现。第一步你要知道你能否更改文件权限,你必须是文件的所有者或者有权限编辑文件(或者使用su或sudo进行操作)。正因为这样,你不能随意切换目录和更改文件权限。 +既然你已经理解了文件权限,那是时候学习如何更改这些权限了。就是使用 `chmod` 命令来实现。第一步你要知道你能否更改文件权限,你必须是文件的所有者或者有权限编辑文件(或者通过 `su` 或 `sudo` 得到权限)。正因为这样,你不能随意切换目录和更改文件权限。 继续用我们的例子 (`-rw-rw-r--`)。假设这个文件(命名为 script.sh)实际是个shell脚本,需要被执行,但是你只想让自己有权限执行这个脚本。这个时候,你可能会想:“我需要是文件的权限如 `-rwx-rw-r--`”。为了设置 `x` 权限位,你可以这样使用 `chmod` 命令: ->chmod u+x script.sh + chmod u+x script.sh 这时候,列表中显示的应该是 -rwx-rw-r-- 。 如果你想同时让用户及其所属组同时拥有执行权限,命令应该这样: ->chmod ug+x script.sh + chmod ug+x script.sh 明白这是怎么工作的了吗?下面我们让它更有趣些。不管什么原因,你不小心给了所有分组对文件的执行权限(列表中是这样的 `-rwx-rwx-r-x`)。 如果你想去除其他用户的执行权限,只需运行命令: ->chmod o-x script.sh + chmod o-x script.sh 如果你想完全删除文件的可执行权限,你可以用两种方法: ->chmod ugo-x script.sh + chmod ugo-x script.sh 或者 ->chmod a-x script.sh + chmod a-x script.sh -以上就是所有内容,能使操作更有效率。我希望能避免哪些可能会导致一些问题的操作(例如你不小心对 script.sh 使用 `a-rwx` 这样的chmod命令)。 +以上就是所有内容,能使操作更有效率。我希望能避免哪些可能会导致一些问题的操作(例如你不小心对 script.sh 使用 `a-rwx` 这样的 `chmod` 命令)。 -## 目录权限 +### 目录权限 你也可以对一个目录执行 `chmod` 命令。当你作为用户创建一个新的目录,通常新建目录具有这样的权限: ->drwxrwxr-x +> drwxrwxr-x 注:开头的 `d` 表示这是一个目录。 -正如你所见,用户及其所在组都对文件夹具有操作权限,但这并不意味着在这文件夹中出创建的问价也具有与其相同的权限(创建的文件使用默认系统的权限 `-rw-rw-r--`)。但如果你想在新文件夹中创建文件,并且移除用户组的写权限,你不用切换到该目录下并对所有文件使用chmod命令。你可以用加上参数R(意味着递归)的 `chmod` 命令,同时更改该文件夹及其目录下所有的文件的权限。 +正如你所见,用户及其所在组都对文件夹具有操作权限,但这并不意味着在这文件夹中出创建的文件也具有与其相同的权限(创建的文件使用默认系统的权限 `-rw-rw-r--`)。但如果你想在新文件夹中创建文件,并且移除用户组的写权限,你不用切换到该目录下并对所有文件使用 `chmod` 命令。你可以用加上参数 R(意味着递归)的 `chmod` 命令,同时更改该文件夹及其目录下所有的文件的权限。 现在,假设有一文件夹 TEST,里面有一些脚本,所有这些(包括 TEST 文件夹)拥有权限 `-rwxrwxr-x`。如果你想移除用户组的写权限,你可以运行命令: ->chmod -R g-w TEST + chmod -R g-w TEST 运行命令 `ls -l`,你讲看到列出的 TEST 文件夹的权限信息是 `drwxr-xr-x`。用户组被去除了写权限(其目录下的所有文件也如此)。 -## 总结 +### 总结 -现在,你应该对基本的Linux文件权限有了深入的理解。对于更高级的东西学起来会很轻松,像`setid`,`setuid` 和 `ACLs` 这些。没有良好的基础,你很快就会混淆不清概念的。 +现在,你应该对基本的 Linux 文件权限有了深入的理解。对于更高级的东西学起来会很轻松,像 setgid、setuid 和 ACL 这些。没有良好的基础,你很快就会混淆不清概念的。 -Linux 文件权限从早期到现在没有太大变化,而且很可能以后也不会。 +Linux 文件权限从早期到现在没有太大变化,而且很可能以后也不会变化。 ------------------------------------------------------------------------------ @@ -141,7 +131,7 @@ via: http://www.linux.com/learn/tutorials/885268-getting-to-know-linux-file-perm 作者:[Jack Wallen][a] 译者:[ynmlml](https://github.com/ynmlml) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20160218 Linux Systems Patched for Critical glibc Flaw.md b/published/201606/20160218 Linux Systems Patched for Critical glibc Flaw.md similarity index 55% rename from translated/tech/20160218 Linux Systems Patched for Critical glibc Flaw.md rename to published/201606/20160218 Linux Systems Patched for Critical glibc Flaw.md index 284ac99fa0..e6a008f7bc 100644 --- 
a/translated/tech/20160218 Linux Systems Patched for Critical glibc Flaw.md +++ b/published/201606/20160218 Linux Systems Patched for Critical glibc Flaw.md @@ -1,39 +1,41 @@ 修补 Linux 系统 glibc 严重漏洞 ================================================= -**谷歌揭露的一个严重漏洞影响主流的 Linux 发行版。glibc 的漏洞可能导致远程代码执行。** +**谷歌披露的一个严重漏洞影响到了主流的 Linux 发行版。glibc 的漏洞可能导致远程代码执行。** -Linux 用户今天都竞相给一个可以使系统暴露在远程代码执行风险中的核心 glibc 开放源码库的严重漏洞打补丁。glibc 的漏洞被确定为 CVE-2015-7547,题为“getaddrinfo 基于堆栈的缓冲区溢出”。 +编者按:这个消息并不是一个新闻,基于技术的原因,我们还是分享给大家。 + +Linux 用户都在竞相给一个可以使系统暴露在远程代码执行风险中的核心 glibc 开放源码库的严重漏洞打补丁。这个 glibc 的漏洞编号被确定为 CVE-2015-7547,题为“getaddrinfo 基于堆栈的缓冲区溢出”。 glibc,或 GNU C 库,是一个开放源码的 C 和 C++ 编程语言库的实现,是每一个主流 Linux 发行版的一部分。谷歌工程师们在他们试图连接到某个主机系统时发生了一个段错误导致连接崩溃,偶然发现了 CVE-2015-7547 问题。进一步的研究表明, glibc 有缺陷而且该崩溃可能实现任意远程代码执行的条件。 -谷歌在一篇博客文章中写道, “当 getaddrinfo() 库函数被使用时,glibc 的 DNS 客户端解析器易受基于堆栈缓冲区溢出的攻击,使用该功能的软件可能被利用为攻击者控制的域名,攻击者控制的 DNS[域名系统] 服务器,或通过中间人攻击。” +谷歌在一篇博客文章中写道, “当 getaddrinfo() 库函数被使用时,glibc 的 DNS 客户端解析器易受基于堆栈缓冲区溢出的攻击,使用该功能的软件可能通过攻击者控制的域名、攻击者控制的 DNS [域名系统] 服务器,或通过中间人攻击方式(MITM)进行破坏。” 其实利用 CVE-2015-7547 问题并不简单,但它是可能的。为了证明这个问题能被利用,谷歌发布了论证一个终端用户或系统是否易受攻击的概念验证(POC)代码到 GitHub 上。 -GitHub 上的 POC 网页声明“服务器代码触发漏洞,因此会使客户端代码崩溃”。 +GitHub 上的 POC 网页说“服务器代码会触发漏洞,因此会使客户端代码崩溃”。 -Duo Security 公司的高级安全研究员 Mark Loveless 解释说 CVE-2015-7547 的主要风险在于 Linux 上依赖于 DNS 响应的基于客户端的应用程序。 +Duo Security 公司的高级安全研究员 Mark Loveless 解释说 CVE-2015-7547 的主要风险在于依赖于 DNS 响应的基于 Linux 客户端的应用程序。 Loveless 告诉 eWEEK “需要一些特定的条件,所以不是每个应用程序都会受到影响,但似乎一些命令行工具,包括流行的 SSH[安全 Shell] 客户端都可能触发该漏洞,我们认为这是严重的,主要是因为对 Linux 系统存在的风险,但也因为潜在的其他问题。” -其他问题可能包括一种触发调用易受攻击的 glibc 库 getaddrinfo() 的基于电子邮件攻击的风险。另外值得注意的是,该漏洞被发现之前已存在于代码之中多年。 +其他问题可能包括一种通过电子邮件触发调用易受攻击的 glibc 库 getaddrinfo() 攻击的风险。另外值得注意的是,该漏洞被发现之前已存在于代码之中多年。 -谷歌的工程师不是第一或唯一发现 glibc 中的安全风险的团体。这个问题于 2015 年 7 月 13 日首先被报告给了 glibc 的 bug[跟踪系统](https://sourceware.org/bugzilla/show_bug.cgi?id=1866)。该缺陷的根源可以更进一步追溯到在 2008 五月发布的 glibc 2.9 的代码提交时首次引入缺陷。 +谷歌的工程师不是第一或唯一发现这个 glibc 安全风险的团体。这个问题于 2015 年 7 月 13 日首先被报告给了 glibc 的 bug[跟踪系统](https://sourceware.org/bugzilla/show_bug.cgi?id=1866)。该缺陷的根源可以更进一步追溯到在 2008 五月发布的 glibc 2.9 的代码提交时首次引入缺陷。 -Linux 厂商红帽也独立找到了 glibc 中的这个 bug,而且在 2016 年 1 月 6 日,谷歌和红帽开发人员证实,他们作为最初与上游 glibc 的维护者私下讨论的部分人员,已经独立在为同一个漏洞工作。 +Linux 厂商红帽也独立找到了 glibc 中的这个 bug,而且是在 2016 年 1 月 6 日,谷歌和红帽开发人员证实,他们作为最初与上游 glibc 的维护者私下讨论的部分人员,已经独立在为同一个漏洞工作。 -红帽产品安全首席软件工程师 Florian Weimer 告诉 eWEEK “一旦确认了两个团队都在为同一个漏洞工作,我们合作进行可能的修复,缓解措施和回归测试,我们还共同努力,使测试覆盖尽可能广,捕捉代码中的任何相关问题,以帮助避免今后更多问题。” +红帽产品安全首席软件工程师 Florian Weimer 告诉 eWEEK “一旦确认了两个团队都在为同一个漏洞工作,我们会合作进行可能的修复,缓解措施和回归测试,我们还会共同努力,使测试覆盖尽可能广,捕捉代码中的任何相关问题,以帮助避免今后更多问题。” 由于缺陷不明显或不易立即显现,我们花了几年时间才发现 glibc 代码有一个安全问题。 -Weimer 说“要诊断一个网络组件的漏洞,如 DNS 解析器,当遇到问题时通常要看被抓数据包的踪迹,在这种情况下这样的抓包不适用,所以需要一些实验来重现触发这个 bug 的确切场景。” +Weimer 说“要诊断一个网络组件的漏洞,如 DNS 解析器,当遇到问题时通常要看抓到的数据包的踪迹,在这种情况下这样的抓包不适用,所以需要一些实验来重现触发这个 bug 的确切场景。” -Weimer 补充说,一旦可以抓取数据包,大量精力投入到验证修复程序中,最终导致回归测试套件一系列的改进,有助于上游 glibc 项目。 +Weimer 补充说,一旦可以抓取数据包,就会投入大量精力到验证修复程序中,最终完成回归测试套件一系列的改进,有助于上游 glibc 项目。 -在许多情况下,安全增强式 Linux (SELinux) 的强制访问安全控制可以减少潜在漏洞风险,除了这个 glibc 的新问题。 +在许多情况下,安全增强式 Linux (SELinux) 的强制访问安全控制可以减少潜在漏洞风险,但是这个 glibc 的新问题例外。 -Weimer 说“由于攻击者提供的任意代码的执行,风险是重要系统功能的一个妥协。一个合适的 SELinux 策略可以遏制一些攻击者可能会做的损害,并限制他们访问系统,但是 DNS 被许多应用程序和系统组件使用,所以 SELinux 策略只提供了针对此问题有限的遏制。” +Weimer 说“由于攻击者提供的任意代码的执行,会对很多重要系统功能带来风险。一个合适的 SELinux 策略可以遏制一些攻击者可能会做的损害,并限制他们访问系统,但是 DNS 被许多应用程序和系统组件使用,所以 SELinux 策略只提供了针对此问题有限的遏制。” 在揭露漏洞的今天,现在有一个可用的补丁来减少 CVE-2015-7547 的潜在风险。 @@ -43,7 +45,7 @@ via: http://www.eweek.com/security/linux-systems-patched-for-critical-glibc-flaw 作者:[Michael Kerner][a] 
译者:[robot527](https://github.com/robot527) -校对:[校对者 ID](https://github.com/校对者 ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux 中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20160218 Top 4 open source issue tracking tools.md b/published/201606/20160218 Top 4 open source issue tracking tools.md similarity index 68% rename from translated/tech/20160218 Top 4 open source issue tracking tools.md rename to published/201606/20160218 Top 4 open source issue tracking tools.md index 1ffd21a759..03435d1913 100644 --- a/translated/tech/20160218 Top 4 open source issue tracking tools.md +++ b/published/201606/20160218 Top 4 open source issue tracking tools.md @@ -1,13 +1,13 @@ -开源问题跟踪管理工具Top4 +4 个顶级的开源问题跟踪管理工具 ======================================== 生活充满了bug。 -无论怎样小心计划,无论花多少时间去设计,在执行阶段当轮胎压在路上,任何工程都会有未知的问题。也无妨。也许对于任何一个组织的最佳弹性衡量不是他们如何一切都按计划运行地处理事情,而是,当出现磕磕碰碰时他们如何驾驭。 +无论怎样小心计划,无论花多少时间去设计,在执行阶段实际执行时,任何工程都会有未知的问题。也无妨。也许对于任何一个组织的最佳弹性衡量不是他们如何一切都按计划运行地处理事情,而是,当出现磕磕碰碰时他们如何驾驭。 对任何一个项目管理流程来说,特别是在软件开发领域,都需要一个关键工具——问题跟踪管理系统。其基本功能很简单:可以对bug进行查看、追踪,并以协作的方式解决bug,有了它,我们更容易跟随整个过程的进展。除了基本功能,还有很多专注于满足特定需求的选项及功能,使用场景不仅限于软件开发。你可能已经熟悉某些托管版本的工具,像 [GitHub Issues](https://guides.github.com/features/issues/)或者[Launchpad](https://launchpad.net/),其中一些工具已经有了自己的开源社区。 -接下来,这四个bug问题跟踪管理软件的极佳备选,全部开源、易于下载,自己就可以部署。先说好,我们可能没有办法在这里列出每一个问题跟踪工具;相反,我们列出这四个,基于的是其丰富的功能和项目背后的社区规模。当然,肯定还有其他类似软件,如果你喜欢的没有列在这里,如果你有一个好的理由,一定要让我们知道,在下面的评论中使它脱颖而出吧。 +接下来,这四个bug问题跟踪管理软件的极佳备选,全部开源、易于下载,自己就可以部署。先说好,我们可能没有办法在这里列出每一个问题跟踪工具;相反,我们列出这四个,原因基于是其丰富的功能和项目背后的社区规模。当然,肯定还有其他类似软件,如果你喜欢的没有列在这里,如果你有一个好的理由,一定要让我们知道,在下面的评论中使它脱颖而出吧。 ## Redmine @@ -15,7 +15,7 @@ Redmine的设置相当灵活,支持多种数据库后端和几十种语言,还是可定制的,可以向问题(issue)、用户、工程等添加自定义字段。通过社区创建的插件和主题它可以进一步定制。 -如果你想试一试,一个[在线演示](http://demo.redmine.org/)可提供使用。Redmine在开源[GPL版本2](http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html)下许可;开源代码可以在工程的[svn仓库](https://svn.redmine.org/redmine)或在[GitHub](https://github.com/redmine/redmine)镜像上找到。 +如果你想试一试,一个[在线演示](http://demo.redmine.org/)可提供使用。Redmine采用[GPL版本2](http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html)许可证;开源代码可以在工程的[svn仓库](https://svn.redmine.org/redmine)或在[GitHub](https://github.com/redmine/redmine)镜像上找到。 ![](https://opensource.com/sites/default/files/images/business-uploads/issues-redmine.png) @@ -25,7 +25,7 @@ Redmine的设置相当灵活,支持多种数据库后端和几十种语言, 从通知到重复bug检测再到搜索共享,Bugzilla拥有许多高级工具,是一个功能更丰富的选项。Bugzilla拥有一套高级搜索系统以及全面的报表工具,具有生成图表和自动化按计划生成报告的能力。像Redmine一样,Bugzilla是可扩展和可定制的,除了字段本身,还能针对bug创建自定义工作流。它也支持多种后端数据库,和自带的多语言支持。 -Bugzilla在[Mozilla公共许可证](https://en.wikipedia.org/wiki/Mozilla_Public_License)下许可,你可以读取他们的[未来路线图](https://www.bugzilla.org/status/roadmap.html)还有在官网尝试一个[示例服务](https://landfill.bugzilla.org/) +Bugzilla采用[Mozilla公共许可证](https://en.wikipedia.org/wiki/Mozilla_Public_License),你可以读取他们的[未来路线图](https://www.bugzilla.org/status/roadmap.html)还有在官网尝试一个[示例服务](https://landfill.bugzilla.org/) ![](https://opensource.com/sites/default/files/images/business-uploads/issues-bugzilla.png) @@ -35,7 +35,7 @@ Bugzilla在[Mozilla公共许可证](https://en.wikipedia.org/wiki/Mozilla_Public 由python编写的Trac,将其漏洞跟踪能力与它的wiki系统和版本控制系统轻度整合。项目管理能力突出,如生成里程碑和路线图,一个可定制的报表系统,大事记,支持多资源库,内置的垃圾邮件过滤,还可以使用很多通用语言。如其他我们已经看到的漏洞追踪软件,有很多插件可进一步扩展其基本特性。 -Trac是在改进的[BSD许可](http://trac.edgewall.org/wiki/TracLicense)下获得开放源码许可,虽然更老的版本发布在GPL下。你可以在一个[自托管仓库](http://trac.edgewall.org/browser)预览Trac的源码或者查看他们的[路线图](http://trac.edgewall.org/wiki/TracRoadmap)对未来的规划。 
+Trac以[改进的BSD许可协议](http://trac.edgewall.org/wiki/TracLicense)开源,虽然更老的版本是以GPL发布的。你可以在其[自托管仓库](http://trac.edgewall.org/browser)中预览Trac的源码,或者查看他们的[路线图](http://trac.edgewall.org/wiki/TracRoadmap)了解未来的规划。

![](https://opensource.com/sites/default/files/images/business-uploads/issues-trac.png)

## Mantis

[Mantis](https://www.mantisbt.org/)是这次盘点中我们将看到的最后一个工具,它基于PHP,已有16年历史。作为另一个支持多种不同版本控制系统和事件驱动通知系统的bug跟踪管理软件,Mantis的功能集与其他几个工具类似。虽然它本身不包含wiki,但它整合了很多流行的wiki平台,并且已被本地化为多种语言。

-Mantis在[GPL版本2](http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html)下获得开源许可证书;你可以在[GitHub](https://github.com/mantisbt/mantisbt)浏览他的源代码或查看自托管[路线图](https://www.mantisbt.org/bugs/roadmap_page.php?project=mantisbt&version=1.3.x)对未来的规划。一个示例,你可以查看他们的内部[漏洞跟踪](https://www.mantisbt.org/bugs/my_view_page.php)。
+Mantis使用[GPL版本2](http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html)开源许可证;你可以在[GitHub](https://github.com/mantisbt/mantisbt)浏览它的源代码,或查看其自托管的[路线图](https://www.mantisbt.org/bugs/roadmap_page.php?project=mantisbt&version=1.3.x)了解未来的规划。作为示例,你可以看看他们内部使用的[漏洞跟踪系统](https://www.mantisbt.org/bugs/my_view_page.php)。

![](https://opensource.com/sites/default/files/images/business-uploads/issues-mantis.png)

-正如我们指出的,这四个不是唯一的选项。想要探索更多?[Apache Bloodhound](https://issues.apache.org/bloodhound/),[Fossil](http://fossil-scm.org/index.html/doc/trunk/www/index.wiki),[The Bug Genie](http://www.thebuggenie.com/),还有很多可替换品都有专注的追随者,每个都有不同的优点和缺点。另外,一些工具在我们[项目管理](https://opensource.com/business/15/1/top-project-management-tools-2015)摘要有问题跟踪功能。所以,哪个是你首选的跟踪和碾压bug的工具?
+正如我们指出的,这四个并不是仅有的选项。想要探索更多?[Apache Bloodhound](https://issues.apache.org/bloodhound/)、[Fossil](http://fossil-scm.org/index.html/doc/trunk/www/index.wiki)、[The Bug Genie](http://www.thebuggenie.com/) 以及很多其他替代品都有自己的忠实追随者,每个都各有优缺点。另外,我们的[项目管理](https://opensource.com/business/15/1/top-project-management-tools-2015)工具盘点中提到的一些工具也带有问题跟踪功能。所以,哪个是你首选的跟踪和消灭bug的工具?
------------------------------------------------------------------------------ diff --git a/translated/tech/20160221 NXP unveils a tiny 64-bit ARM processor for the Internet of Things.md b/published/201606/20160221 NXP unveils a tiny 64-bit ARM processor for the Internet of Things.md similarity index 64% rename from translated/tech/20160221 NXP unveils a tiny 64-bit ARM processor for the Internet of Things.md rename to published/201606/20160221 NXP unveils a tiny 64-bit ARM processor for the Internet of Things.md index 6711f5d4a6..d31ad4eaea 100644 --- a/translated/tech/20160221 NXP unveils a tiny 64-bit ARM processor for the Internet of Things.md +++ b/published/201606/20160221 NXP unveils a tiny 64-bit ARM processor for the Internet of Things.md @@ -1,21 +1,20 @@ -NXP 揭幕了一块超小型物联网64位ARM处理器 +NXP 发布了一块超小型物联网64位 ARM 处理器 ========================================================================= -**标签**:[ARM][1], [物联网][2], [NXP][3], [NXP 半导体][4] ![](http://1u88jj3r4db2x4txp44yqfj1.wpengine.netdna-cdn.com/wp-content/uploads/2016/02/nxp-930x556.jpg) -[NXP 半导体][5]揭幕了一块声称世界上最小的用于物联网(IoT)的低功耗64位ARM处理器。 +[NXP 半导体][5]发布了一块声称世界上最小的用于物联网(IoT)的低功耗64位ARM处理器。 -这片小型的QorIQ LS1012A为电池供电,大小受限的应用提供了网络级的安全和性能加速。这包括了运行物联网应用,或者任何智能及可连接的设备。如果物联网能在2020达到1.7万亿美金的潜力(由IDC研究员估算市场得出),那么它将需要像NXP这样的处理器,该处理器在德国纽伦堡的Embedded World 2016 上揭开了什么的面纱。 +这片小型的QorIQ LS1012A为电池供电、大小受限的应用提供了网络级的安全和性能加速。这包括了运行物联网应用,或者任何智能及可连接的设备。如果物联网能在2020达到1.7万亿美金的潜力(由IDC研究员估算市场得出),那么它将需要像NXP这样的处理器,该处理器在德国纽伦堡的Embedded World 2016 上揭开了什么的面纱。 该芯片带有64位ARMv8芯片,拥有网络包加速及内置的安全。它占用9.6平方毫米的空间,并且大约消耗1瓦特的电力。潜在的应用包括下一代的物联网网关、可携带娱乐平台、高性能可携带存储应用、移动硬盘、相机的移动存储、平板及其他可充电的设备。 -除此之外,LS1012A是第一款为新起的基于对象的存储方案设计的处理器,基于对象存储基于智能硬盘,它直接连接到以太网数据中心。处理器必须足够小才能直接集成在硬盘的集成电路上。 +除此之外,LS1012A是第一款为最新兴起的基于对象的存储方案设计的处理器,基于对象存储通过智能硬盘直接连接到以太网数据中心。处理器必须足够小才能直接集成在硬盘的集成电路上。 -NXP的高级副总裁及数字网络部的经理Tareq Bustami说:“低功耗、占用空间小及网络级性能这些突破性组合的NXP LS1012处理器是消费者、物联网相关应用的理想选择。独有的混合能力解放了物联网设计者及开发者使得他们可以在这个高增长的市场中想象并创造更多创新产品。” +NXP的高级副总裁及数字网络部的经理Tareq Bustami说:“突破性组合了低功耗、占用空间小及网络级性能的NXP LS1012处理器是消费者、物联网相关应用的理想选择。独有地将这些能力结合到一起解放了物联网设计者及开发者,使得他们可以在这个高增长的市场中设计并创造更多创新产品。” -NXP说这是唯一一个能够结合全面的高速外围在一个芯片中的1瓦特、64位处理器,这意味着低系统功耗。归功于创新的封装,该处理器可以在低成本的电路板中布线。 +NXP说这是唯一一个1瓦特功耗、64位的、并将这些高速外设综合到一个芯片中的处理器,这意味着更低的系统级功耗。归功于创新性的封装,该处理器可以运用在低成本的电路板中。 NXP的LS1012A可以在2016年4月开始发货,并且现在可以订货。NXP在全球35个国家拥有超过4,5000名员工。 @@ -25,7 +24,7 @@ via: http://venturebeat.com/2016/02/21/nxp-unveils-a-small-and-tiny-64-bit-arm-p 作者:[DEAN TAKAHASHI][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201606/20160223 New Docker Data Center Admin Suite Should Bring Order to Containerization.md b/published/201606/20160223 New Docker Data Center Admin Suite Should Bring Order to Containerization.md new file mode 100644 index 0000000000..a2084582b3 --- /dev/null +++ b/published/201606/20160223 New Docker Data Center Admin Suite Should Bring Order to Containerization.md @@ -0,0 +1,48 @@ +新的 Docker 数据中心管理套件使容器化变得更加井然有序 +=============================================================================== + +![](https://tctechcrunch2011.files.wordpress.com/2016/02/shutterstock_119411227.jpg?w=738) + +[Docker][1]之前发布了一个新的容器控制中心,称为Docker数据中心(DDC),其设计目的是用于大型和小型企业创建、管理和分发容器的一个集成管理控制台。 + +DDC是由包括Docker Universal Control Plane(也是同时发布的)和Docker Trusted Registry等不同的商业组件组成。它也包括了开源组件比如Docker Engine。这个产品让企业在一个中心管理界面中就可以管理整个Docker化程序的生命周期。 + +负责产品管理的SVP Scott 
Johnston告诉TechCrunch:“客户催使了这个新工具的产生。公司不仅喜欢Docker给他们带来的敏捷性,它们也希望在创建和分发容器的过程中可以进行对安全和管理进行控制。” + +Johnston说:“公司称这个为容器即服务(Caas),主要是因为当客户来询问这种类型的管理面板时,他们是这样描述的。” + +![](https://tctechcrunch2011.files.wordpress.com/2016/02/screen-shot-2016-02-23-at-7-56-54-am.png?w=680&h=401) + +像许多开源项目那样,Docker首先获得了许多开发者的追随,但是随着它的流行,那些开发者们所在的公司也希望积极推进它的使用。 + +这就是DDC设计的目的。它给开发者创建容器化应用的敏捷性,也让运维变得井井有条。 + +实际上这意味着开发者可以创建一系列容器化的组件,将其部署后就可以访问一个完全认证的镜像库。这可以让开发人员用碎片组成应用而不必每次重新发明轮子。这可以加速应用的开发和部署(理论上提升了容器提供的灵活性)。 + +这方面吸引了Beta客户ADP。这个服务特别喜欢将这个提供给开发人员的中心镜像仓库。 + +ADP的CTO Keith Fulton在声明中称:“作为我们将关键业务微服务化倡议的一部分,ADP正在研究能够让开发人员可以利用IT审核过的中央库和安全的核心服务进行快速迭代的方案。” + +Docker在2010年由dotcloud的Solomon Hykes发布。他在2013年将公司的重心移到Docker上,并在[8月出售了dotCloud][2],2014年完全聚焦在Docker上。 + +根据CrunchBase的消息,公司几年来在5轮融资后势如破竹般获得了1亿8000万美元融资(自从成为Docker后获得了1亿6千8百万美元)。吸引投资者关注的是Docker提供了一种称为容器的现在分发应用的方式,可以构建、管理何分发分布式应用。 + +容器化可以让开发者创建由多个分布在不同服务器上的碎片组成的分布式应用,而不是一个运行在一个单独服务器上的独立应用。 + +DDC服务每月每节点150美金起。 + +-------------------------------------------------------------------------------- + +via: http://techcrunch.com/2016/02/23/new-docker-data-center-admin-suite-should-bring-order-to-containerization/ + +作者:[Ron Miller][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://techcrunch.com/author/ron-miller/ +[1]: https://www.docker.com/ +[2]: http://techcrunch.com/2014/08/04/docker-sells-dotcloud-to-cloudcontrol-to-focus-on-core-container-business/ + + diff --git a/published/201606/20160228 Two Outstanding All-in-One Linux Servers.md b/published/201606/20160228 Two Outstanding All-in-One Linux Servers.md new file mode 100644 index 0000000000..48370eca23 --- /dev/null +++ b/published/201606/20160228 Two Outstanding All-in-One Linux Servers.md @@ -0,0 +1,97 @@ +两个出色的一体化 Linux 服务器软件 +================================================ + +回到2000年那时,微软发布小型商务服务器(SBS:Small Business Server)。这个产品改变了很多人们对科技在商务领域的看法。你可以部署一个单独的服务器,它能处理邮件,日历,文件共享,目录服务,VPN,以及更多,而不是很多机器处理不同的任务。对很多小型公司来说,这是实实在在的好处,但是对于一些公司来说 Windows SMB 是昂贵的。对于另外一些人,根本不会考虑使用这种微软设计的单一服务器的想法。 + +对于后者也有替代方案。事实上,在 Linux 和开源领域里,你可以选择许多稳定的平台,它可以作为一站式服务商店服务于你的小型企业。如果你的小型企业有10到50员工,一体化服务器也许是你所需的理想方案。 + +这里,我将要展示两个 Linux 一体化服务器,你可以看看它们哪个能完美适用于你的公司。 + +记住,这些服务器不适用于(不管是哪种方式)大型商务或企业。大公司无法依靠一体化服务器,那是因为一台服务器不能负担得起企业所需的期望。也就是说,Linux 一体化服务器适合于小型企业。 + +### ClearOS + +[ClearOS][1] 最初发布于 2009 年,那时名为 ClarkConnect,是一个路由和网关的发行版。从那以后,ClearOS 增加了所有一体化服务器必要的特性。CearOS 提供的不仅仅是一个软件,你可以购买一个 [ClearBox 100][2] 或 [ClearBox 300][3]。这些服务器搭载了完整的 ClearOS,作为一个 IT 设备被销售。在[这里][4]查看特性比对/价格矩阵。 + +如果你已经有响应的硬件,你可以下载这些之一: + +- [ClearOS 社区版][5] — 社区(免费)版的 ClearOS +- [ClearOS 家庭版][6] — 理想的家庭办公室(详细的功能和订阅费用,见[这里][12]) +- [ClearOS商务][7] — 理想的小型企业(详细的功能和订阅费用,见[这里][13]) + +使用 ClearOS 能给你你带来什么?你得到了一个商业级的服务器,带有单一的精美 Web 界面。是什么让 ClearOS 从标准的服务器所提供的一大堆功能中脱颖而出?除了那些基础的部分,你可以从 [Clear 市场][8] 中增加功能。在这个市场里,你可以安装免费或付费的应用来扩展 ClearOS 服务器的特性。这里你可以找到支持 Windows 服务器活动目录,OpenLDAP,Flexshares,Antimalware,云,Web 访问控制,内容过滤等等很多的补充插件。你甚至可以找到一些第三方组件,比如谷歌应用同步,Zarafa 合作平台,卡巴斯基杀毒。 + +ClearOS 的安装就像其他的 Linux 发行版一样(基于红帽的 Anaconda 安装程序)。安装完成后,系统将提示您设置网络接口,这个地址用来供你的浏览器(需要与 ClearOS 服务器在同一个网络里)访问。地址格式如下: + + https://IP_OF_CLEAROS_SERVER:81 + +IP_OF_CLEAROS_SERVER 就是服务器的真实 IP 地址。注:当你第一次在浏览器访问这个服务器时,你将收到一个“Connection is not private”的警告。继续访问,以便你可以继续设置。 + +当浏览器最终连接上之后,就会提示你 root 用户认证(在初始化安装中你设置的 root 用户密码)。一通过认证,你将看到 ClearOS 的安装向导(图1) + +![](http://www.linux.com/images/stories/66866/jack-clear_a.png) + +*图1: ClearOS安装向导。* + +点击下一步按钮,开始设置你的 ClearOS 服务器。这个向导无需加以说明,在最后还会问你想用那个版本的 
ClearOS。点击“社区”,“家庭”,或者“商业”。选择之后,你就被要求注册一个账户。创建了一个账户并注册了你的服务器后,你可以开始更新服务器,配置服务器,从市场添加模块(图2)。 + +![](http://www.linux.com/images/stories/66866/jack-clear_b.png) + +*图2: 从市场安装模块。* + +此时,一切准备就绪,可以开始深入挖掘配置你的 ClearOS 小型商务服务器了。 + +### Zentyal + +[Zentyal][10] 是一个基于 Ubuntu 的小型商务服务器,有段时期的名字是 eBox。Zentyal 提供了大量的服务器/服务来适应你的小型商务需求: + +- 电子邮件 — 网页邮件;支持原生的微软 Exchange 协议和活动目录;日历和通讯录;手机设备电子邮件同步;反病毒/反垃圾;IMAP,POP,SMTP,CalDAV,和 CardDAV 支持。 +- 域和目录 — 中央域目录管理;多个组织部门;单点登录身份验证;文件共享;ACL,高级域管理,打印机管理。 +- 网络和防火墙 — 支持静态和 DHCP 接口;对象和服务;包过滤;端口转发。 +- 基础设施 — DNS;DHCP;NTP;认证中心;VPN。 +- 防火墙 + +安装 Zentyal 很像Ubuntu服务器的安装,基于文本界面而且很简单:从安装镜像启动,做一些简单的选择,然后等待安装完成。当这个最初的基于文本的安装完成之后,就会显示桌面 GUI,提供选择软件包的向导程序。你可以选择所有你想安装的包,让安装程序继续完成这些工作。 + +最终,你可以通过网页界面来访问 Zentyal 服务器(浏览器访问 https://IP_OF_SERVER:8443 - 这里 IP_OF_SERVER是你的 Zentyal 服务器的局域网地址)或使用独立的桌面 GUI 程序来管理服务器(Zentyal 包括一个可以快速访问管理员和用户控制台的 Zentyal 管理控制台)。当真系统已经保存并启动,你将看到 Zentyal 面板(图3)。 + +![](http://www.linux.com/images/stories/66866/jack-zentyal_a.png) + +*图3: Zentyal活动面板。* + +这个面板允许你控制服务器所有方面,比如更新,管理服务器/服务,获取服务器的敏捷状态更新。您也可以进入组件区域,然后安装在部署过程中没有选择的组件或更新当前的软件包列表。点击“软件管理” > “系统更新”并选择你想更新的(图4),然后在屏幕最底端点击“更新”按钮。 + +![](http://www.linux.com/images/stories/66866/jack-zentyal_b.png) + +*图4: 更新你的Zentyal服务器很简单。* + +### 那个服务器适合你? + +回答这个问题要看你有什么需求。Zentyal 是一个不可思议的服务器,它可以很好的胜任你的小型商务网络。如果你需要更多,如群件,我觉得你可以试试 ClearOS。如果你不需要群件,其它的服务器也不错。 + +我强烈建议你安装一下这两个一体化的服务器,看看哪个更适合你的小公司。 + +------------------------------------------------------------------------------ + +via: http://www.linux.com/learn/tutorials/882146-two-outstanding-all-in-one-linux-servers + +作者:[Jack Wallen][a] +译者:[wyangsun](https://github.com/wyangsun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.linux.com/community/forums/person/93 +[1]: http://www.linux.com/learn/tutorials/882146-two-outstanding-all-in-one-linux-servers#clearfoundation-overview +[2]: https://www.clearos.com/products/hardware/clearbox-100-series +[3]: https://www.clearos.com/products/hardware/clearbox-300-series +[4]: https://www.clearos.com/products/hardware/clearbox-overview +[5]: http://mirror.clearos.com/clearos/7/iso/x86_64/ClearOS-DVD-x86_64.iso +[6]: http://mirror.clearos.com/clearos/7/iso/x86_64/ClearOS-DVD-x86_64.iso +[7]: http://mirror.clearos.com/clearos/7/iso/x86_64/ClearOS-DVD-x86_64.iso +[8]: https://www.clearos.com/products/purchase/clearos-marketplace-overview +[9]: https://ip_of_clearos_server:81/ +[10]: http://www.zentyal.org/server/ +[11]: https://ip_of_server:8443/ +[12]: https://www.clearos.com/products/clearos-editions/clearos-7-home +[13]: https://www.clearos.com/products/clearos-editions/clearos-7-business diff --git a/translated/tech/20160304 Image processing at NASA with open source tools.md b/published/201606/20160304 Image processing at NASA with open source tools.md similarity index 81% rename from translated/tech/20160304 Image processing at NASA with open source tools.md rename to published/201606/20160304 Image processing at NASA with open source tools.md index a26b96936c..983a5bf50e 100644 --- a/translated/tech/20160304 Image processing at NASA with open source tools.md +++ b/published/201606/20160304 Image processing at NASA with open source tools.md @@ -1,10 +1,10 @@ -# 在NASA中使用开源工具进行图像处理 +在 NASA 使用开源工具进行图像处理 +================== -关键词:NASA,图像处理,Node.js,OpenCV ![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/nasa_spitzer_space_pink_spiral.jpg?itok=3XEUstkl) -在刚结束的这个夏天里,我是 [NASA 格伦中心][1] [GVIS][2] 
实验室的实习生,我将我对开源的热情带到了那里。我的任务是改进我们实验室对 Dan Schroeder 开发的一个开源流体动力学模拟器的贡献。原本的模拟器可以显示用户用鼠标绘制的障碍物,并建立计算流体动力学模型。我们团队的贡献是加入图像处理的代码,分析实况视频的每一帧以显示特定的物体如何与液体相互作用。而且,我们还要做更多事情。 +在刚结束的这个夏天里,我是 [NASA 格伦中心][1] [GVIS][2] 实验室的实习生,我将我对开源的热情带到了那里。我的任务是改进我们实验室对 Dan Schroeder 开发的一个开源流体动力学模拟器的贡献。原本的[模拟器][3]可以显示用户用鼠标绘制的障碍物,并建立计算流体动力学模型。我们团队的贡献是加入图像处理的代码,分析实况视频的每一帧以显示特定的物体如何与液体相互作用。而且,我们还要做更多事情。 我们想要让图像处理部分更加健壮,所以我致力于改善图像处理库。 @@ -16,33 +16,33 @@ 2. 找寻物体的质心 3. 能对物体中心进行相关的精确转换 -我的导师建议我安装 [Node.js](http://nodejs.org/) 、 [OpenCV](http://opencv.org/) 和 [Node.js bindings for OpenCV](https://github.com/peterbraden/node-opencv)。在等待软件安装的过程中,我查看了 OpenCV 的 [GitHub 主页][3]上的示例源码。我发现示例源码使用 JavaScript 写的,而我还不懂 JavaScript ,所以我在 Codecademy 上学了一些课程。两天后,我对 JavaScript 依旧生疏,不过我还是开始了我的项目…它包含了更多的 JavaScript 。 +我的导师建议我安装 [Node.js](http://nodejs.org/) 、 [OpenCV](http://opencv.org/) 和 [Node.js bindings for OpenCV](https://github.com/peterbraden/node-opencv)。在等待软件安装的过程中,我查看了 OpenCV 的 [GitHub 主页][4]上的示例源码。我发现示例源码使用 JavaScript 写的,而我还不懂 JavaScript ,所以我在 Codecademy 上学了一些课程。两天后,我对 JavaScript 依旧生疏,不过我还是开始了我的项目……它包含了更多的 JavaScript 。 检测轮廓的示例代码工作得很好。事实上,它使得我用几个小时就完成了第一个目标!获取一幅图片的轮廓,它看起来像这样: ![](https://opensource.com/sites/default/files/resize/image_processing_nasa_1-520x293.jpg) -> 包括所有轮廓的原始图, +*包括所有轮廓的原始图* 检测轮廓的示例代码工作得有点好过头了。不仅物体的轮廓被检测到了,整个图片中的轮廓都检测到了。这会导致模拟器要与那些没用的轮廓打交道。这是一个严重的问题,因为它会返回错误的数据。为了避免模拟器接触到不想要的轮廓,我加了一个区域约束。轮廓要位于一定的区域范围内才会被画出来。区域约束使得轮廓变干净了。 ![](https://opensource.com/sites/default/files/resize/image_processing_nasa_2-520x293.jpg) -> 过滤后的轮廓,包含了阴影轮廓 +*过滤后的轮廓,包含了阴影轮廓* 虽然无关的轮廓没有了,但是图像还有个问题。图像本该只有一个轮廓,但是它来回绕了自己两次,没有完整地圈起来。区域在这里不能作为决定因素,所以必须试试其他方式。 -这一次,我不是直接去找寻轮廓,而是先将图片转换成二值图。二值图是转换之后只有黑白像素的图片。为了获取到二值图我先把彩色图转成灰度图。转换之后我再用阈值函数对图片进行处理。阈值函数遍历图片每个像素点的值,如果值小于 30 ,像素的颜色就会改成黑色。否则则反。在原始图片转换成二值图之后,结果变成这样: +这一次,我不是直接去找寻轮廓,而是先将图片转换成二值图。二值图是转换之后只有黑白像素的图片。为了获取到二值图我先把彩色图转成灰度图。转换之后我再用阈值函数对图片进行处理。阈值函数遍历图片每个像素点的值,如果值小于 30 ,像素的颜色就会改成黑色。否则取反。在原始图片转换成二值图之后,结果变成这样: ![](https://opensource.com/sites/default/files/resize/image_processing_nasa_3-520x293.jpg) -> 二值图。 +*二值图* 然后我获取了二值图的轮廓,结果是一个更干净的轮廓,没有了阴影轮廓。 ![](https://opensource.com/sites/default/files/image_processing_nasa_4.jpg) -> 最后的干净轮廓。 +*最后的干净轮廓* 这个时候,我可以获取干净的轮廓、计算质心了。可惜的是,我没有足够的时间去完成质心的相关变换。由于我的实习时间只剩下几天了,我开始考虑我在这段有限时间内能做的其它事情。其中一个就是边界矩形。边界矩形是包含了图片轮廓的最小四边形。边界矩形很重要,因为它是在页面上缩放轮廓的关键。虽然很遗憾我没时间利用边界矩形做更多事情,但是我仍然想学习它,因为它是个很有用的工具。 @@ -50,7 +50,7 @@ ![](https://opensource.com/sites/default/files/resize/image_processing_nasa_5-521x293.jpg) -> 最终图像,红色的边界矩形和质心。 +*最终图像,红色的边界矩形和质心* 当这些图像处理代码写完之后,我用我的代码替代了模拟器中的老代码。令我意外的是,它可以工作。 @@ -60,11 +60,11 @@ ( Youtube 演示视频) -程序有内存泄露,每 1/10 秒泄露 100MB 。我很高兴原因不是我的代码。坏消息是我并不能修复它。好消息是仍然有解决方法。它并非最理想的,但我可以使用。这个方法是不断检查模拟器使用的内存,当使用内存超过 1GB 时,重新启动模拟器。 +程序有内存泄露,每 1/10 秒泄露 100MB 。我很高兴不是因为我的代码。坏消息是我并不能修复它。另一个好消息是仍然有解决方法,虽然并非最理想的,但我可以使用。这个方法是不断检查模拟器使用的内存,当使用内存超过 1GB 时,重新启动模拟器。 在 NASA 实验室,我们会使用很多的开源软件,没有这些开源软件的帮助,我不可能完成这些工作。 -* * * +------- via: [https://opensource.com/life/16/3/image-processing-nasa](https://opensource.com/life/16/3/image-processing-nasa) @@ -76,4 +76,5 @@ via: [https://opensource.com/life/16/3/image-processing-nasa](https://opensource [1]: http://www.nasa.gov/centers/glenn/home/index.html [2]: https://ocio.grc.nasa.gov/gvis/ -[3]: https://github.com/peterbraden/node-opencv +[3]: http://physics.weber.edu/schroeder/fluids/ +[4]: https://github.com/peterbraden/node-opencv diff --git a/published/201606/20160510 65% of companies are contributing to open source projects.md b/published/201606/20160510 65% of companies are contributing to open source projects.md new file mode 100644 
index 0000000000..46939a4e3a --- /dev/null +++ b/published/201606/20160510 65% of companies are contributing to open source projects.md @@ -0,0 +1,63 @@ +65% 的企业正致力于开源项目 +========================================================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_openseries.png?itok=s7lXChId) + +今年 Black Duck 和 North Bridge 发布了第十届年度开源软件前景调查,来调查开源软件的发展趋势。今年这份调查的亮点在于,当前主流社会对开源软件的接受程度以及过去的十年中人们对开源软件态度的变化。 + +[2016 年的开源软件前景调查][1],分析了来自约3400位专家的反馈。今年的调查中,开发者发表了他们的看法,大约 70% 的参与者是开发者。数据显示,安全专家的参与人数呈指数级增长,增长超过 450% 。他们的参与表明,开源社区开始逐渐关注开源软件中存在的安全问题,以及当新的技术出现时确保它们的安全性。 + +Black Duck 的[年度开源新秀奖][2] 涉及到一些新出现的技术,如容器方面的 Docker 和 Kontena。容器技术这一年有了巨大的发展 ———— 76% 的受访者表示,他们的企业有一些使用容器技术的规划。而 59% 的受访者正准备使用容器技术完成大量的部署,从开发与测试,到内部与外部的生产环境部署。开发者社区已经把容器技术作为一种简单快速开发的方法。 + +调查显示,几乎每个组织都有开发者致力于开源软件,这一点毫不惊讶。当像微软和苹果这样的大公司将它们的一些解决方案开源时,开发者就获得了更多的机会来参与开源项目。我非常希望这样的趋势会延续下去,让更多的软件开发者无论在工作中,还是工作之余都可以致力于开源项目。 + +### 2016 年调查结果中的一些要点 + +#### 商业价值 + +* 开源软件是发展战略中的一个重要元素,超过 65% 的受访者使用开源软件来加速软件开发的进度。 +* 超过 55% 的受访者在生产环境中使用开源软件。 + +#### 创新的原动力 + +* 受访者表示,开源软件的使用让软件开发更加快速灵活,从而推进了创新;同时加速了软件推向市场的时间,也极大地减少了与上司沟通的时间。 +* 开源软件的优质解决方案,富有竞争力的特性,技术能力,以及可定制化的能力,也促进了更多的创新。 + +#### 开源商业模式与投资的激增 + +* 更多不同商业模式的出现给开源企业带来了前所未有的价值。这些价值并不依赖于云服务和技术支持。 +* 开源的私募融资在过去的五年内,已增长了将近四倍。 + +#### 安全和管理 + +一流的开源安全与管理实践的发展,也没有跟上人们使用开源不断增长的步伐。尽管备受关注的开源项目近年来爆炸式地增长,调查结果却指出: + +* 50% 的企业在选择和批准开源代码这方面没有出台正式的政策。 +* 47% 的企业没有正式的流程来跟踪开源代码,这就限制了它们对开源代码的了解,以及控制开源代码的能力。 +* 超过三分之一的企业没有用于识别、跟踪和修复重大开源安全漏洞的流程。 + +#### 不断增长的开源参与者 + +调查结果显示,一个活跃的企业开源社区,激励创新,提供价值,共享情谊: + +* 67% 的受访者表示,它们积极鼓励开发者参与开源项目。 +* 65% 的企业正致力于开源项目。 +* 约三分之一的企业有专门为开源项目设置的全职岗位。 +* 59% 的受访者参与开源项目以获得竞争优势。 + +Black Duck 和 North Bridge 从今年的调查中了解到了很多,如安全,政策,商业模式等。我们很兴奋能够分享这些新发现。感谢我们的合作者,以及所有参与我们调查的受访者。这是一个伟大的十年,我很高兴我们可以肯定地说,开源的未来充满了无限可能。 + +想要了解更多内容,可以查看完整的[调查结果][3]。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/business/16/5/2016-future-open-source-survey + +作者:[Haidee LeClair][a] +译者:[Cathon](https://github.com/Cathon) +校对:[wxy](https://github.com/wxy) + +[a]: https://opensource.com/users/blackduck2016 +[1]: http://www.slideshare.net/blackducksoftware/2016-future-of-open-source-survey-results +[2]: https://info.blackducksoftware.com/OpenSourceRookies2015.html +[3]: http://www.slideshare.net/blackducksoftware/2016-future-of-open-source-survey-results%C2%A0 diff --git a/translated/tech/20160513 How to Set Up 2-Factor Authentication for Login and sudo.md b/published/201606/20160513 How to Set Up 2-Factor Authentication for Login and sudo.md similarity index 53% rename from translated/tech/20160513 How to Set Up 2-Factor Authentication for Login and sudo.md rename to published/201606/20160513 How to Set Up 2-Factor Authentication for Login and sudo.md index 1ed524c4d4..12012c9045 100644 --- a/translated/tech/20160513 How to Set Up 2-Factor Authentication for Login and sudo.md +++ b/published/201606/20160513 How to Set Up 2-Factor Authentication for Login and sudo.md @@ -1,12 +1,11 @@ -如何为登录和 sudo 设置双重认证 +如何为登录和 sudo 设置双因子认证 ========================================================== ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/auth_crop.png?itok=z_cdYZZf) ->[Used with permission][1] -安全就是一切。我们生活的当今世界,数据具有令人难以置信的价值,而你也一直处于数据丢失的风险之中。因此,你必须想尽办法保证你桌面系统和服务器中东西的安全。结果,管理员和用户就会创建极其复杂的密码、使用密码管理器甚至其它更复杂的东西。但是,如果我告诉你你可以只需要一步-至多两步就能登录到你的 Linux 服务器或桌面系统中呢?多亏了 [Google Authenticator][2],现在你可以做到了。在这之上配置也极其简单。 
+安全就是一切。我们生活的当今世界,数据具有令人难以置信的价值,而你也一直处于数据丢失的风险之中。因此,你必须想尽办法保证你桌面系统和服务器中数据的安全。结果,管理员和用户就会创建极其复杂的密码、使用密码管理器甚至其它更复杂的东西。但是,如果我告诉你你可以只需要一步,至多两步就能登录到你的 Linux 服务器或桌面系统中呢?多亏了 [Google 身份验证器][2],现在你可以做到了。并且,配置也极其简单。

-我会给你简要介绍为登录和 sudo 设值双重认证的步骤。我基于 Ubuntu 16.04 桌面系统进行介绍,但这些步骤也适用于其它服务器。为了做到双重认证,我会使用 Google Authenticator。
+我会给你简要介绍为登录和 sudo 设置双因子认证的步骤。我基于 Ubuntu 16.04 桌面系统进行介绍,但这些步骤也适用于其它服务器。为了实现双因子认证,我会使用 Google 身份验证器。

 这里有个非常重要的警告:一旦你设置了认证,没有一个从认证器中获得的由 6 个数字组成的验证码你就不可能登录账户(或者执行 sudo 命令)。这也给你增加了一步额外的操作,因此如果你不想每次登录到 Linux 服务器(或者使用 sudo)的时候都要拿出你的智能手机,这个方案就不适合你。但你也要记住,这额外的一个步骤也给你带来一层其它方法无法给予的保护。

@@ -14,38 +13,28 @@

 ### 安装必要的组件

-安装 Google 认证,首先要解决两个问题。一是安装智能机应用。下面是如何从 Google 应用商店安装的方法:
+安装 Google 身份验证器(Google Authenticator),首先要解决两个问题。一是安装智能机应用。下面是如何从 Google 应用商店安装的方法:

 1. 在你的安卓设备中打开 Google 应用商店
-
-2. 搜索 google 认证
-
-3. 找到并点击有 Google 标识的应用
-
+2. 搜索 google 身份验证器
+3. 找到并点击有 Google Inc. 标识的应用
 4. 点击安装
-
-5. 点击 接受
-
+5. 点击“接受”
 6. 等待安装完成

-接下来,我们继续在你的 Linux 机器上安装认证。步骤如下:
+接下来,我们继续在你的 Linux 机器上安装这个认证器。步骤如下:

 1. 打开一个终端窗口
-
 2. 输入命令 sudo apt-get install google-authenticator
-
 3. 输入你的 sudo 密码并敲击回车
-
 4. 如果有弹窗提示,输入 y 并敲击回车
-
 5. 等待安装完成

 接下来配置使用 google-authenticator 进行登录。

 ### 配置

-要为登录和 sudo 添加两阶段认证只需要编辑一个文件。也就是 /etc/pam.d/common-auth。打开并找到如下一行:
-Just one file must be edited to add two-step authentication for both login and sudo usage. The file is /etc/pam.d/common-auth. Open it and look for the line
+要为登录和 sudo 添加双因子认证只需要编辑一个文件,即 /etc/pam.d/common-auth。打开并找到如下一行:

```
auth [success=1 default=ignore] pam_unix.so nullok_secure
```

@@ -59,57 +48,53 @@
auth required pam_google_authenticator.so
```

保存并关闭文件。

-下一步就是为系统中的每个用户设置 google-authenticator(否则会不允许他们登录)。为了简单起见,我们假设你的系统中有两个用户:jack 和 olivia。首先为 jack 设置(我们假设这是我们一直使用的账户)。
+下一步就是为系统中的每个用户设置 google-authenticator(否则他们就不能登录了)。为了简单起见,我们假设你的系统中有两个用户:jack 和 olivia。首先为 jack 设置(我们假设这是我们一直使用的账户)。

打开一个终端窗口并输入命令 google-authenticator。之后会问你一系列的问题(每个问题你都应该用 y 回答)。问题包括:

* 是否允许更新你的 "/home/jlwallen/.google_authenticator" 文件 (y/n) y
-
* 是否禁止多个用户使用同一个认证令牌?这会限制你每 30 秒内只能登录一次,但能增加你注意到甚至防止中间人攻击的可能 (y/n)
+* 默认情况下令牌时长为 30 秒即可,为了补偿客户端和服务器之间可能出现的时间偏差,我们允许使用当前时间之前或之后的其它令牌。如果你无法进行时间同步,你可以把这个时间窗口由默认的 1:30 分钟增加到 4 分钟。是否希望如此 (y/n)
+* 如果你尝试登录的计算机没有针对暴力破解进行加固,你可以为验证模块启用速率限制。默认情况下,限制攻击者每 30 秒不能尝试登录超过 3 次。是否启用速率限制 (y/n)

-* 默认情况下令牌时长为 30 秒即可,为了补偿客户端和服务器之间可能出现的时间偏差,我们允许添加一个当前时间之前或之后的令牌。如果你无法进行时间同步,你可以把时间窗口由默认的 1:30 分钟增加到 4 分钟。是否希望如此 (y/n)
-
-* 如果你尝试登陆的计算机没有针对蛮力登陆进行加固,你可以为验证模块启用速率限制。默认情况下,限制攻击者每 30 秒不能尝试登陆超过 3 次。是否启用速率限制 (y/n)

-一旦完成了问题回答,你就会看到你的密钥、验证码以及 5 个紧急刮码。把刮码输出保存起来。你可以在无法使用手机的时候使用它们(每个刮码仅限使用一次)。密钥用于你在 Google Authenticator 上设置账户,验证码是你能立即使用(如果需要)的一次性验证码。
+一旦完成了问题回答,你就会看到你的密钥、验证码以及 5 个紧急刮码(emergency scratch code)。把这些刮码打印出来并保存。你可以在无法使用手机的时候使用它们(每个刮码仅限使用一次)。密钥用于你在 Google 身份验证器上设置账户,验证码则是你当下就能立即使用(如果需要)的一次性验证码。

### 设置应用

-现在你已经配置好了用户 jack。在设置用户 olivia 之前,你需要在 Google Authenticator 应用上为 jack 添加账户。在主屏幕上打开应用,点击 菜单 按钮(右上角三个竖排点)。点击添加账户然后输入提供的密钥。在下一个窗口(示意图1),你需要输入你运行 google-authenticator 应用时提供的 16 个数字的密钥。给账户取个名字(以便你记住这用于哪个账户),然后点击添加。
+现在你已经配置好了用户 jack。在设置用户 olivia 之前,你需要在 Google 身份验证器应用上为 jack 添加账户(LCTT 译注:实际操作情形中,是为 jack 的手机上安装的该应用创建一个账户)。打开应用,点击“菜单”按钮(右上角三个竖排点)。点击“添加账户”然后点击“输入提供的密钥”。在下一个窗口(图1),你需要输入你运行 google-authenticator 应用时提供的 16 个数字的密钥。给账户取个名字(以便你记住这用于哪个账户),然后点击“添加”。

![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/auth_a.png?itok=xSMkd-Mf)
->Figure 1: 在 Google Authenticator 应用上新建账户
+
+*图1: 在 Google Authenticator 应用上新建账户*
+
+(LCTT 译注:Google 身份验证器也可以扫描你在服务器上设置时显示的二维码,而不用手工输入密钥)

添加完账户之后,你就会看到一个 6 个数字的密码,你每次登录或者使用 sudo 的时候都会需要这个密码。
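(LCTT 译注:如果想在绑定手机之前先确认服务器端生成的密钥能算出一致的验证码,可以在另一台机器上用 oathtool 这类命令行工具根据这个 16 位密钥计算当前的 TOTP 验证码,与手机应用显示的进行比对。下面的命令仅为示意,其中的密钥是随手编造的,请换成你自己的:)

```
# 安装提供 oathtool 命令的软件包
sudo apt-get install oathtool

# 根据 base32 编码的密钥计算当前的 6 位验证码(此处密钥为虚构示例)
oathtool --totp -b GEZDGNBVGY3TQOJQ
```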
最后,在系统上设置其它账户。正如之前提到的,我们会设置一个叫 olivia 的账户。步骤如下:

1. 打开一个终端窗口
-
2. 输入命令 sudo su olivia
-
-3. 在智能机上打开 Google Authenticator
-
-4. 在终端窗口(示意图2)中输入(应用提供的) 6 位数字验证码并敲击回车
-
+3. 在智能机上打开 Google 身份验证器
+4. 在终端窗口(图2)中输入(应用提供的) 6 位数字验证码并敲击回车
5. 输入你的 sudo 密码并敲击回车
-
6. 以新用户输入命令 google-authenticator,回答问题并记录生成的密钥和验证码。

-成功为 olivia 用户设置好之后,用 google-authenticator 命令,在 Google Authenticator 应用上根据用户信息(和之前为第一个用户添加账户相同)添加一个新的账户。现在你在 Google Authenticator 应用上就会有 jack 和 olivia 两个账户了。
+成功为 olivia 用户设置好之后,用 google-authenticator 命令,在 Google 身份验证器应用上根据用户信息(和之前为第一个用户添加账户相同)添加一个新的账户。现在你在 Google 身份验证器应用上就会有 jack 和 olivia 两个账户了。(LCTT 译注:在实际操作情形中,通常是为 jack 和 olivia 两个人的手机分别设置。)

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/auth_b.png?itok=FH36V1r0)
->Figure 2: 为 sudo 输入 6位数字验证码
-好了,就是这些。每次你尝试登陆系统(或者使用 sudo) 的时候,在你输入用户密码之前,都会要求你输入提供的 6 位数字验证码。现在你的 Linux 机器就比添加双重认证之前安全多了。虽然有些人会认为这非常麻烦,我仍然推荐使用,尤其是那些保存了敏感数据的机器。
+
+*图2: 为 sudo 输入 6 位数字验证码*
+
+好了,就是这些。每次你尝试登录系统(或者使用 sudo) 的时候,在你输入用户密码之前,都会要求你输入提供的 6 位数字验证码。现在你的 Linux 机器就比添加双因子认证之前安全多了。虽然有些人会认为这非常麻烦,我仍然推荐使用,尤其是那些保存了敏感数据的机器。

--------------------------------------------------------------------------------

-via: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/auth_b.png?itok=FH36V1r0
+via: https://www.linux.com/learn/how-set-2-factor-authentication-login-and-sudo

作者:[JACK WALLEN][a]
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

[a]: https://www.linux.com/users/jlwallen
[1]: https://www.linux.com/licenses/category/used-permission
diff --git a/published/201606/20160514 NODEOS - LINUX DISTRIBUTION FOR NODE LOVERS.md b/published/201606/20160514 NODEOS - LINUX DISTRIBUTION FOR NODE LOVERS.md
new file mode 100644
index 0000000000..ea47ad605b
--- /dev/null
+++ b/published/201606/20160514 NODEOS - LINUX DISTRIBUTION FOR NODE LOVERS.md
@@ -0,0 +1,53 @@
+NodeOS:Node 爱好者的 Linux 发行版
+================================================
+
+![](http://itsfoss.com/wp-content/uploads/2016/05/node-os-linux.jpg)
+
+[NodeOS][1] 是一款基于 [Node.js][2] 的操作系统,自去年其首个[发布候选版][3]之后正朝着它的 1.0 版本进发。
+
+如果你之前不知道的话,NodeOS 是首个架构在 [Linux][5] 内核之上的由 Node.js 和 [npm][4] 驱动的操作系统。[Jacob Groundwater][6] 在 2013 年中期介绍了这个项目。该操作系统中用到的主要技术是:

+- **Linux 内核**: 这个系统构建在 Linux 内核上
+- **Node.js 运行时**: Node 作为主要的运行时环境
+- **npm 包管理**: npm 作为包管理
+
+NodeOS 源码托管在 [GitHub][7] 上,因此,任何感兴趣的人都可以轻松贡献或者报告 bug。用户可以从源码构建或者使用[预编译镜像][8]。构建过程及快速起步指南可以在项目仓库中找到。
+
+NodeOS 背后的思想是提供足够 npm 运行的环境,剩余的功能就可以让 npm 包管理来完成。因此,用户可以使用多达大约 250,000 个软件包,并且这个数目每天都还在增长。所有的都是开源的,你可以根据你的需要很容易地打补丁或者增加更多的包。
+
+NodeOS 核心开发被分离成了不同的层面,基本的结构包含:
+
+- **barebones** – 带有可以启动到 Node.js REPL 的 initramfs 的自定义内核
+- **initramfs** – 用于挂载用户分区以及启动系统的 initram 文件系统
+- **rootfs** – 存放 Linux 内核及 initramfs 文件的只读分区
+- **usersfs** – 多用户文件系统(如传统系统一样)
+
+NodeOS 的目标是可以在任何平台上运行,包括: **实际的硬件(用户计算机或者 SoC)**、**云平台、虚拟机、PaaS 提供商,容器**(Docker 和 Vagga)等等。如今看来,它做得似乎不错。在 3 月 3 日,NodeOS 的成员 [Jesús Leganés Combarro][9] 在 GitHub 上[宣布][10]:
+
+>**NodeOS 不再是一个玩具系统了**,它现在开始可以用在有实际需求的生产环境中了。
+
+因此,如果你是 Node.js 的死忠或者乐于尝试新鲜事物,这或许值得你一试。另外,你或许还想了解一下其它一些[特别的 Linux 发行版][11]。
+
+--------------------------------------------------------------------------------
+
+via: http://itsfoss.com/nodeos-operating-system/
+
+作者:[Munif Tanjim][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://itsfoss.com/author/munif/
+[1]: http://node-os.com/
+[2]: https://nodejs.org/en/
+[3]: https://github.com/NodeOS/NodeOS/releases/tag/v1.0-RC1
+[4]: https://www.npmjs.com/
+[5]: 
http://itsfoss.com/tag/linux/ +[6]: https://github.com/groundwater +[7]: https://github.com/nodeos/nodeos +[8]: https://github.com/NodeOS/NodeOS/releases +[9]: https://github.com/piranna +[10]: https://github.com/NodeOS/NodeOS/issues/216 +[11]: http://itsfoss.com/weird-ubuntu-based-linux-distributions/ diff --git a/translated/tech/20160516 4 BEST MODERN OPEN SOURCE CODE EDITORS FOR LINUX.md b/published/201606/20160516 4 BEST MODERN OPEN SOURCE CODE EDITORS FOR LINUX.md similarity index 60% rename from translated/tech/20160516 4 BEST MODERN OPEN SOURCE CODE EDITORS FOR LINUX.md rename to published/201606/20160516 4 BEST MODERN OPEN SOURCE CODE EDITORS FOR LINUX.md index f3045322f3..f4760a61d5 100644 --- a/translated/tech/20160516 4 BEST MODERN OPEN SOURCE CODE EDITORS FOR LINUX.md +++ b/published/201606/20160516 4 BEST MODERN OPEN SOURCE CODE EDITORS FOR LINUX.md @@ -3,17 +3,17 @@ Linux 上四个最佳的现代开源代码编辑器 ![](http://itsfoss.com/wp-content/uploads/2015/01/Best_Open_Source_Editors.jpeg) -在寻找 **Linux 上最好的代码编辑器**?如果你问那些老派的 Linux 用户,他们的答案肯定是 Vi,Vim,Emacs,Nano 等等。但我不讨论它们。我要讨论的是最新的,美观,优美强大,功能丰富,能够提高你编程体验的,**最好的 Linux 开源代码编辑器**。 +在寻找 **Linux 上最好的代码编辑器**?如果你问那些老派的 Linux 用户,他们的答案肯定是 Vi,Vim,Emacs,Nano 等等。但我不讨论它们。我要讨论的是崭新、先进、优美、强大、功能丰富,能够提高你编程体验的**最好的 Linux 开源代码编辑器**。 ### Linux 上最佳的现代开源代码编辑器 -我使用 Ubuntu 作为我的主力系统,因此提供的安装说明适用于基于 Ubuntu 的发行版。但这并不会让这个列表变成 **Ubuntu 上的最佳文本编辑器**,因为这些编辑器对所有 Linux 发行版都适用。多说一句,这个清单没有任何先后顺序。 +我使用 Ubuntu 作为我的主力系统,因此提供的安装说明适用于基于 Ubuntu 的发行版。但这并不会让这个列表变成 **Ubuntu 上的最佳文本编辑器**,因为这些编辑器对所有 Linux 发行版都适用。多说一句,这个清单的排名没有任何先后顺序。 ### BRACKETS ![](http://itsfoss.com/wp-content/uploads/2015/01/brackets_UI.jpeg) -[Brackets][1] 是 [Adobe][2] 的一个开源代码编辑器。Brackets 专注于 web 设计师的需求,内置 HTML,CSS 和 JavaScript 支持。它很轻量,也很强大。它提供了行内编辑和实时预览。还有无数可用的插件,进一步加强你在 Brackets 上的体验。 +[Brackets][1] 是来自 [Adobe][2] 的一个开源代码编辑器。Brackets 专注于 web 设计师的需求,内置 HTML、CSS 和 JavaScript 支持。它很轻量,也很强大。它提供了行内编辑和实时预览。还有无数可用的插件,可以进一步加强你在 Brackets 上的体验。 在 Ubuntu 以及基于 Ubuntu 的发行版(比如 Linux Mint)上[安装 Brackets][3] 的话,你可以用这个非官方的 PPA: @@ -25,52 +25,48 @@ sudo apt-get install brackets 至于其它 Linux 发行版,你可以从它的网站上获取到适用于 Linux,OS X 和 Windows 源码和二进制文件。 -[下载 Brackets 源码和二进制包](https://github.com/adobe/brackets/releases) +- [下载 Brackets 源码和二进制包](https://github.com/adobe/brackets/releases) ### ATOM ![](http://itsfoss.com/wp-content/uploads/2014/08/Atom_Editor.jpeg) -[Atom][4] 是另一个给程序员的开源代码编辑器,现代而且美观。Atom 是由 Github 开发的,宣称是“21世纪的可定制文本编辑器”。Atom 的外观看起来类似 Sublime Text,一个在程序员中很流行但是闭源的文本编辑器。 +[Atom][4] 是另一个给程序员的开源代码编辑器,现代而且美观。Atom 是由 Github 开发的,宣称是“面向21世纪的可魔改文本编辑器”。Atom 的外观看起来类似 Sublime Text,那是一个在程序员中很流行但是闭源的文本编辑器。 Atom 最近发布了 .deb 和 .rpm 包,所以你可以轻而易举地在基于 Debian 和 Fedora 的 Linux 发行版上安装它。当然,它也提供了源代码。 - -[下载 Atom .deb](https://atom.io/download/deb) - -[下载 Atom .rpm](https://atom.io/download/rpm) - -[获取 Atom 源码](https://github.com/atom/atom/blob/master/docs/build-instructions/linux.md) +- [下载 Atom .deb](https://atom.io/download/deb) +- [下载 Atom .rpm](https://atom.io/download/rpm) +- [获取 Atom 源码](https://github.com/atom/atom/blob/master/docs/build-instructions/linux.md) ### LIME TEXT ![](http://itsfoss.com/wp-content/uploads/2014/08/LimeTextEditor.jpeg) -你喜欢 Sublime Text 但是你对它是闭源的这一事实感觉不是很舒服?别担心,我们有 [Sublime Text 的开源克隆版][5],叫做 [Lime Text][6]。它是基于 Go,HTML 和 QT 的。克隆 Sublime Text 的原因是 Sublime Text 2 中有无数 bug,而 Sublime Text 3 看起来会永远处于 beta 之中。它的开发过程并不透明,也就无从得知 bug 是否被修复了。 +你喜欢 Sublime Text 但是你对它是闭源的这一事实感觉不是很舒服?别担心,我们有 [Sublime Text 的开源克隆版][5],叫做 [Lime Text][6]。它是基于 Go、HTML 和 QT 的。克隆 Sublime Text 的原因是 Sublime Text 2 中有无数 bug,而 Sublime Text 3 看起来会永远处于 beta 之中,而它的开发过程并不透明,也就无从得知 bug 
是否被修复了。 所以开源爱好者们,开心地去下面这个链接下载 Lime Text 的源码吧: -[获取 Lime Text 源码](https://github.com/limetext/lime) +- [获取 Lime Text 源码](https://github.com/limetext/lime) ### LIGHT TABLE ![](http://itsfoss.com/wp-content/uploads/2015/01/Light_Table.jpeg) -[Light Table][7] 是另一个外观现代,功能丰富的开源代码编辑器,标榜“下一代代码编辑器”,它更像一个 IDE 而不仅仅是个文本编辑器。它还有无数扩展用以加强它的功能。也许你会喜欢它的行内求值。你得用用它才会相信 Light Table 有多好用。 +[Light Table][7] 是另一个外观现代、功能丰富的开源代码编辑器,标榜为“下一代代码编辑器”,它更像一个 IDE 而不仅仅是个文本编辑器。它还有无数可以加强它的功能的扩展。也许你会喜欢它的行内求值。你得用用它才会相信 Light Table 有多好用。 -[在 Ubuntu 上安装 Light Table](http://itsfoss.com/install-lighttable-ubuntu/) +- [在 Ubuntu 上安装 Light Table](http://itsfoss.com/install-lighttable-ubuntu/) ### 你的选择是? -不,我们的选择没有限制在这四个 Linux 代码编辑器之中。这个清单只是关于程序员的现代编辑器。当然,你还有很多选择,比如 [Notepad++ 的替代选择 Notepadqq][8] 或 [SciTE][9] 以及更多。那么,上面四个中,在 Linux 上而言你最喜欢哪个代码编辑器? - +不,我们的选择没有限制在这四个 Linux 代码编辑器之中。这个清单只是关于程序员的现代编辑器。当然,你还有很多选择,比如 [Notepad++ 的替代选择 Notepadqq][8] 或 [SciTE][9] 等等。那么,上面四个中,在 Linux 上而言你最喜欢哪个代码编辑器? ---------- -via: http://itsfoss.com/best-modern-open-source-code-editors-for-linux/?utm_source=newsletter&utm_medium=email&utm_campaign=offline_and_portable_linux_apps_and_other_linux_stories +via: http://itsfoss.com/best-modern-open-source-code-editors-for-linux/ 作者:[Abhishek Prakash][a] 译者:[alim0x](https://github.com/alim0x) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201606/20160518 Android Use Apps Even Without Installing Them.md b/published/201606/20160518 Android Use Apps Even Without Installing Them.md new file mode 100644 index 0000000000..40cf217035 --- /dev/null +++ b/published/201606/20160518 Android Use Apps Even Without Installing Them.md @@ -0,0 +1,70 @@ +安卓的下一场革命:不安装即可使用应用! +=================================================================== + +谷歌安卓的一项新创新将可以让你无需安装即可在你的设备上使用应用程序。现在已经初具雏形。 + +还记得那时候吗,某人发给你了一个链接,要求你通过安装一个应用才能查看。 + +是否要安装这个应用就为了看一下链接,这种进退两难的选择一定让你感到很沮丧。而且,安装应用这个事也会消耗你不少宝贵的时间。 + +上述场景可能大多数人都经历过,或者说大多数现代科技用户都经历过。尽管如此,我们都接受,认为这是天经地义的事情。 + +事实真的如此吗? + +针对这个问题谷歌的安卓部门给出了一个全新的、开箱即用的答案: + +### Android Instant Apps (AIA) + +Android Instant Apps 声称可以从一开始就帮你摆脱这样的两难境地,让你简单地点击链接(见打开链接的示例)然后直接开始使用这个应用。 + +另一个真实生活场景的例子,如果你想停车但是没有停车码表的相应应用,有了 Instant Apps 在这种情况下就方便多了。 + +根据谷歌提供的信息,你可以简单地将你的手机和码表触碰,停车应用就会直接显示在你的屏幕上,并且准备就绪可以使用。 + +#### 它是怎么工作的? + +Instant Apps 和你已经熟悉的应用基本相同,只有一个不同——这些应用为了满足你完成某项任务的需要,只提供给你已经经过**裁剪和模块化**的应用必要部分。 + +例如,展开打开链接的场景作为例子,为了查看一个链接,你不需要拥有一个可以写、发送,做咖啡或其它特性的全功能应用。你所需要的全部就是查看功能——而这就是你所会获取到的部分。 + +这样应用就可以快速打开,让你可以完成你的目标任务。 + +![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/05/AIA-demo.jpg) + +*AIA 示例* + +![](https://4.bp.blogspot.com/-p5WOrD6wVy8/VzyIpsDqULI/AAAAAAAADD0/xbtQjurJZ6EEji_MPaY1sLK5wVkXSvxJgCKgB/s800/B%2526H%2B-%2BDevice%2B%2528Final%2529.gif) + +*B&H 图片(通过谷歌搜索)* + +![](https://2.bp.blogspot.com/-q5ApCzECuNA/VzyKa9l0t2I/AAAAAAAADEI/nYhhMClDl5Y3qL5-wiOb2J2QjtGWwbF2wCLcB/s800/BuzzFeed-Device-Install%2B%2528Final%2529.gif) + +*BuzzFeedVideo(通过一个共享链接)* + +![](https://2.bp.blogspot.com/-mVhKMMzhxms/VzyKg25ihBI/AAAAAAAADEM/dJN6_8H7qkwRyulCF7Yr2234-GGUXzC6ACLcB/s800/Park%2Band%2BPay%2B-%2BDevice%2Bwith%2BMeter%2B%2528Final%2529.gif) + +*停车与支付(例)(通过 NFC)* + + +听起来很棒,不是吗?但是其中还有很多技术方面的问题需要解决。 + +比如,从安全的观点来说:从理论上来说,如果任何应用都能在你的设备上运行,甚至你都不用安装它——你要怎么保证设备远离恶意软件攻击? 
+ +因此,为了消除这类威胁,谷歌还在这个项目上努力,目前只有少数合作伙伴,未来将逐步扩展。 + +谷歌最终计划在明年发布 AIA(Android Instant Apps)。 + +相关:[介绍 Android Instant Apps][1] + +-------------------------------------------------------------------------------- + +via: http://www.iwillfolo.com/androids-next-revolution-use-apps-even-without-installing-them/ + +作者:[iwillfolo][a] +译者:[alim0x](https://github.com/alim0x) +校对:[Caroline](https://github.com/carolinewuyan) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.iwillfolo.com +[1]: http://android-developers.blogspot.co.il/2016/05/android-instant-apps-evolving-apps.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+blogspot/hsDu+%28Android+Developers+Blog%29 diff --git a/translated/tech/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md b/published/201606/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md similarity index 62% rename from translated/tech/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md rename to published/201606/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md index cc212e7a4c..4f31e9ed63 100644 --- a/translated/tech/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md +++ b/published/201606/20160520 ORB - NEW GENERATION OF LINUX APPS ARE HERE.md @@ -5,19 +5,19 @@ ORB:新一代 Linux 应用 我们之前讨论过[在 Ubuntu 上离线安装应用][1]。我们现在要再次讨论它。 -[Orbital Apps][2] 给我们带来了新的软件包类型,**ORB**,它带有便携软件,交互式安装向导支持,以及离线使用的能力。 +[Orbital Apps][2] 给我们带来了一种新的软件包类型 **ORB**,它具有便携软件、交互式安装向导支持,以及离线使用的能力。 -便携软件很方便。主要是因为它们能够无需任何管理员权限直接运行,也能够带着所有的设置和数据随U盘存储。而交互式的安装向导也能让我们轻松地安装应用。 +便携软件很方便。主要是因为它们能够无需任何管理员权限直接运行,也能够带着所有的设置和数据随 U 盘存储。而交互式的安装向导也能让我们轻松地安装应用。 -### 开放可运行包 OPEN RUNNABLE BUNDLE (ORB) +### 开放式可运行的打包(OPEN RUNNABLE BUNDLE) (ORB) -ORB 是一个免费和开源的包格式,它和其它包格式在很多方面有所不同。ORB 的一些特性: +ORB 是一个自由开源的包格式,它和其它包格式在很多方面有所不同。ORB 的一些特性: -- **压缩**:所有的包经过压缩,使用 squashfs,体积最多减少 60%。 -- **便携模式**:如果一个便携 ORB 应用是从可移动设备运行的,它会把所有设置和数据存储在那之上。 +- **压缩**:所有的包都经过 squashfs 压缩,体积最多可减少 60%。 +- **便携模式**:如果一个便携 ORB 应用是在可移动设备上运行的,它会把所有设置和数据存储在那之上。 - **安全**:所有的 ORB 包使用 PGP/RSA 签名,通过 TLS 1.2 分发。 - **离线**:所有的依赖都打包进软件包,所以不再需要下载依赖。 -- **开放包**:ORB 包可以作为 ISO 镜像挂载。 +- **开放式软件包**:ORB 软件包可以作为 ISO 镜像挂载。 ### 种类 @@ -26,77 +26,69 @@ ORB 应用现在有两种类别: - 便携软件 - SuperDEB -#### 1. 便携 ORB 软件 +### 1. 便携 ORB 软件 -便携 ORB 软件可以立即运行而不需要任何的事先安装。那意味着它不需要管理员权限和依赖!你可以直接从 Orbital Apps 网站下载下来就能使用。 +便携 ORB 软件可以立即运行而不需要任何的事先安装。这意味着它不需要管理员权限,也没有依赖!你可以直接从 Orbital Apps 网站下载下来就能使用。 -并且由于它支持便携模式,你可以将它拷贝到U盘携带。它所有的设置和数据会和它一起存储在U盘。只需将U盘连接到任何运行 Ubuntu 16.04 的机器上就行了。 +并且由于它支持便携模式,你可以将它拷贝到 U 盘携带。它所有的设置和数据会和它一起存储在 U 盘。只需将 U 盘连接到任何运行 Ubuntu 16.04 的机器上就行了。 -##### 可用便携软件 +#### 可用便携软件 目前有超过 35 个软件以便携包的形式提供,包括一些十分流行的软件,比如:[Deluge][3],[Firefox][4],[GIMP][5],[Libreoffice][6],[uGet][7] 以及 [VLC][8]。 完整的可用包列表可以查阅 [便携 ORB 软件列表][9]。 -##### 使用便携软件 +#### 使用便携软件 按照以下步骤使用便携 ORB 软件: - 从 Orbital Apps 网站下载想要的软件包。 -- 将其移动到想要的位置(本地磁盘/U盘)。 +- 将其移动到想要的位置(本地磁盘/U 盘)。 - 打开存储 ORB 包的目录。 -![](http://itsfoss.com/wp-content/uploads/2016/05/using-portable-orb-app-1-1024x576.jpg) - + ![](http://itsfoss.com/wp-content/uploads/2016/05/using-portable-orb-app-1-1024x576.jpg) - 打开 ORB 包的属性。 -![](http://itsfoss.com/wp-content/uploads/2016/05/using-portable-orb-app-2.jpg) ->给 ORB 包添加运行权限 + ![给 ORB 包添加运行权限](http://itsfoss.com/wp-content/uploads/2016/05/using-portable-orb-app-2.jpg) - 在权限标签页添加运行权限。 - 双击打开它。 等待几秒,让它准备好运行。大功告成。 -#### 2. SuperDEB +### 2. 
SuperDEB 另一种类型的 ORB 软件是 SuperDEB。SuperDEB 很简单,交互式安装向导能够让软件安装过程顺利得多。如果你不喜欢从终端或软件中心安装软件,superDEB 就是你的菜。 最有趣的部分是你安装时不需要一个互联网连接,因为所有的依赖都由安装向导打包了。 -##### 可用的 SuperDEB +#### 可用的 SuperDEB 超过 60 款软件以 SuperDEB 的形式提供。其中一些流行的有:[Chromium][10],[Deluge][3],[Firefox][4],[GIMP][5],[Libreoffice][6],[uGet][7] 以及 [VLC][8]。 完整的可用 SuperDEB 列表,参阅 [SuperDEB 列表][11]。 -##### 使用 SuperDEB 安装向导 +#### 使用 SuperDEB 安装向导 - 从 Orbital Apps 网站下载需要的 SuperDEB。 - 像前面一样给它添加**运行权限**(属性 > 权限)。 - 双击 SuperDEB 安装向导并按下列说明操作: -![](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-1.png) ->点击 OK + ![点击 OK](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-1.png) -![](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-2.png) ->输入你的密码并继续 + ![输入你的密码并继续](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-2.png) -![](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-3.png) ->它会开始安装… + ![它会开始安装…](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-3.png) -![](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-4.png) ->一会儿他就完成了… + ![一会儿它就完成了…](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-4.png) - 完成安装之后,你就可以正常使用了。 ### ORB 软件兼容性 -从 Orbital Apps 可知,它们完全适配 Ubuntu 16.04 [64 bit]。 +从 Orbital Apps 可知,它们完全适配 Ubuntu 16.04 [64 位]。 ->阅读建议:[如何在 Ubuntu 获知你的是电脑 32 位还是 64 位的][12]。 - -至于其它发行版兼容性不受保证。但我们可以说,它在所有 Ubuntu 16.04 衍生版(UbuntuMATE,UbuntuGNOME,Lubuntu,Xubuntu 等)以及基于 Ubuntu 16.04 的发行版(比如即将到来的 Linux Mint 18)上都适用。我们现在还不清楚 Orbital Apps 是否有计划拓展它的支持到其它版本 Ubuntu 或 Linux 发行版上。 +至于其它发行版兼容性则不受保证。但我们可以说,它在所有 Ubuntu 16.04 衍生版(UbuntuMATE,UbuntuGNOME,Lubuntu,Xubuntu 等)以及基于 Ubuntu 16.04 的发行版(比如即将到来的 Linux Mint 18)上都适用。我们现在还不清楚 Orbital Apps 是否有计划拓展它的支持到其它版本 Ubuntu 或 Linux 发行版上。 如果你在你的系统上经常使用便携 ORB 软件,你可以考虑安装 ORB 启动器。它不是必需的,但是推荐安装它以获取更佳的体验。最简短的 ORB 启动器安装流程是打开终端输入以下命令: @@ -116,11 +108,11 @@ wget -O - https://www.orbital-apps.com/orb.sh | bash ---------------------------------- -via: http://itsfoss.com/orb-linux-apps/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 +via: http://itsfoss.com/orb-linux-apps/ 作者:[Munif Tanjim][a] 译者:[alim0x](https://github.com/alim0x) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201606/20160523 Driving cars into the future with Linux.md b/published/201606/20160523 Driving cars into the future with Linux.md new file mode 100644 index 0000000000..fad7ca99f2 --- /dev/null +++ b/published/201606/20160523 Driving cars into the future with Linux.md @@ -0,0 +1,106 @@ +与 Linux 一同驾车奔向未来 +=========================================== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/open-snow-car-osdc-lead.png?itok=IgYZ6mNY) + +当我驾车的时候并没有这么想过,但是我肯定喜欢一个配有这样系统的车子,它可以让我按下几个按钮就能与我的妻子、母亲以及孩子们语音通话。这样的系统也可以让我选择是否从云端、卫星广播、以及更传统的 AM/FM 收音机收听音乐流媒体。我也会得到最新的天气情况,以及它可以引导我的车载 GPS 找到抵达下一个目的地的最快路线。[车载娱乐系统(In-vehicle infotainment)][1],业界常称作 IVI,它已经普及出现在最新的汽车上了。 + +前段时间,我乘坐飞机跨越了数百英里,然后租了一辆汽车。令人愉快的是,我发现我租赁的汽车上配置了类似我自己车上同样的 IVI 技术。毫不犹豫地,我就通过蓝牙连接把我的联系人上传到了系统当中,然后打电话回家给我的家人,让他们知道我已经安全抵达了,然后我的主机会让他们知道我正在去往他们家的路上。 + +在最近的[新闻综述][2]中,Scott Nesbitt 引述了一篇文章,说福特汽车公司因其开源的[智能设备连接(Smart Device Link)][3](SDL)从竞争对手汽车制造商中得到了足够多的回报,这个中间件框架可以用于支持移动电话。 SDL 是 [GENIVI 联盟][4]的一个项目,这个联盟是一个非营利性组织,致力于建设支持开源车载娱乐系统的中间件。据 GENIVI 的执行董事 [Steven Crumb][5] 称,他们的[成员][6]有很多,包括戴姆勒集团、现代、沃尔沃、日产、本田等等 170 
个企业。 + +为了在同行业间保持竞争力,汽车生产企业需要一个中间设备系统,以支持现代消费者所使用的各种人机界面技术。无论您使用的是 Android、iOS 还是其他设备,汽车 OEM 厂商都希望自己的产品能够支持这些。此外,这些的 IVI 系统必须有足够适应能力以支持日益变化的移动技术。OEM 厂商希望提供有价值的服务,并可以在他们的 IVI 之上增加服务,以满足他们客户的各种需求。 + +### 步入 Linux 和开源软件 + +除了 GENIVI 在努力之外,[Linux 基金会][7]也赞助支持了[车载 Linux(Automotive Grade Linux)][8](AGL)工作组,这是一个致力于为汽车应用寻求开源解决方案的软件基金会。虽然 AGL 初期将侧重于 IVI 系统,但是未来他们希望发展到不同的方向,包括[远程信息处理(telematics)][9]、抬头显示器(HUD)及其他控制系统等等。 现在 AGL 已经有超过 50 名成员,包括捷豹、丰田、日产,并在其[最近发布的一篇公告][10]中宣称福特、马自达、三菱、和斯巴鲁也加入了。 + +为了了解更多信息,我们采访了这一新兴领域的两位领导人。具体来说,我们想知道 Linux 和开源软件是如何被使用的,并且它们是如何事实上改变了汽车行业的面貌。首先,我们将与 [Alison Chaiken][11] 谈谈,她是一位任职于 Peloton Technology 的软件工程师,也是一位在车载 Linux 、网络安全和信息透明化方面的专家。她曾任职于 [Alison Chaiken][11] 公司、诺基亚和斯坦福直线性加速器。然后我们和 [Steven Crumb][12] 进行了交谈,他是 GENIVI 执行董事,他之前从事于高性能计算环境(超级计算机和早期的云计算)的开源工作。他说,虽然他再不是一个程序员了,但是他乐于帮助企业解决在使用开源软件时的实际业务问题。 + +### 采访 Alison Chaiken (by [Deb Nicholson][13]) + +#### 你是如何开始对汽车软件领域感兴趣的? + +我曾在诺基亚从事于手机上的 [MeeGo][14] 产品,2009 年该项目被取消了。我想,我下一步怎么办?其时,我的一位同事正在从事于 [MeeGo-IVI][15],这是一个早期的车载 Linux 发行版。 “Linux 在汽车方面将有很大发展,” 我想,所以我就朝着这个方向努力。 + +#### 你能告诉我们你在这些日子里工作在哪些方面吗? + +我目前正在启动一个高级巡航控制系统的项目,它用在大型卡车上,使用实时 Linux 以提升安全性和燃油经济性。我喜欢在这方面的工作,因为没有人会反对提升货运的能力。 + +#### 近几年有几则汽车被黑的消息。开源代码方案可以帮助解决这个问题吗? + +我恰好针对这一话题准备了一次讲演,我会在南加州 Linux 2016 博览会上就 Linux 能否解决汽车上的安全问题做个讲演 ([讲演稿在此][16])。值得注意的是,GENIVI 和车载 Linux 项目已经公开了他们的代码,这两个项目可以通过 Git 提交补丁。(如果你有补丁的话),请给上游发送您的补丁!许多眼睛都盯着,bug 将无从遁形。 + +#### 执法机构和保险公司可以找到很多汽车上的数据的用途。他们获取这些信息很容易吗? + +好问题。IEEE-1609 专用短程通信标准(Dedicated Short Range Communication Standard)就是为了让汽车的 WiFi 消息可以安全、匿名地传递。不过,如果你从你的车上发推,那可能就有人能够跟踪到你。 + +#### 开发人员和公民个人可以做些什么,以在汽车技术进步的同时确保公民自由得到保护? + +电子前沿基金会( Electronic Frontier Foundation)(EFF)在关注汽车问题方面做了出色的工作,包括对哪些数据可以存储在汽车 “黑盒子”里通过官方渠道发表了看法,以及 DMCA 规定 1201 如何应用于汽车上。 + +#### 在未来几年,你觉得在汽车方面会发生哪些令人激动的发展? + +可以拯救生命的自适应巡航控制系统和防撞系统将取得长足发展。当它们大量进入汽车里面时,我相信这会使得(因车祸而导致的)死亡人数下降。如果这都不令人激动,我不知道还有什么会更令人激动。此外,像自动化停车辅助功能,将会使汽车更容易驾驶,减少汽车磕碰事故。 + +#### 我们需要做什么?人们怎样才能参与? + +车载 Linux 开发是以开源的方式开发,它运行在每个人都能买得起的廉价硬件上(如树莓派 2 和中等价位的 Renesas Porter 主板)。 GENIVI 汽车 Linux 中间件联盟通过 Git 开源了很多软件。此外,还有很酷的 [OSVehicle 开源硬件][17]汽车平台。 + +只需要不太多的预算,人们就可以参与到 Linux 软件和开放硬件中。如果您感兴趣,请加入我们在 Freenode 上的IRC #automotive 吧。 + +### 采访 Steven Crumb (by Don Watkins) + +#### GENIVI 在 IVI 方面做了哪些巨大贡献? + +GENIVI 率先通过使用自由开源软件填补了汽车行业的巨大空白,这包括 Linux、非安全关键性汽车软件(如车载娱乐系统(IVI))等。作为消费者,他们很期望在车辆上有和智能手机一样的功能,对这种支持 IVI 功能的软件的需求量成倍地增长。不过不断提升的软件数量也增加了建设 IVI 系统的成本,从而延缓了其上市时间。 + +GENIVI 使用开源软件和社区开发的模式为汽车制造商及其软件提供商节省了大量资金,从而显著地缩短了产品面市时间。我为 GENIVI 而感到激动,我们有幸引导了一场革命,在缓慢进步的汽车行业中,从高度结构化和专有的解决方案转换为以社区为基础的开发方式。我们还没有完全达成目标,但是我们很荣幸在这个可以带来实实在在好处的转型中成为其中的一份子。 + +#### 你们的主要成员怎样推动了 GENIVI 的发展方向? + +GENIVI 有很多成员和非成员致力于我们的工作。在许多开源项目中,任何公司都可以通过通过技术输出而发挥影响,包括简单地贡献代码、补丁、花点时间测试。前面说过,宝马、奔驰、现代汽车、捷豹路虎、标致雪铁龙、雷诺/日产和沃尔沃都是 GENIVI 积极的参与者和贡献者,其他的许多 OEM 厂商也在他们的汽车中采用了 IVI 解决方案,广泛地使用了 GENIVI 的软件。 + +#### 这些贡献的代码使用了什么许可证? + +GENIVI 采用了一些许可证,包括从(L)GPLv2 到 MPLv2 和 Apache2.0。我们的一些工具使用的是 Eclipse 许可证。我们有一个[公开许可策略][18],详细地说明了我们的许可证偏好。 + +#### 个人或团体如何参与其中?社区的参与对于这个项目迈向成功有多重要? 
+
+GENIVI 的开发完全是开放的([projects.genivi.org][19]),因此,欢迎任何有兴趣在汽车中使用开源软件的人参加。也就是说,公司可以通过成员的方式[加入该联盟][20],联盟以开放的方式资助其不断进行开发。GENIVI 的成员可以享受各种各样的便利,在过去六年中,已经有多达 140 家公司参与到这个全球性的社区当中。
+
+社区对于 GENIVI 是非常重要的,没有一个活跃的贡献者社区,我们不可能在这些年开发和维护了这么多有价值的软件。我们努力让参与到 GENIVI 更加简单,现在只要加入一个[邮件列表][21]就可以接触到各种软件项目中的人们。我们使用了许多开源项目采用的标准做法,并提供了高品质的工具和基础设施,以帮助开发人员宾至如归而富有成效。
+
+无论你是否熟悉汽车软件,都欢迎你加入我们的社区。人们已经对汽车改装了许多年,所以对于许多人来说,在汽车上修修改改是自然而然的做法。对于汽车来说,软件是一个新的领域,GENIVI 希望能为对汽车和开源软件有兴趣的人打开这扇门。
+
+-------------------------------
+via: https://opensource.com/business/16/5/interview-alison-chaiken-steven-crumb
+
+作者:[Don Watkins][a]
+译者:[erlinux](https://github.com/erlinux)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[1]: https://en.wikipedia.org/wiki/In_car_entertainment
+[2]: https://opensource.com/life/16/1/weekly-news-jan-9
+[3]: http://projects.genivi.org/smartdevicelink/home
+[4]: http://www.genivi.org/
+[5]: https://www.linkedin.com/in/stevecrumb
+[6]: http://www.genivi.org/genivi-members
+[7]: http://www.linuxfoundation.org/
+[8]: https://www.automotivelinux.org/
+[9]: https://en.wikipedia.org/wiki/Telematics
+[10]: https://www.automotivelinux.org/news/announcement/2016/01/ford-mazda-mitsubishi-motors-and-subaru-join-linux-foundation-and
+[11]: https://www.linkedin.com/in/alison-chaiken-3ba456b3
+[12]: https://www.linkedin.com/in/stevecrumb
+[13]: https://opensource.com/users/eximious
+[14]: https://en.wikipedia.org/wiki/MeeGo
+[15]: http://webinos.org/deliverable-d026-target-platform-requirements-and-ipr/automotive/
+[16]: http://she-devel.com/Chaiken_automotive_cybersecurity.pdf
+[17]: https://www.osvehicle.com/
+[18]: http://projects.genivi.org/how
+[19]: http://projects.genivi.org/
+[20]: http://genivi.org/join
+[21]: http://lists.genivi.org/mailman/listinfo/genivi-projects
diff --git a/published/201606/20160524 Test Fedora 24 Beta in an OpenStack cloud.md b/published/201606/20160524 Test Fedora 24 Beta in an OpenStack cloud.md
new file mode 100644
index 0000000000..68c147c5e8
--- /dev/null
+++ b/published/201606/20160524 Test Fedora 24 Beta in an OpenStack cloud.md
@@ -0,0 +1,77 @@
+在 OpenStack 云中测试 Fedora 24 Beta
+===========================================
+
+![](https://major.io/wp-content/uploads/2012/01/fedorainfinity.png)
+
+虽然离 [Fedora 24][1] 还有几周,你现在就可以测试 Fedora 24 Beta 了。这是一个[窥探新特性][2]的好机会,并且可以帮助他们找出仍需要修复的 bug。
+
+[Fedora Cloud][3] 镜像可以从你常用的[本地镜像][4]或者 [Fedora 的服务器][5]中下载。本篇文章我将向你展示如何将这个镜像导入 OpenStack 环境并且测试 Fedora 24 Beta。
+
+最后说一下:这还是 beta 软件。目前对我来说是可靠的,但是你的体验可能会不同。我建议你等到正式版发布再在上面部署关键的应用。
+
+### 导入镜像
+
+旧版的 glance 客户端(版本1)允许你在 OpenStack 环境中导入一个可通过 URL 访问的镜像。由于我 OpenStack 云的连接速度(1 Gbps)比我家 (大约 20 Mbps 上传速度)快,这个功能对我很有用。然而,从 URL 导入的功能[在 glance v2 中被移除了][6]。[OpenStackClient][7] 也不支持这个功能。
+
+现在有两个选择:
+
+- 安装旧版的 glance 客户端
+- 使用 Horizon (网页面板)
+
+获取旧版本的 glance 是有挑战性的。OpenStack liberty 版本的需求文件[对 glance 客户端没有最高版本上限][8],并且很难找到让旧版客户端工作的依赖文件。
+
+让我们使用 Horizon,这就是写这篇文章的原因。
+
+### 在 Horizon 中添加一个镜像
+
+登录 Horizon 面板,点击 Compute->Image。点击页面右上方的“+ Create Image”,一个新的窗口会显示出来。并且窗口中有这些信息:
+
+- **Name**: Fedora 24 Cloud Beta
+- **Image Source**: 镜像位置
+- **Image Location**: http://mirrors.kernel.org/fedora/releases/test/24_Beta/CloudImages/x86_64/images/Fedora-Cloud-Base-24_Beta-1.6.x86_64.qcow2
+- **Format**: QCOW2 – QEMU 模拟器
+- **Copy Data**: 确保勾选了
+
+完成后,你会看到这个:
+
+![](https://major.io/wp-content/uploads/2016/05/horizon_image.png)
+
+点击“创建镜像(Create Image)”,接着镜像列表会显示一段时间的 Saving 信息。一旦切换到 
Active,你就可以构建一个实例了。 + +### 构建实例 + +既然我们在使用 Horizon,我们可以在此完成构建过程。 + +在镜像列表页面,找出我们上传的镜像并且点击右边的启动实例(Launch Instance)。一个新的窗口会显示出来。镜像名(Image Name)下拉框中应该已经选择了 Fedora 24 Beta 的镜像。在这里,选择一个实例名,选择一个安全组和密钥对(在 Access & Security 标签中)和网络(在 Networking 标签)。确保选择有足够容量的存储(m1.tiny 不太够)。 + +点击启动(Launch)并且等待实例启动。 + +一旦实例构建完成,你能以用户 fedora 通过 ssh 连接到实例。如果你的[安全组允许连接][9]并且你的密钥对正确配置了,你应该进入到 Fedora 24 Beta 中了! + +还不确定接下来做什么?有下面几点建议: + +- 升级所有的包并且重启(确保你测试的是最新的更新) +- 安装一些熟悉的应用并且验证它们可以正常工作 +- 测试你已有的自动化或者配置管理工具 +- 打开 bug 报告 + +-------------------------------------------------------------------------------- + +via: https://major.io/2016/05/24/test-fedora-24-beta-openstack-cloud/ + +作者:[major.io][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://major.io/about-the-racker-hacker/ +[1]: https://fedoraproject.org/wiki/Releases/24/Schedule +[2]: https://fedoraproject.org/wiki/Releases/24/ChangeSet +[3]: https://getfedora.org/en/cloud/ +[4]: https://admin.fedoraproject.org/mirrormanager/mirrors/Fedora/24/x86_64 +[5]: https://getfedora.org/en/cloud/download/ +[6]: https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability +[7]: http://docs.openstack.org/developer/python-openstackclient/ +[8]: https://github.com/openstack/requirements/blob/stable/liberty/global-requirements.txt#L159 +[9]: https://major.io/2016/05/16/troubleshooting-openstack-network-connectivity/ diff --git a/translated/tech/20160526 BEST LINUX PHOTO MANAGEMENT SOFTWARE IN 2016.md b/published/201606/20160526 BEST LINUX PHOTO MANAGEMENT SOFTWARE IN 2016.md similarity index 57% rename from translated/tech/20160526 BEST LINUX PHOTO MANAGEMENT SOFTWARE IN 2016.md rename to published/201606/20160526 BEST LINUX PHOTO MANAGEMENT SOFTWARE IN 2016.md index 24cb248921..57f7b5a314 100644 --- a/translated/tech/20160526 BEST LINUX PHOTO MANAGEMENT SOFTWARE IN 2016.md +++ b/published/201606/20160526 BEST LINUX PHOTO MANAGEMENT SOFTWARE IN 2016.md @@ -1,4 +1,4 @@ -2016年最佳 Linux 图像管理软件 +2016 年最佳 Linux 图像管理软件 ============================================= ![](http://itsfoss.com/wp-content/uploads/2016/05/Best-Linux-Photo-Management-Software.jpg) @@ -11,119 +11,111 @@ 这个列表和我们先前的 [最佳图像程序应用][1] 有些差别,上次我们介绍了图像编辑软件,绘图软件等,而这次的介绍主要集中在图像管理软件上。 -好,下面我们开始介绍。我会详细说明在 Ubuntu 下的安装命令以及衍生的命令,我们只需要打开终端运行这些命令。 +好,下面我们开始介绍。我会详细说明在 Ubuntu 及其衍生版下的安装命令,我们只需要打开终端运行这些命令。 -### [GTHUMB](https://wiki.gnome.org/Apps/gthumb) +### [gThumb](https://wiki.gnome.org/Apps/gthumb) ![](http://itsfoss.com/wp-content/uploads/2016/05/gThumb-1-1024x540.jpg) ->gThumb 图像编辑器 -gThumb 是在 GNOME 桌面环境下的一个轻量级的图像管理应用,它涵盖了基本图像管理功能,编辑图片以及更加高级的操作,gThumb 主要功能如下: +*gThumb 图像编辑器* -- 图片查看:支持所有主流的图片格式(包括gif)和元数据(EXIF, XMP 等)。 +gThumb 是在 GNOME 桌面环境下的一个轻量级的图像管理应用,它涵盖了基本图像管理功能,比如编辑图片以及更加高级的操作等,gThumb 主要功能如下: -- 图片浏览:所有基础的浏览操作(缩略图,移动,复制,删除等)以及书签支持。 +- 图片查看:支持所有主流的图片格式(包括 gif)和元数据(EXIF、 XMP 等)。 +- 图片浏览:所有基础的浏览操作(缩略图、移动、复制、删除等)以及书签支持。 +- 图片管理:使用标签、目录和库来组织图片。从数码相机导入图片,集成了网络相册(Picasa,Flickr,Facebook等)。 +- 图片编辑:基本图像编辑操作、滤镜、格式转换等。 -- 图片管理:使用标签操作图片,目录和图片库。从数码相机,网络相册(Picasa,Flickr,Facebook等)整合,导入图片。 +更多功能请参考官方 [gThumb 功能][2] 列表。如果你使用的是 GNOME 或者基于 GNOME 的桌面环境(如 MATE),那么你一定要试用一下。 -- 图片编辑:基本图像编辑操作,滤镜,格式转换等。 - -- 更多功能请参考官方 [gThumb功能][2] 列表。如果你使用的是 GNOME 或者基于 GNOME 的桌面环境(如 MATE),那么你一定要试用一下。 - -#### GTHUMB 安装 +#### gThumb 安装 ``` sudo apt-get install gthumb ``` -### [DIGIKAM][3] +### [digiKam][3] ![](http://itsfoss.com/wp-content/uploads/2016/05/digiKam-1-1024x540.png) ->digiKam + +*digiKam* digiKam 主要为 KDE 
而设计,在其他桌面环境下也可以使用。它有很多很好的图像界面功能,主要功能如下所示: -- 图片管理:相册,子相册,标签,评论,元数据,排序支持。 - -- 图片导入:支持从数码相机,USB设备,网络相册(包括 Picasa 和 Facebook)导入,以及另外一些功能。 - -- 图片输出:支持输出至很多网络在线平台,以及各式转换。 - +- 图片管理:相册、子相册、标签、评论、元数据、排序支持。 +- 图片导入:支持从数码相机、USB设备、网络相册(包括 Picasa 和 Facebook)导入,以及另外一些功能。 +- 图片输出:支持输出至很多网络在线平台,以及格式转换。 - 图片编辑:支持很多图像编辑的操作。 -digiKam 是众多优秀图像管理软件之一。 +毫无疑问,digiKam 如果不是最好的图像管理软件,也是之一。 -#### DIGIKAM 安装 +#### digiKam 安装 ``` sudo apt-get install digikam ``` -### [SHOTWELL][4] +### [Shotwell][4] ![](http://itsfoss.com/wp-content/uploads/2016/05/Shotwell-1-1024x540.png) ->Shotwell + +*Shotwell* Shotwell 图像管理也是为 GNOME 桌面环境设计,虽然功能不及 gThumb 多,但满足了基本需求。主要功能如下: - 从磁盘或数码相机导入图片。 - -- 项目,标签和文件夹管理。 - +- 事件、标签和基于文件夹的图片管理方式。 - 基本图片编辑功能和格式转换。 - - 支持上传至网络平台(Facebook,Flickr,Tumblr 等)。 如果你想要一款功能相对简单的应用,你可以尝试一下这个。 -#### SHOTWELL 安装 +#### Shotwell 安装 ``` sudo apt-get install shotwell ``` -### [KPHOTOALBUM][5] +### [KPhotoAlbum][5] ![](http://itsfoss.com/wp-content/uploads/2016/05/KPhotoAlbum-1-1024x540.png) ->KPhotoAlbum -KPhotoAlbum 是一款在 KDE 桌面环境下的图像管理应用。它有一些独特的功能:分类和基于时间浏览。你可以基于人物,地点,时间分类;另外在用户图形界面底部会显示时间栏。 +*KPhotoAlbum* + +KPhotoAlbum 是一款在 KDE 桌面环境下的图像管理应用。它有一些独特的功能:分类和基于时间浏览。你可以基于人物、地点、时间分类;另外在用户图形界面底部会显示时间栏。 KPhotoAlbum 有很多图像管理和编辑功能,主要功能包括: -- 高级图片操作(目录,子目录,标签,元数据,注释等)。 - +- 高级图片操作(分类、子分类、标签、元数据、注释等等)。 - 图片导入导出功能(包括主流图片分享平台)。 - - 众多编辑功能(包括批量处理)。 -这些高级的功能有它们的缺点,就是用户需要手工操作。但如果你是KDE爱好者,这是个好的选择。它完美适用 KDE,但是你也可以在非 KDE 桌面环境下使用 KPhotoAlbum。 +这些高级的功能有一些缺点,就是用户大多需要手工操作。但如果你是 KDE 爱好者,这是个好的选择。它完美适用 KDE,但是你也可以在非 KDE 桌面环境下使用 KPhotoAlbum。 -#### KPHOTOALBUM 安装 +#### KPhotoAlbum 安装 ``` sudo apt-get install kphotoalbum ``` -### [DARKTABLE][7] +### [Darktable][7] ![](http://itsfoss.com/wp-content/uploads/2016/05/darktable-1-1024x540.png) ->Darktable -Darktable 相较于图像管理更偏向于图像编辑。Darktable 有良好的用户图形界面,对桌面环境没有特殊的要求,以及图像编辑功能。它的基本功能如下: +*Darktable* + +Darktable 与其说是图像管理工具,不如说是图像编辑软件。Darktable 有良好的用户图形界面,对桌面环境没有特殊的要求,这也不会影响到它的图像编辑功能。它的基本功能如下: - 基本图片管理。 - - 众多高级的图片编辑功能。 - - 支持导出至 Picasa 和 Flickr 和格式转换。 如果你喜欢照片编辑和润色,Darktable 是个好的选择。 -> 推荐阅读:[怎样在Ubuntu下通过PPA安装Darktable 2.0][8] +> 推荐阅读:[怎样在 Ubuntu 下通过 PPA 安装 Darktable 2.0][8] -#### DARKTABLE 安装 +#### Darktable 安装 ``` sudo add-apt-repository ppa:pmjdebruijn/darktable-release @@ -133,7 +125,7 @@ sudo apt-get install darktable ### 其它 -如果你想要功能简单的应用,比如从便携设备(相机,手机,便携设备等)中导入照片并存入磁盘,使用 [Rapid Photo Downloader][9],它很适合从便携设备中导入和备份图片,而且安装配置过程简单。 +如果你想要功能简单的应用,比如从便携设备(相机、手机、便携设备等)中导入照片并存入磁盘,毫无疑问该使用 [Rapid Photo Downloader][9],它很适合从便携设备中导入和备份图片,而且安装配置过程简单。 在 Ubuntu 上安装 Rapid Photo Downloade,打开终端输入命令: @@ -142,18 +134,19 @@ sudo apt-get install rapid-photo-downloader ``` 如果你想尝试更多的选择: -- [GNOME Photos][10] (GNOME桌面环境下的图像查看器) -- [Gwenview][11] (KDE桌面环境下的图像查看器) + +- [GNOME Photos][10] (GNOME 桌面环境下的图像查看器) +- [Gwenview][11] (KDE 桌面环境下的图像查看器) - [Picty][12] (开源图像管理器) -那么,你正在使用,或者打算使用其中一款应用吗?你有其它更好的推荐吗?你有最喜欢的 Linux 图像管理软件吗?分享你的观点。 +那么,你正在使用,或者打算使用其中一款应用吗?在 Ubuntu 或其它 Linux 上你有其它更好的推荐吗?你有最喜欢的 Linux 图像管理软件吗?分享你的观点给我们。 ---------- -via: http://itsfoss.com/linux-photo-management-software/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 +via: http://itsfoss.com/linux-photo-management-software/ 作者:[Munif Tanjim][a] 译者:[sarishinohara](https://github.com/sarishinohara) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201606/20160529 LAMP Stack Installation Guide on Ubuntu Server 16.04 LTS.md b/published/201606/20160529 LAMP Stack Installation 
Guide on Ubuntu Server 16.04 LTS.md
new file mode 100644
index 0000000000..c74b0eba1b
--- /dev/null
+++ b/published/201606/20160529 LAMP Stack Installation Guide on Ubuntu Server 16.04 LTS.md
@@ -0,0 +1,159 @@
+在 Ubuntu Server 16.04 LTS 上安装 LAMP
+=========================================================
+
+LAMP 方案是一系列自由和开源软件的集合,包含了 **Linux**、Web 服务器 (**Apache**)、 数据库服务器 (**MySQL / MariaDB**) 和 **PHP** (脚本语言)。LAMP 是那些需要安装和构建动态网页应用的基础平台,比如 WordPress、Joomla、OpenCart 和 Drupal。
+
+在这篇文章中,我将描述如何在 Ubuntu Server 16.04 LTS 上安装 LAMP。众所周知,Ubuntu 是一个基于 Linux 的操作系统,因此它构成了 LAMP 的第一个部分。在接下来的操作中,我将默认你已经安装了 Ubuntu Server 16.04。
+
+### Apache2 web 服务器的安装 :
+
+在 Ubuntu Linux 中,web 服务器是 Apache2,我们可以利用下面的命令来安装它:
+
+```
+linuxtechi@ubuntu:~$ sudo apt update
+linuxtechi@ubuntu:~$ sudo apt install apache2 -y
+```
+
+当安装 Apache2 包之后,Apache2 相关的服务是启用的,并在重启后自动运行。在某些情况下,如果你的 Apache2 服务并没有自动运行和启用,你可以利用如下命令来启动和启用它。
+
+```
+linuxtechi@ubuntu:~$ sudo systemctl start apache2.service
+linuxtechi@ubuntu:~$ sudo systemctl enable apache2.service
+linuxtechi@ubuntu:~$ sudo systemctl status apache2.service
+```
+
+如果你开启了 Ubuntu 的防火墙(ufw),那么你可以使用如下的命令来解除 web 服务器的端口(80 和 443)限制
+
+```
+linuxtechi@ubuntu:~$ sudo ufw status
+Status: active
+linuxtechi@ubuntu:~$ sudo ufw allow in 'Apache Full'
+Rule added
+Rule added (v6)
+linuxtechi@ubuntu:~$
+```
+
+### 现在开始访问你的 web 服务器 :
+
+打开浏览器并输入服务器的 IP 地址或者主机名(http://IP\_Address\_OR\_Host\_Name),在我的例子中我的服务器 IP 是‘192.168.1.13’
+
+![](http://www.linuxtechi.com/wp-content/uploads/2016/05/Apache2-Ubuntu-server-16.04-1024x955.jpg)
+
+### 数据库服务器的安装 (MySQL Server 5.7) :
+
+MySQL 和 MariaDB 都是 Ubuntu 16.04 中的数据库服务器。 MySQL Server 和 MariaDB Server 的安装包都可以在 Ubuntu 的默认软件源中找到,我们可以选择其中的一个来安装。通过下面的命令来在终端中安装 MySQL 服务器。
+
+```
+linuxtechi@ubuntu:~$ sudo apt install mysql-server mysql-client
+```
+
+在安装过程中,它会要求你设置 mysql 服务器 root 帐户的密码。
+
+![](http://www.linuxtechi.com/wp-content/uploads/2016/05/Enter-root-password-mysql-server-ubuntu-16-04.jpg)
+
+确认 root 帐户的密码,并点击确定。
+
+![](http://www.linuxtechi.com/wp-content/uploads/2016/05/confirm-root-password-mysql-server-ubuntu-16-04.jpg)
+
+MySQL 服务器的安装到此已经结束了, MySQL 服务会自动启动并启用。我们可以通过如下的命令来校验 MySQL 服务的状态。
+
+```
+linuxtechi@ubuntu:~$ sudo systemctl status mysql.service
+```
+
+### MariaDB Server 的安装 :
+
+在终端中使用如下的命令来安装 Mariadb 10.0 服务器。
+
+```
+linuxtechi@ubuntu:~$ sudo apt install mariadb-server
+```
+
+运行如下的命令来设置 MariaDB root 帐户的密码,还可以用来关闭某些选项,比如关闭远程登录功能。
+
+```
+linuxtechi@ubuntu:~$ sudo mysql_secure_installation
+```
+
+### PHP 脚本语言的安装:
+
+PHP 7 已经存在于 Ubuntu 的软件源中了,在终端中执行如下的命令来安装 PHP 7:
+
+```
+linuxtechi@ubuntu:~$ sudo apt install php7.0-mysql php7.0-curl php7.0-json php7.0-cgi php7.0 libapache2-mod-php7.0
+```
+
+创建一个简单的 php 页面,并且将它移动到 apache 的文档根目录下 (/var/www/html)
+
+```
+linuxtechi@ubuntu:~$ vi samplepage.php
+<?php
+// 示例页面内容(假设值,原文此处的页面代码在转载中缺失):输出 PHP 配置信息
+phpinfo();
+?>
+```
+
+在 vi 中编辑之后,保存并退出该文件。
+
+```
+linuxtechi@ubuntu:~$ sudo mv samplepage.php /var/www/html/
+```
+
+现在你可以从 web 浏览器中访问这个页面, 输入 : “http://IP\_Address\_OR\_Host\_Name/samplepage.php” ,你可以看到如下页面。
+
+![](http://www.linuxtechi.com/wp-content/uploads/2016/05/Sample-PHP-Page-Ubuntu-Server-16-04.jpg)
+
+以上的页面向我们展示了 PHP 已经完全安装成功了。
+
+### phpMyAdmin 的安装:
+
+phpMyAdmin 可以让我们通过它的 web 界面来执行所有与数据库管理和其他数据库操作相关的任务,这个安装包已经存在于 Ubuntu 的软件源中。
+
+利用如下的命令来在 Ubuntu server 16.04 LTS 中安装 phpMyAdmin。
+
+```
+linuxtechi@ubuntu:~$ sudo apt install php-mbstring php7.0-mbstring php-gettext
+linuxtechi@ubuntu:~$ sudo systemctl restart apache2.service
+linuxtechi@ubuntu:~$ sudo apt install phpmyadmin
+```
+
+在以下的安装过程中,它会提示我们选择 phpMyAdmin 运行的目标服务器。
+
+选择 Apache2 并点击确定。
+
+![](http://www.linuxtechi.com/wp-content/uploads/2016/05/Web-Server-for-phpMyAdmin-Ubuntu-Server-16-04.jpg)
+
+点击确定来配置 phpMyAdmin 管理的数据库。
+
+![](http://www.linuxtechi.com/wp-content/uploads/2016/05/configure-database-for-phpmyadmin-ubuntu-server-16-04.jpg)
+
+指定 phpMyAdmin 向数据库服务器注册时所用的密码。
+
+![](http://www.linuxtechi.com/wp-content/uploads/2016/05/Select-Password-for-phpMyadmin-ubuntu-16-04-1024x433.jpg)
+
+确认 phpMyAdmin 所需的密码,并点击确认。
+
+![](http://www.linuxtechi.com/wp-content/uploads/2016/05/confirm-password-for-phpmyadmin-ubuntu-server-16-04.jpg)
+
+现在可以开始尝试访问 phpMyAdmin,打开浏览器并输入 : “http://Server\_IP\_OR\_Host\_Name/phpmyadmin”
+
+使用我们安装时设置的 root 帐户和密码。
+
+![](http://www.linuxtechi.com/wp-content/uploads/2016/05/phpMyAdmin-Ubuntu-Server-16-04-1024x557.jpg)
+
+当我们点击“Go”的时候,将会重定向到如下所示的 ‘phpMyAdmin’ web 界面。
+
+![](http://www.linuxtechi.com/wp-content/uploads/2016/05/phpMyAdmin-portal-overview-ubuntu-server-16-04-1024x557.jpg)
+
+到现在,LAMP 方案已经被成功安装并可以使用了,欢迎分享你的反馈和评论。
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxtechi.com/lamp-stack-installation-on-ubuntu-server-16-04/
+
+作者:[Pradeep Kumar][a]
+译者:[陆建波](https://github.com/lujianbo)
+校对:[Caroline](https://github.com/carolinewuyan)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.linuxtechi.com/author/pradeep/
diff --git a/published/201606/20160601 Apps to Snaps.md b/published/201606/20160601 Apps to Snaps.md
new file mode 100644
index 0000000000..458960f555
--- /dev/null
+++ b/published/201606/20160601 Apps to Snaps.md
@@ -0,0 +1,61 @@
+将 Linux 软件打包成 Snap 软件包
+================
+
+![](https://insights.ubuntu.com/wp-content/uploads/27eb/app-snap.png)
+
+在 Linux 分发应用不总是那么容易。有各种不同的包格式、基础系统、可用库,随着发行版的一次次发布,所有的这些都让人头疼。然而,现在我们有了更简单的东西:Snap。
+
+Snap 是开发者打包他们应用的新途径,它相对于传统包格式,如 .deb,.rpm 等带来了许多优点。Snap 很安全,它们借助类似 AppArmor 的技术实现彼此之间以及与宿主系统的隔离;它们跨平台且自包含,让开发者可以准确地将应用所需要的依赖打包到一起。沙盒隔离也加强了安全,并允许应用和整个基于 snap 的系统,在出现问题的时候可以回滚。Snap 确实是 Linux 应用打包的未来。
+
+创建一个 snap 包并不困难。首先,你需要一个 snap 基础运行环境,能够让你的桌面环境认识并运行 snap 软件包,这个工具叫做 snapd ,默认内置于所有 Ubuntu 16.04 系统中。接着你需要创建 snap 的工具 Snapcraft,它可以通过一个简单的命令安装:
+
+```
+$ sudo apt-get install snapcraft
+```
+
+这个环境安装好了之后就可以 snap 起来了。
+
+Snap 使用一个特定的 YAML 格式的文件 snapcraft.yaml,它定义了应用是如何打包的以及它需要的依赖。用一个简单的应用来演示一下,下面的 YAML 文件是个如何 snap 一个 moon-buggy 游戏的实际例子,该游戏在 Ubuntu 源中提供。
+
+```
+name: moon-buggy
+version: 1.0.51.11
+summary: Drive a car across the moon
+description: |
+  A simple command-line game where you drive a buggy on the moon
+apps:
+  play:
+    command: usr/games/moon-buggy
+parts:
+  moon-buggy:
+    plugin: nil
+    stage-packages: [moon-buggy]
+    snap:
+      - usr/games/moon-buggy
+```
+
+上面的代码出现了几个新概念。第一部分是关于如何让你的应用可以在商店找到的信息,设置软件包的元数据名称、版本号、摘要、以及描述。apps 部分实现了 play 命令,指向了 moon-buggy 可执行文件位置。parts 部分告诉 snapcraft 用来构建应用所需要的插件以及依赖的包。在这个简单的例子中我们需要的所有东西就是来自 Ubuntu 源中的 moon-buggy 应用本身,snapcraft 负责剩下的工作。
+
+在你的 snapcraft.yaml 所在目录下运行 snapcraft ,它会创建 moon-buggy_1.0.51.11_amd64.snap 包,可以通过以下命令来安装它:
+
+```
+$ snap install moon-buggy_1.0.51.11_amd64.snap
+```
+
+想了解更复杂一点的 snap 打包操作,比如基于 Electron 的 Simplenote 可以[看这里][1],在线教程在[这里][2],相应的代码在 [GitHub][3]。更多的例子可以在 Ubuntu 开发者[站点][4]找到。
+
+--------------------------------------------------------------------------------
+
+via: https://insights.ubuntu.com/2016/06/01/apps-to-snaps/
+
+作者:[Jamie][a]
+译者:[alim0x](https://github.com/alim0x)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: 
https://insights.ubuntu.com/author/jamiebennett/ +[1]: http://www.simplenote.com/ +[2]: http://www.linuxuk.org/post/20160518_snapping_electron_based_applications_simplenote/ +[3]: https://github.com/jamiedbennett/snaps/tree/master/simplenote +[4]: https://developer.ubuntu.com/en/desktop/get-started/ diff --git a/published/201606/20160601 scp command in Linux.md b/published/201606/20160601 scp command in Linux.md new file mode 100644 index 0000000000..0a299cbed0 --- /dev/null +++ b/published/201606/20160601 scp command in Linux.md @@ -0,0 +1,160 @@ +在 Linux 下使用 scp 命令 +======================= + +![](https://www.unixmen.com/wp-content/uploads/2016/05/SCP-LOGO-1.jpg) + +scp 是安全拷贝协议 (Secure Copy Protocol)的缩写,和众多 Linux/Unix 使用者所熟知的拷贝(cp)命令一样。scp 的使用方式类似于 cp 命令,cp 命令将一个文件或文件夹从本地操作系统的一个位置(源)拷贝到目标位置(目的),而 scp 用来将文件或文件夹从网络上的一个主机拷贝到另一个主机当中去。 + +scp 命令的使用方法如下所示,在这个例子中,我将一个叫 “importantfile” 的文件从本机(10.10.16.147)拷贝到远程主机(10.0.0.6)中。在这个命令里,你也可以使用主机名字来替代IP地址。 + +``` +[root@localhost ~]# scp importantfile admin@10.0.0.6:/home/admin/ +The authenticity of host '10.0.0.6 (10.0.0.6)' can't be established. +RSA key fingerprint is SHA256:LqBzkeGa6K9BfWWKgcKlQoE0u+gjorX0lPLx5YftX1Y. +RSA key fingerprint is MD5:ed:44:42:59:3e:dd:4c:12:43:4a:89:b1:5d:bd:9e:20. +Are you sure you want to continue connecting (yes/no)? yes +Warning: Permanently added '10.0.0.6' (RSA) to the list of known hosts. +admin@10.0.0.6's password: +importantfile 100% 0 0.0KB/s 00:00 +[root@localhost ~]# +``` + +类似的,如果你想从一个远程主机中取得文件,你可以利用如下的 scp 命令。 + +``` +[root@localhost ~]# scp root@10.10.16.137:/root/importantfile /home/admin/ +The authenticity of host '10.10.16.137 (10.10.16.137)' can't be established. +RSA key fingerprint is b0:b0:a3:c3:2e:94:13:0c:29:2e:ba:0b:d3:d6:12:8f. +Are you sure you want to continue connecting (yes/no)? yes +Warning: Permanently added '10.10.16.137' (RSA) to the list of known hosts. +root@10.10.16.137's password: +importantfile 100% 0 0.0KB/s 00:00 +[root@localhost ~]# +``` + +你也可以像 cp 命令一样,在 scp 命令中使用不同的选项,scp 的 man 帮助详细地阐述了不同选项的用法和用处。 + +**示例输出** + +![](https://www.unixmen.com/wp-content/uploads/2016/05/scp.jpg) + + +scp 可选参数如下所示: + + -B 采取批量模式(避免询问密码或口令) + -C 启用压缩。通过指明 -C 参数来开启压缩模式。 + -c 加密方式 + 选择在传输过程中用来加密的加密方式 这个选项会被直接传递到 ssh(1)。 + -F ssh 配置 + 给 ssh 指定一个用来替代默认配置的配置文件。这个选项会被直接传递到 ssh(1)。 + -l 限速 + 限制命令使用的带宽,默认单位是 Kbit/s。 + -P 端口 + 指定需要的连接的远程主机的端口。 + 注意,这个选项使用的是一个大写的“P”,因为小写的“-p”已经用来保留目标文件的时间和模式相关信息。(LCTT 译注:ssh 命令中使用小写的“-p”来指定目标端口。) + -p 保留文件原来的修改时间,访问时间以及权限模式。 + -q 静默模式:不显示来自 ssh(1) 命令的进度信息,警告和诊断信息。 + -r 递归拷贝整个目录。 + 注意,scp 命令在树形遍历的时候同样会跟随符号连接,复制所连接的文件。 + -v 详细模式。scp 和 ssh(1) 将会打印出处理过程中的调试信息。这可以帮助你调试连接、认证和配置方面的问题。 + +**详细模式** + +利用 scp 命令的 -v 选项,你可以得到认证、调试等的相关细节信息。 + +![](http://www.unixmen.com/wp-content/uploads/2016/05/scp-v.jpg) + +当我们使用 -v 选项的时候,一个简单的输出如下所示: + +``` +[root@localhost ~]# scp -v abc.txt admin@10.0.0.6:/home/admin +Executing: program /usr/bin/ssh host 10.0.0.6, user admin, +command scp -v -t/home/admin +OpenSSH_7.1p1, OpenSSL 1.0.2d-fips 9 Jul 2015 +debug1: Reading configuration data /etc/ssh/ssh_config +debug1: /etc/ssh/ssh_config line 56: Applying options for * +debug1: Connecting to 10.0.0.6 [10.0.0.6] port 22. +debug1: Connection established. 
+debug1: Server host key: ssh-rsa SHA256:LqBzkeGa6K9BfWWKgcKlQoE0u+gjorX0lPLx5YftX1Y
+debug1: Next authentication method: publickey
+debug1: Trying private key: /root/.ssh/id_rsa
+debug1: Trying private key: /root/.ssh/id_dsa
+debug1: Trying private key: /root/.ssh/id_ecdsa
+debug1: Trying private key: /root/.ssh/id_ed25519
+debug1: Next authentication method: password
+admin@10.0.0.6's password:
+debug1: Authentication succeeded (password).
+Authenticated to 10.0.0.6 ([10.0.0.6]:22).
+debug1: channel 0: new [client-session]
+debug1: Requesting no-more-sessions@openssh.com
+debug1: Entering interactive session.
+debug1: Sending environment.
+debug1: Sending command: scp -v -t /home/admin
+Sending file modes: C0644 174 abc.txt
+Sink: C0644 174 abc.txt
+abc.txt 100% 174 0.2KB/s 00:00
+Transferred: sent 3024, received 2584 bytes, in 0.3 seconds
+Bytes per second: sent 9863.3, received 8428.1
+debug1: Exit status 0
+[root@localhost ~]#
+```
+
+当我们需要拷贝一个目录或者文件夹的时候,我们可以使用 -r 选项,它会递归拷贝整个目录。
+
+![](http://www.unixmen.com/wp-content/uploads/2016/05/scp-with-r.jpg)
+
+**静默模式**
+
+如果你想要关闭进度信息以及警告和诊断信息,你可以使用 scp 命令的 -q 选项。
+
+![](http://www.unixmen.com/wp-content/uploads/2016/05/scp-with-q.jpg)
+
+上一次我们仅仅使用 -r 参数,它显示了逐个文件的信息,但这一次当我们使用了 -q 参数,它就不显示进度信息。
+
+利用 scp 的 -p 选项来保留目标文件的更新时间,访问时间和权限模式。
+
+![](http://www.unixmen.com/wp-content/uploads/2016/05/scp-with-p.jpg)
+
+**通过 -P 选项来指定远程主机的连接端口**
+
+scp 使用 ssh 命令来在两个主机之间传输文件,因为 ssh 默认使用的是 22 端口号,所以 scp 也使用相同的 22 端口号。
+
+如果我们希望改变这个端口号,我们可以使用 -P(大写的 P,因为小写的 p 用来保持文件的访问时间等)选项来指定所需的端口号。
+
+举个例子,如果我们想要使用 2222 端口号,我们可以使用如下的命令
+
+```
+[root@localhost ~]# scp -P 2222 abcd1 root@10.10.16.137:/root/
+```
+
+**限制命令使用的带宽,指定的单位是 Kbit/s**
+
+如下所示,我们可以使用 -l 参数来指定 scp 命令所使用的带宽,在此我们将速度限制为 512 Kbit/s。
+
+![](http://www.unixmen.com/wp-content/uploads/2016/05/scp-with-l.jpg)
+
+**开启压缩**
+
+如下所示,我们可以通过开启 scp 命令的压缩模式来节省传输过程中的带宽和时间。
+
+![](https://www.unixmen.com/wp-content/uploads/2016/05/scp-with-C.jpg)
+
+**选择加密数据的加密方式**
+
+scp 默认使用 AES-128 的加密方式,如果我们想要改变这个加密方式,可以通过 -c(小写的 c) 参数来指定其他的加密方式。
+
+![](http://www.unixmen.com/wp-content/uploads/2016/05/scp-with-cipher.jpg)
+
+现在你可以利用 scp(Secure copy)命令在你所属网络中的两个节点之间安全地拷贝文件了。
+
+--------------------------------------------------------------------------------
+
+via: https://www.unixmen.com/scp-command-linuxunix/
+
+作者:[Naga Ramesh][a]
+译者:[lujianbo](https://github.com/lujianbo)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.unixmen.com/author/naga/
diff --git a/published/201606/20160605 How to Add Cron Jobs in Linux and Unix.md b/published/201606/20160605 How to Add Cron Jobs in Linux and Unix.md
new file mode 100644
index 0000000000..139084225a
--- /dev/null
+++ b/published/201606/20160605 How to Add Cron Jobs in Linux and Unix.md
@@ -0,0 +1,212 @@
+如何在 Linux 及 Unix 系统中添加定时任务
+======================================
+
+![](https://www.unixmen.com/wp-content/uploads/2016/05/HOW-TO-ADD-CRON-JOBS-IN-LINUX-AND-UNIX-696x334.png)
+
+### 导言
+
+![](http://www.unixmen.com/wp-content/uploads/2016/05/cronjob.gif)
+
+定时任务 (cron job) 被用于安排那些需要被周期性执行的命令。利用它,你可以配置某些命令或者脚本,让它们在某个设定的时间周期性地运行。`cron` 是 Linux 或者类 Unix 系统中最为实用的工具之一。cron 服务(守护进程)在系统后台运行,并且会持续地检查 `/etc/crontab` 文件和 `/etc/cron.*/` 目录。它同样也会检查 `/var/spool/cron/` 目录。
+
+### crontab 命令
+
+`crontab` 是用来安装、卸载或者列出定时任务列表的命令。cron 配置文件则用于驱动 `Vixie Cron` 的 [cron(8)][1] 守护进程。每个用户都可以拥有自己的 crontab 文件,虽然这些文件都位于 `/var/spool/cron/crontabs` 目录中,但并不意味着你可以直接编辑它们。你需要通过 `crontab` 命令来编辑或者配置你自己的定时任务。
+
+### 定时配置文件的类型
+
+配置文件分为以下不同的类型:
+
+- **UNIX 或 Linux 系统的 crontab** : 此类型通常由那些需要 root 或类似权限的系统服务和重要任务使用。第六个字段(见下方的字段介绍)为用户名,用来指定此命令以哪个用户身份来执行。如此一来,系统的 `crontab` 就能够以任意用户的身份来执行操作。
+
+- **用户的 crontab**: 用户可以使用 `crontab` 命令来安装属于他们自己的定时任务。 第六个字段为需要运行的命令, 所有的命令都会以创建该 crontab 任务的用户的身份运行。
+
+**注意**: 本文介绍的这种 `Cron` 实现由 Paul Vixie 所编写,并且被包含在许多 [Linux][2] 发行版本和类 Unix 系统(如广受欢迎的第四版 BSD)中。它的语法被各种 crond 的实现所[兼容][3]。
+
+那么我该如何安装、创建或者编辑我自己的定时任务呢?
+
+要编辑你的 crontab 文件,需要在 Linux 或 Unix 的 shell 提示符后键入以下命令:
+
+```
+$ crontab -e
+```
+
+`crontab` 语法(字段介绍)
+
+语法为:
+
+```
+1 2 3 4 5 /path/to/command arg1 arg2
+```
+
+或者
+
+```
+1 2 3 4 5 /root/ntp_sync.sh
+```
+
+其中:
+
+- 第1个字段:分钟 (0-59)
+- 第2个字段:小时 (0-23)
+- 第3个字段:日期 (1-31)
+- 第4个字段:月份 (1-12 [12 代表 December])
+- 第5个字段:一周当中的某天 (0-7 [7 或 0 代表星期天])
+- /path/to/command - 计划执行的脚本或命令的名称
+
+便于记忆的格式:
+
+```
+* * * * * 要执行的命令
+----------------
+| | | | |
+| | | | ---- 周当中的某天 (0 - 7) (周日为 0 或 7)
+| | | ------ 月份 (1 - 12)
+| | -------- 一月当中的某天 (1 - 31)
+| ---------- 小时 (0 - 23)
+------------ 分钟 (0 - 59)
+```
+
+简单的 `crontab` 示例:
+
+```
+## 每隔 5 分钟运行一次 backupscript 脚本 ##
+*/5 * * * * /root/backupscript.sh
+
+## 每天的凌晨 1 点运行 backupscript 脚本 ##
+
+0 1 * * * /root/backupscript.sh
+
+## 每月 1 日的凌晨 3:15 运行 backupscript 脚本 ##
+
+15 3 1 * * /root/backupscript.sh
+```
+
+### 如何使用操作符
+
+操作符允许你为一个字段指定多个值,这里有四个操作符可供使用:
+
+- **星号 (*)** : 此操作符为字段指定所有可用的值。举个例子,在小时字段中,一个星号等同于每个小时;在月份字段中,一个星号则等同于每月。
+
+- **逗号 (,)** : 这个操作符指定了一个包含多个值的列表,例如:`1,5,10,15,20,25`.
+
+- **横杠 (-)** : 此操作符指定了一个值的范围,例如:`5-15` ,等同于使用逗号操作符键入的 `5,6,7,8,9,...,13,14,15`。
+
+- **分隔符 (/)** : 此操作符指定了一个步进值,例如: `0-23/2` 可以用于小时字段来指定某个命令每隔 2 小时执行一次。步进值也可以跟在星号操作符后边,如果你希望命令每 2 小时执行一次,则可以使用 `*/2`。
+
+### 如何禁用邮件输出
+
+默认情况下,某个命令或者脚本的输出内容(如果有的话)会发送到你的本地邮箱账户中。若想停止收到 `crontab` 发送的邮件,需要添加 `>/dev/null 2>&1` 这段内容到执行的命令的后面,例如:
+
+```
+0 3 * * * /root/backup.sh >/dev/null 2>&1
+```
+
+如果想将输出内容发送到特定的邮件账户中,比如说 vivek@nixcraft.in 这个邮箱, 则你需要像下面这样定义一个 MAILTO 变量:
+
+```
+MAILTO="vivek@nixcraft.in"
+0 3 * * * /root/backup.sh >/dev/null 2>&1
+```
+
+访问 “[禁用 Crontab 命令的邮件提示](http://www.cyberciti.biz/faq/disable-the-mail-alert-by-crontab-command/)” 查看更多信息。
+
+
+### 任务:列出你所有的定时任务
+
+键入以下命令:
+
+```
+# crontab -l
+# crontab -u username -l
+```
+
+要删除所有的定时任务,可以使用如下命令:
+
+```
+# 删除当前定时任务 #
+crontab -r
+```
+
+```
+## 删除某用户名下的定时任务,此命令需以 root 用户身份执行 ##
+crontab -r -u username
+```
+
+### 使用特殊字符串来节省时间
+
+你可以使用以下 8 个特殊字符串中的其中一个替代头五个字段,这样不但可以节省你的时间,还可以提高可读性。
+
+特殊字符 |含义
+|:-- |:--
+@reboot | 在每次启动时运行一次
+@yearly | 每年运行一次,等同于 “0 0 1 1 *”.
+@annually | (同 @yearly)
+@monthly | 每月运行一次, 等同于 “0 0 1 * *”.
+@weekly | 每周运行一次, 等同于 “0 0 * * 0”.
+@daily | 每天运行一次, 等同于 “0 0 * * *”.
+@midnight | (同 @daily)
+@hourly | 每小时运行一次, 等同于 “0 * * * *”.
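(LCTT 译注:下面补充一个使用这些特殊字符串的示例 crontab 片段。其中 backupscript.sh 是上文用过的例子,而 rebuild-cache.sh 及两个脚本的具体路径都是假设值,仅作演示:)

```
# 每次开机后运行一次(脚本路径为假设值)
@reboot /root/rebuild-cache.sh

# 每天零点做一次备份并丢弃输出,等同于 “0 0 * * *”
@daily /root/backupscript.sh >/dev/null 2>&1
```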
+
+示例:
+
+#### 每小时运行一次 ntpdate 命令 ####
+
+```
+@hourly /path/to/ntpdate
+```
+
+### 关于 `/etc/crontab` 文件和 `/etc/cron.d/*` 目录的更多内容
+
+**/etc/crontab** 是系统的 crontab 文件。通常只被 root 用户或守护进程用于配置系统级别的任务。每个独立的用户必须像上面介绍的那样使用 `crontab` 命令来安装和编辑自己的任务。`/var/spool/cron/` 或者 `/var/cron/tabs/` 目录存放了个人用户的 crontab 文件,它必定会备份在用户的家目录当中。
+
+### 理解默认的 `/etc/crontab` 文件
+
+典型的 `/etc/crontab` 文件内容是这样的:
+
+```
+SHELL=/bin/bash
+PATH=/sbin:/bin:/usr/sbin:/usr/bin
+MAILTO=root
+HOME=/
+# run-parts
+01 * * * * root run-parts /etc/cron.hourly
+02 4 * * * root run-parts /etc/cron.daily
+22 4 * * 0 root run-parts /etc/cron.weekly
+42 4 1 * * root run-parts /etc/cron.monthly
+```
+
+首先,环境变量必须被定义。如果 SHELL 行被忽略,cron 会使用默认的 sh shell。如果 PATH 变量被忽略,就没有默认的搜索路径,所有的文件都需要使用绝对路径来定位。如果 HOME 变量被忽略,cron 会使用调用者(用户)的家目录替代。
+
+另外,cron 会读取 `/etc/cron.d/` 目录中的文件。通常情况下,像 sa-update 或者 sysstat 这样的系统守护进程会将他们的定时任务存放在此处。作为 root 用户或者超级用户,你可以使用以下目录来配置你的定时任务。你可以直接将脚本放到这里。`run-parts` 命令会通过 `/etc/crontab` 文件来运行位于某个目录中的脚本或者程序。
+
+目录 |描述
+|:-- |:--
+/etc/cron.d/ | 将所有的脚本文件放在此处,并从 /etc/crontab 文件中调用它们。
+/etc/cron.daily/ | 运行需要 每天 运行一次的脚本
+/etc/cron.hourly/ | 运行需要 每小时 运行一次的脚本
+/etc/cron.monthly/ | 运行需要 每月 运行一次的脚本
+/etc/cron.weekly/ | 运行需要 每周 运行一次的脚本
+
+### 备份定时任务
+
+```
+# crontab -l > /path/to/file
+
+# crontab -u user -l > /path/to/file
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.unixmen.com/add-cron-jobs-linux-unix/
+
+作者:[Duy NguyenViet][a]
+译者:[mr-ping](https://github.com/mr-ping)
+校对:[FSSlc](https://github.com/FSSlc)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.unixmen.com/author/duynv/
+[1]: http://www.manpager.com/linux/man8/cron.8.html
+[2]: http://www.linuxsecrets.com/
+[3]: http://www.linuxsecrets.com/linux-hardware/
\ No newline at end of file
diff --git a/published/201606/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md b/published/201606/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md
new file mode 100644
index 0000000000..ba14bf8742
--- /dev/null
+++ b/published/201606/20160605 Will Google Replace Passwords With A New Trust-Based Authentication Method.md
@@ -0,0 +1,39 @@
+谷歌会用基于信任的认证措施取代密码吗?
+===========================================================================
+
+![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/05/Trust-API-Google-replaces-passwords.jpg)
+
+一个谷歌新开发的认证措施会评估你的登录有多可靠,并且基于一个“信任分(Trust Score)”认证你的登录。
+
+这个谷歌项目的名字是 Abacus,它的目标是让你摆脱讨厌的密码记忆和输入。
+
+在最近的 Google I/O 开发者大会上,谷歌引入了源自这个雄心勃勃项目的新特性,称作“**Trust API**”。
+
+“如果一切进展顺利”,这个 API(Application Programming Interface,应用程序编程接口)会在年底前供安卓开发者使用。API 会利用安卓设备上不同的传感器来识别用户,并创建一个他们称之为“信任分(Trust Score)”的结果。
+
+基于这个信任分,一个需要登录认证的应用可以验证你确实可以授权登录,从而不会提示需要密码。
+
+![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/05/Abacus-to-Trust-API.jpg)
+
+*Abacus 到 Trust API*
+
+### 需要思考的地方
+
+尽管这个想法、这种智能的功能听起来很棒——它减轻了密码认证的负担。
+
+但从另一面来说,这是不是谷歌又一次逼迫我们(有意或无意)为了方便使用而放弃我们的隐私?
+
+是否值得?这取决于你的决定... 
+ + +-------------------------------------------------------------------------------- + +via: http://www.iwillfolo.com/will-google-replace-passwords-with-a-new-trust-based-authentication-method/ + +作者:[iWillFolo][a] +译者:[alim0x](https://github.com/alim0x) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.iwillfolo.com/ diff --git a/published/201606/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md b/published/201606/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md new file mode 100644 index 0000000000..99c1da66ab --- /dev/null +++ b/published/201606/20160611 vlock – A Smart Way to Lock User Virtual Console or Terminal in Linux.md @@ -0,0 +1,102 @@ +vlock – 一个锁定 Linux 用户虚拟控制台或终端的好方法 +======================================================================= + +虚拟控制台是 Linux 上非常重要的功能,它们给系统用户提供了 shell 提示符,以保证用户在登录和远程登录一个未安装图形界面的系统时仍能使用。 + +一个用户可以同时操作多个虚拟控制台会话,只需在虚拟控制台间来回切换即可。 + +![](http://www.tecmint.com/wp-content/uploads/2016/05/vlock-Lock-User-Terminal-in-Linux.png) + +*用 vlock 锁定 Linux 用户控制台或终端* + +这篇使用指导旨在教会大家如何使用 vlock 来锁定用户虚拟控制台和终端。 + +### vlock 是什么? + +vlock 是一个用于锁定一个或多个用户虚拟控制台用户会话的工具。在多用户系统中 vlock 扮演着重要的角色,它让用户可以在锁住自己会话的同时不影响其他用户通过其他虚拟控制台操作同一个系统。必要时,还可以锁定所有的控制台,同时禁止在虚拟控制台间切换。 + +vlock 的主要功能面向控制台会话方面,同时也支持非控制台会话的锁定,但该功能的测试还不完全。 + +### 在 Linux 上安装 vlock + +根据你的 Linux 系统选择 vlock 安装指令: + +``` +# yum install vlock [On RHEL / CentOS / Fedora] +$ sudo apt-get install vlock [On Ubuntu / Debian / Mint] +``` + +### 在 Linux 上使用 vlock + +vlock 操作选项的常规语法: + +``` +# vlock option +# vlock option plugin +# vlock option -t plugin +``` + +#### vlock 常用选项及用法: + +1、 锁定用户的当前虚拟控制台或终端会话,如下: + +``` +# vlock --current +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Lock-User-Terminal-Session-in-Linux.png) + +*锁定 Linux 用户终端会话* + +选项 -c 或 --current,用于锁定当前的会话,该参数为运行 vlock 时的默认行为。 + +2、 锁定所有你的虚拟控制台会话,并禁用虚拟控制台间切换,命令如下: + +``` +# vlock --all +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Lock-All-Linux-Terminal-Sessions.png) + +*锁定所有 Linux 终端会话* + +选项 -a 或 --all,用于锁定所有用户的控制台会话,并禁用虚拟控制台间切换。 + +其他的选项只有在编译 vlock 时编入了相关插件支持和引用后,才能发挥作用: + +3、 选项 -n 或 --new,调用时后,会在锁定用户的控制台会话前切换到一个新的虚拟控制台。 + +``` +# vlock --new +``` + +4、 选项 -s 或 --disable-sysrq,在禁用虚拟控制台的同时禁用 SysRq 功能,只有在与 -a 或 --all 同时使用时才起作用。 + +``` +# vlock -sa +``` + +5、 选项 -t 或 --timeout ,用以设定屏幕保护插件的 timeout 值。 + +``` +# vlock --timeout 5 +``` + +你可以使用 `-h` 或 `--help` 和 `-v` 或 `--version` 分别查看帮助消息和版本信息。 + +我们的介绍就到这了,提示一点,你可以将 vlock 的 `~/.vlockrc` 文件包含到系统启动中,并参考入门手册[添加环境变量][1],特别是 Debian 系的用户。 + +想要找到更多或是补充一些这里没有提及的信息,可以直接在写在下方评论区。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/vlock-lock-user-virtual-console-terminal-linux/ + +作者:[Aaron Kili][a] +译者:[martin2011qi](https://github.com/martin2011qi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ +[1]: http://www.tecmint.com/set-path-variable-linux-permanently/ diff --git a/published/201606/20160621 Building Serverless App with Docker.md b/published/201606/20160621 Building Serverless App with Docker.md new file mode 100644 index 0000000000..0b5409fc89 --- /dev/null +++ b/published/201606/20160621 Building Serverless App with Docker.md @@ -0,0 +1,98 @@ +用 Docker 创建 serverless 应用 +====================================== + 
+当今世界会时不时地出现一波波科技浪潮,将以前的技术拍死在海滩上。针对 serverless 应用的概念我们已经谈了很多,它是指将你的应用程序按功能来部署,这些功能在被用到时才会启动。你不用费心去管理服务器和程序规模,因为它们会在需要的时候在一个集群中启动并运行。 + +但是 serverless 并不意味着没有 Docker 什么事儿,事实上 Docker 就是 serverless 的。你可以使用 Docker 来容器化这些功能,然后在 Swarm 中按需求来运行它们。serverless 是一项构建分布式应用的技术,而 Docker 是它们完美的构建平台。 + +### 从 servers 到 serverless + +那如何才能写一个 serverless 应用呢?来看一下我们的例子,[5个服务组成的投票系统][1]: + +![](https://blog.docker.com/wp-content/uploads/Picture1.png) + +投票系统由下面5个服务组成: + +- 两个 web 前端 +- 一个后台处理投票的进程 +- 一个计票的消息队列 +- 一个数据库 + +后台处理投票的进程很容易转换成 serverless 构架,我们可以使用以下代码来实现: + +``` +import dockerrun +client = dockerrun.from_env() +client.run("bfirsh/serverless-record-vote-task", [voter_id, vote], detach=True) +``` + +这个投票处理进程和消息队列可以用运行在 Swarm 上的 Docker 容器来代替,并实现按需自动部署。 + +我们也可以用容器替换 web 前端,使用一个轻量级 HTTP 服务器来触发容器响应一个 HTTP 请求。Docker 容器代替长期运行的 HTTP 服务器来挑起响应请求的重担,这些容器可以自动扩容来支撑更大访问量。 + +新的架构就像这样: + +![](https://blog.docker.com/wp-content/uploads/Picture2.png) + +红色框内是持续运行的服务,绿色框内是按需启动的容器。这个架构里需要你来管理的长期运行服务更少,并且可以自动扩容(最大容量由你的 Swarm 决定)。 + +### 我们可以做点什么? + +你可以在你的应用中使用3种技术: + +1. 在 Docker 容器中按需运行代码。 +2. 使用 Swarm 来部署集群。 +3. 通过使用 Docker API 套接字在容器中运行容器。 + +结合这3种技术,你可以有很多方法搭建你的应用架构。用这种方法来部署后台环境真是非常有效,而在另一些场景,也可以这么玩,比如说: + +- 由于存在延时,使用容器实现面向用户的 HTTP 请求可能不是很合适,但你可以写一个负载均衡器,使用 Swarm 来对自己的 web 前端进行自动扩容。 +- 实现一个 MongoDB 容器,可以自检 Swarm 并且启动正确的分片和副本(LCTT 译注:分片技术为大规模并行检索提供支持,副本技术则是为数据提供冗余)。 + +### 下一步怎么做 + +我们提供了这些前卫的工具和概念来构建应用,并没有深入发掘它们的功能。我们的架构里还是存在长期运行的服务,将来我们需要使用 Swarm 来把所有服务都用按需扩容的方式实现。 + +希望本文能在你搭建架构时给你一些启发,但我们还是需要你的帮助。我们提供了所有的基本工具,但它们还不是很完善,我们需要更多更好的工具、库、应用案例、文档以及其他资料。 + +[我们在这里发布了工具、库和文档][3]。如果想了解更多,请贡献给我们一些你知道的资源,以便我们能够完善这篇文章。 + +玩得愉快。 + +### 更多关于 Docker 的资料 + +- New to Docker? Try our 10 min [online tutorial][4] +- Share images, automate builds, and more with [a free Docker Hub account][5] +- Read the Docker [1.12 Release Notes][6] +- Subscribe to [Docker Weekly][7] +- Sign up for upcoming [Docker Online Meetups][8] +- Attend upcoming [Docker Meetups][9] +- Watch [DockerCon EU 2015 videos][10] +- Start [contributing to Docker][11] + + +-------------------------------------------------------------------------------- + +via: https://blog.docker.com/2016/06/building-serverless-apps-with-docker/ + +作者:[Ben Firshman][a] +译者:[bazz2](https://github.com/bazz2) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://blog.docker.com/author/bfirshman/ + +[1]: https://github.com/docker/example-voting-app +[3]: https://github.com/bfirsh/serverless-docker +[4]: https://docs.docker.com/engine/understanding-docker/ +[5]: https://hub.docker.com/ +[6]: https://docs.docker.com/release-notes/ +[7]: https://www.docker.com/subscribe_newsletter/ +[8]: http://www.meetup.com/Docker-Online-Meetup/ +[9]: https://www.docker.com/community/meetup-groups +[10]: https://www.youtube.com/playlist?list=PLkA60AVN3hh87OoVra6MHf2L4UR9xwJkv +[11]: https://docs.docker.com/contributing/contributing/ + + + diff --git a/published/20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md b/published/20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md new file mode 100644 index 0000000000..b177995009 --- /dev/null +++ b/published/20160603 How To Install And Use VBoxManage On Ubuntu 16.04 And Use Its Command line Options.md @@ -0,0 +1,138 @@ +在 Linux 上安装使用 VirtualBox 的命令行管理界面 VBoxManage +================= + +VirtualBox 拥有一套命令行工具,你可以使用 VirtualBox 的命令行界面 (CLI) 对远程无界面的服务器上的虚拟机进行管理操作。在这篇教程中,你将会学到如何在没有 GUI 的情况下使用 
VBoxManage 创建、启动一个虚拟机。VBoxManage 是 VirtualBox 的命令行界面,你可以在你的主机操作系统的命令行中用它来实现对 VirtualBox 的所有操作。VBoxManage 拥有图形化用户界面所支持的全部功能,而且它支持的功能远不止这些。它提供虚拟引擎的所有功能,甚至包含 GUI 还不能实现的那些功能。如果你想尝试下不同的用户界面而不仅仅是 GUI,或者想更改虚拟机更多高级和实验性的配置,那么你就需要用到命令行。

+当你想要在 VirtualBox 上创建或运行虚拟机时,你会发现 VBoxManage 非常有用,你只需要使用远程主机的终端就够了。这对于需要远程管理虚拟机的服务器来说是一种常见的情形。

+### 准备工作

+在开始使用 VBoxManage 的命令行工具前,确保在运行着 Ubuntu 16.04 的服务器上,你拥有超级用户的权限或者你能够使用 sudo 命令,而且你已经在服务器上安装了 Oracle Virtual Box。然后你需要安装 VirtualBox 扩展包,这是运行 VRDE 远程桌面环境、访问无界面虚拟机所必需的。

+### 安装 VBoxManage

+通过 [Virtual Box 下载页][1] 这个链接,你能够获取你所需要的软件扩展包的最新版本,扩展包的版本和你安装的 VirtualBox 版本需要一致!

+![](http://linuxpitstop.com/wp-content/uploads/2016/06/12.png)

+也可以用下面这条命令来获取 VBoxManage 扩展。

+```
+$ wget http://download.virtualbox.org/virtualbox/5.0.20/Oracle_VM_VirtualBox_Extension_Pack-5.0.20-106931.vbox-extpack
+```

+![](http://linuxpitstop.com/wp-content/uploads/2016/06/21.png)

+运行下面这条命令,确认 VBoxManage 已经成功安装在你的机器上。

+```
+$ VBoxManage list extpacks
+```

+![](http://linuxpitstop.com/wp-content/uploads/2016/06/31.png)

+### 在 Ubuntu 16.04 上使用 VBoxManage

+接下来我们将要使用 VBoxManage 向你展现通过命令行终端工具来新建和管理虚拟机是多么的简单。

+运行下面的命令,新建一个将用来安装 Ubuntu 系统的虚拟机。

+```
+# VBoxManage createvm --name Ubuntu16.04 --register
+```

+在运行了这条命令之后,VBoxManage 将会新建一个叫做 “Ubuntu16.04” 的虚拟机,它的配置文件位于家目录下的 “VirtualBox VMs/Ubuntu16.04/Ubuntu16.04.vbox”。在上面这条命令中,“createvm” 用来新建虚拟机,“--name” 定义了虚拟机的名字,而 “--register” 选项用来把虚拟机注册到 VirtualBox 中。

+现在,使用下面这条命令为虚拟机创建一个硬盘镜像。

+```
+$ VBoxManage createhd --filename Ubuntu16.04 --size 5124
+```

+这里,“createhd” 用来创建硬盘镜像,“--filename” 用来指定虚拟机的名称,也就是所创建的硬盘镜像的名称。“--size” 表示硬盘镜像的空间容量,单位总是 MB。我们指定了约 5GB,也就是 5124 MB。

+接下来我们需要设置操作系统类型,如果要安装 Linux 系的系统,那么用下面这条命令指定系统类型为 Linux 或者 Ubuntu 或者 Fedora 之类的。

+```
+$ VBoxManage modifyvm Ubuntu16.04 --ostype Ubuntu
+```

+用下面这条命令来设置虚拟系统的内存大小,也就是从主机中分配到虚拟机系统的内存。

+```
+$ VBoxManage modifyvm Ubuntu16.04 --memory 512
+```

+![](http://linuxpitstop.com/wp-content/uploads/2016/06/52.png)

+现在用下面这个命令为虚拟机创建一个存储控制器。

+```
+$ VBoxManage storagectl Ubuntu16.04 --name IDE --add ide --controller PIIX4 --bootable on
+```

+这里的 “storagectl” 是给虚拟机创建存储控制器的,“--name” 指定了虚拟机里需要创建、更改或者移除的存储控制器的名称。“--add” 选项指明存储控制器所需要连接到的系统总线类型,可选的选项有 ide / sata / scsi / floppy。“--controller” 用来选择芯片组的类型,需要根据所使用的存储控制器来选择,可选的选项有 LsiLogic / LSILogicSAS / BusLogic / IntelAhci / PIIX3 / PIIX4 / ICH6 / I82078。最后的 “--bootable” 表示控制器是否可以引导系统。

+上面的命令创建了叫做 IDE 的存储控制器。之后虚拟介质就能通过 “storageattach” 命令连接到该控制器。

+然后运行下面这个命令来创建一个叫做 SATA 的存储控制器,它将会连接到之后的硬盘镜像上。

+```
+$ VBoxManage storagectl Ubuntu16.04 --name SATA --add sata --controller IntelAhci --bootable on
+```

+将之前创建的硬盘镜像和 CD/DVD 驱动器加载到 IDE 控制器。将 Ubuntu 的安装光盘插到 CD/DVD 驱动器上。然后用 “storageattach” 命令连接存储控制器和虚拟机。

+```
+$ VBoxManage storageattach Ubuntu16.04 --storagectl SATA --port 0 --device 0 --type hdd --medium "your_iso_filepath"
+```

+这将把 SATA 存储控制器及介质(比如之前创建的虚拟磁盘镜像)连接到 Ubuntu16.04 虚拟机中。

+运行下面的命令添加像网络连接,音频之类的功能。

+```
+$ VBoxManage modifyvm Ubuntu16.04 --nic1 nat --nictype1 82540EM --cableconnected1 on
+$ VBoxManage modifyvm Ubuntu16.04 --vram 128 --accelerate3d on --audio alsa --audiocontroller ac97
+```

+通过指定你想要启动虚拟机的名称,用下面这个命令启动虚拟机。

+```
+$ VBoxManage startvm Ubuntu16.04
+```

+然后会打开一个新窗口,虚拟机将在新窗口中从所关联的介质引导。

+![](http://linuxpitstop.com/wp-content/uploads/2016/06/62.png)

+你可以用接下来的命令来关掉虚拟机。

+```
+$ VBoxManage controlvm Ubuntu16.04 poweroff
+```

+“controlvm” 命令用来控制虚拟机的状态,可选的选项有 pause / resume / reset / poweroff / savestate / acpipowerbutton / acpisleepbutton。controlvm 有很多选项,用下面这个命令来查看它支持的所有选项。

+```
+$ VBoxManage controlvm
+```

+![](http://linuxpitstop.com/wp-content/uploads/2016/06/81.png)

+### 完结

+从这篇文章中,我们了解了 Oracle Virtual Box 中一个十分实用的工具 VBoxManage,文章包含了 VBoxManage 的安装和在 Ubuntu 16.04 系统上的使用,包括通过 VBoxManage 中实用的命令来创建和管理虚拟机。希望这篇文章对你有帮助,另外别忘了分享你的评论或者建议。

+--------------------------------------------------------------------------------

+via: http://linuxpitstop.com/install-and-use-command-line-tool-vboxmanage-on-ubuntu-16-04/

+作者:[Kashif][a]
+译者:[GitFuture](https://github.com/GitFuture)
+校对:[wxy](https://github.com/wxy)

+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: http://linuxpitstop.com/author/kashif/
+[1]: https://www.virtualbox.org/wiki/Downloads
diff --git a/published/20160606 Basic Git Commands You Must Know.md b/published/20160606 Basic Git Commands You Must Know.md
new file mode 100644
index 0000000000..57aa6662e2
--- /dev/null
+++ b/published/20160606 Basic Git Commands You Must Know.md
@@ -0,0 +1,216 @@
+你应该知道的基础 Git 命令
+=====================================

+![](http://itsfoss.com/wp-content/uploads/2016/06/Download-Git-Sheet.jpg)

+*简介:这个快速指南将向你展示所有的基础 Git 命令以及用法。你可以下载这些命令作为快速参考。*

+我们在早先一篇文章中已经快速介绍过 [Vi 速查表][1]了。在这篇文章里,我们将会介绍开始使用 Git 时所需要的基础命令。

+### Git

+[Git][2] 是一个分布式版本控制系统,它被用在大量开源项目中。它是在 2005 年由 Linux 创始人 [Linus Torvalds][3] 写就的。这个程序允许非线性的项目开发,并且能够将数据存储在本地服务器上,从而高效地处理大量数据。在这个教程里,我们将要和 Git 愉快玩耍并学习如何开始使用它。

+我在这个教程里使用 Ubuntu,但你可以使用你选择的任何发行版。除了安装以外,剩下的所有命令在任何 Linux 发行版上都是一样的。

+### 安装 Git

+要安装 git 执行以下命令:

+```
+sudo apt-get install git-core
+```

+在它完成下载之后,你就安装好了 Git 并且可以使用了。

+### 设置 Git

+在 Git 安装之后,不论是从 apt-get 还是从源码安装,你需要将你的用户名和邮箱地址添加到 gitconfig 文件中。你可以访问 ~/.gitconfig 这个文件。

+全新安装 Git 之后打开它会是完全空白的:

+```
+sudo vim ~/.gitconfig
+```

+你也可以使用以下命令添加所需的信息。将‘user’替换成你的用户名,‘user@example.com’替换成你的邮箱。

+```
+git config --global user.name "User"
+git config --global user.email user@example.com
+```

+然后你就完成设置了。现在让我们开始 Git。

+### 仓库

+创建一个新目录,打开它并运行以下命令:

+```
+git init
+```

+![](http://itsfoss.com/wp-content/uploads/2016/05/Playing-around-git-1-1024x173.png)

+这个命令会创建一个新的 Git 仓库(repository)。你的本地仓库由三个 Git 维护的“树”组成。

+第一个是你的工作目录(Working Directory),保存实际的文件。第二个是索引,实际上扮演的是暂存区(staging area),最后一个是 HEAD,它指向你的最后一次提交(commit)。使用 `git clone /path/to/repository` 签出你的仓库(从你刚创建的仓库或服务器上已存在的仓库)。

+### 添加文件并提交

+你可以用以下命令添加改动:

+```
+git add <文件名>
+```

+这会添加一个新文件到暂存区以提交。如果你想添加每个新文件,输入:

+```
+git add --all
+```

+添加文件之后可以使用以下命令检查状态:

+```
+git status
+```

+![](http://itsfoss.com/wp-content/uploads/2016/05/Playing-around-git-2-1024x219.png)

+正如你看到的,那里已经有一些变化但还没有提交。现在你需要提交这些变化,使用:

+```
+git commit -m "提交信息"
+```

+![](http://itsfoss.com/wp-content/uploads/2016/05/playing-around-git-3-1024x124.png)

+你也可以这么做(首选):

+```
+git commit -a
+```

+然后写下你的提交信息。现在你的文件提交到了 HEAD,但还不在你的远程仓库中。

+### 推送你的改动

+你的改动在你本地工作副本的 HEAD 中。如果你还没有从一个已存在的仓库克隆,或想将你的仓库连接到远程服务器,你需要先添加它:

+```
+git remote add origin <服务器地址>
+```

+现在你可以将改动推送到指定的远程服务器。要将改动发送到远程服务器,运行:

+```
+git push -u origin master
+```

+### 分支

+分支用于开发特性,分支之间是互相独立的。主分支 master 是你创建一个仓库时的“默认”分支。使用其它分支进行开发,在完成时将它合并回主分支。

+创建一个名为“mybranch”的分支并切换到它之上:

+```
+git checkout -b mybranch
+```

+![](http://itsfoss.com/wp-content/uploads/2016/05/Playing-around-Git-4--1024x145.jpg)

+你可以使用这个命令切换回主分支:

+```
+git checkout master
+```

+如果你想删除这个分支,执行:

+```
+git branch -d mybranch
+```

+![](http://itsfoss.com/wp-content/uploads/2016/05/Playing-around-Git-5-1-1024x181.jpg)

+除非你将分支推送到远程服务器上,否则该分支对其他人是不可用的,所以只需把它推送上去:

+```
+git push origin <分支名>
+```

+### 更新和合并
+要将你本地仓库更新到最新的提交上,运行: + +``` +git pull +``` + +在你的工作目录获取并合并远程变动。要合并其它分支到你的活动分支(如 master),使用: + +``` +git merge <分支> +``` + +在这两种情况下,git 会尝试自动合并(auto-merge)改动。不幸的是,这不总是可能的,可能会导致冲突。你需要通过编辑 git 所显示的文件,手动合并那些冲突。改动之后,你需要用以下命令将它们标记为已合并: + +``` +git add <文件名> +``` + +在合并改动之前,你也可以使用以下命令预览: + +``` +git diff <源分支> <目标分支> +``` + +### Git 日志 + +你可以这么查看仓库历史: + +``` +git log +``` + +要以每个提交一行的样式查看日志,你可以用: + +``` +git log --pretty=oneline +``` + +或者也许你想要看一个所有分支的 ASCII 艺术树,带有标签和分支名: + +``` +git log --graph --oneline --decorate --all +``` + +如果你只想看哪些文件改动过: + +``` +git log --name-status +``` + +在这整个过程中如果你需要任何帮助,你可以用 git --help。 + +Git 棒不棒?!祝贺你你已经会 Git 基础了。如果你愿意的话,你可以从下面这个链接下载这些基础 Git 命令作为快速参考: + +- [下载 Git 速查表][4] + + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/basic-git-commands-cheat-sheet/ + +作者:[Rakhi Sharma][a] +译者:[alim0x](https://github.com/alim0x) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://itsfoss.com/author/rakhi/ +[1]: http://itsfoss.com/download-vi-cheat-sheet/ +[2]: https://git-scm.com/ +[3]: https://en.wikipedia.org/wiki/Linus_Torvalds +[4]: https://drive.google.com/open?id=0By49_3Av9sT1bXpINjhvU29VNUU diff --git a/published/20160609 How to record your terminal session on Linux.md b/published/20160609 How to record your terminal session on Linux.md new file mode 100644 index 0000000000..f30e4db2dc --- /dev/null +++ b/published/20160609 How to record your terminal session on Linux.md @@ -0,0 +1,98 @@ +如何在 Linux 上录制你的终端操作 +================================================= + +录制一个终端操作可能是一个帮助他人学习 Linux 、展示一系列正确命令行操作的和分享知识的通俗易懂方法。不管是出于什么目的,从终端复制粘贴文本需要重复很多次,而录制视频的过程也是相当麻烦,有时候还不能录制。在这次的文章中,我们将简单的了解一下以 gif 格式记录和分享终端会话的方法。 + +### 预先要求 + +如果你只是希望能记录你的终端会话,并且能在终端进行回放或者和他人分享,那么你只需要一个叫做:ttyrec 的软件。Ubuntu 用户可以通过运行这行代码进行安装: + +``` +sudo apt-get install ttyrec +``` + +如果你想将生成的视频转换成一个 gif 文件,这样能够和那些不使用终端的人分享,就可以发布到网站上去,或者你只是想做一个 gif 方便使用而不想写命令。那么你需要安装额外的两个软件包。第一个就是 imagemagick , 你可以通过以下的命令安装: + +``` +sudo apt-get install imagemagick +``` + +第二个软件包就是:tty2gif.py,访问其[项目网站][1]下载。这个软件包需要安装如下依赖: + +``` +sudo apt-get install python-opster +``` + +### 录制 + +开始录制终端操作,你需要的仅仅是键入 `ttyprec` ,然后回车。这个命令将会在后台运行一个实时的记录工具。我们可以通过键入`exit`或者`ctrl+d`来停止。ttyrec 默认会在主目录下创建一个`ttyrecord`的文件。 + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_1.jpg) + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_2.jpg) + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_3.jpg) + +### 回放 + +回放这个文件非常简单。你只需要打开终端并且使用 `ttyplay` 命令打开 `ttyrecord` 文件即可。(在这个例子里,我们使用 ttyrecord 作为文件名,当然,你也可以改成你用的文件名) + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_4.jpg) + +然后就可以开始播放这个文件。这个视频记录了所有的操作,包括你的删除,修改。这看起来像一个拥有自我意识的终端,但是这个命令执行的过程并不是只是为了给系统看,而是为了更好的展现给人。 + +注意一点,播放这个记录是完全可控的,你可以通过点击 `+` 或者 `-` 进行加速减速,或者 `0`和 `1` 暂停和恢复播放。 + +### 导出成 GIF + +为了方便,我们通常会将视频记录转换为 gif 格式,并且,这个非常容易做到。以下是方法: + +将之前下载的 tty2gif.py 这个文件拷贝到 ttyprecord 文件(或者你命名的那个视频文件)相同的目录,然后在这个目录下打开终端,输入命令: + +``` +python tty2gif.py typing ttyrecord +``` + +如果出现了错误,检查一下你是否有安装 python-opster 包。如果还是有错误,使用如下命令进行排除。 + +``` +sudo apt-get install xdotool +export WINDOWID=$(xdotool getwindowfocus) +``` + +然后重复这个命令 `python tty2gif.py` 并且你将会看到在 ttyrecord 目录下多了一些 gif 文件。 + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_5.jpg) + +接下来的一步就是整合所有的 gif 文件,将他打包成一个 gif 
文件。我们通过使用 imagemagick 工具。输入下列命令: + +``` +convert -delay 25 -loop 0 *.gif example.gif +``` + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_6.jpg) + +你可以使用任意的文件名,我用的是 example.gif。 并且,你可以改变这个延时和循环时间。 Enjoy。 + +![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/example.gif) + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/tutorial/how-to-record-your-terminal-session-on-linux/ + +作者:[Bill Toulas][a] +译者:[MikeCoder](https://github.com/MikeCoder) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twitter.com/howtoforgecom +[1]: https://bitbucket.org/antocuni/tty2gif/raw/61d5596c916512ce5f60fcc34f02c686981e6ac6/tty2gif.py + + + + + + + + diff --git a/published/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md b/published/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md new file mode 100644 index 0000000000..a7932ee269 --- /dev/null +++ b/published/20160609 INSTALL MATE 1.14 IN UBUNTU MATE 16.04 VIA PPA.md @@ -0,0 +1,62 @@ +在 Ubuntu Mate 16.04 上通过 PPA 升级 Mate 1.14 +================================================================= + +Mate 桌面环境 1.14 现在可以在 Ubuntu Mate 16.04 ("Xenial Xerus") 上使用了。根据这个[发布版本][1]的描述,为了全面测试 Mate 1.14,所以 Mate 桌面环境 1.14 已经在 PPA 上发布 2 个月了。因此,你不太可能遇到安装的问题。 + +![](https://2.bp.blogspot.com/-v38tLvDAxHg/V1k7beVd5SI/AAAAAAAAX7A/1X72bmQ3ia42ww6kJ_61R-CZ6yrYEBSpgCLcB/s400/mate114-ubuntu1604.png) + +**现在 PPA 提供 Mate 1.14.1 包含如下改变(Ubuntu Mate 16.04 默认安装的是 Mate 1.12.x):** + +- 客户端的装饰应用现在可以正确的在所有主题中渲染; +- 触摸板配置现在支持边缘操作和双指滚动; +- 在 Caja 中的 Python 扩展可以被单独管理; +- 所有三个窗口焦点模式都是可选的; +- Mate Panel 中的所有菜单栏图标和菜单图标可以改变大小; +- 音量和亮度 OSD 目前可以启用和禁用; +- 更多的改进和 bug 修改; + +Mate 1.14 同时改进了整个桌面环境中对 GTK+ 3 的支持,包括各种 GTK+3 小应用。但是,Ubuntu MATE 的博客中提到:PPA 的发行包使用 GTK+ 2 编译是“为了确保对 Ubuntu MATE 16.04 还有各种各样的第三方 MATE 应用、插件、扩展的支持"。 + +MATE 1.14 的完整修改列表[点击此处][2]阅读。 + +### 在 Ubuntu MATE 16.04 中升级 MATE 1.14.x + +在 Ubuntu MATE 16.04 中打开终端,并且输入如下命令,来从官方的 Xenial MATE PPA 中升级最新的 MATE 桌面环境: + +``` +sudo apt-add-repository ppa:ubuntu-mate-dev/xenial-mate +sudo apt update +sudo apt dist-upgrade +``` + +**注意**: mate-netspeed 应用将会在升级中删除。因为该应用现在已经是 mate-applets 应用报的一部分,所以它依旧是可以使用的。 + +一旦升级完成,请重启你的系统,享受全新的 MATE! 
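+
+重启之前,如果你想先确认升级是否成功,可以查看一下当前的 MATE 版本。下面只是一个简单的示例,它假设你的系统中带有随 MATE 桌面一同安装的 mate-about 工具(mate-desktop 这个包名也只是用来演示版本查询的一个例子):
+
+```
+mate-about --version
+apt-cache policy mate-desktop
+```
+
+如果输出显示的版本是 1.14.x,说明升级已经生效;否则可以重新检查一下 PPA 是否添加成功。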
+ +### 如何回滚这次升级 + +如果你并不满意 MATE 1.14, 比如你遭遇了一些 bug 。或者你想回到 MATE 的官方源版本,你可以使用如下的命令清除 PPA,并且下载降级包。 + +``` +sudo apt install ppa-purge +sudo ppa-purge ppa:ubuntu-mate-dev/xenial-mate +``` + +在所有的 MATE 包降级之后,重启系统。 + +via [Ubuntu MATE blog][3] + +-------------------------------------------------------------------------------- + +via: http://www.webupd8.org/2016/06/install-mate-114-in-ubuntu-mate-1604.html + +作者:[Andrew][a] +译者:[MikeCoder](https://github.com/MikeCoder) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.webupd8.org/p/about.html +[1]: https://ubuntu-mate.org/blog/mate-desktop-114-for-xenial-xerus/ +[2]: http://mate-desktop.com/blog/2016-04-08-mate-1-14-released/ +[3]: https://ubuntu-mate.org/blog/mate-desktop-114-for-xenial-xerus/ diff --git a/published/20160610 Getting started with ReactOS.md b/published/20160610 Getting started with ReactOS.md new file mode 100644 index 0000000000..8d04902798 --- /dev/null +++ b/published/20160610 Getting started with ReactOS.md @@ -0,0 +1,125 @@ +ReactOS 新手指南 +==================================== + +ReactOS 是一个比较年轻的开源操作系统,它提供了一个和 Windows NT 类似的图形界面,并且它的目标也是提供一个与 NT 功能和应用程序兼容性差不多的系统。这个项目在没有使用任何 Unix 架构的情况下实现了一个类似 Wine 的用户模式。它的开发者们从头实现了 NT 的架构以及对于 FAT32 的兼容,因此它也不需要负任何法律责任。这也就是说,它不是又双叒叕一个 Linux 发行版,而是一个独特的类 Windows 系统,并且是开源世界的一部分。这份快速指南是给那些想要一个易于使用的 Windows 的开源替代品的人准备的。 + +### 安装系统 + +在开始安装这个系统之前,我需要说明一下,ReactOS 的最低硬件要求是 500MB 硬盘以及仅仅 96MB 内存。我会在一个 32 位的虚拟机里面演示安装过程。 + +现在,你需要使用箭头键来选择你想要语言,而后通过回车键来确认。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_1.png) + +之后,再次敲击回车键来继续安装。你也可以选择按“R”键来修复现有的系统。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_2.png) + +在第三屏中,你将看到一个警告说这个系统还是早期开发版本。再次敲击回车键,你将看到一个需要你最后确认的配置概览。如果你认为没问题,就按回车。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_3.png) + +然后,我们就到了分区这一步,在这里,你可以使用“D”键删除高亮分区,分别使用“P”键、“E”键以及“L”键来添加一个主分区、拓展分区或逻辑分区。如果你想要自己添加一个分区,你需要输入这个分区的大小(以 MB 为单位),然后通过回车来确认。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_4.png) + +但是,如果你有未使用的硬盘空间,在分区过程直接敲击回车键可以自动在你选中的分区上安装 ReactOS。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_5.png) + +下一步是选择分区的格式,不过现在我们只能选择 FAT32。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_6.png) + +再下一步是选择安装文件夹。我就使用默认的“/ReactOS”了,应该没有问题。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_7.png) + +然后就是等待... 
+ +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_8.png) + +最后,我们要选择启动程序的安装位置。如果你是在实机上操作的话,第一个选项应该是最安全的。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_9.png) + +总地来说,我认为 ReactOS 的安装向导很直接。尽管安装程序的界面可能看起来一点也不现代、不友好,但是大多数情况下作为用户的我们只需要狂敲回车就能安个差不多。这就是说,ReactOS 的开发版安装起来也是相对简单方便的。 + +### 设置 ReactOS + +在我们重启进入新系统之后,“设置向导”会帮助你设置系统。目前,这个向导仅支持设置语言和键盘格式。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_10.png) + +我在这里选择了第二个键盘格式。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_11.png) + +我还可以设置一个改变键盘布局的快捷键。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_12.png) + +之后我添加了用户名… + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_13.png) + +…以及管理员密码… + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_14.png) + +在设置好时间之后,我们就算完成了系统设置。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_15.png) + +### ReactOS 之内 + +当我们历经千辛万苦,终于首次进入 ReactOS 的界面时,系统会检测硬件并自动帮助我们安装驱动。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_16.png) + +这是我这里被自动检测出来的三个硬件: + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_17.png) + +在上一张图片里你看到的是 ReactOS 的“应用管理器”,这东西是 Linux 的标配。不过你不会在这里找到任何与 Linux 有关系的东西。只有在这个系统里工作良好的开源软件才会在这个管理器中出现。这就导致了管理器中有的分类下挤得满满当当,有的却冷清异常。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_18.png) + +我试着通过软件中心安装了 Firefox 以及通过直接下载 exe 文件双击安装 Notepad++。这两个应用都能完美运行:它们的图标出现在了桌面上,在菜单中也出现了它们的名字,Notepad++ 也出现在了软件中心右侧的分类栏里。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_19.png) + +我没有尝试运行任何现代的 Windows 游戏,如果你想配置 Direct 3D 的话,你可以转到 “我的电脑/控制选项/WineD3D 配置”。在那里,你能看到很多 Direct3D 选项,大致与 dx 8 的选项类似。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_20.png) + +ReactOS 还有一个好的地方,就是我们可以通过“我的电脑”来操作注册表。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_21.png) + +如果你需要一个简单点的工具,你可以在应用菜单里打开注册表编辑器。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_22.png) + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_23.png) + +最后,如果你认为 ReactOS 看起来有点过时了的话,你可以在桌面右击选择“属性”,之后在“外观”那里选择你喜欢的主题和颜色。 + +![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_24.png) + +### 结论 + +老实说,我对 ReactOS 的工作方式印象深刻。它相当稳定、连贯、快速,并且真正人性化。抛开 Windows 的阴影(过时的应用菜单,不合理的菜单结构)不谈的话,ReactOS 几乎做到了尽善尽美。它可能不会有太多应用可供选择,现有的功能也可能不够强大,但是我确信它将会繁荣壮大。关于它的数据显示出了它的人气,我确定将要围绕它建立起来的社区将会很快就壮大到能把这个项目带往成功之路的地步。如今,ReactOS 的最新版本是 0.4.1。如果想要以开源的方式运行 Windows 的应用,那么它就是你的菜! 
+ + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/tutorial/getting-started-with-reactos/ + +作者:[Bill Toulas][a] +译者:[name1e5s](https://github.com/name1e5s) +校对:[PurlingNayuki](https://github.com/PurlingNayuki) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.howtoforge.com/tutorial/getting-started-with-reactos/ diff --git a/published/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md b/published/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md new file mode 100644 index 0000000000..50ec85acaa --- /dev/null +++ b/published/20160616 10 Basic Linux Commands That Every Linux Newbies Should Remember.md @@ -0,0 +1,141 @@ +Linux 新手必知必会的 10 条 Linux 基本命令 +===================================================================== + +![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/4225072_orig.png) + + +Linux 对我们的生活产生了巨大的冲击。至少你的安卓手机使用的就是 Linux 核心。尽管如此,在第一次开始使用 Linux 时你还是会感到难以下手。因为在 Linux 中,通常需要使用终端命令来取代 Windows 系统中的点击启动图标操作。但是不必担心,这里我们会介绍 10 个 Linux 基本命令来帮助你开启 Linux 神秘之旅。 + + +### 帮助新手走出第一步的 10 个 Linux 基本命令 + +当我们谈论 Linux 命令时,实质上是在谈论 Linux 系统本身。这短短的 10 个 Linux 基本命令不会让你变成天才或者 Linux 专家,但是能帮助你轻松开始 Linux 之旅。使用这些基本命令会帮助新手们完成 Linux 的日常任务,由于它们的使用频率如此至高,所以我更乐意称他们为 Linux 命令之王! + +让我们开始学习这 10 条 Linux 基本命令吧。 + + +#### 1. sudo + +这条命令的意思是“以超级用户的身份执行”,是 SuperUserDo 的简写,它是新手将要用到的最重要的一条 Linux 命令。当一条单行命令需要 root 权限的时候,`sudo`命令就派上用场了。你可以在每一条需要 root 权限的命令前都加上`sudo`。 + +``` +$ sudo su +``` + + +#### 2. ls (list) + + +跟其他人一样,你肯定也经常想看看目录下都有些什么东西。使用列表命令,终端会把当前工作目录下所有的文件以及文件夹展示给你。比如说,我当前处在 /home 文件夹中,我想看看 /home 文件夹中都有哪些文件和目录。 + +``` +/home$ ls +``` + + +在 /home 中执行`ls`命令将会返回类似下面的内容: + +``` +imad lost+found +``` + + +#### 3. cd + +变更目录命令(cd)是终端中总会被用到的主要命令。它是最常用到的 Linux 基本命令之一。此命令使用非常简单,当你打算从当前目录跳转至某个文件夹时,只需要将文件夹键入此命令之后即可。如果你想跳转至上层目录,只需要在此命令之后键入两个点 (..) 就可以了。 +​ +举个例子,我现在处在 /home 目录中,我想移动到 /home 目录中的 usr 文件夹下,可以通过以下命令来完成操作。 + +``` +/home $ cd usr + +/home/usr $ +``` + + +#### 4. mkdir + +只是可以切换目录还是不够完美。有时候你会想要新建一个文件夹或子文件夹。此时可以使用 mkdir 命令来完成操作。使用方法很简单,只需要把新的文件夹名跟在 mkdir 命令之后就好了。 + +``` +~$ mkdir folderName +``` + + +#### 5. cp + +拷贝-粘贴(copy-and-paste)是我们组织文件需要用到的重要命令。使用 `cp` 命令可以帮助你在终端当中完成拷贝-粘贴操作。首先确定你想要拷贝的文件,然后键入打算粘贴此文件的目标位置。 + +``` +$ cp src des +``` + +注意:如果目标目录对新建文件需要 root 权限时,你可以使用 `sudo` 命令来完成文件拷贝操作。 + + +#### 6. rm + +rm 命令可以帮助你移除文件甚至目录。如果不希望每删除一个文件都提示确认一次,可以用`-f`参数来强制执行。也可以使用 `-r` 参数来递归的移除文件夹。 + +``` +$ rm myfile.txt +``` + + +#### 7. apt-get + +这个命令会依据发行版的不同而有所区别。在基于 Debian 的发行版中,我们拥有 Advanced Packaging Tool(APT)包管理工具来安装、移除和升级包。apt-get 命令会帮助你安装需要在 Linux 系统中运行的软件。它是一个功能强大的命令行,可以用来帮助你对软件执行安装、升级和移除操作。 + +在其他发行版中,例如 Fedora、Centos,都各自不同的包管理工具。Fedora 之前使用的是 yum,不过现在 dnf 成了它默认的包管理工具。 + +``` +$ sudo apt-get update + +$ sudo dnf update +``` + + +#### 8. grep + +当你需要查找一个文件,但是又忘记了它具体的位置和路径时,`grep` 命令会帮助你解决这个难题。你可以提供文件的关键字,使用`grep`命令来查找到它。 + +``` +$ grep user /etc/passwd +``` + + +#### 9. cat + +作为一个用户,你应该会经常需要浏览脚本内的文本或者代码。`cat`命令是 Linux 系统的基本命令之一,它的用途就是将文件的内容展示给你。 + +``` +$ cat CMakeLists.txt +``` + + +#### 10. 
poweroff + +最后一个命令是 `poweroff`。有时你需要直接在终端中执行关机操作。此命令可以完成这个任务。由于关机操作需要 root 权限,所以别忘了在此命令之前添加`sudo`。 + +``` +$ sudo poweroff +``` + + +### 总结 + +如我在文章开始所言,这 10 条命令并不会让你立即成为一个 Linux 大拿,但它们会让你在初期快速上手 Linux。以这些命令为基础,给自己设置一个目标,每天学习一到三条命令,这就是此文的目的所在。在下方评论区分享有趣并且有用的命令。别忘了跟你的朋友分享此文。 + + +-------------------------------------------------------------------------------- + +via: http://www.linuxandubuntu.com/home/10-basic-linux-commands-that-every-linux-newbies-should-remember + +作者:[Commenti][a] +译者:[mr-ping](https://github.com/mr-ping) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.linuxandubuntu.com/home/10-basic-linux-commands-that-every-linux-newbies-should-remember#comments +[1]: http://linuxandubuntu.com/home/category/linux diff --git a/published/20160616 6 Amazing Linux Distributions For Kids.md b/published/20160616 6 Amazing Linux Distributions For Kids.md new file mode 100644 index 0000000000..79f3aff154 --- /dev/null +++ b/published/20160616 6 Amazing Linux Distributions For Kids.md @@ -0,0 +1,113 @@ +惊艳!6款面向儿童的 Linux 发行版 +====================================== + +毫无疑问未来是属于 Linux 和开源的。为了实现这样的未来、使 Linux 占据一席之地,人们已经着手开发尽可能简单的、面向儿童的 Linux 发行版,并尝试教导他们如何使用 Linux 操作系统。 + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Linux-Distros-For-Kids.png) +>面向儿童的 Linux 发行版 + +Linux 是一款非常强大的操作系统,原因之一便是它支撑了互联网上绝大多数的服务器。但出于对其用户友好性的担忧,坊间时常展开有关于 Linux 应如何取代 Mac OS X 或 Windows 的争论。而我认为用户应该接受 Linux 来见识它真正的威力。 + +如今,Linux 运行在绝大多数设备上,从智能手机到平板电脑,笔记本电脑,工作站,服务器,超级计算机,再到汽车,航空管制系统,电冰箱,到处都有 Linux 的身影。正如我在开篇所说,有了这一切, Linux 是未来的操作系统。 + +>参考阅读: [30 Big Companies and Devices Running on Linux][1] + +未来是属于孩子们的,教育要从娃娃抓起。所以,要让小孩子尽早地学习计算机,而 Linux 就是其中一个重要的部分。 + +对小孩来说,一个常见的现象是,当他们在一个适合他的环境中学习时,好奇心和早期学习的能力会使他自己养成喜好探索的性格。 + +说了这么多儿童应该学习 Linux 的原因,接下来我就列出这些令人激动的发行版。你可以把它们推荐给小孩子来帮助他们开始学习使用 Linux 。 + +### Sugar on a Stick + +Sugar on a Stick (译注:“糖棒”)是 Sugar 实验室旗下的工程,Sugar 实验室是一个由志愿者领导的非盈利组织。这一发行版旨在设计大量的免费工具来促进儿童在探索中学会技能,发现、创造,并将这些反映到自己的思想上。 + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Sugar-Neighborhood-View.png) +>Sugar Neighborhood 界面 + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Sugar-Activity-Library.png) +>Sugar 应用程序 + +你既可以将 Sugar 看作是普通的桌面环境,也可以把它当做是帮助鼓励孩子学习、提高参与活动的积极性的一款应用合集。 + +访问主页: + +### Edubuntu + +Edubuntu 是基于当下最流行的发行版 Ubuntu 而开发的一款非官方发行版。主要致力于降低学校、家庭和社区安装、使用 Ubuntu 自由软件的难度。 + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Edubuntu-Apps.jpg) +>Edubuntu 桌面应用 + +它是由来自不同组织的学生、教师、家长、一些利益相关者甚至黑客支持的,这些人都坚信自由的学习和共享知识能够提高自己和社区的发展。 + +该项目的主要目标是通过组建一款能够降低安装、管理软件难度的操作系统来增强学习和教育水平。 + +访问主页: + +### Doudou Linux + +Doudou Linux 是专为方便孩子们在建设创新思维时使用计算机而设计的发行版。它提供了简单但是颇具教育意义的应用来使儿童在应用过程中学习发现新的知识。 + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Doudou-Linux.png) +>Doudou Linux + +其最引人注目的一点便是内容过滤功能,顾名思义,它能够阻止孩童访问网络上的限制性内容。如果想要更进一步的儿童保护功能,Doudou Linux 还提供了互联网用户隐私功能,能够去除网页中的特定加载内容。 + +访问主页: + +### LinuxKidX + +这是一款整合了许多专为儿童设计的教育软件的基于 Slackware Linux 发行版的 LiveCD。它使用 KDE 作为默认桌面环境并配置了诸如 Ktouch 打字指导,Kstars 虚拟天文台,Kalzium 元素周期表和 KwordQuiz 单词测试等应用。 + +![](http://www.tecmint.com/wp-content/uploads/2016/05/LinuxKidX.jpg) +>LinuxKidX + +访问主页: + +### Ubermix + +Ubermix 基于 Ubuntu 构建,同样以教学为目的。默认配备了超过60款应用,帮助学生更好地学习,同时给教师教学提供便利。 + +![](http://www.tecmint.com/wp-content/uploads/2016/05/ubermix.png) +>Ubermix Linux + +Ubermix 还具有5分钟快速安装和快速恢复等功能,可以给小孩子更好的帮助。 + +访问主页: + +### Qimo + +因为很多读者曾向我询问过 Qimo 发行版的情况,所以我把它写进这篇文章。但是截止发稿时,Qimo 儿童版的开发已经终止,不再提供更新。 + 
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Qimo-Linux.png) +>Qimo Linux + +你仍然可以在 Ubuntu 或者其他的 Linux 发行版中找到大多数儿童游戏。正如这些开发商所说,他们并不是要终止为孩子们开发教育软件,而是在开发能够提高儿童读写能力的 android 应用。 + +如果你想进一步了解,可以移步他们的官方网站。 + +访问主页: + +以上这些便是我所知道的面向儿童的Linux发行版,或有缺漏,欢迎评论补充。 + +如果你想让我们知道你对如何想儿童介绍 Linux 或者你对未来的 Linux ,特别是桌面计算机上的 Linux,欢迎与我联系。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/best-linux-distributions-for-kids/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 + + +作者:[Aaron Kili][a] +译者:[HaohongWANG](https://github.com/HaohongWANG) +校对:[Ezio](https://github.com/oska874) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ +[1]: http://www.tecmint.com/big-companies-and-devices-running-on-gnulinux/ + + + + + diff --git a/published/20160620 Monitor Linux With Netdata.md b/published/20160620 Monitor Linux With Netdata.md new file mode 100644 index 0000000000..989abf0d66 --- /dev/null +++ b/published/20160620 Monitor Linux With Netdata.md @@ -0,0 +1,118 @@ +用 Netdata 监控 Linux +======= + +![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/netdata-945x400.png) + +Netdata 是一个实时的资源监控工具,它拥有基于 web 的友好界面,由 [FireHQL][1] 开发和维护。通过这个工具,你可以通过图表来了解 CPU,RAM,硬盘,网络,Apache, Postfix 等软硬件的资源使用情况。它很像 Nagios 等别的监控软件;但是,Netdata 仅仅支持通过 Web 界面进行实时监控。 + +### 了解 Netdata + +目前 Netdata 还没有验证机制,如果你担心别人能从你的电脑上获取相关信息的话,你应该设置防火墙规则来限制访问。UI 很简单,所以任何人看懂图形并理解他们看到的结果,至少你会对它的快速安装印象深刻。 + +它的 web 前端响应很快,而且不需要 Flash 插件。 UI 很整洁,保持着 Netdata 应有的特性。第一眼看上去,你能够看到很多图表,幸运的是绝大多数常用的图表数据(像 CPU,RAM,网络和硬盘)都在顶部。如果你想深入了解图形化数据,你只需要下滑滚动条,或者点击在右边菜单的项目。通过每个图表的右下方的按钮, Netdata 还能让你控制图表的显示,重置,缩放。 + +![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture-1.png) + +*Netdata 图表控制* + +Netdata 并不会占用多少系统资源,它占用的内存不会超过 40MB。因为这个软件是作者用 C 语言写的。 + +![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture.png) + +*Netdata 显示的内存使用情况* + +### 下载 Netdata + +要下载这个软件,你可以访问 [Netdata 的 GitHub 页面][2],然后点击页面左边绿色的 "Clone or download" 按钮 。你应该能看到以下两个选项: + +#### 通过 ZIP 文件下载 + +一种方法是下载 ZIP 文件。它包含仓库里的所有东西。但是如果仓库更新了,你需要重新下载 ZIP 文件。下载完 ZIP 文件后,你要用 `unzip` 命令行工具来解压文件。运行下面的命令能把 ZIP 文件的内容解压到 `netdata` 文件夹。 + +``` +$ cd ~/Downloads +$ unzip netdata-master.zip +``` + +![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture-2.png) + +*解压 Netdata* + +没必要在 unzip 命令后加上 `-d` 选项,因为文件都是是放在 ZIP 文件的根文件夹里面。如果没有那个文件夹, unzip 会把所有东西都解压到当前目录下面(这会让文件非常混乱)。 + +#### 通过 Git 下载 + +还有一种方式是通过 git 下载整个仓库。当然,你的系统需要安装 git。Git 在 Fedora 系统是默认安装的。如果没有安装,你可以用下面的命令在命令行里安装 git。 + +``` +$ sudo dnf install git +``` + +安装好 git 后,你要把仓库 “clone” 到你的系统里。运行下面的命令。 + +``` +$ git clone https://github.com/firehol/netdata.git +``` + +这个命令会在当前工作目录克隆(或者说复制一份)仓库。 + +### 安装 Netdata + +有些软件包是你成功构造 Netdata 时候需要的。 还好,一行命令就可以安装你所需要的东西([这写在它的安装文档中][3])。在命令行运行下面的命令就能满足安装 Netdata 需要的所有依赖关系。 + +``` +$ dnf install zlib-devel libuuid-devel libmnl-devel gcc make git autoconf autogen automake pkgconfig +``` + +当所有需要的软件包都安装好了,你就 cd 到 netdata/ 目录,运行 netdata-installer.sh 脚本。 + +``` +$ sudo ./netdata-installer.sh +``` + +然后就会提示你按回车键,开始安装程序。如果要继续的话,就按下回车吧。 + +![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Capture-3-600x341.png) + +*Netdata 的安装* + +如果一切顺利,你的系统上就已经安装并且运行了 Netdata。安装脚本还会在相应的文件夹里添加一个卸载脚本,叫做 `netdata-uninstaller.sh`。如果你以后不想使用 Netdata,运行这个脚本可以从你的系统里面卸载掉 Netdata。 + +你可以通过 systemctl 查看它的运行状态。 + +``` +$ sudo systemctl status netdata +``` + +### 使用 Netdata + +既然我们已经安装并且运行了 
Netdata,你就能够通过 19999 端口来访问 web 界面。下面的截图是我在一个测试机器上运行的 Netdata。 + +![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Capture-4-768x458.png) + +*关于 Netdata 运行时的概览* + +恭喜!你已经成功安装并且能够看到漂亮的外观和图形,以及你的机器性能的高级统计数据。无论是否是你个人的机器,你都可以向你的朋友们炫耀,因为你能够深入的了解你的服务器性能,Netdata 在任何机器上的性能报告都非常出色。 + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/monitor-linux-netdata/ + +作者:[Martino Jones][a] +译者:[GitFuture](https://github.com/GitFuture) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/monitor-linux-netdata/ +[1]: https://firehol.org/ +[2]: https://github.com/firehol/netdata +[3]: https://github.com/firehol/netdata/wiki/Installation + + + + + + + + diff --git a/published/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md b/published/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md new file mode 100644 index 0000000000..7ad127370b --- /dev/null +++ b/published/20160620 PowerPC gains an Android 4.4 port with Big Endian support.md @@ -0,0 +1,106 @@ +Android 4.4 移植到了 PowerPC 架构,支持大端架构 +=========================================================== + +eInfochips(一家软件厂商) 已将将 Android 4.4 系统移植到 PowerPC 架构,它将用于一家航空电子客户用来监视引擎的健康状况的人机界面(HMI:Human Machine Interface)。 + +eInfochips 已经开发了第一个面向 PowerPC 架构的 CPU 的 Android 移植版本,并支持大端(Big Endian)架构。此移植基于 Android 开源项目 [Android Open Source Project (AOSP)] 中 Android 4.4 (KitKat) 的代码,其功能内核的版本号为 3.12.19。 + +Android 开始兴起的时候,PowerPC 正在快速丢失和 ARM 架构共同角逐的市场。高端的网络客户和其它的企业级的嵌入式工具大多运行在诸如飞思卡尔(Freescale)的 PowerQUICC 和 QorIQ 这样的 PowerPC 处理器上,但是并不是 Linux 系统。不过,有几个 Android 的移植计划。在 2009 年,飞思卡尔和 Embedded Alley(一家软件厂商,当前是 Mentor Graphics 的 Linux 团队的一部分)[宣布了针对 PowerQUICC 和 QorIQ 芯片的移植版本][15],当前由 NXP 公司构建。另一个名为 [Android-PowerPC][16] 的项目也作出了相似的工作。 + +这些努力来的都并不容易,然而,当航空公司找到 eInfochips,希望能够为他们那些基于 PowerPC 的引擎监控系统添加 Android 应用程序以改善人机界面。该公司找出了这些早期的移植版本,然而,它们都相距甚远。所以,他们不得不从头开始新的移植。 + +最主要的问题是这些移植的 Android 版本实在是太老了,和现在的 Android 差别太大了。Embedded Alley 移植的版本为 Android 1.5 (Cupcake),它于 2009 年发布,Linux 内核版本为 2.6.28。而 Android-PowerPC 项目最后一版的移植是 Android 2.2 (Froyo),它于 2010 年发布,内核版本为 2.6.32。此外,航空公司还有一些额外的技术诉求,例如对大端架构(Big Endian)的支持,这种老式的内存访问方式仍旧应用于网络通信和电信行业。然而那些早期的移植版本仅能够支持小端(Little Endian)的内存访问。 + +### 来自 eInfochips 的全新 PowerPC 架构移植 + +eInfochips, 它最为出名的应该是那些基于 ARM/骁龙处理器的模块计算机板卡,例如 [Eragon 600][17]。 它已经完成了基于 QorIQ 的 Android 4.4 系统移植,且发布了白皮书介绍了该项目。采用该项目的航空电子设备客户仍旧不愿透露名称,目前仍旧不清楚什么时候会公开此该移植版本。 + +![](http://files.linuxgizmos.com/einfochips_porting_android_on_powerpc.jpg) + +*图片来自 eInfochips 的博客日志* + +全新的 PowerPC Android 项目包括: + +- 基于 PowerPC [e5500][1] 仿生定制 +- 基于 Android KitKat 的大端支持 +- 使用 GCC 5.2 工具链开发 +- Android 4.4 框架的 PowerPC 支持 +- PowerPC e5500 的 Android 内核版本为 3.12.19 + +根据 eInfochips 的销售经理 Sooryanarayanan Balasubramanian 描述,该航空电子客户想要使用 Android 主要是因为熟悉的界面能够缩减培训的时间,并且让程序更新和增加新程序变得更加容易。他继续解释说:“这次成功的移植了 Android,使得今后的工作仅仅需要在应用层作出修修改改,而不再向以前一样需要在所有层面之间作相互的校验。”,“这是第一次在航空航天工业作出这些尝试,这需要在设计时尽量认真。” + +通过白皮书,可以知道将 Android 移植到 PowerPC 上需要对框架、核心库、开发工具链、运行时链接器、对象链接器和开源编译工具作出大量的修改。在字节码生成阶段,移植团队决定使用便携模式(portable mode)而不是快速解释模式(fast interpreter mode)。这是因为还没有 PowerPC 可用的快速解释模式,而使用开源的 [libffi][18] 的便携模式能够支持 PowerPC。 + +同时,团队还面临着在 Android 运行时 (ART) 环境和 Dalvik 虚拟机 (DVM) 环境之间的选择。他们发现,ART 环境下的便携模式还未经测试且缺乏良好的文档支持,所以最终选择了 DVM 环境下的便携模式。 + +白皮书中还提及了其它的一些在移植过程中遇到的困难,包括重新开发工具链,重写脚本以解决 AOSP 对编译器标志“非标准”使用的问题。最终完成的移植版本提供了 37 个服务,以及提供了无界面的 Android 部署,在前端使用用户空间的模拟 UI。 + + +### 目标硬件 + +感谢来自 [eInfochips 博客日志][2] 
的图片(如下图所示),让我们能够确认此 PowerPC 的 Android 移植项目的硬件平台。这个板卡为 [X-ES Xpedite 6101][3],它是一个加固级 XMC/PrPMC 夹层模组。 + +![](http://hackerboards.com/files/xes_xpedite6101-sm.jpg) + +*X-ES Xpedite 6101 照片和框图* + +X-ES Xpedite 6101 板卡拥有一个可选的 NXP 公司基于 QorIQ T 系列通信处理器(T2081、T1042 和 T1022),它们分别集成了 8 个、4 个和 2 个 e6500 核心,稍有不同的是,T2081 的处理器主频为 1.8GHz,T1042/22 的处理器主频为 1.4GHz。所有的核心都集成了 AltiVec SIMD 引擎,这也就意味着它能够提供 DSP 级别的浮点运算性能。所有以上 3 款 X-ES 板卡都能够支持最高 8GB 的 DDR3-1600 ECC SDRAM 内存。外加 512MB NOR 和 32GB 的 NAND 闪存。 + +![](http://hackerboards.com/files/nxp_qoriq_t2081_block-sm.jpg) + +*NXP T2081 框图* + +板卡的 I/O 包括一个 x4 PCI Express Gen2 通道,以及双工的千兆级网卡、 RS232/422/485 串口和 SATA 3.0 接口。此外,它可选 3 款 QorIQ 处理器,Xpedite 6101 提供了三种 [X-ES 加固等级][19],分别是额定工作温度 0 ~ 55°C, -40 ~ 70°C, 或者是 -40 ~ 85°C,且包含 3 类冲击和抗振类别。 + +此外,我们已经介绍过的基于 X-ES QorIQ 的 XMC/PrPMC 板卡包括 [XPedite6401 和 XPedite6370][20],它们支持已有的板卡级 Linux 、风河的 VxWorks(一种实时操作系统) 和 Green Hills 的 Integrity(也是一种操作系统)。 + + +### 更多信息 + +eInfochips Android PowerPC 移植白皮书可以[在此][4]下载(需要先免费注册)。 + +### 相关资料 + +- [Commercial embedded Linux distro boosts virtualization][5] +- [Freescale unveils first ARM-based QorIQ SoCs][6] +- [High-end boards run Linux on 64-bit ARM QorIQ SoCs][7] +- [Free, Open Enea Linux taps Yocto Project and Linaro code][8] +- [LynuxWorks reverts to its LynxOS roots, changes name][9] +- [First quad- and octa-core QorIQ SoCs unveiled][10] +- [Free white paper shows how Linux won embedded][11] +- [Quad-core Snapdragon COM offers three dev kit options][12] +- [Tiny COM runs Linux on quad-core 64-bit Snapdragon 410][13] +- [PowerPC based IoT gateway COM ships with Linux BSP][14] + + +-------------------------------------------------------------------------------- + +via: http://hackerboards.com/powerpc-gains-android-4-4-port-with-big-endian-support/ + +作者:[Eric Brown][a] +译者:[dongfengweixiao](https://github.com/dongfengweixiao) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://hackerboards.com/powerpc-gains-android-4-4-port-with-big-endian-support/ +[1]: http://linuxdevices.linuxgizmos.com/low-cost-powerquicc-chips-offer-flexible-interconnect-options/ +[2]: https://www.einfochips.com/blog/k2-categories/aerospace/presenting-a-case-for-porting-android-on-powerpc-architecture.html +[3]: http://www.xes-inc.com/products/processor-mezzanines/xpedite6101/ +[4]: http://biz.einfochips.com/portingandroidonpowerpc +[5]: http://hackerboards.com/commercial-embedded-linux-distro-boosts-virtualization/ +[6]: http://hackerboards.com/freescale-unveils-first-arm-based-qoriq-socs/ +[7]: http://hackerboards.com/high-end-boards-run-linux-on-64-bit-arm-qoriq-socs/ +[8]: http://hackerboards.com/free-open-enea-linux-taps-yocto-and-linaro-code/ +[9]: http://hackerboards.com/lynuxworks-reverts-to-its-lynxos-roots-changes-name/ +[10]: http://hackerboards.com/first-quad-and-octa-core-qoriq-socs-unveiled/ +[11]: http://hackerboards.com/free-white-paper-shows-how-linux-won-embedded/ +[12]: http://hackerboards.com/quad-core-snapdragon-com-offers-three-dev-kit-options/ +[13]: http://hackerboards.com/tiny-com-runs-linux-and-android-on-quad-core-64-bit-snapdragon-410/ +[14]: http://hackerboards.com/powerpc-based-iot-gateway-com-ships-with-linux-bsp/ +[15]: http://linuxdevices.linuxgizmos.com/android-ported-to-powerpc/ +[16]: http://www.androidppc.com/ +[17]: http://hackerboards.com/quad-core-snapdragon-com-offers-three-dev-kit-options/ +[18]: https://sourceware.org/libffi/ +[19]: 
http://www.xes-inc.com/capabilities/ruggedization/
+[20]: http://hackerboards.com/high-end-boards-run-linux-on-64-bit-arm-qoriq-socs/
diff --git a/published/20160624 How to permanently mount a Windows share on Linux.md b/published/20160624 How to permanently mount a Windows share on Linux.md
new file mode 100644
index 0000000000..1782a76241
--- /dev/null
+++ b/published/20160624 How to permanently mount a Windows share on Linux.md
@@ -0,0 +1,123 @@
+如何在 Linux 上永久挂载一个 Windows 共享
+==================================================

+> 如果你已经厌倦了每次重启 Linux 就得重新挂载 Windows 共享,读读这个让共享永久挂载的简单方法。

+![](http://tr2.cbsistatic.com/hub/i/2016/06/02/e965310b-b38d-43e6-9eac-ea520992138b/68fd9ec5d6731cc405bdd27f2f42848d/linuxadminhero.jpg)

+*图片: Jack Wallen*

+在 Linux 上和一个 Windows 网络进行交互从来就不是件轻松的事情。想想多少企业正在采用 Linux,需要在这两个平台上彼此协作。幸运的是,有了一些工具的帮助,你可以轻松地将 Windows 网络驱动器映射到一台 Linux 机器上,甚至可以确保在重启 Linux 机器之后共享还在。

+### 在我们开始之前

+要实现这个,你需要用到命令行。过程十分简单,但你需要编辑 /etc/fstab 文件,所以小心操作。还有,我假设你已经让 Samba 正常工作了,可以手动从 Windows 网络挂载共享到你的 Linux 机器,还知道这个共享的主机 IP 地址。

+准备好了吗?那就开始吧。

+### 创建你的挂载点

+我们要做的第一件事是创建一个文件夹,它将作为共享的挂载点。为了简单起见,我们将这个文件夹命名为 share,放在 /media 之下。打开你的终端执行以下命令:

+```
+sudo mkdir /media/share
+```

+### 安装一些软件

+现在我们得安装允许跨平台文件共享的系统;这个系统是 cifs-utils。在终端窗口输入:

+```
+sudo apt-get install cifs-utils
+```

+这个命令同时还会安装 cifs-utils 所有的依赖。

+安装完成之后,打开文件 /etc/nsswitch.conf 并找到这一行:

+```
+hosts: files mdns4_minimal [NOTFOUND=return] dns
+```

+编辑这一行,让它看起来像这样:

+```
+hosts: files mdns4_minimal [NOTFOUND=return] wins dns
+```

+现在你需要安装 winbind 让你的 Linux 机器可以在 DHCP 网络中解析 Windows 机器名。在终端里执行:

+```
+sudo apt-get install libnss-winbind winbind
+```

+用这个命令重启网络服务:

+```
+sudo service networking restart
+```

+### 挂载网络驱动器

+现在我们要映射网络驱动器。这里我们必须编辑 /etc/fstab 文件。在你做第一次编辑之前,用这个命令备份一下这个文件:

+```
+sudo cp /etc/fstab /etc/fstab.old
+```

+如果你需要恢复这个文件,执行以下命令:

+```
+sudo mv /etc/fstab.old /etc/fstab
+```

+在你的主目录创建一个认证信息文件 .smbcredentials。在这个文件里添加你的用户名和密码,就像这样(USER 和 PASSWORD 替换为实际的用户名和密码):

+```
+username=USER

+password=PASSWORD
+```

+你需要知道挂载这个驱动器的用户的组 ID(GID)和用户 ID(UID)。执行命令:

+```
+id USER
+```

+USER 是你的实际用户名,你应该会看到类似这样的信息:

+```
+uid=1000(USER) gid=1000(GROUP)
+```

+USER 是实际的用户名,GROUP 是组名。在(USER)和(GROUP)之前的数字将会被用在 /etc/fstab 文件之中。

+是时候编辑 /etc/fstab 文件了。在你的编辑器中打开那个文件并添加下面这行到文件末尾(替换以下全大写字段以及远程机器的 IP 地址):

+```
+//192.168.1.10/SHARE /media/share cifs credentials=/home/USER/.smbcredentials,iocharset=utf8,gid=GID,uid=UID,file_mode=0777,dir_mode=0777 0 0
+```

+**注意**:上面这些内容应该在同一行上。

+保存并关闭那个文件。执行 `sudo mount -a` 命令,共享将被挂载。检查一下 /media/share,你应该能看到那个网络共享上的文件和文件夹了。

+### 共享很简单

+有了 cifs-utils 和 Samba,映射网络共享在一台 Linux 机器上简单得让人难以置信。现在,你再也不用在每次机器启动的时候手动重新挂载那些共享了。

+更多网络提示和技巧,订阅我们的 Data Center 消息吧。
+[订阅](https://secure.techrepublic.com/user/login/?regSource=newsletter-button&position=newsletter-button&appId=true&redirectUrl=http%3A%2F%2Fwww.techrepublic.com%2Farticle%2Fhow-to-permanently-mount-a-windows-share-on-linux%2F&)

+--------------------------------------------------------------------------------

+via: http://www.techrepublic.com/article/how-to-permanently-mount-a-windows-share-on-linux/

+作者:[Jack Wallen][a]
+译者:[alim0x](https://github.com/alim0x)
+校对:[wxy](https://github.com/wxy)

+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: http://www.techrepublic.com/search/?a=jack+wallen
diff --git a/published/20160624 IT runs on the cloud and the cloud runs on Linux. 
Any questions.md b/published/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md new file mode 100644 index 0000000000..17af8d85d9 --- /dev/null +++ b/published/20160624 IT runs on the cloud and the cloud runs on Linux. Any questions.md @@ -0,0 +1,63 @@ +IT 运行在云端,而云运行在 Linux 上。你怎么看? +=================================================================== + +> IT 正在逐渐迁移到云端。那又是什么驱动了云呢?答案是 Linux。 当连微软的 Azure 都开始拥抱 Linux 时,你就应该知道这一切都已经改变了。 + +![](http://zdnet1.cbsistatic.com/hub/i/r/2016/06/24/7d2b00eb-783d-4202-bda2-ca65d45c460a/resize/770xauto/732db8df725ede1cc38972788de71a0b/linux-owns-cloud.jpg) + +*图片: ZDNet* + +不管你接不接受, 云正在接管 IT 已经成为现实。 我们这几年见证了 [ 云在内部 IT 的崛起 ][1] 。 那又是什么驱动了云呢? 答案是 Linux 。 + +[Uptime Institute][2] 最近对 1000 个 IT 决策者进行了调查,发现约 50% 左右的资深企业 IT 决策者认为在将来[大部分的 IT 工作应该放在云上 ][3] 或托管网站上。在这个调查中,23% 的人认为这种改变即将发生在明年,有 70% 的人则认为这种情况会在四年内出现。 + +这一点都不奇怪。 我们中的许多人仍热衷于我们的物理服务器和机架, 但一般运营一个自己的数据中心并不会产生任何的经济效益。 + +很简单, 只需要对比你[运行在你自己的硬件上的资本费用(CAPEX)和使用云的业务费用(OPEX)][4]即可。 但这并不是说你应该把所有的东西都一股脑外包出去,而是说在大多数情况下你应该把许多工作都迁移到云端。 + +相应地,如果你想充分地利用云,你就得了解 Linux 。 + +[亚马逊的 AWS][5]、 [Apache CloudStack][6]、 [Rackspace][7]、[谷歌的 GCP][8] 以及 [ OpenStack ][9] 的核心都是运行在 Linux 上的。那么结果如何?截至到 2014 年, [在 Linux 服务器上部署的应用达到所有企业的 79% ][10],而 在 Windows 服务器上部署的则跌到 36%。从那时起, Linux 就获得了更多的发展动力。 + +即便是微软自身也明白这一点。 + +Azure 的技术主管 Mark Russinovich 曾说,仅仅在过去的几年内微软就从[四分之一的 Azure 虚拟机运行在 Linux 上][11] 变为[将近三分之一的 Azure 虚拟机运行在 Linux 上][12]。 + +试想一下。微软,一家正逐渐将[云变为自身财政收入的主要来源][13] 的公司,其三分之一的云产业依靠于 Linux 。 + +即使是到目前为止, 这些不论喜欢或者不喜欢微软的人都很难想象得到[微软会从一家以商业软件为基础的软件公司转变为一家开源的、基于云服务的企业][14] 。 + +Linux 对于这些专用服务器机房的渗透甚至比它刚开始的时候更深了。 举个例子, [Docker 最近发行了其在 Windows 10 和 Mac OS X 上的公测版本 ][15] 。 这难道是意味着 [Docker][16] 将会把其同名的容器服务移植到 Windows 10 和 Mac 上吗? 并不是的。 + +在这两个平台上, Docker 只是运行在一个 Linux 虚拟机内部。 在 Mac OS 上是 HyperKit ,在 Windows 上则是 Hyper-V 。 在图形界面上可能看起来就像另一个 Mac 或 Windows 上的应用, 但在其内部的容器仍然是运行在 Linux 上的。 + +所以,就像大量的安卓手机和 Chromebook 的用户压根就不知道他们所运行的是 Linux 系统一样。这些 IT 用户也会随之悄然地迁移到 Linux 和云上。 + +-------------------------------------------------------------------------------- + +via: http://www.zdnet.com/article/it-runs-on-the-cloud-and-the-cloud-runs-on-linux-any-questions/ + +作者:[Steven J. 
Vaughan-Nichols][a] +译者:[chenxinlong](https://github.com/chenxinlong) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[1]: http://www.zdnet.com/article/2014-the-year-the-cloud-killed-the-datacenter/ +[2]: https://uptimeinstitute.com/ +[3]: http://www.zdnet.com/article/move-to-cloud-accelerating-faster-than-thought-survey-finds/ +[4]: http://www.zdnet.com/article/rethinking-capex-and-opex-in-a-cloud-centric-world/ +[5]: https://aws.amazon.com/ +[6]: https://cloudstack.apache.org/ +[7]: https://www.rackspace.com/en-us +[8]: https://cloud.google.com/ +[9]: http://www.openstack.org/ +[10]: http://www.zdnet.com/article/linux-foundation-finds-enterprise-linux-growing-at-windows-expense/ +[11]: http://news.microsoft.com/bythenumbers/azure-virtual +[12]: http://www.zdnet.com/article/microsoft-nearly-one-in-three-azure-virtual-machines-now-are-running-linux/ +[13]: http://www.zdnet.com/article/microsofts-q3-azure-commercial-cloud-strong-but-earnings-revenue-light/ +[14]: http://www.zdnet.com/article/why-microsoft-is-turning-into-an-open-source-company/ +[15]: http://www.zdnet.com/article/new-docker-betas-for-azure-windows-10-now-available/ +[16]: http://www.docker.com/ + diff --git a/published/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md b/published/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md new file mode 100644 index 0000000000..cf916044a6 --- /dev/null +++ b/published/20160624 Industrial SBC builds on Raspberry Pi Compute Module.md @@ -0,0 +1,78 @@ +用树莓派计算模块搭建的工业单板计算机 +===================================================== + +![](http://hackerboards.com/files/embeddedmicro_mypi-thm.jpg) + +在 Kickstarter 众筹网站上,一个叫 “MyPi” 的项目用树莓派计算模块制作了一款 SBC(LCTT 译注: Single Board Computer 单板计算机),提供一个 mini-PCIe 插槽,串口,宽范围输入电源,以及模块扩展等功能。 + +你也许觉得奇怪,都 2016 年了,为什么还会有人发布这样一款长得有点像三明治,用过时的 ARM11 构建的 COM (LCTT 译注: Compuer on Module,模块化计算机)版本的树莓派单板计算机:[树莓派计算模块][1]。原因是这样的,首先,目前仍然有大量工业应用不需要太多 CPU 处理能力,第二,树莓派计算模块仍是目前仅有的基于树莓派硬件的 COM,虽然更便宜、有点像 COM 并采用同样的 700MHz 处理器的 [零号树莓派][2] 也很类似。 + +![](http://hackerboards.com/files/embeddedmicro_mypi-sm.jpg) + +*安装了 COM 和 I/O 组件的 MyPi* + +![](http://hackerboards.com/files/embeddedmicro_mypi_encl-sm.jpg) + +*装入了可选的工业外壳中* + +另外,Embedded Micro Technology 还表示它的 SBC 还设计成可升级替换为支持的树莓派计算模块 —— 采用了树莓派 3 的四核、Cortex-A53 博通 BCM2837处理器的 SoC。因为这个产品最近很快就会到货,不确定他们怎么能及时为 Kickstarter 赞助者处理好这一切。不过,以后能支持也挺不错,就算要为这个升级付费也可以接受。 + +MyPi 并不是唯一一款新的基于树莓派计算模块的商业嵌入式设备。Pigeon Computers 在五月份启动了 [Pigeon RB100][3] 的项目,是一个基于 COM 的工业自动化控制器。不过,包括 [Techbase Modberry][4] 在内的这一类设备大都出现在 2014 年 COM 发布之后的一小段时间内。 + +MyPi 的目标是 30 天内筹集 $21,696,目前已经实现了三分之一。早期参与包的价格 $119 起,九月份发货。其他选项有 $187 版本,里面包含了价值 $30 的树莓派计算模块,以及各种线缆。套件里还有各种插件板以及工业外壳可选。 + +![](http://hackerboards.com/files/embeddedmicro_mypi_baseboard-sm.jpg) + +*不带 COM 和插件板的 MyPi 主板* + +![](http://hackerboards.com/files/embeddedmicro_mypi_detail-sm.jpg) + +*以及它的接口定义* + +树莓派计算模块能给 MyPi 带来博通 BCM2835 Soc,512MB 内存,以及 4GB eMMC 存储空间。MyPi 主板扩展了一个 microSD 卡槽,一个 HDMI 接口,两个 USB 2.0 接口,一个 10/100M 以太网口,还有一个像网口的 RS232 端口(通过 USB 连接)。 + +![](http://hackerboards.com/files/embeddedmicro_mypi_angle1-sm.jpg) + +*插上树莓派计算模块和 mini-PCIe 模块的 MyPi 的两个视角* + +![](http://hackerboards.com/files/embeddedmicro_mypi_angle2.jpg) + +*插上树莓派计算模块和 mini-PCIe 模块的 MyPi 的两个视角* + +MyPi 还将配备一个 mini-PCIe 插槽,据说“只支持 USB,以及只适用 mPCIe 形式的调制解调器”。还带有一个 SIM 卡插槽。板上还有双标准的树莓派摄像头接口,一个音频输出接口,自带备用电池的 RTC,LED 灯。还支持宽范围的 9-23V 直流输入。 + +Embedded Micro 
表示,MyPi 是为那些树莓派爱好者们设计的,他们拼接了太多 HAT 外接板,已经不能有效地工作了,或者不能很好地装入工业外壳里。MyPi 支持 HAT,另外还提供了公司自己定义的 “ASIO” (特定应用接口)插件模块,它会将自己的 I/O 扩展到载板上,载板再将它们连到载板边上的 8针的绿色凤凰式工业 I/O 连接器(标记了“ASIO Out”)上,在下面图片里有描述。 + +![](http://hackerboards.com/files/embeddedmicro_mypi_io-sm.jpg) + +*MyPi 的模块扩展接口* + +就像 Kickstarter 页面里描述的:“比起在板边插满带 IO 信号接头的 HAT 板,我们更愿意把同样的 IO 信号接到另一个接头,它直接接到绿色的工业接头上。” 另外,“通过简单地延长卡上的插脚长度(抬高),你将来可以直接扩展 IO 集 - 这些都不需要任何排线!”Embedded Micro 表示。 + +![](http://hackerboards.com/files/embeddedmicro_mypi_with_iocards-sm.jpg) + +*MyPi 和它的可选 I/O 插件板卡* + +像上面展示的,这家公司为 MyPi 提供了一系列可靠的 ASIO 插卡,。一开始这些会包括 CAN 总线,4-20mA 传感器信号,RS485,窄带 RF,等等。 + +### 更多信息 + +MyPi 在 Kickstarter 上提供了 7 月 23 日到期的 79 英镑($119)早期参与包(不包括树莓派计算模块),预计九月份发货。更多信息请查看 [Kickstarter 上 MyPi 的页面][5] 以及 [Embedded Micro Technology 官网][6]。 + +-------------------------------------------------------------------------------- + +via: http://hackerboards.com/industrial-sbc-builds-on-rpi-compute-module/ + +作者:[Eric Brown][a] +译者:[zpl1025](https://github.com/zpl1025) +校对:[Ezio](https://github.com/oska874) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://hackerboards.com/industrial-sbc-builds-on-rpi-compute-module/ +[1]: http://hackerboards.com/raspberry-pi-morphs-into-30-dollar-com/ +[2]: http://hackerboards.com/pi-zero-tweak-adds-camera-connector-keeps-5-price/ +[3]: http://hackerboards.com/automation-controller-runs-linux-on-raspberry-pi-com/ +[4]: http://hackerboards.com/automation-controller-taps-raspberry-pi-compute-module/ +[5]: https://www.kickstarter.com/projects/410598173/mypi-industrial-strength-raspberry-pi-for-iot-proj +[6]: http://www.embeddedpi.com/ diff --git a/published/20160625 How to Hide Linux Command Line History by Going Incognito.md b/published/20160625 How to Hide Linux Command Line History by Going Incognito.md new file mode 100644 index 0000000000..14bcdc2c7b --- /dev/null +++ b/published/20160625 How to Hide Linux Command Line History by Going Incognito.md @@ -0,0 +1,126 @@ +如何隐藏你的 Linux 的命令行历史 +================================================================ + +![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/06/commandline-history-featured.jpg) + +如果你是 Linux 命令行的用户,有的时候你可能不希望某些命令记录在你的命令行历史中。原因可能很多,例如,你在公司担任某个职位,你有一些不希望被其它人滥用的特权。亦或者有些特别重要的命令,你不希望在你浏览历史列表时误执行。 + +然而,有方法可以控制哪些命令进入历史列表,哪些不进入吗?或者换句话说,我们在 Linux 终端中可以开启像浏览器一样的无痕模式吗?答案是肯定的,而且根据你想要的具体目标,有很多实现方法。在这篇文章中,我们将讨论一些行之有效的方法。 + +注意:文中出现的所有命令都在 Ubuntu 下测试过。 + +### 不同的可行方法 + +前面两种方法已经在之前[一篇文章][1]中描述了。如果你已经了解,这部分可以略过。然而,如果你不了解,建议仔细阅读。 + +#### 1. 在命令前插入空格 + +是的,没看错。在命令前面插入空格,这条命令会被 shell 忽略,也就意味着它不会出现在历史记录中。但是这种方法有个前提,只有在你的环境变量 `HISTCONTROL` 设置为 "ignorespace" 或者 "ignoreboth" 才会起作用。在大多数情况下,这个是默认值。 + +所以,像下面的命令(LCTT 译注:这里`[space]`表示输入一个空格): + +``` +[space]echo "this is a top secret" +``` + +如果你之前执行过如下设置环境变量的命令,那么上述命令不会出现在历史记录中。 + +``` +export HISTCONTROL = ignorespace +``` + +下面的截图是这种方式的一个例子。 + +![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/06/commandline-history-bash-command-space.png) + +第四个 "echo" 命令因为前面有空格,它没有被记录到历史中。 + +#### 2. 禁用当前会话的所有历史记录 + +如果你想禁用某个会话所有历史,你可以在开始命令行工作前简单地清除环境变量 HISTFILESIZE 的值即可。执行下面的命令来清除其值: + +``` +export HISTFILESIZE=0 +``` + +HISTFILESIZE 表示对于 bash 会话其历史文件中可以保存命令的个数(行数)。默认情况,它设置了一个非零值,例如在我的电脑上,它的值为 1000。 + +所以上面所提到的命令将其值设置为 0,结果就是直到你关闭终端,没有东西会存储在历史记录中。记住同样你也不能通过按向上的箭头按键或运行 history 命令来看到之前执行的命令。 + +#### 3. 工作结束后清除整个历史 + +这可以看作是前一部分所提方案的另外一种实现。唯一的区别是在你完成所有工作之后执行这个命令。下面是刚说到的命令: + +``` +history -cw +``` + +刚才已经提到,这个和 HISTFILESIZE 方法有相同效果。 + +#### 4. 
只针对你的工作关闭历史记录 + +虽然前面描述的方法(2 和 3)可以实现目的,它们可以清除整个历史,在很多情况下,有些可能不是我们所期望的。有时候你可能想保存直到你开始命令行工作之间的历史记录。对于这样的需求,你开始在工作前执行下述命令: + +``` +[space]set +o history +``` + +备注:[space] 表示空格。并且由于空格的缘故,该命令本身也不会被记录。 + +上面的命令会临时禁用历史功能,这意味着在这命令之后你执行的所有操作都不会记录到历史中,然而这个命令之前的所有东西都会原样记录在历史列表中。 + +要重新开启历史功能,执行下面的命令: + +``` +[Space]set -o history +``` + +它将环境恢复原状,也就是你完成了你的工作,执行上述命令之后的命令都会出现在历史中。 + +#### 5. 从历史记录中删除指定的命令 + +现在假设历史记录中已经包含了一些你不希望记录的命令。这种情况下我们怎么办?很简单。直接动手删除它们。通过下面的命令来删除: + +``` +history | grep "part of command you want to remove" +``` + +上面的命令会输出历史记录中匹配的命令,每一条前面会有个数字。 + +一旦你找到你想删除的命令,执行下面的命令,从历史记录中删除那个指定的项: + +``` +history -d [num] +``` + +下面是这个例子的截图。 + +![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/06/commandline-history-delete-specific-commands.png) + +第二个 ‘echo’命令被成功的删除了。 + +(LCTT 译注:如果你不希望上述命令本身也被记录进历史中,你可以在上述命令前加个空格) + +同样的,你可以使用向上的箭头一直往回翻看历史记录。当你发现你感兴趣的命令出现在终端上时,按下 “Ctrl + U”清除整行,也会从历史记录中删除它。 + +### 总结 + +有多种不同的方法可以操作 Linux 命令行历史来满足你的需求。然而请记住,从历史中隐藏或者删除命令通常不是一个好习惯,尽管本质上这并没有错。但是你必须知道你在做什么,以及可能产生的后果。 + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/linux-command-line-history-incognito/ + +作者:[Himanshu Arora][a] +译者:[chunyang-wen](https://github.com/chunyang-wen) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.maketecheasier.com/author/himanshu/ +[1]: https://www.maketecheasier.com/command-line-history-linux/ + + + + + diff --git a/published/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md b/published/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md new file mode 100644 index 0000000000..444f37af43 --- /dev/null +++ b/published/20160629 USE TASK MANAGER EQUIVALENT IN LINUX.md @@ -0,0 +1,51 @@ +在 Linux 下使用任务管理器 +==================================== + +![](https://itsfoss.com/wp-content/uploads/2016/06/Task-Manager-in-Linux.jpg) + +有很多 Linux 初学者经常问起的问题,“**Linux 有任务管理器吗?**”,“**怎样在 Linux 上打开任务管理器呢?**” + +来自 Windows 的用户都知道任务管理器非常有用。你可以在 Windows 中按下 `Ctrl+Alt+Del` 打开任务管理器。这个任务管理器向你展示了所有的正在运行的进程和它们消耗的内存,你可以从任务管理器程序中选择并杀死一个进程。 + +当你刚使用 Linux 的时候,你也会寻找一个**在 Linux 相当于任务管理器**的一个东西。一个 Linux 使用专家更喜欢使用命令行的方式查找进程和消耗的内存等等,但是你不用必须使用这种方式,至少在你初学 Linux 的时候。 + +所有主流的 Linux 发行版都有一个类似于任务管理器的东西。大部分情况下,它叫系统监视器(System Monitor),不过实际上它依赖于你的 Linux 的发行版及其使用的[桌面环境][1]。 + +在这篇文章中,我们将会看到如何在以 GNOME 为[桌面环境][2]的 Linux 上找到并使用任务管理器。 + +### 在使用 GNOME 桌面环境的 Linux 上的任务管理器等价物 + +使用 GNOME 时,按下 super 键(Windows 键)来查找任务管理器: + +![](https://itsfoss.com/wp-content/uploads/2016/06/system-monitor-gnome-fedora.png) + +当你启动系统监视器的时候,它会向你展示所有正在运行的进程及其消耗的内存。 + +![](https://itsfoss.com/wp-content/uploads/2016/06/fedora-system-monitor.jpeg) + +你可以选择一个进程并且点击“终止进程(End Process)”来杀掉它。 + +![](https://itsfoss.com/wp-content/uploads/2016/06/kill-process-fedora.png) + +你也可以在资源(Resources)标签里面看到关于一些统计数据,例如 CPU 的每个核心的占用,内存用量、网络用量等。 + +![](https://itsfoss.com/wp-content/uploads/2016/06/system-stats-fedora.png) + +这是图形化的方式。如果你想使用命令行,在终端里运行“top”命令然后你就可以看到所有运行的进程及其消耗的内存。你也可以很容易地使用命令行[杀死进程][3]。 + +这就是关于在 Fedora Linux 上任务管理器的知识。我希望这个教程帮你学到了知识,如果你有什么问题,请尽管问。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/task-manager-linux/ + +作者:[Abhishek Prakash][a] +译者:[xinglianfly](https://github.com/xinglianfly) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject)原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: 
https://wiki.archlinux.org/index.php/desktop_environment +[2]: https://itsfoss.com/best-linux-desktop-environments/ +[3]: https://itsfoss.com/how-to-find-the-process-id-of-a-program-and-kill-it-quick-tip/ diff --git a/published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md b/published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md new file mode 100644 index 0000000000..541bcda865 --- /dev/null +++ b/published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md @@ -0,0 +1,67 @@ +使用 Vagrant 控制你的 DigitalOcean 云主机 +========================================================= + +![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/fedora-vagrant-do-945x400.jpg) + +[Vagrant][1] 是一个使用虚拟机创建和支持虚拟开发环境的应用。Fedora 官方已经在本地系统上通过库 `libvirt` [支持 Vagrant][2]。[DigitalOcean][3] 是一个提供一键部署 Fedora 云服务实例到全 SSD 服务器的云计算服务提供商。在[最近的 Raleigh 举办的 FAD 大会][4]中,Fedora 云计算队伍为 Vagrant 打包了一个新的插件,它能够帮助 Fedora 用户通过使用本地的 Vagrantfile 文件来管理 DigitalOcean 上的云服务实例。 + +### 如何使用这个插件 + +第一步在命令行下是安装软件。 + +``` +$ sudo dnf install -y vagrant-digitalocean +``` + +安装 结束之后,下一步是创建本地的 Vagrantfile 文件。下面是一个例子。 + +``` +$ mkdir digitalocean +$ cd digitalocean +$ cat Vagrantfile +Vagrant.configure('2') do |config| + config.vm.hostname = 'dropletname.kushaldas.in' + # Alternatively, use provider.name below to set the Droplet name. config.vm.hostname takes precedence. + + config.vm.provider :digital_ocean do |provider, override| + override.ssh.private_key_path = '/home/kdas/.ssh/id_rsa' + override.vm.box = 'digital_ocean' + override.vm.box_url = "https://github.com/devopsgroup-io/vagrant- digitalocean/raw/master/box/digital_ocean.box" + + provider.token = 'Your AUTH Token' + provider.image = 'fedora-23-x64' + provider.region = 'nyc2' + provider.size = '512mb' + provider.ssh_key_name = 'Kushal' + end +end +``` + +### Vagrant DigitalOcean 插件的注意事项 + +一定要记住的几个关于 SSH 的关键命名规范 : 如果你已经在 DigitalOcean 上传了秘钥,请确保 `provider.ssh_key_name` 和已经在服务器中的名字吻合。 `provider.image` 具体的文档可以在[DigitalOcean documentation][5]找到。在控制面板上的 `App & API` 部分可以创建 AUTH 令牌。 + +你可以使用下面的命令启动一个实例。 + +``` +$ vagrant up --provider=digital_ocean +``` + +这个命令会在 DigitalOcean 的启动一个服务器实例。然后你就可以使用 `vagrant ssh` 命令来 `ssh` 登录进入这个实例。可以执行 `vagrant destroy` 来删除这个实例。 + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/using-vagrant-digitalocean-cloud/ + +作者:[Kushal Das][a] +译者:[MikeCoder](https://github.com/MikeCoder) +校对:[Ezio](https://github.com/oska874) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://kushal.id.fedoraproject.org/ +[1]: https://www.vagrantup.com/ +[2]: https://fedoramagazine.org/running-vagrant-fedora-22/ +[3]: https://www.digitalocean.com/ +[4]: https://communityblog.fedoraproject.org/fedora-cloud-fad-2016/ +[5]: https://developers.digitalocean.com/documentation/v2/#create-a-new-droplet diff --git a/published/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md b/published/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md new file mode 100644 index 0000000000..c58e73d197 --- /dev/null +++ b/published/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md @@ -0,0 +1,316 @@ +LFCS 系列第十讲:学习简单的 Shell 脚本编程和文件系统故障排除 +================================================================================ +Linux 基金会发起了 LFCS 认证 (Linux 
+
+这就是关于在 Fedora Linux 上使用任务管理器的知识。我希望这个教程对你有所帮助,如果你有什么问题,请尽管问。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/task-manager-linux/
+
+作者:[Abhishek Prakash][a]
+译者:[xinglianfly](https://github.com/xinglianfly)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[1]: https://wiki.archlinux.org/index.php/desktop_environment
+[2]: https://itsfoss.com/best-linux-desktop-environments/
+[3]: https://itsfoss.com/how-to-find-the-process-id-of-a-program-and-kill-it-quick-tip/
diff --git a/published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md b/published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md
new file mode 100644
index 0000000000..541bcda865
--- /dev/null
+++ b/published/20160708 Using Vagrant to control your DigitalOcean cloud instances.md
@@ -0,0 +1,67 @@
+使用 Vagrant 控制你的 DigitalOcean 云主机
+=========================================================
+
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/fedora-vagrant-do-945x400.jpg)
+
+[Vagrant][1] 是一个使用虚拟机创建和支持虚拟开发环境的应用。Fedora 官方已经在本地系统上通过库 `libvirt` [支持 Vagrant][2]。[DigitalOcean][3] 是一个提供一键部署 Fedora 云服务实例到全 SSD 服务器的云计算服务提供商。在[最近于 Raleigh 举办的 FAD 大会][4]中,Fedora 云计算团队为 Vagrant 打包了一个新的插件,它能够帮助 Fedora 用户通过本地的 Vagrantfile 文件来管理 DigitalOcean 上的云服务实例。
+
+### 如何使用这个插件
+
+第一步是在命令行下安装软件。
+
+```
+$ sudo dnf install -y vagrant-digitalocean
+```
+
+安装结束之后,下一步是创建本地的 Vagrantfile 文件。下面是一个例子。
+
+```
+$ mkdir digitalocean
+$ cd digitalocean
+$ cat Vagrantfile
+Vagrant.configure('2') do |config|
+  config.vm.hostname = 'dropletname.kushaldas.in'
+  # Alternatively, use provider.name below to set the Droplet name. config.vm.hostname takes precedence.
+
+  config.vm.provider :digital_ocean do |provider, override|
+    override.ssh.private_key_path = '/home/kdas/.ssh/id_rsa'
+    override.vm.box = 'digital_ocean'
+    override.vm.box_url = "https://github.com/devopsgroup-io/vagrant-digitalocean/raw/master/box/digital_ocean.box"
+
+    provider.token = 'Your AUTH Token'
+    provider.image = 'fedora-23-x64'
+    provider.region = 'nyc2'
+    provider.size = '512mb'
+    provider.ssh_key_name = 'Kushal'
+  end
+end
+```
+
+### Vagrant DigitalOcean 插件的注意事项
+
+关于 SSH 密钥的命名,有几个要点一定要记住:如果你已经向 DigitalOcean 上传了密钥,请确保 `provider.ssh_key_name` 和服务器上已有的密钥名称吻合。`provider.image` 的具体文档可以在 [DigitalOcean documentation][5] 中找到。AUTH 令牌可以在控制面板上的 `App & API` 部分创建。
+
+你可以使用下面的命令启动一个实例。
+
+```
+$ vagrant up --provider=digital_ocean
+```
+
+这个命令会在 DigitalOcean 上启动一个服务器实例。然后你就可以使用 `vagrant ssh` 命令通过 SSH 登录进入这个实例,也可以执行 `vagrant destroy` 来删除这个实例。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/using-vagrant-digitalocean-cloud/
+
+作者:[Kushal Das][a]
+译者:[MikeCoder](https://github.com/MikeCoder)
+校对:[Ezio](https://github.com/oska874)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://kushal.id.fedoraproject.org/
+[1]: https://www.vagrantup.com/
+[2]: https://fedoramagazine.org/running-vagrant-fedora-22/
+[3]: https://www.digitalocean.com/
+[4]: https://communityblog.fedoraproject.org/fedora-cloud-fad-2016/
+[5]: https://developers.digitalocean.com/documentation/v2/#create-a-new-droplet
diff --git a/published/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md b/published/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md
new file mode 100644
index 0000000000..c58e73d197
--- /dev/null
+++ b/published/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md
@@ -0,0 +1,316 @@
+LFCS 系列第十讲:学习简单的 Shell 脚本编程和文件系统故障排除
+================================================================================
+Linux 基金会发起了 LFCS 认证 (Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员),这是一个全新的认证体系,旨在让世界各地的人能够参与到中等水平的 Linux 系统的基本管理操作的认证考试中去,这项认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时向上游团队请求支持的决策能力。
+
+![Basic Shell Scripting and Filesystem Troubleshooting](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-10.png)
+
+*LFCS 系列第十讲*
+
+请看以下视频,里边介绍了 Linux 基金会认证程序。
+
+
+
+本讲是系列教程中的第十讲,主要集中讲解简单的 Shell 脚本编程和文件系统故障排除。这两块内容都是 LFCS 认证中的必备考点。
+
+### 理解终端 (Terminals) 和 Shell ###
+
+首先要声明一些概念。
+
+- Shell 是一个程序,它将命令传递给操作系统来执行。
+- Terminal 也是一个程序,允许最终用户通过它与 Shell 交互。比如,下边的图片是 GNOME Terminal。
+
+![Gnome Terminal](http://www.tecmint.com/wp-content/uploads/2014/11/Gnome-Terminal.png)
+
+*Gnome Terminal*
+
+启动 Shell 之后,会呈现一个命令提示符 (也称为命令行),提示我们 Shell 已经做好了准备,可以接受标准输入设备输入的命令,这个标准输入设备通常是键盘。
+
+你可以参考该系列文章的 [第一讲 如何在 Linux 上使用 GNU sed 等命令来创建、编辑和操作文件][1] 来温习一些常用的命令。
+
+Linux 为我们提供了许多可供选用的 Shell,下面列出一些常用的:
+
+**bash Shell**
+
+Bash 代表 Bourne Again Shell,它是 GNU 项目默认的 Shell。它借鉴了 Korn shell (ksh) 和 C shell (csh) 中有用的特性,并同时对性能进行了提升。它同时也是 LFCS 认证所涵盖的各发行版中的默认 Shell,也是本系列教程将使用的 Shell。
+
+**sh Shell**
+
+Bourne SHell 是一个比较古老的 shell,多年来一直都是很多类 Unix 系统的默认 shell。
+
+**ksh Shell**
+
+Korn SHell (ksh shell) 也是一个 Unix shell,是贝尔实验室 (Bell Labs) 的 David Korn 在 20 世纪 80 年代初的时候开发的。它兼容 Bourne shell,并同时包含了 C shell 中的多数特性。
+
+一个 shell 脚本仅仅是一个可执行的文本文件,里边包含一条条可执行命令。
+
+### 简单的 Shell 脚本编程 ###
+
+如前所述,一个 shell 脚本就是一个纯文本文件,因此,可以使用自己喜欢的文本编辑器来创建和编辑。你可以考虑使用 vi/vim (参考本系列 [第二讲 如何安装和使用纯文本编辑器 vi/vim][2]),它的语法高亮让我的编辑工作非常方便。
+
+输入如下命令来创建一个名为 myscript.sh 的脚本文件:
+
+    # vim myscript.sh
+
+shell 脚本的第一行 (著名的 [释伴(shebang)行](https://linux.cn/article-3664-1.html)) 必须如下:
+
+    #!/bin/bash
+
+这条语句会“告诉”操作系统,需要用哪个解释器来运行这个脚本文件中随后的命令。
+
+现在可以添加需要执行的命令了。通过注释,我们可以声明每一条命令或者整个脚本的具体含义。注意,shell 会忽略掉以井号 (#) 开始的注释语句。
+
+    #!/bin/bash
+    echo 这是关于 LFCS 认证系列的第十部分
+    echo 今天是 $(date +%Y-%m-%d)
+
+编写并保存脚本之后,通过以下命令来使脚本文件成为可执行文件:
+
+    # chmod 755 myscript.sh
+
+在执行脚本之前,我们需要说一下环境变量 ($PATH),运行:
+
+    echo $PATH
+
+我们就会看到环境变量 ($PATH) 的具体内容:这是输入命令时系统搜索可执行程序的目录列表,每一项之间使用冒号 (:) 隔开。称它为环境变量,是因为它本身就是 shell 环境的一部分 —— 这是 shell 每次启动时,shell 及其子进程可以获取的一系列信息。
+
+当我们输入一个命令并按下回车时,shell 会搜索 $PATH 变量中列出的目录,并执行找到的第一个实例。请看如下例子:
+
+![Linux Environment Variables](http://www.tecmint.com/wp-content/uploads/2014/11/Environment-Variable.png)
+
+*环境变量*
+
+假如存在两个同名的可执行程序,一个在 /usr/local/bin,另一个在 /usr/bin,则会执行环境变量中最先列出的那个,并忽略另外一个。
+
+如果我们自己编写的脚本没有放在 $PATH 变量所列出的任何一个目录中,则需要输入 ./filename 来执行它。而如果存放在 $PATH 变量中的任意一个目录中,我们就可以像运行其他命令一样来运行之前编写的脚本了。
+
+    # pwd
+    # ./myscript.sh
+    # cp myscript.sh ../bin
+    # cd ../bin
+    # pwd
+    # myscript.sh
+
+![Execute Script in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Execute-Script.png)
+
+*执行脚本*
+
+#### if 条件语句 ####
+
+无论何时,当你需要在脚本中根据某个命令的运行结果来采取相应动作时,你应该使用 if 结构来定义条件。基本语法如下:
+
+    if CONDITION; then
+        COMMANDS;
+    else
+        OTHER-COMMANDS
+    fi
+
+其中,CONDITION 可以是如下情形的任意一项 (仅列出常用的),并且达到以下条件时返回 true:
+
+- [ -a file ] → 指定文件存在。
+- [ -d file ] → 指定文件存在,并且是一个目录。
+- [ -f file ] → 指定文件存在,并且是一个普通文件。
+- [ -u file ] → 指定文件存在,并设置了 SUID 权限位。
+- [ -g file ] → 指定文件存在,并设置了 SGID 权限位。
+- [ -k file ] → 指定文件存在,并设置了“黏连 (Sticky)”位。
+- [ -r file ] → 指定文件存在,并且文件可读。
+- [ -s file ] → 指定文件存在,并且文件不为空。
+- [ -w file ] → 指定文件存在,并且文件可写入。
+- [ -x file ] → 指定文件存在,并且可执行。
+- [ string1 = string2 ] → 字符串相同。
+- [ string1 != string2 ] → 字符串不相同。
+
+[ int1 op int2 ] 同样属于上述条件测试的一种 (例如:[ int1 -eq int2 ] 在 int1 与 int2 相等时返回 true),其中 op 是以下比较操作符之一。
+
+- -eq –> int1 等于 int2 时返回 true。
+- -ne –> int1 不等于 int2 时返回 true。
+- -lt –> int1 小于 int2 时返回 true。
+- -le –> int1 小于或等于 int2 时返回 true。
+- -gt –> int1 大于 int2 时返回 true。
+- -ge –> int1 大于或等于 int2 时返回 true。
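+
+在继续之前,我们先用上面列出的条件测试写一个最简单的 if 示例(这只是一个演示用的小脚本,/etc/passwd 在几乎所有发行版上都存在):
+
+    #!/bin/bash
+
+    if [ -f /etc/passwd ]; then
+        echo "/etc/passwd 存在,并且是一个普通文件"
+    else
+        echo "/etc/passwd 不存在"
+    fi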
+
+#### for 循环语句 ####
+
+这个循环结构可以对一个序列中的每一项重复执行相应命令。基本语法如下:
+
+    for item in SEQUENCE; do
+        COMMANDS;
+    done
+
+其中,item 会在每次执行 COMMANDS 时,依次取 SEQUENCE 中的一个值。
+
+#### While 循环语句 ####
+
+只要控制命令(EVALUATION_COMMAND)执行的退出状态值等于 0(即执行成功),该循环结构就会一直重复执行命令;一旦控制命令执行失败,循环随即停止。基本语法如下:
+
+    while EVALUATION_COMMAND; do
+        EXECUTE_COMMANDS;
+    done
+
+其中,EVALUATION\_COMMAND 可以是任何能够返回成功 (0) 或失败 (0 以外的值) 的退出状态值的命令,EXECUTE\_COMMANDS 则可以是任何的程序、脚本或者 shell 结构体,包括其他的嵌套循环。
+
+#### 综合使用 ####
+
+我们会通过以下例子来演示 if 条件语句和 for 循环语句。
+
+**在基于 systemd 的发行版中探测某个服务是否在运行**
+
+先建立一个文件,列出我们想要查看的服务名。
+
+    # cat myservices.txt
+
+    sshd
+    mariadb
+    httpd
+    crond
+    firewalld
+
+![Script to Monitor Linux Services](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Services.png)
+
+*使用脚本监控 Linux 服务*
+
+我们编写的脚本看起来应该是这样的:
+
+    #!/bin/bash
+
+    # This script iterates over a list of services and
+    # is used to determine whether they are running or not.
+
+    for service in $(cat myservices.txt); do
+        systemctl status $service | grep --quiet "running"
+        if [ $? -eq 0 ]; then
+            echo $service "is [ACTIVE]"
+        else
+            echo $service "is [INACTIVE or NOT INSTALLED]"
+        fi
+    done
+
+![Linux Service Monitoring Script](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Script.png)
+
+*Linux 服务监控脚本*
+
+**我们来解释一下这个脚本的工作流程**
+
+1). for 循环每次从列表中读取一项记录,每一项记录就是 myservices.txt 中的一个服务名。该列表的内容由如下命令给出:
+
+    # cat myservices.txt
+
+2). 以上命令由圆括号括着,并在前面添加美元符($),表示它的输出会作为一个列表传递给 for 循环。
+
+3). 对于列表中的每一项记录(即每一个服务名),都会执行以下动作:
+
+    # systemctl status $service | grep --quiet "running"
+
+此时,需要在每个服务名变量的前面添加美元符,以表明它是作为变量来传递的。其输出则通过管道符传给 grep。
+
+其中,--quiet 选项用于阻止 grep 命令将匹配到的 “running” 行回显到屏幕。当 grep 捕获到 “running” 时,则会返回一个退出状态码 “0” (在 if 结构体中表示为 $?),由此确认某个服务正在运行中。
+
+如果退出状态码是非零值 (即 systemctl status $service 命令的回显中没有出现 “running”),则表明该服务未运行。
+
+![Services Monitoring Script](http://www.tecmint.com/wp-content/uploads/2014/11/Services-Monitoring-Script.png)
+
+*服务监控脚本*
+
+我们可以增加一步,在开始循环之前,先确认 myservices.txt 是否存在。
+
+    #!/bin/bash
+
+    # This script iterates over a list of services and
+    # is used to determine whether they are running or not.
+
+    if [ -f myservices.txt ]; then
+        for service in $(cat myservices.txt); do
+            systemctl status $service | grep --quiet "running"
+            if [ $? -eq 0 ]; then
+                echo $service "is [ACTIVE]"
+            else
+                echo $service "is [INACTIVE or NOT INSTALLED]"
+            fi
+        done
+    else
+        echo "myservices.txt is missing"
+    fi
+
+**Ping 一系列网络或者 Internet 主机名以获取应答数据**
+
+你可能想把自己维护的主机写入一个文本文件,并使用脚本探测它们是否能够 ping 得通 (脚本中的 myhosts 可以随意替换为你想要的名称)。
+
+shell 的内置 read 命令将告诉 while 循环一行行地读取 myhosts,并将读取的每行内容传给 host 变量,随后 host 变量传递给 ping 命令。
+
+    #!/bin/bash
+
+    # This script is used to demonstrate the use of a while loop
+
+    while read host; do
+        ping -c 2 $host
+    done < myhosts
+
+![Script to Ping Servers](http://www.tecmint.com/wp-content/uploads/2014/11/Script-to-Ping-Servers.png)
+
+*使用脚本 Ping 服务器*
+
+扩展阅读:
+
+- [Learn Shell Scripting: A Guide from Newbies to System Administrator][3]
+- [5 Shell Scripts to Learn Shell Programming][4]
+
+### 文件系统排错 ###
+
+尽管 Linux 是一个很稳定的操作系统,但有时仍然会因为某些原因(比如断电等)出现崩溃,如果此时正好有一个(或者更多个)文件系统未能正确卸载,Linux 重启的时候就会自动检测其中可能发生的错误。
+
+此外,每次系统正常启动的时候,都会在文件系统挂载之前校验它们的完整性。而这些全部都依赖于 fsck 工具 (“file system check,文件系统校验”)。
+
+如果对 fsck 进行设定,它除了校验文件系统的完整性之外,还可以尝试修复错误。fsck 能否成功修复错误,取决于文件系统的损伤程度;如果可以修复,被损坏部分的文件会恢复到位于每个文件系统根目录的 lost+found。
+
+最后同样重要的是,我们必须注意:如果拔掉系统正在写入数据的 USB 设备,同样会导致文件系统错误,甚至可能造成硬件损坏。
+
+fsck 的基本用法如下:
+
+    # fsck [options] filesystem
+
+**检查文件系统错误并尝试自动修复**
+
+想要使用 fsck 检查文件系统,我们需要首先卸载文件系统。
+
+    # mount | grep sdg1
+    # umount /mnt
+    # fsck -y /dev/sdg1
+
+![Scan Linux Filesystem for Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Filesystem-Errors.png)
+
+*检查文件系统错误*
+
+除了 -y 选项,我们也可以使用 -a 选项来自动修复文件系统错误,而不必做出交互式应答;-f 选项则可以在文件系统看起来“干净”的情况下仍然强制校验。
+
+    # fsck -af /dev/sdg1
+
+如果只是要找出什么地方发生了错误 (不用在检测到错误的时候修复),我们可以使用 -n 选项,这样只会将文件系统错误输出到标准输出设备上。
+
+    # fsck -n /dev/sdg1
+
+根据 fsck 输出的错误信息,我们可以知道是否可以自己修复或者需要将问题提交给工程师团队来做详细的硬件校验。
+
+### 总结 ###
+
+至此,系列教程的第十讲就全部结束了,全系列教程涵盖了通过 LFCS 测试所需的基础内容。
+
+但显而易见的是,本系列的十讲并不足以在单个主题方面做到全面描述,我们希望这一系列教程可以成为你学习的基础素材,并一直保持学习的热情(LCTT 译注:还有后继补充的几篇)。
+
+我们欢迎你提出任何问题或者建议,所以你可以毫不犹豫地通过以下链接联系到我们: 成为一个 [Linux 认证系统工程师][5]
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/
+
+作者:[Gabriel Cánepa][a]
+译者:[GHLandy](https://github.com/GHLandy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/gacanepa/
+[1]:https://linux.cn/article-7161-1.html
+[2]:https://linux.cn/article-7165-1.html
+[3]:http://www.tecmint.com/learning-shell-scripting-language-a-guide-from-newbies-to-system-administrator/
+[4]:http://www.tecmint.com/basic-shell-programming-part-ii/
+[5]:http://www.shareasale.com/r.cfm?b=768106&u=1260899&m=59485&urllink=&afftrack=
diff --git a/translated/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md b/published/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md
similarity index 72%
rename from translated/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md
rename to published/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo 
Access on Accounts.md @@ -1,27 +1,26 @@ -GHLandy Translated - LFCS 系列第八讲:管理用户和用户组、文件权限和属性以及启用账户 sudo 访问权限 - ================================================================================ -去年八月份,Linux 基金会发起了全新的 LFCS(Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员)认证,旨在让世界各地的人能够参与到关于 Linux 系统中间层的基本管理操作的认证考试中去,这项认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时向上游团队请求支持的决策能力。 +去年八月份,Linux 基金会发起了全新的 LFCS(Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员)认证,旨在让世界各地的人能够参与到中等水平的 Linux 系统的基本管理操作的认证考试中去,这项认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时向上游团队请求支持的决策能力。 ![Linux Users and Groups Management](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-8.png) -LFCS 系列第八讲 +*第八讲: Linux 基金会认证系统管理员* 请看以下视频,里边将描述 LFCS 认证程序。 注:youtube视频 -本讲是《十套教程》系列的第八讲,在这一讲中,我们将引导你学习如何在 Linux 管理用户和用户组权限的设置,这些内容是 LFCS 认证开始中的必备知识。 +本讲是系列教程的第八讲,在这一讲中,我们将引导你学习如何在 Linux 管理用户和用户组权限的设置,这些内容是 LFCS 认证的必备知识。 由于 Linux 是一个多用户的操作系统(允许多个用户通过不同主机或者终端访问一个独立系统),因此你需要知道如何才能有效地管理用户:如何添加、编辑、禁用和删除用户账户,并赋予他们足以完成自身任务的必要权限。 +(LCTT 译注:本篇原文章节顺序有误,根据理解做了调整。) + ### 添加用户账户 ### -添加新用户账户,你需要以 root 运行一下两条命令中的任意一条: +添加新用户账户,你需要以 root 运行以下两条命令中的任意一条: # adduser [new_account] # useradd [new_account] @@ -29,43 +28,43 @@ LFCS 系列第八讲 当新用户账户添加到系统时,会自动执行以下操作: 1. 自动创建用户家目录(默认是 /home/username)。 - 2. 自动拷贝下列隐藏文件到新建用户的家目录,用来设置新用户会话的环境变量。 - .bash_logout - .bash_profile - .bashrc + .bash_logout + .bash_profile + .bashrc 3. 自动创建邮件缓存目录 /var/spool/mail/username。 - 4. 自动创建于用户名相同的用户组。 -**理解 /etc/passwd 中的内容** +####理解 /etc/passwd 中的内容#### /etc/passwd 文件中存储了所有用户账户的信息,每个用户在里边都有一条对应的记录,其格式(每个字段用冒号隔开)如下: [username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell] -- 字段 [username] 和 [Comment] 是自解释的关系。 +- 字段 [username] 和 [Comment] 是不言自明的。 - 第二个字段中 x 表明通过用户名 username 登录系统是有密码保护的, 密码保存在 /etc/shadow 文件中。 - [UID] 和 [GID] 字段用整数表示,代表该用户的用户标识符和对应所在组的组标志符。 - 字段 [Home directory] 为 username 用户家目录的绝对路径。 - 字段 [Default shell] 指定用户登录系统时默认使用的 shell。 -**理解 /etc/group 中的内容** +####理解 /etc/group 中的内容#### /etc/group 文件存储所有用户组的信息。每行记录的格式如下: [Group name]:[Group password]:[GID]:[Group members] - [Group name] 为用户组名称。 -- 字段 [Group password] 为 x 的话,则说明不适用用户组密码。 +- 字段 [Group password] 为 x 的话,则说明不使用用户组密码。 - [GID] 与 /etc/passwd 中保存的 GID 相同。 - [Group members] 用户组中的用户使用逗号隔开。 ![Add User Accounts in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-user-accounts.png) -添加用户账户 +*添加用户账户* + +#### 修改用户信息 #### 添加用户账户之后,你可以使用 usermod 命令来修改用户信息中的部分字段,该命令基本语法如下: @@ -95,24 +94,21 @@ LFCS 系列第八讲 # usermod --shell /bin/sh tecmint -**显示用户所属的用户组** - - # groups tecmint - # id tecmint - 下面,我们一次运行上述命令: # usermod --expiredate 2014-10-30 --append --groups root,users --home /tmp --shell /bin/sh tecmint ![usermod Command Examples](http://www.tecmint.com/wp-content/uploads/2014/10/usermod-command-examples.png) -usermod 命令例示 +*usermod 命令例示* 扩展阅读 - [15 useradd Command Examples in Linux][1] - [15 usermod Command Examples in Linux][2] +####锁定和解锁账户#### + 对于已有用户账户,我们还可以: **通过锁定密码来禁用账户** @@ -129,33 +125,9 @@ usermod 命令例示 ![Lock User in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/lock-user-in-linux.png) -锁定用户账户 +*锁定用户账户* -**为需要对指定文件进行读写的多个用户建立用户组** - -运行下列几条命令来完成: - - # groupadd common_group # 添加新用户组 - # chown :common_group common.txt # 将 common.txt 的用户组修改为 common_group - # usermod -aG common_group user1 # 添加用户 user1 到 common_group 用户组 - # usermod -aG common_group user2 # 添加用户 user2 到 common_group 用户组 - # usermod -aG common_group user3 # 添加用户 user3 到 common_group 用户组 - -**删除用户组** - -通过以下命令删除用户组: - - # groupdel [group_name] - -属于这个 group_name 用户组的文件是不会被删除的,而仅仅是删除了用户组。 - -### Linux 文件权限 ### - -除了我们在 [Setting File 
Attributes – Part 3][3] 中说到的基本的读取、写入和执行权限外,文件还有一些不常用却很重要的的权限设置,有时候把它当做“特殊权限”。 - -就像之前我们讨论的基本权限,这里同样使用八进制数字或者一个字母(象征性符号)好表示该权限类型。 - -**删除用户账户** +####删除用户账户#### 你可以通过 userdel --remove 命令来删除用户账户。这样会删除用户拥有的家目录和家目录下的所有文件,以及邮件缓存目录。 @@ -167,9 +139,9 @@ usermod 命令例示 比如,你有下列用户: -- user1 (primary group: user1) -- user2 (primary group: user2) -- user3 (primary group: user3) +- user1 (主组 user1) +- user2 (主组 user2) +- user3 (主组 user3) 他们都需要对你系统里边某个位置的 common.txt 文件,或者 user1 用户刚刚创建的共享进行读写。你可能会运行下列命令: @@ -181,53 +153,88 @@ usermod 命令例示 这时候,用户组就派上用场了,下面将演示怎么做。 +**显示用户所属的用户组** + + # groups tecmint + # id tecmint + +**为需要对指定文件进行读写的多个用户建立用户组** + +运行下列几条命令来完成: + + # groupadd common_group # 添加新用户组 + # chown :common_group common.txt # 将 common.txt 的用户组修改为 common_group + # usermod -aG common_group user1 # 添加用户 user1 到 common_group 用户组 + # usermod -aG common_group user2 # 添加用户 user2 到 common_group 用户组 + # usermod -aG common_group user3 # 添加用户 user3 到 common_group 用户组 + +####删除用户组#### + +通过以下命令删除用户组: + + # groupdel [group_name] + +属于这个 group_name 用户组的文件是不会被删除的,而仅仅是删除了用户组。 + +### Linux 文件权限 ### + +除了我们在 [LFCS 系列第三讲:归档/压缩文件及目录、设置文件属性和搜索文件][3] 中说到的基本的读取、写入和执行权限外,文件还有一些不常用却很重要的的权限设置,有时候把它当做“特殊权限”。 + +就像之前我们讨论的基本权限,这里同样使用八进制数字或者一个字母(象征性符号)表示该权限类型。 + **理解 Setuid 位** -当为可执行文件设置 setuid 位之后,用户运行程序是会继承该程序属主的有效特权。由于这样做会引起安全风险,因此设置 setuid 权限的文件及程序必须尽量少。你会发现,当系统中有用户需要执行 root 用户所管辖的程序时就是设置了 setuid 权限。 +当为可执行文件设置 setuid 位之后,用户运行程序时会继承该程序属主的有效特权。由于这样做会引起安全风险,因此设置 setuid 权限的文件及程序必须尽量少。你会发现,当系统中有用户需要访问属于 root 用户的文件是所运行的程序就带有了 setuid 权限。 -也就是说,用户不仅仅可以运行这个可执行文件,而且能以 root 权限来运行。比如,先检查 /bin/passwd 的权限,这个可执行文件用于改变账户的密码,并修改 /etc/shadow 文件。超级用户可以改变任意账户的密码,但是其他用户只能改变自己账户的密码。 +也就是说,用户不仅仅可以运行这个可执行文件,而且能以 root 权限来运行。比如,让我们来看看 /bin/passwd 的权限,这个可执行文件用于改变账户的密码,修改 /etc/shadow 文件。超级用户可以改变任意账户的密码,但是其他用户只能改变自己账户的密码。 ![passwd Command Examples](http://www.tecmint.com/wp-content/uploads/2014/10/passwd-command.png) -passwd 命令例示 +*passwd 命令例示* 因此,所有用户都有权限运行 /bin/passwd,但只有 root 用户可以指定改变指定用户账户的密码。其他用户只能改变其自身的密码。 ![Change User Password in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/change-user-password.png) -修改用户密码 +*修改用户密码* + + # chmod o+u [filename] + +以八进制形式来设置 setuid 位,在当前基本权限(或者想要设置的权限)前加上数字 4 就行了。 + + # chmod 4755 [filename] **理解 Setgid 位** -设置 setgid 位之后,真实用户的有效 GID 变为属组的 GID。因此,任何用户都能以赋予属组用户的权限来访问文件。另外,当目录置了 setgid 位之后,新建的文件将继承其所属目录的 GID,并且新建的子目录会继承父目录的 setgid 位。通过这个方法,你能够以一个指定用户组的身份来访问该目录里边的文件,而不必管文件属主的主属组。 +设置 setgid 位之后,真实用户的有效 GID 变为属组的 GID。因此,任何用户都能以属组用户的权限来访问文件。另外,当目录置了 setgid 位之后,新建的文件将继承其所属目录的 GID,并且新建的子目录会继承父目录的 setgid 位。通过这个方法,你能够以一个指定的用户组身份来访问该目录里边的文件,而不必管文件属主的主属组。 # chmod g+s [filename] 以八进制形式来设置 setgid 位,在当前基本权限(或者想要设置的权限)前加上数字 2 就行了。 - # chmod 2755 [directory] + # chmod 2755 [filename] -**给目录设置 SETGID 位** +**给目录设置 Setgid 位** ![Add Setgid in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-setgid-to-directory.png) -给命令设置 setgid 位 +*给命令设置 setgid 位* -**理解 Sticky 位** +**理解黏连(Sticky)位** -文件设置了 Sticky 为之后,Linux 会将文件忽略,对于该文件影响到的目录,除了属主或者 root 用户外,其他用户无法删除,甚至重命名目录中其他文件也不行。 +文件设置了黏连(Sticky)位是没有意义,Linux 会忽略该位。如果设置到目录上,会防止其内的文件被删除或改名,除非你是该目录或文件的属主、或者是 root 用户。 -# chmod o+t [directory] + # chmod o+t [directory] 以八进制形式来设置 sticky 位,在当前基本权限(或者想要设置的权限)前加上数字 1 就行了。 -# chmod 1755 [directory] + # chmod 1755 [directory] -若没有 sticky 位,任何有权限读写目录的用户都可删除和重命名文件。因此,sticky 为通常出现在像 /tmp 之类的目录,这些目录是所有人都具有写权限的。 +若没有黏连位,任何有权限读写目录的用户都可删除和重命名其中的文件。因此,黏连位通常出现在像 /tmp 之类的目录,这些目录是所有人都具有写权限的。 ![Add Stickybit in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-sticky-bit-to-directory.png) -给目录设置 sticky 位 +*给目录设置黏连位* ### Linux 特殊文件属性 
### @@ -240,7 +247,7 @@ passwd 命令例示 ![Protect File from Deletion](http://www.tecmint.com/wp-content/uploads/2014/10/chattr-command.png) -通过 Chattr 命令来包含文件 +*通过 Chattr 命令来包含文件* ### 访问 root 账户并启用 sudo ### @@ -251,15 +258,16 @@ passwd 命令例示 然后输入 root 账户密码。 倘若授权成功,你将以 root 身份登录,工作目录则是登录前所在的位置。如果是想要一登录就自动进入 root 用户的家目录,请运行: + $ su - 然后输入 root 账户密码。 -![Enable sudo Access on Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Sudo-Access.png) +![switch user by su ](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Sudo-Access.png) -为用户启用 sudo 访问权限 +*用户通过 su 切换* -执行上个步骤需要普通用户知道 root 账户的密码,这样会引起非常严重的安全问题。于是,系统管理员通常会配置 sudo 命令来让普通用户在严格控制的环境中以其他用户身份(通常是 root)来执行命令。所有,可以在严格控制用户的情况下,又允许他运行一条或多条特权命令。 +执行上个步骤需要普通用户知道 root 账户的密码,这样会引起非常严重的安全问题。于是,系统管理员通常会配置 sudo 命令来让普通用户在严格控制的环境中以其他用户身份(通常是 root)来执行命令。所以,可以在严格控制用户的情况下,又允许他运行一条或多条特权命令。 - 扩展阅读:[Difference Between su and sudo User][5] @@ -269,7 +277,7 @@ passwd 命令例示 # visudo -这样会使用 vim(如果你按照 [Install and Use vim as Editor – Part 2][6] 里边说的来编辑文件)来打开 /etc/sudoers 文件。 +这样会使用 vim(你可以按照 [LFCS 系列第二讲:如何安装和使用纯文本编辑器 vi/vim][6] 里边说的来编辑文件)来打开 /etc/sudoers 文件。 以下是需要设置的相关的行: @@ -283,19 +291,19 @@ passwd 命令例示 Defaults secure_path="/usr/sbin:/usr/bin:/sbin:/usr/local/bin" -这一行指定 sudo 将要使用的目录,这样可以阻止使用用户指定的目录,那样的话可能会危及系统。 +这一行指定 sudo 将要使用的目录,这样可以阻止使用某些用户指定的目录,那样的话可能会危及系统。 下一行是用来指定权限的: root ALL=(ALL) ALL - 第一个 ALL 关键词表明这条规则适用于所有主机。 -- 第二个 ALL 关键词表明第一个字段中指定的用户能以任何用户身份所具有的权限来运行相应命令。 +- 第二个 ALL 关键词表明第一个字段中所指定的用户能以任何用户身份的权限来运行相应命令。 - 第三个 ALL 关键词表明可以运行任何命令。 tecmint ALL=/bin/yum update -如果 = 号后边没有指定用户,sudo 则默认为 root 用户。本例中,tecmint 用户能以 root身份运行 yum update 命令。 +如果 = 号后边没有指定用户,sudo 则默认为 root 用户。本例中,tecmint 用户能以 root 身份运行 yum update 命令。 gacanepa ALL=NOPASSWD:/bin/updatedb @@ -303,16 +311,17 @@ NOPASSWD 关键词表明 gacanepa 用户不需要密码,可以直接运行 /bi %admin ALL=(ALL) ALL -% 符号表示该行应用 admin 用户组。其他部分的含义与对于用户的含义是一样的。本例表示 admin 用户组的成员可以通过任何主机的链接来运行任何命令。 +% 符号表示该行应用于 admin 用户组。其他部分的含义与对于用户的含义是一样的。本例表示 admin 用户组的成员可以通过任何主机连接来运行任何命令。 通过 sudo -l 命令可以查看,你的账户拥有什么样的权限。 ![Sudo Access Rules](http://www.tecmint.com/wp-content/uploads/2014/10/sudo-access-rules.png) -Sudo 访问规则 +*Sudo 访问规则* ### 总结 ### -对于系统管理员来说,高效能的用户和文件管理技能是非常必要的。本文已经涵盖了这些内容,我们希望你将这些作为一个开始,,然后慢慢进步。随时在下边发表评论或提问,我们会尽快回应的。 + +对于系统管理员来说,高效能的用户和文件管理技能是非常必要的。本文已经涵盖了这些内容,我们希望你将这些作为一个开始,然后慢慢进步。随时在下边发表评论或提问,我们会尽快回应的。 -------------------------------------------------------------------------------- @@ -320,14 +329,14 @@ via: http://www.tecmint.com/manage-users-and-groups-in-linux/ 作者:[Gabriel Cánepa][a] 译者:[GHLandy](https://github.com/GHLandy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ [1]:http://www.tecmint.com/add-users-in-linux/ [2]:http://www.tecmint.com/usermod-command-examples/ -[3]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/ +[3]:https://linux.cn/article-7171-1.html [4]:http://www.tecmint.com/chattr-command-examples/ [5]:http://www.tecmint.com/su-vs-sudo-and-how-to-configure-sudo-in-linux/ -[6]:http://www.tecmint.com/vi-editor-usage/ +[6]:https://linux.cn/article-7165-1.html diff --git a/published/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md b/published/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md new file mode 100644 index 0000000000..cdb0e8aecf --- /dev/null +++ b/published/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and 
Zypper.md @@ -0,0 +1,205 @@ +LFCS 系列第九讲: 使用 Yum、RPM、Apt、Dpkg、Aptitude 进行 Linux 软件包管理 +================================================================================ + +去年八月, Linux基金会宣布了一个全新的LFCS(Linux Foundation Certified Sysadmin,Linux基金会认证系统管理员)认证计划,这对广大系统管理员来说是一个很好的机会,管理员们可以通过认证考试来表明自己可以成功支持Linux系统的整体运营。 一个Linux基金会认证的系统管理员能有足够的专业知识来确保系统高效运行,提供第一手的故障诊断和监视,并且在需要的情况下将问题提交给工程师支持团队。 + +![Linux Package Management](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-9.png) + +*Linux基金会认证系统管理员 – 第九讲* + +请观看下面关于Linux基金会认证计划的演示。 + +注:youtube 视频 + + +本文是本系列教程中的第九讲,今天在这篇文章中我们会引导你学习Linux软件包管理,这也是LFCS认证考试所需要的。 + +### 软件包管理 ### + +简单的说,软件包管理是系统中安装和维护软件的一种方法,这里说的维护包含更新和卸载。 + +在Linux早期,程序只以源代码的方式发行,还带有所需的用户使用手册和必备的配置文件,甚至更多。现如今,大多数发行商一般使用预装程序或者被称为软件包的程序集合。用户可以使用这些预装程序或者软件包安装到该发行版中。然而,Linux最伟大的一点是我们仍然能够获得程序的源代码用来学习、改进和编译。 + +####软件包管理系统是如何工作的#### + +如果某一个软件包需要一定的资源,如共享库,或者需要另一个软件包,这就称之为依赖性。所有现在的包管理系统提供了一些解决依赖性的方法,以确保当安装一个软件包时,相关的依赖包也安装好了。 + +####打包系统#### + +几乎所有安装在现代Linux系统上的软件都会能互联网上找到。它要么由发行商通过中央仓库(中央仓库能包含几千个软件包,每个软件包都已经为发行版构建、测试并且维护好了)提供,要么能够直接得到可以下载和手动安装的源代码。 + +由于不同的发行版使用不同的打包系统(Debian的\*.deb文件/CentOS的\*.rpm文件/openSUSE的专门为openSUSE构建的*.rpm文件),因此为一个发行版开发的软件包会与其他发行版不兼容。然而,大多数发行版都属于LFCS认证所涉及的三个发行版家族之一。 + +####高级和低级打包工具#### + +为了有效地进行软件包管理的任务,你需要知道,有两种类型的实用工具:低级工具(能在后端实际安装、升级、卸载软件包文件),以及高级工具(负责确保能很好的执行依赖性解决和元数据检索的任务——元数据也称为数据的数据)。 + + +|发行版|低级工具|高级工具| +|-----|-------|------| +|Debian版及其衍生版|dpkg|apt-get / aptitude| +|CentOS版|rpm|yum| +|openSUSE版|rpm|zypper| + +让我们来看下低级工具和高级工具的描述。 + +dpkg的是基于Debian的系统的一个低级包管理器。它可以安装,删除,提供有关资料,以及建立*.deb包,但它不能自动下载并安装它们相应的依赖包。 + +- 阅读更多: [15个dpkg命令实例][1] + +apt-get是Debian及其衍生版的高级包管理器,并提供命令行方式来从多个来源检索和安装软件包,其中包括解决依赖性。和dpkg不同的是,apt-get不是直接基于.deb文件工作,而是基于软件包的正确名称。 + +- 阅读更多: [25个apt-get命令实力][2] + +Aptitude是基于Debian的系统的另一个高级包管理器,它可用于快速简便的执行管理任务(安装,升级和删除软件包,还可以自动处理解决依赖性)。它在atp-get的基础上提供了更多功能,例如提供对软件包的几个版本的访问。 + +rpm是Linux标准基础(LSB)兼容发行版所使用的一种软件包管理器,用来对软件包进行低级处理。就像dpkg一样,rpm可以查询、安装、检验、升级和卸载软件包,它多数用于基于Fedora的系统,比如RHEL和CentOS。 + +- 阅读更多: [20个rpm命令实例][3] + +相对于基于RPM的系统,yum增加了系统自动更新的功能和带依赖性管理的软件包管理功能。作为一个高级工具,和apt-get或者aptitude相似,yum需要配合仓库工作。 + +- 阅读更多: [20个yum命令实例][4] + +### 低级工具的常见用法 ### + +使用低级工具处理最常见的任务如下。 + +####1. 从已编译(*.deb或*.rpm)的文件安装一个软件包#### + +这种安装方法的缺点是没有提供解决依赖性的方案。当你在发行版本库中无法获得某个软件包并且又不能通过高级工具下载安装时,你很可能会从一个已编译文件安装该软件包。因为低级工具不会解决依赖性问题,所以当安装一个没有解决依赖性的软件包时会出现出错并且退出。 + + # dpkg -i file.deb [Debian版和衍生版] + # rpm -i file.rpm [CentOS版 / openSUSE版] + +**注意**:不要试图在CentOS中安装一个为openSUSE构建的.rpm文件,反之亦然! + +####2. 从已编译文件中更新一个软件包#### + +同样,当某个安装的软件包不在中央仓库中时,你只能手动升级该软件包。 + + # dpkg -i file.deb [Debian版和衍生版] + # rpm -U file.rpm [CentOS版 / openSUSE版] + +####3. 列举安装的软件包#### + +当你第一次接触一个已经在工作中的系统时,很可能你会想知道安装了哪些软件包。 + + # dpkg -l [Debian版和衍生版] + # rpm -qa [CentOS版 / openSUSE版] + +如果你想知道一个特定的软件包安装在哪儿,你可以使用管道命令从以上命令的输出中去搜索,这在这个系列的[第一讲 操作Linux文件][5] 中有介绍。例如我们需要验证mysql-common这个软件包是否安装在Ubuntu系统中: + + # dpkg -l | grep mysql-common + +![Check Installed Packages in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Installed-Package.png) + +*检查安装的软件包* + +另外一种方式来判断一个软件包是否已安装。 + + # dpkg --status package_name [Debian版和衍生版] + # rpm -q package_name [CentOS版 / openSUSE版] + +例如,让我们找出sysdig软件包是否安装在我们的系统。 + + # rpm -qa | grep sysdig + +![Check sysdig Package](http://www.tecmint.com/wp-content/uploads/2014/11/Check-sysdig-Package.png) + +*检查sysdig软件包* + +####4. 查询一个文件是由哪个软件包安装的#### + + # dpkg --search file_name + # rpm -qf file_name + +例如,pw_dict.hwm文件是由那个软件包安装的? 
+ + # rpm -qf /usr/share/cracklib/pw_dict.hwm + +![Query File in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Query-File-in-Linux.png) + +*Linux中查询文件* + +### 高级工具的常见用法 ### + +使用高级工具处理最常见的任务如下。 + +####1. 搜索软件包#### + +aptitude的更新操作将会更新可用的软件包列表,而aptitude的搜索操作会根据软件包名进行实际搜索。 + + # aptitude update && aptitude search package_name + +在search all选项中,yum不仅可以通过软件包名还可以通过软件包的描述搜索。 + + # yum search package_name + # yum search all package_name + # yum whatprovides “*/package_name” + +假定我们需要一个名为sysdig文件,想知道我们需要安装哪个软件包才行,那么运行。 + + # yum whatprovides “*/sysdig” + +![Check Package Description in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Description.png) + +*检查软件包描述* + +whatprovides告诉yum搜索一个含有能够匹配上述正则表达式的文件的软件包。 + + # zypper refresh && zypper search package_name [在openSUSE上] + +####2. 从仓库安装一个软件包#### + +当安装一个软件包时,在软件包管理器解决了所有依赖性问题后,可能会提醒你确认安装。需要注意的是运行更新( update)或刷新(refresh)(根据所使用的软件包管理器)不是绝对必要,但是考虑到安全性和依赖性的原因,保持安装的软件包是最新的是系统管理员的一个好经验。 + + # aptitude update && aptitude install package_name [Debian版和衍生版] + # yum update && yum install package_name [CentOS版] + # zypper refresh && zypper install package_name [openSUSE版] + +####3. 卸载软件包#### + +remove选项将会卸载软件包,但把配置文件保留完好,然而purge选项将从系统中完全删去该程序以及相关内容。 + + # aptitude remove / purge package_name + # yum erase package_name + + ---注意要卸载的openSUSE包前面的减号 --- + + # zypper remove -package_name + +在默认情况下,大部分(如果不是全部的话)的软件包管理器会提示你,在你实际卸载之前你是否确定要继续卸载。所以,请仔细阅读屏幕上的信息,以避免陷入不必要的麻烦! + +####4. 显示软件包的信息#### + +下面的命令将会显示birthday这个软件包的信息。 + + # aptitude show birthday + # yum info birthday + # zypper info birthday + +![Check Package Information in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Information.png) + +*检查包信息* + +### 总结 ### + +作为一个系统管理员,软件包管理器是你不能回避的东西。你应该立即去使用本文中介绍的这些工具。希望你在准备LFCS考试和日常工作中会觉得这些工具好用。欢迎在下面留下您的意见或问题,我们将尽可能快的回复你。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-package-management/ + +作者:[Gabriel Cánepa][a] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/dpkg-command-examples/ +[2]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/ +[3]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/ +[4]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ +[5]:https://linux.cn/article-7161-1.html \ No newline at end of file diff --git a/published/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md b/published/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md new file mode 100644 index 0000000000..1c1b5577c9 --- /dev/null +++ b/published/Part 1 - How to Use Awk and Regular Expressions to Filter Text or String in Files.md @@ -0,0 +1,222 @@ +awk 系列:如何使用 awk 和正则表达式过滤文本或文件中的字符串 +============================================================================= + +![](http://www.tecmint.com/wp-content/uploads/2016/04/Linux-Awk-Command-Examples.png) + +当我们在 Unix/Linux 下使用特定的命令从字符串或文件中读取或编辑文本时,我们经常需要过滤输出以得到感兴趣的部分。这时正则表达式就派上用场了。 + +### 什么是正则表达式? 
+
+正则表达式可以定义为代表若干个字符序列的字符串。它最重要的功能之一就是允许你过滤一条命令或一个文件的输出、编辑文本或配置文件的一部分等等。
+
+### 正则表达式的特点
+
+正则表达式由以下内容组合而成:
+
+- **普通字符**,例如空格、下划线、A-Z、a-z、0-9。
+- 可以扩展为普通字符的**元字符**,它们包括:
+    - `(.)` 它匹配除了换行符外的任何单个字符。
+    - `(*)` 它匹配零个或多个在其之前紧挨着的字符。
+    - `[ character(s) ]` 它匹配任何由其中的字符/字符集指定的字符,你可以使用连字符(-)代表字符区间,例如 [a-f]、[1-5]等。
+    - `^` 它匹配文件中一行的开头。
+    - `$` 它匹配文件中一行的结尾。
+    - `\` 这是一个转义字符。
+
+你必须使用类似 awk 这样的文本过滤工具来过滤文本。你还可以把 awk 自身当作一门编程语言。但由于本指南的适用范围是将 awk 用作文本过滤工具,我会按照一个简单的命令行过滤工具来介绍它。
+
+awk 的一般语法如下:
+
+```
+# awk 'script' filename
+```
+
+此处 `'script'` 是一个 awk 可以理解并应用于 filename 的命令集合。
+
+它通过读取文件中的给定行,复制该行的内容并在该行上执行脚本的方式工作。这个过程会在该文件中的所有行上重复。
+
+该脚本 `'script'` 中内容的格式是 `'/pattern/ action'`,其中 `pattern` 是一个正则表达式,而 `action` 是当 awk 在该行中找到此模式时应当执行的动作。
+
+### 如何在 Linux 中使用 awk 过滤工具
+
+在下面的例子中,我们将聚焦于之前讨论过的元字符。
+
+#### 一个使用 awk 的简单示例:
+
+下面的例子打印文件 /etc/hosts 中的所有行,因为没有指定任何的模式。
+
+```
+# awk '//{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Awk-Command-Example.gif)
+
+*awk 打印文件中的所有行*
+
+#### 结合模式使用 awk
+
+在下面的示例中,指定了模式 `localhost`,因此 awk 将匹配文件 `/etc/hosts` 中含有 `localhost` 的那些行。
+
+```
+# awk '/localhost/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-Command-with-Pattern.gif)
+
+*awk 打印文件中匹配模式的行*
+
+#### 在 awk 模式中使用通配符 (.)
+
+在下面的例子中,符号 `(.)` 将匹配包含 loc、localhost、localnet 的字符串。
+
+这里的正则表达式的意思是:匹配 **l,后面跟任意一个字符,再跟 c**。
+
+```
+# awk '/l.c/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-with-Wild-Cards.gif)
+
+*使用 awk 打印文件中匹配模式的字符串*
+
+#### 在 awk 模式中使用字符 (*)
+
+在下面的例子中,将匹配包含 localhost、localnet、lines、capable 的字符串。
+
+```
+# awk '/l*c/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Match-Strings-in-File.gif)
+
+*使用 awk 匹配文件中的字符串*
+
+你可能也意识到了,`(*)` 会尝试匹配它所能检测到的最长的字符串。
+
+让我们看一看可以证明这一点的例子,正则表达式 `t*t` 的意思是在下面的行中匹配以 `t` 开始和 `t` 结束的字符串:
+
+```
+this is tecmint, where you get the best good tutorials, how to's, guides, tecmint.
+```
+
+当你使用模式 `/t*t/` 时,会得到如下可能的结果:
+
+```
+this is t
+this is tecmint
+this is tecmint, where you get t
+this is tecmint, where you get the best good t
+this is tecmint, where you get the best good tutorials, how t
+this is tecmint, where you get the best good tutorials, how tos, guides, t
+this is tecmint, where you get the best good tutorials, how tos, guides, tecmint
+```
+
+在 `/t*t/` 中的通配符 `(*)` 将使得 awk 选择匹配的最后一项:
+
+```
+this is tecmint, where you get the best good tutorials, how to's, guides, tecmint
+```
+
+#### 结合集合 [ character(s) ] 使用 awk
+
+以集合 [al1] 为例,awk 将匹配文件 /etc/hosts 中所有包含字符 a 或 l 或 1 的字符串。
+
+```
+# awk '/[al1]/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Matching-Character.gif)
+
+*使用 awk 打印文件中匹配的字符*
+
+下一个例子匹配以 `K` 或 `k` 开头,后面跟着一个 `T` 的字符串:
+
+```
+# awk '/[Kk]T/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Matched-String-in-File.gif)
+
+*使用 awk 打印文件中匹配的字符*
+
+#### 以范围的方式指定字符
+
+awk 所能理解的字符:
+
+- `[0-9]` 代表一个单独的数字
+- `[a-z]` 代表一个单独的小写字母
+- `[A-Z]` 代表一个单独的大写字母
+- `[a-zA-Z]` 代表一个单独的字母
+- `[a-zA-Z 0-9]` 代表一个单独的字母或数字
+
+让我们看看下面的例子:
+
+```
+# awk '/[0-9]/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-To-Print-Matching-Numbers-in-File.gif)
+
+*使用 awk 打印文件中匹配的数字*
+
+在上面的例子中,文件 /etc/hosts 中的所有行都至少包含一个单独的数字 [0-9],因此全部被打印了出来。
+
+#### 结合元字符 (\^) 使用 awk
+
+在下面的例子中,它匹配所有以给定模式开头的行:
+
+```
+# awk '/^fe/{print}' /etc/hosts
+# awk '/^ff/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-All-Matching-Lines-with-Pattern.gif)
+
+*使用 awk 打印与模式匹配的行*
+
+#### 结合元字符 ($) 使用 awk
+
+它将匹配所有以给定模式结尾的行:
+
+```
+# awk '/ab$/{print}' /etc/hosts
+# awk '/ost$/{print}' /etc/hosts
+# awk '/rs$/{print}' /etc/hosts
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Given-Pattern-String.gif)
+
+*使用 awk 打印与模式匹配的字符串*
+
+#### 结合转义字符 (\\) 使用 awk
+
+它允许你将该转义字符后面的字符作为文字,即理解为其字面的意思。
+
+在下面的例子中,第一个命令打印出文件中的所有行,第二个命令中我想匹配具有 $25.00 的一行,但我并未使用转义字符,因而没有打印出任何内容。
+
+第三个命令是正确的,因为这里使用了转义字符来转义 $,使其被识别为字面的 '$'(而非元字符)。
+
+```
+# awk '//{print}' deals.txt
+# awk '/$25.00/{print}' deals.txt
+# awk '/\$25.00/{print}' deals.txt
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-with-Escape-Character.gif)
+
+*结合转义字符使用 awk*
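+
+在进入总结之前,我们再把前面介绍过的两种元字符结合起来看一个小例子(仅作演示,/etc/hosts 在几乎所有发行版上都存在)。下面的命令同时用到了元字符 `^` 和转义字符 `\`,用来打印以字面的 127.0.0.1 开头的行:
+
+```
+# awk '/^127\.0\.0\.1/{print}' /etc/hosts
+```
+
+这里的三个点都使用 `\` 进行了转义;如果不转义,`.` 会匹配任意单个字符,从而可能误匹配形如 127a0b0c1 开头的行。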
+
+### 总结
+
+以上内容并不是 awk 命令用作过滤工具的全部,上述的示例均是 awk 的基础操作。在下面的章节中,我将进一步介绍如何使用 awk 的高级功能。感谢您的阅读,请在评论区贴出您的评论。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/use-linux-awk-command-to-filter-text-string-in-files/
+
+作者:[Aaron Kili][a]
+译者:[wwy-hust](https://github.com/wwy-hust)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/aaronkili/
diff --git a/published/Part 2 - How to Use Awk to Print Fields and Columns in File.md b/published/Part 2 - How to Use Awk to Print Fields and Columns in File.md
new file mode 100644
index 0000000000..69f372d099
--- /dev/null
+++ b/published/Part 2 - How to Use Awk to Print Fields and Columns in File.md
@@ -0,0 +1,109 @@
+awk 系列:如何使用 awk 输出文本中的字段和列
+======================================================
+
+在 Awk 系列的这一节中,我们将看到 awk 最重要的特性之一,字段编辑。
+
+首先我们要知道,Awk 能够自动将输入的行分隔为若干字段。每一个字段就是一组字符,它们和其他的字段由一个内部字段分隔符分隔开来。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Awk-Print-Fields-and-Columns.png)
+
+*Awk 输出字段和列*
+
+如果你熟悉 Unix/Linux 或者懂得 [bash shell 
编程][1],那么你应该知道什么是内部字段分隔符(IFS)变量。awk 中默认的 IFS 是制表符和空格。 + +awk 中的字段分隔符的工作原理如下:当读到一行输入时,将它按照指定的 IFS 分割为不同字段,第一组字符就是字段一,可以通过 $1 来访问,第二组字符就是字段二,可以通过 $2 来访问,第三组字符就是字段三,可以通过 $3 来访问,以此类推,直到最后一组字符。 + +为了更好地理解 awk 的字段编辑,让我们看一个下面的例子: + +**例 1**:我创建了一个名为 tecmintinfo.txt 的文本文件。 + +``` +# vi tecmintinfo.txt +# cat tecmintinfo.txt +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/04/Create-File-in-Linux.png) + +*在 Linux 上创建一个文件* + +然后在命令行中,我试着使用下面的命令从文本 tecmintinfo.txt 中输出第一个,第二个,以及第三个字段。 + +``` +$ awk '//{print $1 $2 $3 }' tecmintinfo.txt +TecMint.comisthe +``` + +从上面的输出中你可以看到,前三个字段的字符是以空格为分隔符输出的: + +- 字段一是 “TecMint.com”,可以通过 `$1` 来访问。 +- 字段二是 “is”,可以通过 `$2` 来访问。 +- 字段三是 “the”,可以通过 `$3` 来访问。 + +如果你注意观察输出的话可以发现,输出的字段值并没有被分隔开,这是 print 函数默认的行为。 + +为了使输出看得更清楚,输出的字段值之间使用空格分开,你需要添加 (,) 操作符。 + +``` +$ awk '//{print $1, $2, $3; }' tecmintinfo.txt + +TecMint.com is the +``` + +需要记住而且非常重要的是,`($)` 在 awk 和在 shell 脚本中的使用是截然不同的! + +在 shell 脚本中,`($)` 被用来获取变量的值。而在 awk 中,`($)` 只有在获取字段的值时才会用到,不能用于获取变量的值。 + +**例 2**:让我们再看一个例子,用到了一个名为 my_shoping.list 的包含多行的文件。 + +``` +No Item_Name Unit_Price Quantity Price +1 Mouse #20,000 1 #20,000 +2 Monitor #500,000 1 #500,000 +3 RAM_Chips #150,000 2 #300,000 +4 Ethernet_Cables #30,000 4 #120,000 +``` + +如果你只想输出购物清单上每一个物品的`单价`,你只需运行下面的命令: + +``` +$ awk '//{print $2, $3 }' my_shopping.txt + +Item_Name Unit_Price +Mouse #20,000 +Monitor #500,000 +RAM_Chips #150,000 +Ethernet_Cables #30,000 +``` + +可以看到上面的输出不够清晰,awk 还有一个 `printf` 的命令,可以帮助你将输出格式化。 + +使用 `printf` 来格式化 Item_Name 和 Unit_Price 的输出: + +``` +$ awk '//{printf "%-10s %s\n",$2, $3 }' my_shopping.txt + +Item_Name Unit_Price +Mouse #20,000 +Monitor #500,000 +RAM_Chips #150,000 +Ethernet_Cables #30,000 +``` + +### 总结 + +使用 awk 过滤文本或字符串时,字段编辑的功能是非常重要的。它能够帮助你从一个表的数据中得到特定的列。一定要记住的是,awk 中 `($)` 操作符的用法与其在 shell 脚本中的用法是不同的! + +希望这篇文章对您有所帮助。如有任何疑问,可以在评论区域发表评论。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/awk-print-fields-columns-with-space-separator/ + +作者:[Aaron Kili][a] +译者:[Cathon](https://github.com/Cathon),[ictlyh](https://github.com/ictlyh) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ +[1]: http://www.tecmint.com/category/bash-shell/ diff --git a/sources/news/README.md b/sources/news/README.md deleted file mode 100644 index 98d53847b1..0000000000 --- a/sources/news/README.md +++ /dev/null @@ -1 +0,0 @@ -这里放新闻类文章,要求时效性 diff --git a/sources/share/README.md b/sources/share/README.md deleted file mode 100644 index e5e225858e..0000000000 --- a/sources/share/README.md +++ /dev/null @@ -1 +0,0 @@ -这里放分享类文章,包括各种软件的简单介绍、有用的书籍和网站等。 diff --git a/sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md b/sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md index 10a30119e1..57faa7998f 100644 --- a/sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md +++ b/sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md @@ -1,5 +1,6 @@ Torvalds 2.0: Patricia Torvalds on computing, college, feminism, and increasing diversity in tech ================================================================================ + ![Image by : Photo by Becky Svartström. Modified by Opensource.com. 
CC BY-SA 4.0](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-lead-patriciatorvalds.png) Image by : Photo by Becky Svartström. Modified by Opensource.com. [CC BY-SA 4.0][1] diff --git a/sources/talk/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md b/sources/talk/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md deleted file mode 100644 index 3390c232ac..0000000000 --- a/sources/talk/20151023 Mark Shuttleworth--The Man Behind Ubuntu Operating System.md +++ /dev/null @@ -1,119 +0,0 @@ -Mark Shuttleworth – The Man Behind Ubuntu Operating System -================================================================================ -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Mark-Shuttleworth-652x445.jpg) - -**Mark Richard Shuttleworth** is the founder of **Ubuntu** or the man behind the Debian as they call him. He was born in 1973 in Welkom, South Africa. He’s an entrepreneur and also space tourist who became later **1st citizen of independent African country who could travel to the space**. - -Mark also founded **Thawte** in 1996, the Internet commerce security company, while he was studying finance and IT at University of Cape Town. - -In 2000, Mark founded the HBD, as an investment company, and also he created the Shuttleworth Foundation in order to fund the innovative leaders in the society with combination of fellowships and some investments. - -> “The mobile world is crucial to the future of the PC. This month, for example, it became clear that the traditional PC is shrinking in favor of tablets. So if we want to be relevant on the PC, we have to figure out how to be relevant in the mobile world first. Mobile is also interesting because there’s no pirated Windows market. So if you win a device to your OS, it stays on your OS. In the PC world, we are constantly competing with “free Windows”, which presents somewhat unique challenges. So our focus now is to establish a great story around Ubuntu and mobile form factors – the tablet and the phone – on which we can build deeper relationships with everyday consumers.” -> -> — Mark Shuttleworth - -In 2002, he flew to International Space Station as member of their crew of Soyuz mission TM-34, after 1 year of training in the Star City, Russia. And after running campaign to promote the science, code, and mathematics to the aspiring astronauts and the other ambitious types at schools in SA, Mark founded the **Canonical Ltd**. and in 2013, he provided leadership for Ubuntu operating system for software development purposes. - -Today, Shuttleworth holds dual citizenship of United Kingdom and South Africa currently lives on lovely Mallards botanical garden in Isle of Man, with 18 precocious ducks, equally his lovely girlfriend Claire, 2 black bitches and occasional itinerant sheep. - -> “Computer is not a device anymore. It is an extension of your mind and your gateway to other people.” -> -> — Mark Shuttleworth - -### Mark Shuttleworth’s Early life ### - -As we mentioned above, Mark was born in Welkom, South Africa’s Orange Free State as son of surgeon and nursery-school teacher, Mark attended the school at Western Province Preparatory School where he became eventually the Head Boy in 1986, followed by 1 term at Rondebosch Boys’ High School, and later at Bishops/Diocesan College where he was again Head Boy in 1991. 
- -Mark obtained the Bachelor of Business Science degree in the Finance and Information Systems at University of Cape Town, where he lived there in Smuts Hall. He became, as a student, involved in installations of the 1st residential Internet connections at his university. - -> “There are many examples of companies and countries that have improved their competitiveness and efficiency by adopting open source strategies. The creation of skills through all levels is of fundamental importance to both companies and countries.” -> -> — Mark Shuttleworth - -### Mark Shuttleworth’s Career ### - -Mark founded Thawte in 1995, which was specialized in the digital certificates and Internet security, then he sold it to VeriSign in 1999, earning about $575 million at the time. - -In 2000, Mark formed the HBD Venture Capital (Here be Dragons), the business incubator and venture capital provider. In 2004, he formed the Canonical Ltd., for promotion and commercial support of the free software development projects, especially Ubuntu operating system. In 2009, Mark stepped down as CEO of Canonical, Ltd. - -> “In the early days of the DCC I preferred to let the proponents do their thing and then see how it all worked out in the end. Now we are pretty close to the end.” -> -> — Mark Shuttleworth - -### Linux and FOSS with Mark Shuttleworth ### - -In the late 1990s, Mark participated as one of developers of Debian operating system. - -In 2001, Mark formed the Shuttleworth Foundation, It is non-profit organization dedicated to the social innovation that also funds free, educational, and open source software projects in South Africa, including Freedom Toaster. - -In 2004, Mark returned to free software world by funding software development of Ubuntu, as it was Linux distribution based on Debian, throughout his company Canonical Ltd. - -In 2005, Mark founded Ubuntu Foundation and made initial investment of 10 million dollars. In Ubuntu project, Mark is often referred to with tongue-in-cheek title “**SABDFL (Self-Appointed Benevolent Dictator for Life)**”. To come up with list of names of people in order to hire for the entire project, Mark took about six months of Debian mailing list archives with him during his travelling to Antarctica aboard icebreaker Kapitan Khlebnikov in 2004. In 2005, Mark purchased 65% stake of Impi Linux. - -> “I urge telecommunications regulators to develop a commercial strategy for delivering effective access to the continent.” -> -> — Mark Shuttleworth - -In 2006, it was announced that Shuttleworth became **first patron of KDE**, which was highest level of sponsorship available at the time. This patronship ended in 2012, with financial support together for Kubuntu, which was Ubuntu variant with KDE as a main desktop. - -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/shuttleworth-kde.jpg) - -In 2009, Shuttleworth announced that, he would step down as the CEO of Canonical in order to focus more energy on partnership, product design, and the customers. Jane Silber, took on this job as the CEO at Canonical after he was the COO at Canonical since 2004. - -In 2010, Mark received the honorary degree from Open University for that work. - -In 2012, Mark and Kenneth Rogoff took part together in debate opposite Peter Thiel and Garry Kasparov at Oxford Union, this debate was entitled “**The Innovation Enigma**”. - -In 2013, Mark and Ubuntu were awarded **Austrian anti-privacy Big Brother Award** for sending the local Ubuntu Unity Dash searches to the Canonical servers by default. 
One year earlier in 2012, Mark had defended the anonymization method that was used. - -> “All the major PC companies now ship PC’s with Ubuntu pre-installed. So we have a very solid set of working engagements in the industry. But those PC companies are nervous to promote something new to PC buyers. If we can get PC buyers familiar with Ubuntu as a phone and tablet experience, then they may be more willing buy it on the PC too. Because no OS ever succeeded by emulating another OS. Android is great, but if we want to succeed we need to bring something new and better to market. We are all at risk of stagnating if we don’t pursue the future, vigorously. But if you pursue the future, you have to accept that not everybody will agree with your vision.” -> -> — Mark Shuttleworth - -### Mark Shuttleworth’s Spaceflight ### - -Mark gained worldwide fame in 2002 as a second self-funded space tourist and the first South African who could travel to the space. Flying through Space Adventures, Mark launched aboard Russian Soyuz TM-34 mission as spaceflight participant, and he paid approximately $20 million for that voyage. 2 days later, Soyuz spacecraft arrived at International Space Station, where Mark spent 8 days participating in the experiments related to the AIDS and the GENOME research. Later in 2002, Mark returned to the Earth on the Soyuz TM-33. To participate in that flight, Mark had to undergo 1 year of preparation and training, including 7 months spent in the Star City, Russia. - -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Mark-Shuttleworth1.jpg) - -While in space, Mark had radio conversation with Nelson Mandela and another 14 year old South African girl, called Michelle Foster, who asked Mark to marry her. Of course Mark politely dodged that question, stating that he was much honored to this question before cunningly change the subject. The terminally ill Foster was also provided the opportunity to have conversation with Mark and Nelson Mandela by Reach for Dream foundation. - -Upon returning, Mark traveled widely and also spoke about that spaceflight to schoolchildren around the world. - -> “The raw numbers suggest that Ubuntu continues to grow in terms of actual users. And our partnerships – Dell, HP, Lenovo on the hardware front, and gaming companies like EA, Valve joining up on the software front – make me feel like we continue to lead where it matters.” -> -> — Mark Shuttleworth - -### Mark Shuttleworth’s Transport ### - -Mark has his private jet, Bombardier Global Express that is often referred to as Canonical One but it’s in fact owned through the HBD Venture Capital Company. The dragon depicted on side of the plane is Norman, HBD Venture Capital mascot. - -### The Legal Clash with South African Reserve Bank ### - -Upon the moving R2.5 billion in the capital from South Africa to Isle of Man, South African Reserve Bank imposed R250 million levy to release Mark’s assets. Mark appealed, and then after lengthy legal battle, Reserve Bank was ordered to repay Mark his R250 million, plus the interest. Mark announced that he would be donating that entire amount to trust that will be established in order to help others take cases to Constitutional Court. - -> “The exit charge was not inconsistent with the Constitution. 
The dominant purpose of the exit charge was not to raise revenue but rather to regulate conduct by discouraging the export of capital to protect the domestic economy.” -> -> — Judge Dikgang Moseneke - -In 2015, Constitutional Court of South Africa reversed and set-aside findings of lower courts, ruling that dominant purpose of the exit charge was in order to regulate conduct rather than for raising the revenue. - -### Mark Shuttleworth’s likes ### - -Cesária Évora, mp3s,Spring, Chelsea, finally seeing something obvious for first time, coming home, Sinatra, daydreaming, sundowners, flirting, d’Urberville, string theory, Linux, particle physics, Python, reincarnation, mig-29s, snow, travel, Mozilla, lime marmalade, body shots, the African bush, leopards, Rajasthan, Russian saunas, snowboarding, weightlessness, Iain m banks, broadband, Alastair Reynolds, fancy dress, skinny-dipping, flashes of insight, post-adrenaline euphoria, the inexplicable, convertibles, Clifton, country roads, international space station, machine learning, artificial intelligence, Wikipedia, Slashdot, kitesurfing, and Manx lanes. - -### Shuttleworth’s dislikes ### - -Admin, salary negotiations, legalese, and public speaking. - --------------------------------------------------------------------------------- - -via: http://www.unixmen.com/mark-shuttleworth-man-behind-ubuntu-operating-system/ - -作者:[M.el Khamlichi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.unixmen.com/author/pirat9/ \ No newline at end of file diff --git a/sources/talk/20151117 How bad a boss is Linus Torvalds.md b/sources/talk/20151117 How bad a boss is Linus Torvalds.md deleted file mode 100644 index 7ebba90483..0000000000 --- a/sources/talk/20151117 How bad a boss is Linus Torvalds.md +++ /dev/null @@ -1,78 +0,0 @@ -sonofelice translating -How bad a boss is Linus Torvalds? -================================================================================ -![linus torvalds](http://images.techhive.com/images/article/2015/08/linus_torvalds-100600260-primary.idge.jpg) - -*Linus Torvalds addressed a packed auditorium of Linux enthusiasts during his speech at the LinuxWorld show in San Jose, California, on August 10, 1999. Credit: James Niccolai* - -**It depends on context. In the world of software development, he’s what passes for normal. The question is whether that situation should be allowed to continue.** - -I've known Linus Torvalds, Linux's inventor, for over 20 years. We're not chums, but we like each other. - -Lately, Torvalds has been getting a lot of flack for his management style. Linus doesn't suffer fools gladly. He has one way of judging people in his business of developing the Linux kernel: How good is your code? - -Nothing else matters. As Torvalds said earlier this year at the Linux.conf.au Conference, "I'm not a nice person, and I don't care about you. [I care about the technology and the kernel][1] -- that's what's important to me." - -Now, I can deal with that kind of person. If you can't, you should avoid the Linux kernel community, where you'll find a lot of this kind of meritocratic thinking. Which is not to say that I think everything in Linuxland is hunky-dory and should be impervious to calls for change. A meritocracy I can live with; a bastion of male dominance where women are subjected to scorn and disrespect is a problem. 
- -That's why I see the recent brouhaha about Torvalds' management style -- or more accurately, his total indifference to the personal side of management -- as nothing more than standard operating procedure in the world of software development. And at the same time, I see another instance that has come to light as evidence of a need for things to really change. - -The first situation arose with the [release of Linux 4.3][2], when Torvalds used the Linux Kernel Mailing List to tear into a developer who had inserted some networking code that Torvalds thought was -- well, let's say "crappy." "[[A]nd it generates [crappy] code.][3] It looks bad, and there's no reason for it." He goes on in this vein for quite a while. Besides the word "crap" and its earthier synonym, he uses the word "idiotic" pretty often. - -Here's the thing, though. He's right. I read the code. It's badly written and it does indeed seem to have been designed to use the new "overflow_usub()" function just for the sake of using it. - -Now, some people see this diatribe as evidence that Torvalds is a bad-tempered bully. I see a perfectionist who, within his field, doesn't put up with crap. - -Many people have told me that this is not how professional programmers should act. People, have you ever worked with top developers? That's exactly how they act, at Apple, Microsoft, Oracle and everywhere else I've known them. - -I've heard Steve Jobs rip a developer to pieces. I've cringed while a senior Oracle developer lead tore into a room of new programmers like a piranha through goldfish. - -In Accidental Empires, his classic book on the rise of PCs, Robert X. Cringely described Microsoft's software management style when Bill Gates was in charge as a system where "Each level, from Gates on down, screams at the next, goading and humiliating them." Ah, yes, that's the Microsoft I knew and hated. - -The difference between the leaders at big proprietary software companies and Torvalds is that he says everything in the open for the whole world to see. The others do it in private conference rooms. I've heard people claim that Torvalds would be fired in their company. Nope. He'd be right where he is now: on top of his programming world. - -Oh, and there's another difference. If you get, say, Larry Ellison mad at you, you can kiss your job goodbye. When you get Torvalds angry at your work, you'll get yelled at in an email. That's it. - -You see, Torvalds isn't anyone's boss. He's the guy in charge of a project with about 10,000 contributors, but he has zero hiring and firing authority. He can hurt your feelings, but that's about it. - -That said, there is a serious problem within both open-source and proprietary software development circles. No matter how good a programmer you are, if you're a woman, the cards are stacked against you. - -No case shows this better than that of Sarah Sharp, an Intel developer and formerly a top Linux programmer. [In a post on her blog in October][4], she explained why she had stopped contributing to the Linux kernel more than a year earlier: "I finally realized that I could no longer contribute to a community where I was technically respected, but I could not ask for personal respect.... I did not want to work professionally with people who were allowed to get away with subtle sexist or homophobic jokes." - -Who can blame her? I can't. Torvalds, like almost every software manager I've ever known, I'm sorry to say, has permitted a hostile work environment. 
- -He would probably say that it's not his job to ensure that Linux contributors behave with professionalism and mutual respect. He's concerned with the code and nothing but the code. - -As Sharp wrote: - -> I have the utmost respect for the technical efforts of the Linux kernel community. They have scaled and grown a project that is focused on maintaining some of the highest coding standards out there. The focus on technical excellence, in combination with overloaded maintainers, and people with different cultural and social norms, means that Linux kernel maintainers are often blunt, rude, or brutal to get their job done. Top Linux kernel developers often yell at each other in order to correct each other's behavior. -> -> That's not a communication style that works for me. … -> -> Many senior Linux kernel developers stand by the right of maintainers to be technically and personally brutal. Even if they are very nice people in person, they do not want to see the Linux kernel communication style change. - -She's right. - -Where I differ from other observers is that I don't think that this problem is in any way unique to Linux or open-source communities. With five years of work in the technology business and 25 years as a technology journalist, I've seen this kind of immature boy behavior everywhere. - -It's not Torvalds' fault. He's a technical leader with a vision, not a manager. The real problem is that there seems to be no one in the software development universe who can set a supportive tone for teams and communities. - -Looking ahead, I hope that companies and organizations, such as the Linux Foundation, can find a way to empower community managers or other managers to encourage and enforce civil behavior. - -We won't, unfortunately, find that kind of managerial finesse in our pure technical or business leaders. It's not in their DNA. - --------------------------------------------------------------------------------- - -via: http://www.computerworld.com/article/3004387/it-management/how-bad-a-boss-is-linus-torvalds.html - -作者:[Steven J. Vaughan-Nichols][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ -[1]:http://www.computerworld.com/article/2874475/linus-torvalds-diversity-gaffe-brings-out-the-best-and-worst-of-the-open-source-world.html -[2]:http://www.zdnet.com/article/linux-4-3-released-after-linus-torvalds-scraps-brain-damage-code/ -[3]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html -[4]:http://sarah.thesharps.us/2015/10/05/closing-a-door/ diff --git a/sources/talk/20160505 Confessions of a cross-platform developer.md b/sources/talk/20160505 Confessions of a cross-platform developer.md deleted file mode 100644 index 0f6af84070..0000000000 --- a/sources/talk/20160505 Confessions of a cross-platform developer.md +++ /dev/null @@ -1,76 +0,0 @@ -vim-kakali translating - - - -Confessions of a cross-platform developer -============================================= - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/business_clouds.png?itok=cucHuJnU) - -[Andreia Gaita][1] is giving a talk at this year's OSCON, titled [Confessions of a cross-platform developer][2]. She's a long-time open source and [Mono][3] contributor, and develops primarily in C#/C++. 
Andreia works at GitHub, where she's focused on building the GitHub Extension manager for Visual Studio. - -I caught up with Andreia ahead of her talk to ask about cross-platform development and what she's learned in her 16 years as a cross-platform developer. - -![](https://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png) - -**What languages have you found easiest and hardest to develop cross-platform code for?** - -It's less about which languages are good and more about the libraries and tooling available for those languages. The compilers/interpreters/build systems available for languages determine how easy it is to do cross-platform work with them (or whether it's even possible), and the libraries available for UI and native system access determine how deep you can integrate with the OS. With that in mind, I found C# to be the best for cross-platform work. The language itself includes features that allow fast native calls and accurate memory mapping, which you really need if you want your code to talk to the OS and native libraries. When I need very specific OS integration, I switch to C or C++. - -**What cross-platform toolkits/abstractions have you used?** - -Most of my cross-platform work has been developing tools, libraries and bindings for other people to develop cross-platform applications with, mostly in Mono/C# and C/C++. I don't get to use a lot of abstractions at that level, beyond glib and friends. I mostly rely on Mono for any cross-platform app that includes a UI, and Unity3D for the occasional game development. I play with Electron every now and then. - -**What has been your approach to build systems, and how does this vary by language or platform?** - -I try to pick the build system that is most suited for the language(s) I'm using. That way, it'll (hopefully) give me less headaches. It needs to allow for platform and architecture selection, be smart about build artifact locations (for multiple parallel builds), and be decently configurable. Most of the time I have projects combining C/C++ and C# and I want to build all the different configurations at the same time from the same source tree (Debug, Release, Windows, OSX, Linux, Android, iOS, etc, etc.), and that usually requires selecting and invoking different compilers with different flags per output build artifact. So the build system has to let me do all of this without getting (too much) in my way. I try out different build systems every now and then, just to see what's new, but in the end, I end up going back to makefiles and a combination of either shell and batch scripts or Perl scripts for driving them (because if I want users to build my things, I'd better pick a command line script language that is available everywhere). - -**How do you balance the desire for native look and feel with the need for uniform user interfaces?** - -Cross-platform UI is hard! I've implemented several cross-platform GUIs over the years, and it's the one thing for which I don't think there's an optimal solution. There's basically two options. You can pick a cross-platform GUI toolkit and do a UI that doesn't feel quite right in all the platforms you support, with a small codebase and low maintenance cost. Or you can choose to develop platform-specific UIs that will look and feel native and well integrated with a larger codebase and higher maintenance cost. The decision really depends on the type of app, how many features it has, how many resources you have, and how many platforms you're shipping to. 
- -In the end, I think there's an increase in users' tolerance for "One UI To Rule Them All" with frameworks like Electron. I have a Chromium+C+C# framework side project that will one day hopefully allow me build Electron-style apps in C#, giving me the best of both worlds. - -**Has building/packaging dependencies been an issue for you?** - -I'm very conservative about my use of dependencies, having been bitten so many times by breaking ABIs, clashing symbols, and missing packages. I decide which OS version(s) I'm targeting and pick the lowest common denominator release available of a dependency to minimize issues. That usually means having five different copies of Xcode and OSX Framework libraries, five different versions of Visual Studio installed side-to-side on the same machine, multiple clang and gcc versions, and a bunch of VMs running various other distros. If I'm unsure of the state of packages in the OS I'm targeting, I will sometimes link statically and sometimes submodule dependencies to make sure they're always available. And most of all, I avoid the bleeding edge unless I really, really need something there. - -**Do you use continuous integration, code review, and related tools?** - -All the time! It's the only way to keep sane. The first thing I do on a project is set up cross-platform build scripts to ensure everything is automateable as early as possible. When you're targeting multiple platforms, CI is essential. It's impossible for everyone to build all the different combinations of platforms in one machine, and as soon as you're not building all of them you're going to break something without being aware of it. In a shared multi-platform codebase, different people own different platforms and features, so the only way to guarantee quality is to have cross-team code reviews combined with CI and other analysis tools. It's no different than other software projects, there's just more points of failure. - -**Do you rely on automated build testing, or do you tend to build on each platform and test locally?** - -For tools and libraries that don't include UIs, I can usually get away with automated build testing. If there's a UI, then I need to do both—reliable, scriptable UI automation for existing GUI toolkits is rare to non-existent, so I would have to either invest in creating UI automation tools that work across all the platforms I want to support, or I do it manually. If a project uses a custom UI toolkit (like, say, an OpenGL UI like Unity3D does), then it's fairly easy to develop scriptable automation tools and automate most of that stuff. Still, there's nothing like the human ability to break things with a couple of clicks! - -**If you are developing cross-platform, do you support cross-editor build systems so that you can use Visual Studio on Windows, Qt Creator on Linux, and XCode on Mac? Or do you tend toward supporting one platform such as Eclipse on all platforms?** - -I favor cross-editor build systems. I prefer generating project files for different IDEs (preferably in a way that makes it easier to add more IDEs), with build scripts that can drive builds from the IDEs for the platform they're on. Editors are the most important tool for a developer. It takes time and effort to learn them, and they're not interchangeable. I have my favorite editors and tools, and everyone else should be able to use their favorite tool, too. 
- -**What is your preferred editor/development environment/IDE for cross-platform development?** - -The cross-platform developer is cursed with having to pick the lowest common denominator editor that works across the most platforms. I love Visual Studio, but I can't rely on it for anything except Windows work (and you really don't want to make Windows your primary cross-compiling platform), so I can't make it my primary IDE. Even if I could, an essential skill of cross-platform development is to know and use as many platforms as possible. That means really knowing them—using the platform's editors and libraries, getting to know the OS and its assumptions, behaviors, and limitations, etc. To do that and keep my sanity (and my shortcut muscle memory), I have to rely on cross-platform editors. So, I use Emacs and Sublime. - -**What are some of your favorite past and current cross-platform projects?** - -Mono is my all-time favorite, hands down, and most of the others revolve around it in some way. Gluezilla was a Mozilla binding I did years ago to allow C# apps to embed web browser views, and that one was a doozy. At one point I had a Winforms app, built on Linux, running on Windows with an embedded GTK view in it that was running a Mozilla browser view. The CppSharp project (formerly Cxxi, formerly CppInterop) is a project I started to generate C# bindings for C++ libraries so that you could call, create instances of, and subclass C++ classes from C#. It was done in such a way that it would detect at runtime what platform you'd be running on and what compiler was used to create the native library and generate the correct C# bindings for it. That was fun! - -**Where do you see cross-platform development heading in the future?** - -The way we build native applications is already changing, and I feel like the visual differences between the various desktop operating systems are going to become even more blurred so that it will become easier to build cross-platform apps that integrate reasonably well without being fully native. Unfortunately, that might mean applications will be worse in terms of accessibility and less innovative when it comes to using the OS to its full potential. Cross-platform development of tools, libraries, and runtimes is something that we know how to do well, but there's still a lot of work to do with cross-platform application development. - - --------------------------------------------------------------------------------- - -via: https://opensource.com/business/16/5/oscon-interview-andreia-gaita - -作者:[Marcus D. 
Hanwell ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/mhanwell -[1]: https://twitter.com/sh4na -[2]: http://conferences.oreilly.com/oscon/open-source-us/public/schedule/detail/48702 -[3]: http://www.mono-project.com/ diff --git a/sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md b/sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md index e17c33bd81..156fda88db 100644 --- a/sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md +++ b/sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md @@ -1,3 +1,5 @@ +vim-kakali translating + Linus Torvalds Talks IoT, Smart Devices, Security Concerns, and More[video] =========================================================================== @@ -44,3 +46,4 @@ via: https://www.linux.com/news/linus-torvalds-talks-iot-smart-devices-security- [a]: https://www.linux.com/users/ericstephenbrown [0]: http://events.linuxfoundation.org/events/embedded-linux-conference [1]: http://go.linuxfoundation.org/elc-openiot-summit-2016-videos?utm_source=lf&utm_medium=blog&utm_campaign=linuxcom + diff --git a/sources/talk/20160510 65% of companies are contributing to open source projects.md b/sources/talk/20160510 65% of companies are contributing to open source projects.md deleted file mode 100644 index ad3b4ef680..0000000000 --- a/sources/talk/20160510 65% of companies are contributing to open source projects.md +++ /dev/null @@ -1,63 +0,0 @@ -65% of companies are contributing to open source projects -========================================================== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_openseries.png?itok=s7lXChId) - -This year marks the 10th annual Future of Open Source Survey to examine trends in open source, hosted by Black Duck and North Bridge. The big takeaway from the survey this year centers around the mainstream acceptance of open source today and how much has changed over the last decade. - -The [2016 Future of Open Source Survey][1] analyzed responses from nearly 3,400 professionals. Developers made their voices heard in the survey this year, comprising roughly 70% of the participants. The group that showed exponential growth were security professionals, whose participation increased by over 450%. Their participation shows the increasing interest in ensuring that the open source community pays attention to security issues in open source software and securing new technologies as they emerge. - -Black Duck's [Open Source Rookies][2] of the Year awards identify some of these emerging technologies, like Docker and Kontena in containers. Containers themselves have seen huge growth this year–76% of respondents say their company has some plans to use containers. And an amazing 59% of respondents are already using containers in a variety of deployments, from development and testing to internal and external production environment. The developer community has embraced containers as a way to get their code out quickly and easily. - -It's not surprising that the survey shows a miniscule number of organizations having no developers contributing to open source software. 
When large corporations like Microsoft and Apple open source some of their solutions, developers gain new opportunities to participate in open source. I certainly hope this trend will continue, with more software developers contributing to open source projects at work and outside of work. - -### Highlights from the 2016 survey - -#### Business value - -* Open source is an essential element in development strategy with more than 65% of respondents relying on open source to speed development. -* More than 55% leverage open source within their production environments. - -#### Engine for innovation - -* Respondents reported use of open source to drive innovation through faster, more agile development; accelerated time to market and vastly superior interoperability. -* Additional innovation is afforded by open source's quality of solutions; competitive features and technical capabilities; and ability to customize. - -#### Proliferation of open source business models and investment - -* More diverse business models are emerging that promise to deliver more value to open source companies than ever before. They are not as dependent on SaaS and services/support. -* Open source private financing has increased almost 4x in five years. - -#### Security and management - -The development of best-in-class open source security and management practices has not kept pace with growth in adoption. Despite a proliferation of expensive, high-profile open source breaches in recent years, the survey revealed that: - -* 50% of companies have no formal policy for selecting and approving open source code. -* 47% of companies don’t have formal processes in place to track open source code, limiting their visibility into their open source and therefore their ability to control it. -* More than one-third of companies have no process for identifying, tracking or remediating known open source vulnerabilities. - -#### Open source participation on the rise - -The survey revealed an active corporate open source community that spurs innovation, delivers exponential value and shares camaraderie: - -* 67% of respondents report actively encouraging developers to engage in and contribute to open source projects. -* 65% of companies are contributing to open source projects. -* One in three companies have a fulltime resource dedicated to open source projects. -* 59% of respondents participate in open source projects to gain competitive edge. - -Black Duck and North Bridge learned a great deal this year about security, policy, business models and more from the survey, and we’re excited to share these findings. Thank you to our many collaborators and all the respondents for taking the time to take the survey. It’s been a great ten years, and I am happy that we can safely say that the future of open source is full of possibilities. - -Learn more, see the [full results][3]. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/business/16/5/2016-future-open-source-survey - -作者:[Haidee LeClair][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -[a]: https://opensource.com/users/blackduck2016 -[1]: http://www.slideshare.net/blackducksoftware/2016-future-of-open-source-survey-results -[2]: https://info.blackducksoftware.com/OpenSourceRookies2015.html -[3]: http://www.slideshare.net/blackducksoftware/2016-future-of-open-source-survey-results%C2%A0 diff --git a/sources/talk/20160516 Linux will be the major operating system of 21st century cars.md b/sources/talk/20160516 Linux will be the major operating system of 21st century cars.md index ec7a33aaa6..dbde68bda8 100644 --- a/sources/talk/20160516 Linux will be the major operating system of 21st century cars.md +++ b/sources/talk/20160516 Linux will be the major operating system of 21st century cars.md @@ -1,3 +1,4 @@ +KevinSJ Translating Linux will be the major operating system of 21st century cars =============================================================== diff --git a/sources/talk/20160523 Driving cars into the future with Linux.md b/sources/talk/20160523 Driving cars into the future with Linux.md deleted file mode 100644 index 38ef546789..0000000000 --- a/sources/talk/20160523 Driving cars into the future with Linux.md +++ /dev/null @@ -1,104 +0,0 @@ -Driving cars into the future with Linux -=========================================== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/open-snow-car-osdc-lead.png?itok=IgYZ6mNY) - -I don't think much about it while I'm driving, but I sure do love that my car is equipped with a system that lets me use a few buttons and my voice to call my wife, mom, and children. That same system allows me to choose whether I listen to music streaming from the cloud, satellite radio, or the more traditional AM/FM radio. I also get weather updates and can direct my in-vehicle GPS to find the fastest route to my next destination. [In-vehicle infotainment][1], or IVI as it's known in the industry, has become ubiquitous in today's newest automobiles. - -A while ago, I had to travel hundreds of miles by plane and then rent a car. Happily, I discovered that my rental vehicle was equipped with IVI technology similar to my own car. In no time, I was connected via Bluetooth, had uploaded my contacts into the system, and was calling home to let my family know I arrived safely and my hosts to let them know I was en route to their home. - -In a recent [news roundup][2], Scott Nesbitt cited an article that said Ford Motor Company is getting substantial backing from a rival automaker for its open source [Smart Device Link][3] (SDL) middleware framework, which supports mobile phones. SDL is a project of the [GENIVI Alliance][4], a nonprofit committed to building middleware to support open source in-vehicle infotainment systems. According to [Steven Crumb][5], executive director of GENIVI, their [membership][6] is broad and includes Daimler Group, Hyundai, Volvo, Nissan, Honda, and 170 others. - -In order to remain competitive in the industry, automotive companies need a middleware system that can support the various human machine interface technologies available to consumers today. Whether you own an Android, iOS, or other device, automotive OEMs want their units to be able to support these systems. 
Furthermore, these IVI systems must be adaptable enough to support the ever decreasing half-life of mobile technology. OEMs want to provide value and add services in their IVI stacks that will support a variety of options for their customers. Enter Linux and open source software. - -In addition to GENIVI's efforts, the [Linux Foundation][7] sponsors the [Automotive Grade Linux][8] (AGL) workgroup, a software foundation dedicated to finding open source solutions for automotive applications. Although AGL will initially focus on IVI systems, they envision branching out to include [telematics][9], heads up displays, and other control systems. AGL has over 50 members at this time, including Jaguar, Toyota, and Nissan, and in a [recent press release][10] announced that Ford, Mazda, Mitsubishi, and Subaru have joined. - -To find out more, we interviewed two leaders in this emerging field. Specifically, we wanted to know how Linux and open source software are being used and if they are in fact changing the face of the automotive industry. First, we talk to [Alison Chaiken][11], a software engineer at Peloton Technology and an expert on automotive Linux, cybersecurity, and transparency. She previously worked for Mentor Graphics, Nokia, and the Stanford Linear Accelerator. Then, we chat with [Steven Crumb][12], executive director of GENIVI, who got started in open source in high-performance computing environments (supercomputers and early cloud computing). He says that though he's not a coder anymore, he loves to help organizations solve real business problems with open source software. - -### Interview with Alison Chaiken (by [Deb Nicholson][13]) - -#### How did you get interested in the automotive software space? - -I was working on [MeeGo][14] in phones at Nokia in 2009 when the project was cancelled. I thought, what's next? A colleague was working on [MeeGo-IVI][15], an early automotive Linux distribution. "Linux is going to be big in cars," I thought, so I headed in that direction. - -#### Can you tell us what aspects you're working on these days? - -I'm currently working for a startup on an advanced cruise control system that uses real-time Linux to increase the safety and fuel economy of big-rig trucks. I love working in this area, as no one would disagree that trucking can be improved. - -#### There have been a few stories about hacked cars in recent years. Can open source solutions help address this issue? - -I presented a talk on precisely this topic, on how Linux can (and cannot) contribute to security solutions in automotive at Southern California Linux Expo 2016 ([Slides][16]). Notably, GENIVI and Automotive Grade Linux have published their code and both projects take patches via Git. Please send your fixes upstream! Many eyes make all bugs shallow. - -#### Law enforcement agencies and insurance companies could find plenty of uses for data about drivers. How easy will it be for them to obtain this information? - -Good question. The Dedicated Short Range Communication Standard (IEEE-1609) takes great pains to keep drivers participating in Wi-Fi safety messaging anonymous. Still, if you're posting to Twitter from your car, someone will be able to track you. - -#### What can developers and private citizens do to make sure civil liberties are protected as automotive technology evolves? 
- -The Electronic Frontier Foundation (EFF) has done an excellent job of keeping on top of automotive issues, having commented through official channels on what data may be stored in automotive "black boxes" and on how DMCA's Provision 1201 applies to cars. - -#### What are some of the exciting things you see coming for drivers in the next few years? - -Adaptive cruise control and collision avoidance systems are enough of an advance to save lives. As they roll out through vehicle fleets, I truly believe that fatalities will decline. If that's not exciting, I don't know what is. Furthermore, capabilities like automated parking assist will make cars easier to drive and reduce fender-benders. - -#### What needs to be built and how can people get involved? - -Automotive Grade Linux is developed in the open and runs on cheap hardware (e.g. Raspberry Pi 2 and moderately priced Renesas Porter board) that anyone can buy. GENIVI automotive Linux middleware consortium has lots of software publicly available via Git. Furthermore, there is the ultra cool [OSVehicle open hardware][17] automotive platform. - -#### There are many ways for Linux software and open hardware folks with moderate budgets to get involved. Join us at #automotive on Freenode IRC if you have questions. - -### Interview with Steven Crumb (by Don Watkins) - -#### What's so huge about GENIVI's approach to IVI? - -GENIVI filled a huge gap in the automotive industry by pioneering the use of free and open source software, including Linux, for non-safety-critical automotive software like in-vehicle infotainment (IVI) systems. As consumers came to expect the same functionality in their vehicles as on their smartphones, the amount of software required to support IVI functions grew exponentially. The increased amount of software has also increased the costs of building the IVI systems and thus slowed time to market. - -GENIVI's use of open source software and a community development model has saved automakers and their software suppliers significant amounts of money while significantly reducing the time to market. I'm excited about GENIVI because we've been fortunate to lead a revolution of sorts in the automotive industry by slowly evolving organizations from a highly structured and proprietary methodology to a community-based approach. We're not done yet, but it's been a privilege to take part in a transformation that is yielding real benefits. - -#### How do your major members drive the direction of GENIVI? - -GENIVI has a lot of members and non-members contributing to our work. As with many open source projects, any company can influence the technical output by simply contributing code, patches, and time to test. With that said, BMW, Mercedes-Benz, Hyundai Motor, Jaguar Land Rover, PSA, Renault/Nissan, and Volvo are all active adopters of and contributors to GENIVI—and many other OEMs have IVI solutions in their cars that extensively use GENIVI's software. - -#### What licenses cover the contributed code? - -GENIVI employs a number of licenses ranging from (L)GPLv2 to MPLv2 to Apache 2.0. Some of our tools use the Eclipse license. We have a [public licensing policy][18] that details our licensing preferences. - -#### How does a person or group get involved? How important are community contributions to the ongoing success of the project? - -GENIVI does its development completely in the open ([projects.genivi.org][19]) and thus, anyone interested in using open software in automotive is welcome to participate. 
That said, the alliance can fund its continued development in the open through companies [joining GENIVI][20] as members. GENIVI members enjoy a wide variety of benefits, not the least of which is participation in the global community of 140 companies that has been developed over the last six years. - -Community is hugely important to GENIVI, and we could not have produced and maintained the valuable software we developed over the years without an active community of contributors. We've worked hard to make contributing to GENIVI as simple as joining an [email list][21] and connecting to the people in the various software projects. We use standard practices employed by many open source projects and provide high-quality tools and infrastructure to help developers feel at home and be productive. - -Regardless of someone's familiarity with the automotive software, they are welcome to join our community. People have modified cars for years, so for many people there is a natural draw to anything automotive. Software is the new domain for cars, and GENIVI wants to be the open door for anyone interested in working with automotive, open source software. - -------------------------------- -via: https://opensource.com/business/16/5/interview-alison-chaiken-steven-crumb - -作者:[Don Watkins][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/don-watkins -[1]: https://en.wikipedia.org/wiki/In_car_entertainment -[2]: https://opensource.com/life/16/1/weekly-news-jan-9 -[3]: http://projects.genivi.org/smartdevicelink/home -[4]: http://www.genivi.org/ -[5]: https://www.linkedin.com/in/stevecrumb -[6]: http://www.genivi.org/genivi-members -[7]: http://www.linuxfoundation.org/ -[8]: https://www.automotivelinux.org/ -[9]: https://en.wikipedia.org/wiki/Telematics -[10]: https://www.automotivelinux.org/news/announcement/2016/01/ford-mazda-mitsubishi-motors-and-subaru-join-linux-foundation-and -[11]: https://www.linkedin.com/in/alison-chaiken-3ba456b3 -[12]: https://www.linkedin.com/in/stevecrumb -[13]: https://opensource.com/users/eximious -[14]: https://en.wikipedia.org/wiki/MeeGo -[15]: http://webinos.org/deliverable-d026-target-platform-requirements-and-ipr/automotive/ -[16]: http://she-devel.com/Chaiken_automotive_cybersecurity.pdf -[17]: https://www.osvehicle.com/ -[18]: http://projects.genivi.org/how -[19]: http://projects.genivi.org/ -[20]: http://genivi.org/join -[21]: http://lists.genivi.org/mailman/listinfo/genivi-projects diff --git a/sources/talk/20160531 Linux vs. Windows device driver model.md b/sources/talk/20160531 Linux vs. Windows device driver model.md new file mode 100644 index 0000000000..9def01085b --- /dev/null +++ b/sources/talk/20160531 Linux vs. Windows device driver model.md @@ -0,0 +1,192 @@ +GHLandy Translating + +Linux vs. Windows device driver model : architecture, APIs and build environment comparison +============================================================================================ + +Device drivers are parts of the operating system that facilitate usage of hardware devices via certain programming interface so that software applications can control and operate the devices. As each driver is specific to a particular operating system, you need separate Linux, Windows, or Unix device drivers to enable the use of your device on different computers. 
This is why, when hiring a driver developer or choosing an R&D service provider, it is important to look at their experience of developing drivers for various operating system platforms.
+
+![](https://c2.staticflickr.com/8/7289/26775594584_d2fe7483f9_c.jpg)
+
+The first step in driver development is to understand the differences in the way each operating system handles its drivers, the underlying driver model and architecture it uses, as well as the available development tools. For example, the Linux driver model is very different from the Windows one. While Windows facilitates separation of driver development and OS development and combines drivers and OS via a set of ABI calls, Linux device driver development does not rely on any stable ABI or API, with the driver code instead being incorporated into the kernel. Each of these models has its own set of advantages and drawbacks, but it is important to know them both if you want to provide comprehensive support for your device.
+
+In this article we will compare Windows and Linux device drivers and explore the differences in terms of their architecture, APIs, build environment, and distribution, in hopes of providing you with insight into how to start writing device drivers for each of these operating systems.
+
+### 1. Device Driver Architecture
+
+Windows device driver architecture is different from the one used in Linux drivers, and each has its own pros and cons. The differences are mainly influenced by the fact that Windows is a closed-source OS while Linux is open-source. A comparison of the Linux and Windows device driver architectures will help us understand the core differences between Windows and Linux drivers.
+
+#### 1.1. Windows driver architecture
+
+While the Linux kernel is distributed with the drivers themselves, the Windows kernel does not include device drivers. Instead, modern Windows device drivers are written using the Windows Driver Model (WDM), which fully supports plug-and-play and power management so that drivers can be loaded and unloaded as necessary.
+
+Requests from applications are handled by a part of the Windows kernel called the IO manager, which transforms them into IO Request Packets (IRPs) that are used to identify the request and convey data between driver layers.
+
+WDM provides three kinds of drivers, which form three layers:
+
+- Filter drivers provide optional additional processing of IRPs.
+- Function drivers are the main drivers that implement interfaces to individual devices.
+- Bus drivers service various adapters and bus controllers that host devices.
+
+An IRP passes through these layers as it travels from the IO manager down to the hardware. Each layer can handle an IRP by itself and send it back to the IO manager. At the bottom there is the Hardware Abstraction Layer (HAL), which provides a common interface to physical devices.
+
+#### 1.2. Linux driver architecture
+
+The core difference between the Linux and Windows device driver architectures is that Linux does not have a standard driver model or a clean separation into layers. Each device driver is usually implemented as a module that can be loaded into and unloaded from the kernel dynamically. Linux provides means for plug-and-play support and power management so that drivers can use them to manage devices correctly, but this is not a requirement.
+
+Modules export the functions they provide and communicate by calling these functions and passing around arbitrary data structures. Requests from user applications come from the filesystem or networking level, and are converted into data structures as necessary. Modules can be stacked into layers, processing requests one after another, with some modules providing a common interface to a device family such as USB devices.
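+To make the module lifecycle concrete, here is a minimal sketch of such a loadable module (the function names and messages are illustrative, not taken from any real driver); it does nothing except announce when it is loaded and unloaded:
+
+```c
+#include <linux/init.h>
+#include <linux/module.h>
+
+/* Called when the module is loaded into the kernel (e.g. via insmod) */
+static int __init example_init(void)
+{
+	pr_info("example: module loaded\n");
+	return 0;	/* 0 means success; a negative errno aborts loading */
+}
+
+/* Called when the module is removed from the kernel (e.g. via rmmod) */
+static void __exit example_exit(void)
+{
+	pr_info("example: module unloaded\n");
+}
+
+module_init(example_init);
+module_exit(example_exit);
+MODULE_LICENSE("GPL");
+```
+
+Everything a real driver does, from registering devices to handling requests, hangs off these two entry points.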
+
+Linux device drivers support three kinds of devices:
+
+- Character devices, which implement a byte stream interface.
+- Block devices, which host filesystems and perform IO with multibyte blocks of data.
+- Network interfaces, which are used for transferring data packets through the network.
+
+Linux also has a Hardware Abstraction Layer that acts as an interface to the actual hardware for the device drivers.
+
+### 2. Device Driver APIs
+
+Both Linux and Windows driver APIs are event-driven: the driver code executes only when some event happens, either when user applications want something from the device or when the device has something to tell the OS.
+
+#### 2.1. Initialization
+
+On Windows, drivers are represented by a DriverObject structure, which is initialized during the execution of the DriverEntry function. This entry point also registers a number of callbacks to react to device addition and removal, driver unloading, and incoming IRPs. Windows creates a device object when a device is connected, and this device object handles all application requests on behalf of the device driver.
+
+In Linux, by contrast, the device driver lifetime is managed by the kernel module's module_init and module_exit functions, which are called when the module is loaded or unloaded. They are responsible for registering the module to handle device requests using the internal kernel interfaces. The module has to create a device file (or a network interface), specify a numerical identifier for the device it wishes to manage, and register a number of callbacks to be called when the user interacts with the device file.
+
+#### 2.2. Naming and claiming devices
+
+##### **Registering devices on Windows**
+
+A Windows device driver is notified about newly connected devices in its AddDevice callback. It then proceeds to create a device object used to identify this particular driver instance for the device. Depending on the driver kind, the device object can be a Physical Device Object (PDO), a Function Device Object (FDO), or a Filter Device Object (FIDO). Device objects can be stacked, with a PDO at the bottom.
+
+Device objects exist for the whole time the device is connected to the computer. The DeviceExtension structure can be used to associate global data with a device object.
+
+Device objects can have names of the form **\Device\DeviceName**, which are used by the system to identify and locate them. An application opens a file with such a name using the CreateFile API function, obtaining a handle which can then be used to interact with the device.
+
+However, usually only PDOs have distinct names. Unnamed devices can be accessed via device class interfaces. The device driver registers one or more interfaces identified by 128-bit globally unique identifiers (GUIDs). User applications can then obtain a handle to such a device using the known GUIDs.
+
+##### **Registering devices on Linux**
+
+On Linux, user applications access devices via file system entries, usually located in the /dev directory. The module creates all necessary entries during module initialization by calling kernel functions like register_chrdev. An application issues an open system call to obtain a file descriptor, which is then used to interact with the device. This call, and further system calls made with the returned descriptor (such as read, write, or close), are then dispatched to callback functions installed by the module into structures like file_operations or block_device_operations.
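+As a sketch of what this registration looks like in practice (the names below are hypothetical), a character device module fills in a file_operations table and registers it together with a major number:
+
+```c
+#include <linux/fs.h>
+#include <linux/module.h>
+
+static int example_major;
+
+/* Dispatched when userspace calls open() on the device file */
+static int example_open(struct inode *inode, struct file *file)
+{
+	/* file->private_data may be pointed at driver-specific data here */
+	return 0;
+}
+
+/* Dispatched when the last reference to the open file is dropped */
+static int example_release(struct inode *inode, struct file *file)
+{
+	return 0;
+}
+
+static const struct file_operations example_fops = {
+	.owner   = THIS_MODULE,
+	.open    = example_open,
+	.release = example_release,
+};
+
+static int __init example_init(void)
+{
+	/* Passing 0 asks the kernel to allocate an unused major number */
+	example_major = register_chrdev(0, "example", &example_fops);
+	if (example_major < 0)
+		return example_major;
+
+	/* A device file can now be created manually, e.g.:
+	 *   mknod /dev/example c <example_major> 0 */
+	return 0;
+}
+
+static void __exit example_exit(void)
+{
+	unregister_chrdev(example_major, "example");
+}
+
+module_init(example_init);
+module_exit(example_exit);
+MODULE_LICENSE("GPL");
+```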
+
+The device driver module is responsible for allocating and maintaining any data structures necessary for its operation. A file structure passed into the file system callbacks has a private_data field, which can be used to store a pointer to driver-specific data. The block device and network interface APIs also provide similar fields.
+
+While applications use file system nodes to locate devices, Linux uses the concept of major and minor numbers to identify devices and their drivers internally. A major number is used to identify device drivers, while a minor number is used by the driver to identify the devices managed by it. The driver has to register itself in order to manage one or more fixed major numbers, or ask the system to allocate some unused number for it.
+
+Currently, Linux uses 32-bit values for major-minor pairs, with 12 bits allocated for the major number, allowing up to 4096 distinct drivers. The major-minor pairs are distinct for character and block devices, so a character device and a block device can use the same pair without conflict. Network interfaces are identified by symbolic names like eth0, which are again distinct from the major-minor numbers of both character and block devices.
+
+#### 2.3. Exchanging data
+
+Both Linux and Windows support three ways of transferring data between user-level applications and kernel-level drivers:
+
+- **Buffered Input-Output**, which uses buffers managed by the kernel. For write operations, the kernel copies data from a user-space buffer into a kernel-allocated buffer and passes it to the device driver. Reads are the same, with the kernel copying data from a kernel buffer into the buffer provided by the application.
+- **Direct Input-Output**, which does not involve copying. Instead, the kernel pins a user-allocated buffer in physical memory so that it remains there, without being swapped out, while the data transfer is in progress.
+- **Memory mapping**, arranged by the kernel so that the kernel and user space applications can access the same pages of memory using distinct addresses.
+
+##### **Driver IO modes on Windows**
+
+Support for Buffered IO is a built-in feature of WDM. The buffer is accessible to the device driver via the AssociatedIrp.SystemBuffer field of the IRP structure. The driver simply reads from or writes to this buffer when it needs to communicate with userspace.
+
+Direct IO on Windows is mediated by memory descriptor lists (MDLs). These are semi-opaque structures accessible via the MdlAddress field of the IRP. They are used to locate the physical address of the buffer allocated by the user application and pinned for the duration of the IO request.
+
+The third option for data transfer on Windows is called METHOD_NEITHER. In this case the kernel simply passes the virtual addresses of the user-space input and output buffers to the driver, without validating them or ensuring that they are mapped into physical memory accessible by the device driver. The device driver is responsible for handling the details of the data transfer.
+
+##### **Driver IO modes on Linux**
+
+Linux provides a number of functions, such as clear_user, copy_to_user, and strncpy_from_user, to perform buffered data transfers between the kernel and user memory. These functions validate pointers to data buffers and handle all details of the data transfer by safely copying the data buffer between memory regions.
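+As an illustration, a read callback for the file_operations table sketched earlier could use copy_to_user to hand a kernel buffer to the application; the names are again hypothetical, but the error handling pattern is the usual one:
+
+```c
+#include <linux/fs.h>
+#include <linux/uaccess.h>
+
+static const char example_msg[] = "hello from the kernel\n";
+
+/* Dispatched when userspace calls read() on the device file */
+static ssize_t example_read(struct file *file, char __user *buf,
+			    size_t count, loff_t *ppos)
+{
+	size_t len = sizeof(example_msg) - 1;
+
+	if (*ppos >= len)
+		return 0;		/* no data left: report end of file */
+	if (count > len - *ppos)
+		count = len - *ppos;
+
+	/* copy_to_user returns the number of bytes it could NOT copy,
+	 * so any non-zero result means the user pointer was invalid */
+	if (copy_to_user(buf, example_msg + *ppos, count))
+		return -EFAULT;
+
+	*ppos += count;
+	return count;		/* number of bytes actually read */
+}
+```
+
+Wiring it up is just a matter of adding `.read = example_read` to the file_operations structure shown above.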
+
+However, drivers for block devices operate on entire data blocks of known size, which can simply be moved between the kernel and user address spaces without copying them. This case is handled automatically by the Linux kernel for all block device drivers. The block request queue takes care of transferring data blocks without excess copying, and the Linux system call interface takes care of converting file system requests into block requests.
+
+Finally, the device driver can allocate some memory pages from the kernel address space (which is non-swappable) and then use the remap_pfn_range function to map the pages directly into the address space of the user process. The application can then obtain the virtual address of this buffer and use it to communicate with the device driver.
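+A sketch of that last approach (hypothetical names, minimal error handling) could look as follows, assuming the driver allocated a single page with __get_free_page during initialization:
+
+```c
+#include <linux/fs.h>
+#include <linux/gfp.h>
+#include <linux/mm.h>
+#include <asm/io.h>
+
+/* One non-swappable kernel page, allocated in the init function with:
+ *   example_page = __get_free_page(GFP_KERNEL); */
+static unsigned long example_page;
+
+/* Dispatched when userspace calls mmap() on the device file */
+static int example_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	unsigned long size = vma->vm_end - vma->vm_start;
+
+	if (size > PAGE_SIZE)
+		return -EINVAL;	/* this sketch only shares a single page */
+
+	/* Map the physical page backing the kernel buffer into the
+	 * address range the kernel chose for the calling process */
+	return remap_pfn_range(vma, vma->vm_start,
+			       virt_to_phys((void *)example_page) >> PAGE_SHIFT,
+			       size, vma->vm_page_prot);
+}
+```
+
+Once mmap() succeeds, loads and stores through the pointer returned to the application touch the same physical pages the driver sees, with no copying involved.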
+
+### 3. Device Driver Development Environment
+
+#### 3.1. Device driver frameworks
+
+##### **Windows Driver Kit**
+
+Windows is a closed-source operating system. Microsoft provides the Windows Driver Kit to facilitate Windows device driver development by non-Microsoft vendors. The kit contains all that is necessary to build, debug, verify, and package device drivers for Windows.
+
+The Windows Driver Model defines a clean interface framework for device drivers. Windows maintains source and binary compatibility of these interfaces. Compiled WDM drivers are generally forward-compatible: that is, an older driver can run on a newer system as is, without being recompiled, but of course it will not have access to the new features provided by the OS. However, drivers are not guaranteed to be backward-compatible.
+
+##### **Linux source code**
+
+In comparison to Windows, Linux is an open-source operating system, so the entire source code of Linux is the SDK for driver development. There is no formal framework for device drivers, but the Linux kernel includes numerous subsystems that provide common services like driver registration. The interfaces to these subsystems are described in kernel header files.
+
+While Linux does have defined interfaces, these interfaces are not stable by design. Linux does not provide any guarantees about forward or backward compatibility. Device drivers are required to be recompiled to work with different kernel versions. The absence of stability guarantees allows rapid development of the Linux kernel, as developers do not have to support older interfaces and can use the best approach to solve the problems at hand.
+
+Such an ever-changing environment does not pose any problems when writing in-tree drivers for Linux, as they are part of the kernel source and are updated along with the kernel itself. However, closed-source drivers must be developed separately, out-of-tree, and they must be maintained to support different kernel versions. Thus Linux encourages device driver developers to maintain their drivers in-tree.
+
+#### 3.2. Build system for device drivers
+
+The Windows Driver Kit adds driver development support to Microsoft Visual Studio, and includes a compiler used to build the driver code. Developing Windows device drivers is not much different from developing a user-space application in an IDE. Microsoft also provides an Enterprise Windows Driver Kit, which enables a command-line build environment similar to that of Linux.
+
+Linux uses Makefiles as the build system for both in-tree and out-of-tree device drivers. The Linux build system is quite developed, and usually a device driver needs no more than a handful of lines to produce a working binary. Developers can use any [IDE][5] as long as it can handle the Linux source code base and run make, or they can easily compile drivers manually from the terminal.
+
+#### 3.3. Documentation support
+
+Windows has excellent documentation support for driver development. The Windows Driver Kit includes documentation and sample driver code, abundant information about kernel interfaces is available via MSDN, and there exist numerous reference and guide books on driver development and Windows internals.
+
+Linux documentation is not as descriptive, but this is alleviated by the fact that the whole source code of Linux is available to driver developers. The Documentation directory in the source tree documents some of the Linux subsystems, but there are [multiple books][4] concerning Linux device driver development and Linux kernel overviews, which are much more elaborate.
+
+Linux does not provide designated samples of device drivers, but the source code of existing production drivers is available and can be used as a reference for developing new device drivers.
+
+#### 3.4. Debugging support
+
+Both Linux and Windows have logging facilities that can be used to trace-debug driver code. On Windows one would use the DbgPrint function for this, while on Linux the function is called printk. However, not every problem can be resolved by using only logging and source code. Sometimes breakpoints are more useful, as they allow developers to examine the dynamic behavior of the driver code. Interactive debugging is also essential for studying the reasons for crashes.
+
+Windows supports interactive debugging via its kernel-level debugger WinDbg. This requires two machines connected via a serial port: a computer to run the debugged kernel, and another one to run the debugger and control the operating system being debugged. The Windows Driver Kit includes debugging symbols for the Windows kernel, so Windows data structures will be partially visible in the debugger.
+
+Linux also supports interactive debugging by means of KDB and KGDB. Debugging support can be built into the kernel and enabled at boot time. After that, one can either debug the system directly via a physical keyboard, or connect to it from another machine via a serial port. KDB offers a simple command-line interface and is the only way to debug the kernel on the same machine. However, KDB lacks source-level debugging support. KGDB provides a more complex interface via a serial port. It enables the use of standard application debuggers like GDB for debugging the Linux kernel, just like any other userspace application.
+
+### 4. Distributing Device Drivers
+
+#### 4.1. Installing device drivers
+
+On Windows, installed drivers are described by text files called INF files, which are typically stored in the C:\Windows\INF directory. These files are provided by the driver vendor and define which devices are serviced by the driver, where to find the driver binaries, the version of the driver, and so on.
+
+When a new device is plugged into the computer, Windows looks through the installed drivers and loads an appropriate one. The driver will be automatically unloaded as soon as the device is removed.
+
+On Linux, some drivers are built into the kernel and stay permanently loaded. Non-essential ones are built as kernel modules, which are usually stored in the /lib/modules/kernel-version directory. This directory also contains various configuration files, like modules.dep, which describes the dependencies between kernel modules.
+
+While the Linux kernel can load some of the modules at boot time itself, module loading is generally supervised by user-space applications. For example, the init process may load some modules during system initialization, and the udev daemon is responsible for tracking newly plugged devices and loading the appropriate modules for them.
+
+#### 4.2. Updating device drivers
+
+Windows provides a stable binary interface for device drivers, so in some cases it is not necessary to update driver binaries together with the system. Any necessary updates are handled by the Windows Update service, which is responsible for locating, downloading, and installing up-to-date versions of drivers appropriate for the system.
+
+However, Linux does not provide a stable binary interface, so it is necessary to recompile and update all necessary device drivers with each kernel update. Obviously, device drivers that are built into the kernel are updated automatically, but out-of-tree modules pose a slight problem. The task of maintaining up-to-date module binaries is usually solved with [DKMS][3]: a service that automatically rebuilds all registered kernel modules when a new kernel version is installed.
+
+#### 4.3. Security considerations
+
+All Windows device drivers must be digitally signed before Windows loads them. It is okay to use self-signed certificates during development, but driver packages distributed to end users must be signed with valid certificates trusted by Microsoft. Vendors can obtain a Software Publisher Certificate from any trusted certificate authority authorized by Microsoft. This certificate is then cross-signed by Microsoft, and the resulting cross-certificate is used to sign driver packages before the release.
+
+The Linux kernel can also be configured to verify the signatures of kernel modules being loaded and to disallow untrusted ones. The set of public keys trusted by the kernel is fixed at build time and is fully configurable. The strictness of the checks performed by the kernel is also configurable at build time, and ranges from simply issuing warnings for untrusted modules to refusing to load anything of doubtful validity.
+
+### 5. Conclusion
+
+As shown above, the Windows and Linux device driver infrastructures have some things in common, such as their approaches to APIs, but many more details are rather different. The most prominent differences stem from the fact that Windows is a closed-source operating system developed by a commercial corporation. This is what makes a good, documented, stable driver ABI and formal frameworks a requirement for Windows, while on Linux they are more of a nice addition to the source code. Documentation support is also much more developed in the Windows environment, as Microsoft has the resources necessary to maintain it.
+
+On the other hand, Linux does not constrain device driver developers with frameworks, and the source code of the kernel and of production device drivers can be just as helpful in the right hands. The lack of interface stability also has implications: up-to-date device drivers always use the latest interfaces, and the kernel itself carries a lesser burden of backwards compatibility, which results in even cleaner code.
+ +Knowing these differences as well as specifics for each system is a crucial first step in providing effective driver development and support for your devices. We hope that this Windows and Linux device driver development comparison was helpful in understanding them, and will serve as a great starting point in your study of device driver development process. + +Download this article as ad-free PDF (made possible by [your kind donation][2]): [Download PDF][1] + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/linux-vs-windows-device-driver-model.html + +作者:[Dennis Turpitka][a] +译者:[GHLandy](https://github.com/GHLandy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://xmodulo.com/author/dennis +[1]: http://xmodulo.com/linux-vs-windows-device-driver-model.html?format=pdf +[2]: https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=PBHS9R4MB9RX4 +[3]: http://xmodulo.com/build-kernel-module-dkms-linux.html +[4]: http://xmodulo.com/go/linux_device_driver_books +[5]: http://xmodulo.com/good-ide-for-c-cpp-linux.html diff --git a/sources/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md b/sources/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md new file mode 100644 index 0000000000..6992f76156 --- /dev/null +++ b/sources/talk/20160614 ​Ubuntu Snap takes charge of Linux desktop and IoT software distribution.md @@ -0,0 +1,110 @@ +Ubuntu Snap takes charge of Linux desktop and IoT software distribution +=========================================================================== + +[Canonical][28] and [Ubuntu][29] founder Mark Shuttleworth said in an interview that he hadn't planned on an announcement about Ubuntu's new [Snap app package format][30]. But then in a matter of a few months, developers from multiple Linux distributions and companies announced they would use Snap as a universal Linux package format. + +![](http://zdnet2.cbsistatic.com/hub/i/r/2016/06/14/a9b2a139-3cd4-41bf-8e10-180cb9450134/resize/770xauto/adc7d16a46167565399ecdb027dd1416/ubuntu-snap.jpg) +>Linux distributors, ISVs, and companies are all adopting Ubuntu Snap to distribute and update programs across all Linux varieties. + +Why? Because Snap enables a single binary package to work perfectly and securely on any Linux desktop, server, cloud or device. According to Olli Ries, head of Canonical's Ubuntu client platform products and releases: + +>The [security mechanisms in Snap packages][1] allow us to open up the platform for much faster iteration across all our flavors as Snap applications are isolated from the rest of the system. Users can install a Snap without having to worry whether it will have an impact on their other apps or their system. + +Of course, as Matthew Garrett, a former Linux kernel developer and CoreOS security developer, has pointed out: If you [use Snap with an insecure program, such as the X11][2] window system, you don't actually gain any security. + +Shuttleworth agrees with Garrett but points out that you can control how Snap applications interact with the rest of this system. So, for example, a web browser can be contained within a secure Snap, which uses the Ubuntu packaged [openssl][3] Transport Layer Security (TLS) and Secure Sockets Layer (SSL) library. 
In addition, even if something does break into the browser instance, it still can't get to the underlying operating system.
+
+Many companies agree. [Dell][4], [Samsung][5], [Mozilla][6], [Krita][7], [Mycroft][8], and [Horizon Computing][9] are adopting Snap. [Arch Linux][10], [Debian][11], [Gentoo][12], and [OpenWrt][13] developers have also embraced Snaps and are adding them to their Linux distributions.
+
+Snap packages, aka "Snaps", now work natively on Arch, Debian, Fedora, Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu MATE, Ubuntu Unity, and Xubuntu. Snap is being validated on CentOS, Elementary, Gentoo, Mint, OpenSUSE, and Red Hat Enterprise Linux (RHEL), and is easy to enable on other Linux distributions.
+
+These distributions are adopting Snaps, Shuttleworth explained, because "Snaps bring those apps to every Linux desktop, server, device or cloud machine, giving users freedom to choose any Linux distribution while retaining access to the best apps."
+
+Taken together, these distributions represent the vast majority of common Linux desktop, server and cloud distributions. Why would they switch from their existing package management systems? "One nice feature of Snaps is support for edge and beta channels, which allow users to opt-in to the pre-release developer versions of software or stick with the latest stable versions," explained Tim Jester-Pfadt, an Arch Linux contributor.
+
+In addition to the Linux distributors, independent software vendors (ISVs) are embracing Snap, since it greatly simplifies third-party Linux app distribution and security maintenance. For example, [The Document Foundation][14] will be making the popular open-source office suite [LibreOffice][15] available as a Snap.
+
+Thorsten Behrens, co-founder of The Document Foundation, explained:
+
+>Our objective is to make LibreOffice easily available to as many users as possible. Snaps enable our users to get the freshest LibreOffice releases across different desktops and distributions quickly, easily and consistently. As a bonus, it should help our release engineers to eventually move away from bespoke, home-grown and ancient Linux build solutions, towards something that is collectively maintained.
+
+In a statement, Nick Nguyen, Mozilla's [Firefox][16] VP, added:
+
+>We strive to offer users a great experience and make Firefox available across many platforms, devices and operating systems. With the introduction of Snaps, continually optimizing Firefox will become possible, providing Linux users the most up-to-date features.
+
+Boudewijn Rempt, project lead for the KDE-based graphics program Krita at the [Krita Foundation][17], said:
+
+>Maintaining DEB packages in a private repository was complex and time consuming, snaps are much easier to maintain, package and distribute. Putting the snap in the store was particularly simple, this is the most streamlined app store I have published software in. [Krita 3.0][18] has just been released as a snap which will be updated automatically as newer versions become available.
+
+It's not just Linux desktop programmers who are excited by Snap. Internet of Things (IoT) and embedded developers are also grabbing on to Snap with both hands.
+
+Because Snaps are isolated from one another to help with data security, and can be updated or rolled back automatically, they are ideal for devices. Multiple vendors have launched snappy IoT devices, enabling a new class of "smart edge" device with an IoT app store.
Snappy devices receive automatic updates for the base OS, together with updates to the apps installed on the device.
+
+Dell, which according to Shuttleworth was one of the first IoT vendors to see the power of Snap, will be using Snap in its devices.
+
+"We believe Snaps address the security risks and manageability challenges associated with deploying and running multiple third party applications on a single IoT Gateway," said Jason Shepherd, Dell's Director of IoT Strategy and Partnerships. "This trusted and universal app format is essential for Dell, our IoT Solutions Partners and commercial customers to build a scalable, IT-ready, and vibrant ecosystem of IoT applications."
+
+It's simple, explained OpenWrt developer Matteo Croce. "Snaps deliver new applications to OpenWrt while leaving the core OS unchanged.... Snaps are a faster way to deliver a wider range of software to supported OpenWrt access points and routers."
+
+Shuttleworth doesn't see Snaps replacing existing Linux package systems such as [RPM][19] and [DEB][20]. Instead, he sees Snap as complementary to them. Snaps will sit alongside the native packages. Each distribution has its own mechanisms to provide and update the core operating system. What Snap brings to the table is universal apps that cannot interfere with the base operating system.
+
+Each Snap is confined using a range of kernel isolation and security mechanisms, tailored to the Snap application's needs. A careful review process ensures that snaps only receive the permissions they require to operate. Users will not have to make complex security decisions when installing a snap.
+
+Since Snaps are essentially self-contained zip files that can be quickly executed in place, "Snaps are much easier to create than traditional Linux packages, and allow us to evolve dependencies independent of the base operating system, so we can easily provide the very best and latest Chinese Linux apps to users across all distributions," explained Jack Yu, leader of the popular [Chinese Ubuntu Kylin][21] team.
+
+The snap format, designed by Canonical, is handled by [snapd][22]. Its development work is done on [GitHub][23]. Porting snapd to a wide range of Linux distributions has proven straightforward, and the community has grown to include contributors from a wide range of Linux backgrounds.
+
+Snap packages are created with the snapcraft tool. The home of the project is [snapcraft.io][24], which includes a tour and step-by-step guides to Snap creation, along with documentation for users and contributors to the project. Snaps can be built from existing distribution packages, but are more commonly built from source for optimization and size efficiency.
+
+Unless you're an Ubuntu power user or a serious Linux developer, you may not have heard of Snap. In the future, anyone who works with Linux on any platform will know the program. It's well on its way to becoming a major -- perhaps the most important of all -- Linux application installation and upgrade mechanism.
+
+#### Related Stories:
+
+- [Linux expert Matthew Garrett: Ubuntu 16.04's new Snap format is a security risk][25]
+- [Ubuntu Linux 16.04 is here][26]
+- [Microsoft and Canonical partner to bring Ubuntu to Windows 10][27]
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.zdnet.com/article/ubuntu-snap-takes-charge-of-linux-desktop-and-iot-software-distribution/
+
+作者:[Steven J. Vaughan-Nichols][a]
Vaughan-Nichols][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[28]: http://www.canonical.com/ +[29]: http://www.ubuntu.com/ +[30]: https://insights.ubuntu.com/2016/04/13/snaps-for-classic-ubuntu/ +[1]: https://insights.ubuntu.com/2016/04/13/snaps-for-classic-ubuntu/ +[2]: http://www.zdnet.com/article/linux-expert-matthew-garrett-ubuntu-16-04s-new-snap-format-is-a-security-risk/ +[3]: https://www.openssl.org/ +[4]: http://www.dell.com/en-us/ +[5]: http://www.samsung.com/us/ +[6]: http://www.mozilla.com/ +[7]: https://krita.org/en/ +[8]: https://mycroft.ai/ +[9]: http://www.horizon-computing.com/ +[10]: https://www.archlinux.org/ +[11]: https://www.debian.org/ +[12]: https://www.gentoo.org/ +[13]: https://openwrt.org/ +[14]: https://www.documentfoundation.org/ +[15]: https://www.libreoffice.org/download/libreoffice-fresh/ +[16]: https://www.mozilla.org/en-US/firefox/new/ +[17]: https://krita.org/en/about/krita-foundation/ +[18]: https://krita.org/en/item/krita-3-0-released/ +[19]: http://rpm5.org/ +[20]: https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html +[21]: http://www.ubuntu.com/desktop/ubuntu-kylin +[22]: https://launchpad.net/ubuntu/+source/snapd +[23]: https://github.com/snapcore/snapd +[24]: http://snapcraft.io/ +[25]: http://www.zdnet.com/article/linux-expert-matthew-garrett-ubuntu-16-04s-new-snap-format-is-a-security-risk/ +[26]: http://www.zdnet.com/article/ubuntu-linux-16-04-is-here/ +[27]: http://www.zdnet.com/article/microsoft-and-canonical-partner-to-bring-ubuntu-to-windows-10/ + + diff --git a/sources/talk/20160620 5 Best Linux Package Managers for Linux Newbies.md b/sources/talk/20160620 5 Best Linux Package Managers for Linux Newbies.md new file mode 100644 index 0000000000..a92c5265e5 --- /dev/null +++ b/sources/talk/20160620 5 Best Linux Package Managers for Linux Newbies.md @@ -0,0 +1,118 @@ +translating by ynmlml +5 Best Linux Package Managers for Linux Newbies +===================================================== + + +One thing a new Linux user will get to know as he/she progresses in using it is the existence of several Linux distributions and the different ways they manage packages. + +Package management is very important in Linux, and knowing how to use multiple package managers can prove life-saving for a power user, since downloading and installing software from repositories, updating packages, handling dependencies and uninstalling software are vital, everyday parts of Linux system administration. + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Best-Linux-Package-Managers.png) +>Best Linux Package Managers + +Therefore, to become a Linux power user, it is important to understand how the major Linux distributions actually handle packages, and in this article, we shall take a look at some of the best package managers you can find in Linux. + +Here, our main focus is on relevant information about some of the best package managers, but not how to use them; that is left for you to explore on your own. I will, however, provide useful links to usage guides and more. + +### 1. DPKG – Debian Package Management System + +Dpkg is the base package management system of the Debian Linux family; it is used to install, remove, store and provide information about `.deb` packages.
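As a quick illustration of what working with dpkg at this level looks like, the sketch below shows a few everyday operations (the package and file names are placeholders):

```
sudo dpkg -i hello_2.10-1_amd64.deb   # install a local .deb file
dpkg -l | grep hello                  # check whether a package is installed
dpkg -L hello                         # list the files a package put on disk
sudo dpkg -r hello                    # remove the package, keeping its config files
sudo dpkg -P hello                    # purge the package, config files included
```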
+ +It is a low-level tool, so there are front-end tools that help users obtain packages from remote repositories and handle complex package relations. These include: + +Don’t Miss: [15 Practical Examples of “dpkg commands” for Debian Based Distros][1] + +#### APT (Advanced Packaging Tool) + +APT is a very popular, free, powerful and useful command-line package management system that serves as a front end for the dpkg package management system. + +Users of Debian or its derivatives such as Ubuntu and Linux Mint should be familiar with this package management tool. + +To understand how it actually works, you can go over these how-to guides: + +Don’t Miss: [15 Examples of How to Use New Advanced Package Tool (APT) in Ubuntu/Debian][2] + +Don’t Miss: [25 Useful Basic Commands of APT-GET and APT-CACHE for Package Management][3] + +#### Aptitude Package Manager + +This is another popular command-line front-end package management tool for the Debian Linux family. It works similarly to APT, and there have been a lot of comparisons between the two, but above all, testing out both is the best way to understand which one actually works better for you. + +It was initially built for Debian and its derivatives, but now its functionality stretches to the RHEL family as well. You can refer to this guide for a better understanding of APT and Aptitude: + +Don’t Miss: [What is APT and Aptitude? and What’s real Difference Between Them?][4] + +#### Synaptic Package Manager + +Synaptic is a GUI package management tool for APT based on GTK+, and it works fine for users who may not want to get their hands dirty on a command line. It implements the same features as the apt-get command-line tool. + +### 2. RPM (Red Hat Package Manager) + +This is the Linux Standard Base packaging format and a base package management system created by Red Hat. Being the underlying system, there are several front-end package management tools that you can use with it, but we shall only look at the best of them: + +#### YUM (Yellowdog Updater, Modified) + +YUM is a popular, open-source command-line package manager that works as an interface to RPM for users. You can compare it to APT on Debian Linux systems, as it incorporates the common functionality that APT has. You can get a clear understanding of YUM with examples from this how-to guide: + +Don’t Miss: [20 Linux YUM Commands for Package Management][5] + +#### DNF – Dandified Yum + +DNF is another package manager for RPM-based distributions. Introduced in Fedora 18, it is the next-generation version of YUM. + +If you have been using Fedora 22 or later, you will have noticed that it is the default package manager there. Here are some links that will provide you more information about DNF and how to use it: + +Don’t Miss: [DNF – The Next Generation Package Management for RPM Based Distributions][6] + +Don’t Miss: [27 ‘DNF’ Commands Examples to Manage Fedora Package Management][7] + +### 3. Pacman Package Manager – Arch Linux + +Pacman is a popular and powerful yet simple package manager for Arch Linux and some lesser-known Linux distributions. It provides the fundamental functionality that other common package managers provide, including installing, automatic dependency resolution, upgrading, uninstalling and downgrading software. + +Above all, it is built to be simple, making package management easy for Arch users. You can read this [Pacman overview][8], which explains in detail some of the functions mentioned above. + +### 4.
Zypper Package Manager – openSUSE + +Zypper is the command-line package manager on openSUSE Linux. It makes use of the libzypp library, and its common functions include repository access, package installation, resolution of dependency issues and many more. + +Importantly, it can also handle repository extensions such as patterns, patches, and products. New openSUSE users can refer to the following guide to master it. + +Don’t Miss: [45 Zypper Commands to Master OpenSUSE Package Management][9] + +### 5. Portage Package Manager – Gentoo + +Portage is the package manager of Gentoo, a less popular Linux distribution as of now, though that doesn’t stop it from being one of the best package managers in Linux. + +The main aim of the Portage project is to build a simple and trouble-free package management system with features such as backwards compatibility, automation and many more. + +For a better understanding, try reading the [Portage project page][10]. + +### Concluding Remarks + +As I hinted at the beginning, the main purpose of this guide was to give Linux users a list of the best package managers; learning how to use them is a matter of following the links provided and testing them out. + +Users of the different Linux distributions will have to learn more on their own to better understand the different package managers mentioned above. + + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-package-managers/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 + +作者:[Ravi Saive][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/admin/ +[1]: http://www.tecmint.com/dpkg-command-examples/ +[2]: http://www.tecmint.com/apt-advanced-package-command-examples-in-ubuntu/ +[3]: http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/ +[4]: http://www.tecmint.com/difference-between-apt-and-aptitude/ +[5]: http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ +[6]: http://www.tecmint.com/dnf-next-generation-package-management-utility-for-linux/ +[7]: http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ +[8]: https://wiki.archlinux.org/index.php/Pacman +[9]: http://www.tecmint.com/zypper-commands-to-manage-suse-linux-package-management/ +[10]: https://wiki.gentoo.org/wiki/Project:Portage diff --git a/sources/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md b/sources/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md new file mode 100644 index 0000000000..1417c9fe7b --- /dev/null +++ b/sources/talk/20160620 Training vs. hiring to meet the IT needs of today and tomorrow.md @@ -0,0 +1,63 @@ +[Cathon is translating] +Training vs. hiring to meet the IT needs of today and tomorrow +================================================================ + +![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/cio_talent_4.png?itok=QLhyS_Xf) + +In the digital era, IT skills requirements are in a constant state of flux thanks to the constant change of the tools and technologies companies need to keep pace with. It’s not easy for companies to find and hire talent with coveted skills that will enable them to innovate.
Meanwhile, training internal staff to take on new skills and challenges takes time that is often in short supply. + +[Sandy Hill][1] is quite familiar with the various skills required across a variety of IT disciplines. As the director of IT for [Pegasystems][2], she is responsible for IT teams involved in areas ranging from application development to data center operations. What’s more, Pegasystems develops applications to help sales, marketing, service and operations teams streamline operations and connect with customers, which means she has to grasp the best way to use IT resources internally, and the IT challenges the company’s customers face. + +![](https://enterprisersproject.com/sites/default/files/CIO_Q%20and%20A_0.png) + +**The Enterprisers Project (TEP): How has the emphasis you put on training changed in recent years?** + +**Hill**: We’ve been growing exponentially over the past couple of years so now we’re implementing more global processes and procedures. With that comes the training aspect of making sure everybody is on the same page. + +Most of our focus has shifted to training staff on new products and tools that get implemented to drive innovation and enhance end user productivity. For example, we’ve implemented an asset management system; we didn’t have one before. So we had to do training globally instead of hiring someone who already knew the product. As we’re growing, we’re also trying to maintain a tight budget and flat headcount. So we’d rather internally train than try to hire new people. + +**TEP: Describe your approach to training. What are some of the ways you help employees evolve their skills?** + +**Hill**: I require each staff member to have a technical and non-technical training goal, which are tracked and reported on as part of their performance review. Their technical goal needs to align within their job function, and the non-technical goal can be anything from focusing on sharpening one of their soft skills to learning something outside of their area of expertise. I perform yearly staff evaluations to see where the gaps and shortages are so that teams remain well-rounded. + +**TEP: To what extent have your training initiatives helped quell recruitment and retention issues?** + +**Hill**: Keeping our staff excited about learning new technologies keeps their skill sets sharp. Having the staff know that we value them, and we are vested in their professional growth and development motivates them. + +**TEP: What sorts of training have you found to be most effective?** + +**Hill**: We use several different training methods that we’ve found to be effective. With new or special projects, we try to incorporate a training curriculum led by the vendor as part of the project rollout. If that’s not an option, we use off-site training. We also purchase on-line training packages, and I encourage my staff to attend at least one conference per year to keep up with what’s new in the industry. + +**TEP**: For what sorts of skills have you found it’s better to hire new people than train existing staff? + +**Hill**: It depends on the project. In one recent initiative, trying to implement OpenStack, we didn’t have internal expertise at all. So we aligned with a consulting firm that specialized in that area. We utilized their expertise on-site to help run the project and train internal team members. It was a massive undertaking to get internal people to learn the skills they needed while also doing their day-to-day jobs. 
The consultant helped us determine the headcount we needed to be proficient. This allowed us to assess our staff to see if gaps remained, which would require additional training or hiring. And we did end up hiring some of the contractors. But the alternative was to send some number of FTEs (full-time employees) for 6 to 8 weeks of training, and our pipeline of projects wouldn’t allow that. + +**TEP: In thinking about some of your most recent hires, what skills did they have that are especially attractive to you?** + +**Hill**: In recent hires, I’ve focused on soft skills. In addition to having solid technical skills, they need to be able to communicate effectively, work in teams and have the ability to persuade, negotiate and resolve conflicts. + +IT people in general kind of keep to themselves; they’re often not the most social people. Now, where IT is more integrated throughout the organization, the ability to give useful updates and status reports to other business units is critical to show that IT is an active presence and to be successful. + + + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/ + +作者:[Paul Desmond][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://enterprisersproject.com/user/paul-desmond +[1]: https://enterprisersproject.com/user/sandy-hill +[2]: https://www.pega.com/pega-can?&utm_source=google&utm_medium=cpc&utm_campaign=900.US.Evaluate&utm_term=pegasystems&gloc=9009726&utm_content=smAXuLA4U|pcrid|102822102849|pkw|pegasystems|pmt|e|pdv|c| + + + + + + diff --git a/sources/talk/20160625 Ubuntu’s Snap, Red Hat’s Flatpak And Is ‘One Fits All’ Linux Packages Useful.md b/sources/talk/20160625 Ubuntu’s Snap, Red Hat’s Flatpak And Is ‘One Fits All’ Linux Packages Useful.md new file mode 100644 index 0000000000..35088e4be1 --- /dev/null +++ b/sources/talk/20160625 Ubuntu’s Snap, Red Hat’s Flatpak And Is ‘One Fits All’ Linux Packages Useful.md @@ -0,0 +1,110 @@ +Ubuntu’s Snap, Red Hat’s Flatpak And Is ‘One Fits All’ Linux Packages Useful? +================================================================================= + +![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/06/Flatpak-and-Snap-Packages.jpg) + +An in-depth look into the new generation of packages starting to permeate the Linux ecosystem. + + +Lately we’ve been hearing more and more about Ubuntu’s Snap packages and Flatpak (formerly referred to as xdg-app), created by Red Hat employee Alexander Larsson. + +These two types of next-generation packages share, in essence, the same goal and characteristics: they are standalone packages that don’t rely on third-party system libraries in order to function. + +This new direction in which Linux seems to be headed naturally gives rise to questions such as: what are the advantages / disadvantages of standalone packages? Does this lead us to a better Linux overall? What are the motives behind it? + +To answer these questions and more, let us explore the things we know about Snap and Flatpak so far. + +### The Motive + +According to both [Flatpak][1] and [Snap][2] statements, the main motive behind them is to bring one and the same version of an application to run across multiple Linux distributions.
+ +>“From the very start its primary goal has been to allow the same application to run across a myriad of Linux distributions and operating systems.” Flatpak + +>“… ‘snap’ universal Linux package format, enabling a single binary package to work perfectly and securely on any Linux desktop, server, cloud or device.” Snap + +To be more specific, the guys behind Snap and Flatpak (S&F) believe that there’s a barrier of fragmentation on the Linux platform. + +A barrier which holds back the platform’s advancement by burdening developers with more, perhaps unnecessary, work to get their software running on the many distributions out there. + +Therefore, as leading Linux distributions (Ubuntu & Red Hat), they wish to eliminate the barrier and strengthen the platform in general. + +But what are the more personal gains which motivate the development of S&F? + +#### Personal Gains? + +Although not officially stated anywhere, it may be assumed that by leading the effort of creating a unified package format that could potentially be adopted by the vast majority of Linux distros (if not all of them), the captains of these projects could assume a key position in determining where the Linux ship sails. + +### The Advantages + +The benefits of standalone packages are diverse and can depend on different factors. + +Basically, however, these factors can be categorized under 2 distinct criteria: + +#### User Perspective + ++ From a Linux user’s point of view, Snap and Flatpak both bring the possibility of installing any package (software / app) on any distribution the user is using. + +That is, for instance, if you’re using a not so popular distribution which has only a scarce supply of packages available in its repo, probably due to workforce limitations, you’ll now be able to easily and significantly increase the number of packages available to you – which is a great thing. + ++ Also, users of popular distributions that do have many packages available in their repos will enjoy the ability to install packages that might not have worked with their current set of installed libraries. + +For example, a Debian user who wants to install a package from the ‘testing’ branch will not have to convert his entire system to ‘testing’ (in order for the package to run against newer libraries); rather, that user will simply be able to install only the package he wants, from whichever branch he likes, on whatever branch he’s on. + +The latter point was already basically possible for users who were compiling their packages straight from source; however, unless using a source-based distribution such as Gentoo, most users will see this as more hassle than it’s worth. + ++ The advanced user, or perhaps better put, the security-aware user, might feel more comfortable with this type of package as long as it comes from a reliable source, since such packages tend to provide another layer of isolation from system packages. + +* Both S&F are being developed with enhanced security in mind, which generally makes use of “sandboxing”, i.e. isolation, in order to prevent cases where they carry a virus which can infect the entire system, similar to the way .exe files on MS Windows may. (More on MS and S&F later) + +#### Developer Perspective + +For developers, the advantages of developing S&F packages will probably be a lot clearer than they are to the average user; some of these were already hinted at in a previous section of this post.
+ +Nonetheless, here they are: + ++ S&F will make it easier on devs who want to develop for more than one Linux distribution by unifying the process of development, thereby minimizing the amount of work a developer needs to do in order to get his app running on multiple distributions. + +++ Developers could therefore gain easier access to a wider range of distributions. + ++ S&F allow devs to privately distribute their packages without being dependent on distribution maintainers to stabilize their package for each and every distro. + +++ Through the above, devs may gain access to direct statistics of user adoption / engagement for their software. + +++ Also through the above, devs could get more directly involved with users, rather than having to do so through a middleman, in this case, the distribution. + +### The Downsides + +– Bloat. Simple as that. Flatpak and Snap aren’t just magic making dependencies evaporate into thin air. Rather, instead of relying on the target system to provide the required dependencies, S&F packages come with the dependencies prebuilt into them. + +As the saying goes “if the mountain won’t come to Muhammad, Muhammad must go to the mountain…” + +– Just as the security-aware user might enjoy the extra layer of isolation S&F packages provide, as long as they come from a trusted source, the less knowledgeable user, on the other hand, might be prone to the other side of the coin: using a package from an unknown source which may contain malicious software. + +The above point can be said to be valid even with today’s popular methods, as PPAs, overlays, etc. might also be maintained by untrusted sources. + +However, with S&F packages the risk increases, since malicious software developers need to create only one version of their program in order to infect a large number of distributions, whereas without it they’d need to create multiple versions in order to adjust their malware to other distributions. + +### Was Microsoft Right All Along? + +With all that’s mentioned above in mind, it’s pretty clear that for the most part, the advantages of using S&F packages outweigh the drawbacks. + +At least for users of binary-based distributions, or distros that aren’t focused on being lightweight. + +Which eventually led me to ask the question above – could it be that Microsoft was right all along? If so, and S&F becomes the Linux standard, would you still consider Linux a Unix-like variant? + +Well apparently, the best one to answer those questions is probably time. + +Nevertheless, I’d argue that even if not entirely right, MS certainly has a good point to their credit, and having all these methods available here on Linux out of the box is certainly a plus in my book.
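For readers who want to see what the two formats look like in day-to-day use, the commands below sketch installing the same application both ways (the application and remote names are illustrative placeholders, not actual published packages, and flatpak syntax has shifted between versions):

```
# Snap: applications come from the Ubuntu store by default
sudo snap install some-editor

# Flatpak: applications come from whichever remote repositories
# you have configured beforehand
flatpak install some-remote org.example.SomeEditor
```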
+ + +-------------------------------------------------------------------------------- + +via: http://www.iwillfolo.com/ubuntus-snap-red-hats-flatpack-and-is-one-fits-all-linux-packages-useful/ + +作者:[Editorials][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.iwillfolo.com/category/editorials/ diff --git a/sources/talk/20160627 Linux Applications That Works On All Distributions – Are They Any Good.md b/sources/talk/20160627 Linux Applications That Works On All Distributions – Are They Any Good.md new file mode 100644 index 0000000000..98967d866b --- /dev/null +++ b/sources/talk/20160627 Linux Applications That Works On All Distributions – Are They Any Good.md @@ -0,0 +1,90 @@ +Linux Applications That Works On All Distributions – Are They Any Good? +============================================================================ + +![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/06/Bundled-applications.jpg) + + +A revisit of the Linux community’s latest ambition – promoting decentralized applications in order to tackle distribution fragmentation. + +Following last week’s article: [Ubuntu’s Snap, Red Hat’s Flatpak And Is ‘One Fits All’ Linux Packages Useful][1]?, a couple of new opinions rose to the surface which may contain crucial information about the usefulness of such apps. + +### The Con Side + +Commenting on the subject [here][2], a [Gentoo][3] user who goes by the name Till raised a few points which hadn’t been fully addressed the last time we covered the issue. + +While previously we settled for merely calling it bloat, Till, on the other hand, dissects that bloat further so as to help us better understand both its components and its consequences. + +Referring to such apps as “bundled applications” – since the way they work on all distributions is by shipping dependencies together with the apps themselves – Till says: + +>“bundles ship a lot of software that now needs to be maintained by the application developer. If library X has a security problem and needs an update, you rely on every single applications to ship correct updates in order to make your system save.” + +Essentially, Till raises an important security point. However, it doesn’t necessarily have to be tied to security alone; it can also be linked to other aspects such as system maintenance, atomic updates, etc… + +Furthermore, if we take that notion one step further and assume that dependency developers might cooperate, releasing their software in step with the apps that use it (a utopian situation), we would still get an overall slowdown of the entire platform’s development. + +Another problem arising from the same point is that dependency transparency becomes obscure; that is, if you’d want to know which libraries are bundled with a certain app, you’ll have to rely on the developer to publish such data. + +Or, as Till puts it: “Questions like, did package XY already include the updated library Z, will be your daily bread”. + +For comparison, with the standard methods available on Linux nowadays (both binary and source distributions), you can easily notice which libraries are being updated upon a system update. + +And you can also rest assured that all other apps on the system will use them, freeing you from the need to check each app individually.
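To illustrate that transparency, on a traditionally packaged Debian-style system the shared dependencies of an application are plainly visible with stock tools (the package name here is just an example):

```
apt-cache depends vlc     # list the packages, libraries included, that vlc depends on
ldd /usr/bin/vlc | head   # show the shared libraries the binary actually loads
apt-get -s upgrade        # simulate an upgrade: one library update serves every app using it
```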
+ +Other cons that may be deduced from the term bloat include: bigger package size (each app is bundled with its dependencies), higher memory usage (no more library sharing) and also – + +One less filtering mechanism to prevent malicious software – distribution package maintainers also serve as a filter between developers and users, helping to assure that users get quality software. + +With bundled apps this may no longer be the case. + +As a finalizing general point, Till asserts that although useful in some cases, for the most part, bundled apps weaken the free software position in distributions (as proprietary vendors will now be able to deliver software without sharing it on public repositories). + +And apart from that, it introduces many other issues. Many problems are simply moved towards the developers. + +### The Pro Side + +In contrast, another comment, by a person named Sven, tries to contradict the common claims that go against the use of bundled applications, hence justifying and promoting their use. + +“waste of space” – Sven claims that in today’s world we have many other things that waste disk space, such as movies stored on the hard drive, installed locales, etc… + +Ultimately, these things are infinitely more wasteful than a mere “100 MB to run a program you use all day … Don’t be ridiculous.” + +“waste of RAM” – the major points in favor are: + +- Shared libraries waste significantly less RAM compared to application runtime data. +- RAM is cheap today. + +“security nightmare” – not every application you run is actually security-critical. + +Also, many applications never even see any security updates, unless on a ‘rolling distro’. + +In addition to Sven’s opinions, which try to stick to the pragmatic side, a few advantages were also pointed out by Till, who admits that bundled apps have their merits in certain cases: + +- Proprietary vendors who want to keep their code out of the public repositories will be able to do so more easily. +- Niche applications, which are not packaged by your distribution, will now be more readily available. +- Testing on binary distributions which do not have beta packages will become easier. +- Freeing users from solving dependency problems. + +### Final Thoughts + +Although it sheds new light onto the matter, it seems that one conclusion still stands and is accepted by all parties – bundled apps have a niche to fill in the Linux ecosystem. + +Nevertheless, the role that niche should take, whether a main or a marginal one, appears to be a lot clearer now, at least from a theoretical point of view. + +Users who are looking to make their systems as optimized as possible should, in the majority of cases, avoid using bundled apps. + +Users who are after ease of use, meaning doing the least work to maintain their systems, on the other hand, should and probably would feel very comfortable adopting the new method.
+ +-------------------------------------------------------------------------------- + +via: http://www.iwillfolo.com/linux-applications-that-works-on-all-distributions-are-they-any-good/ + +作者:[Editorials][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.iwillfolo.com/category/editorials/ +[1]: http://www.iwillfolo.com/ubuntus-snap-red-hats-flatpack-and-is-one-fits-all-linux-packages-useful/ +[2]: http://www.proli.net/2016/06/25/gnulinux-bundled-application-ramblings/ +[3]: http://www.iwillfolo.com/5-reasons-use-gentoo-linux/ diff --git a/sources/talk/20160627 Linux Practicality vs Activism.md b/sources/talk/20160627 Linux Practicality vs Activism.md new file mode 100644 index 0000000000..fc4f5eff26 --- /dev/null +++ b/sources/talk/20160627 Linux Practicality vs Activism.md @@ -0,0 +1,72 @@ +Linux Practicality vs Activism +================================== + +>Is Linux actually more practical than other OSes, or is there some higher-minded reason to use it? + +One of the greatest things about running Linux is the freedom it provides. Where the division among the Linux community appears is in how we value this freedom. + +For some, the freedom enjoyed by using Linux is the freedom from vendor lock-in or high software costs. Most would call this a practical consideration. Other users would tell you the freedom they enjoy is software freedom. This means embracing Linux distributions that support the [Free Software Movement][1] and avoiding proprietary software, and all things related to it, completely. + +In this article, I'll walk you through some of the differences between these two freedoms and how they affect Linux usage. + +### The problem with proprietary + +One thing most Linux users have in common is their preference for avoiding proprietary software. For practical enthusiasts like myself, it's a matter of how I spend my money, of controlling my software, and of avoiding vendor lock-in. Granted, I'm not a coder...so my tweaks to my installed software are pretty mild. But there are instances where a minor tweak to an application can mean the difference between it working and it not working. + +Then there are Linux enthusiasts who opt to avoid proprietary software because they feel it's unethical to use it. Usually the main concern here is that using proprietary software takes away, or simply obstructs, your personal freedom. Users in this corner prefer to use Linux distributions and software that support the [Free Software philosophy][2]. While it's similar to and often directly confused with Open Source concepts, [there are differences][3]. + +So here's the issue: Users such as myself tend to put convenience over the ideals of pure software freedom. Don't get me wrong, folks like me prefer to use software that meets the ideals behind Free Software, but we are also more likely to make concessions in order to accomplish specific tasks. + +Both types of Linux enthusiasts prefer using non-proprietary solutions. But Free Software advocates won't use proprietary software at all, whereas the practical user will rely on the best tool with the best performance. This means there are instances where the practical user is willing to run a proprietary application or code on their non-proprietary operating system. + +In the end, both user types enjoy using what Linux has to offer. But our reasons for doing so tend to vary.
Some have argued that this is a matter of ignorance with those who don't support Free Software. I disagree and believe it's a matter of practical convenience. Users who prefer practical convenience simply aren't concerned about the politics of their software. + +### Practical Convenience + +When you ask most people why they use the operating system they use, it's usually tied in with practical convenience. Examples of this convenience might include "it's what I've always used" down to "it runs the software I need." Other folks might take this a step further and explain it's not so much the software that drives their OS preference, as the familiarity of the OS in question. And finally, there are specialty "niche tasks" or hardware compatibility issues that also provide good reasons for using one OS over another. + +This might surprise many of you, but the single biggest reason I run desktop Linux today is due to familiarity. Even though I provide support for Windows and OS X for others, it's actually quite frustrating to use these operating systems as they're simply not what my muscle memory is used to. I like to believe this allows me to empathize with Linux newcomers, as I too know how off-putting it can be to step into the realm of the unfamiliar. My point here is this – familiarity has value. And familiarity also powers practical convenience as well. + +Now if we compare this to the needs of a Free Software advocate, you'll find those folks are willing to learn something new and perhaps even more challenging if it translates into them avoiding using non-free software. It's actually something I've always admired about this type of user. Their willingness to take the path less followed to stick to their principles is, in my opinion, admirable. + +### The price of freedom + +One area I don't envy is the extra work involved in making sure a Free Software advocate is always using Linux distros and hardware that respect their digital freedom according to the standards set forth by the [Free Software Foundation][4]. This means the Linux kernel needs to be free from proprietary blobs for driver support and the hardware in question doesn't require any proprietary code whatsoever. Certainly not impossible, but it's pretty close. + +The absolute best scenario a Free Software advocate can shoot for is hardware that is "freedom-compatible." There are vendors out there that can meet this need, however most of them are offering hardware that relies on Linux compatible proprietary firmware. Great for the practical user, a show-stopper for the Free Software advocate. + +What all of this translates into is that the advocate must be far more vigilant than the practical Linux enthusiast. This isn't necessarily a negative thing per se, however it's a consideration if one is planning on jumping onto the Free Software approach to computing. Practical users, by contrast, can use any software or hardware that happens to be Linux compatible without a second thought. I don't know about you, but in my eyes this seems a bit easier to me. + +### Defining software freedom + +This part is going to get some folks upset as I personally don't subscribe to the belief that there's only one flavor of software freedom. From where I stand, I think true freedom is being able to soak in all the available data on a given issue and then come to terms with the approach that best suits that person's lifestyle. + +So for me, I prefer using Linux distributions that provide me with the desktop that meets all of my needs. 
This includes the use of non-proprietary software and proprietary software. Even though it's fair to suggest that the proprietary software restricts my personal freedom, I must counter this by pointing out that I had the freedom to use it in the first place. One might even call this freedom of choice. + +Perhaps this too, is why I find myself identifying more with the ideals of Open Source Software instead of sticking with the ideals behind the Free Software movement. I prefer to stand with the group that doesn't spend their time telling me how I'm wrong for using what works best for me. It's been my experience that the Open Source crowd is merely interested in sharing the merits of software freedom without the passion for Free Software idealism. + +I think the concept of Free Software is great. And to those who need to be active in software politics and point out the flaws of using proprietary software to folks, then I think Linux ([GNU/Linux][5]) activism is a good fit. Where practical users such as myself tend to change course from Free Software Linux advocates is in our presentation. + +When I present Linux on the desktop, I share my passion for its practical merits. And if I'm successful and they enjoy the experience, I allow the user to discover the Free Software perspective on their own. I've found most people use Linux on their computers not because they want to embrace software freedom, rather because they simply want the best user experience possible. Perhaps I'm alone in this, it's hard to say. + +What say you? Are you a Free Software Advocate? Perhaps you're a fan of using proprietary software/code on your desktop Linux distribution? Hit the Comments and share your Linux desktop experiences. + + +-------------------------------------------------------------------------------- + +via: http://www.datamation.com/open-source/linux-practicality-vs-activism.html + +作者:[Matt Hartley][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.datamation.com/author/Matt-Hartley-3080.html +[1]: https://en.wikipedia.org/wiki/Free_software_movement +[2]: https://www.gnu.org/philosophy/free-sw.en.html +[3]: https://www.gnu.org/philosophy/free-software-for-freedom.en.html +[4]: https://en.wikipedia.org/wiki/Free_Software_Foundation +[5]: https://en.wikipedia.org/wiki/GNU/Linux_naming_controversy + + diff --git a/sources/talk/my-open-source-story/20160316 Growing a career alongside Linux.md b/sources/talk/my-open-source-story/20160316 Growing a career alongside Linux.md deleted file mode 100644 index 24799508c3..0000000000 --- a/sources/talk/my-open-source-story/20160316 Growing a career alongside Linux.md +++ /dev/null @@ -1,49 +0,0 @@ -Growing a career alongside Linux -================================== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/OPENHERE_blue.png?itok=3eqp-7gT) - -My Linux story started in 1998 and continues today. Back then, I worked for The Gap managing thousands of desktops running [OS/2][1] (and a few years later, [Warp 3.0][2]). As an OS/2 guy, I was really happy then. The desktops hummed along and it was quite easy to support thousands of users with the tools the GAP had built. Changes were coming, though. - -In November of 1998, I received an invitation to join a brand new startup which would focus on Linux in the enterprise. This startup became quite famous as [Linuxcare][2]. 
- -### My time at Linuxcare - -I had played with Linux a bit, but had never considered delivering it to enterprise customers. Mere months later (which is a turn of the corner in startup time and space), I was managing a line of business that let enterprises get their hardware, software, and even books certified on a few flavors of Linux that were popular back then. - -I supported customers like IBM, Dell, and HP in ensuring their hardware ran Linux successfully. You hear a lot now about preloading Linux on hardware today, but way back then I was invited to Dell to discuss getting a laptop certified to run Linux for an upcoming trade show. Very exciting times! We also supported IBM and HP on a number of certification efforts that spanned a few years. - -Linux was changing fast, much like it always has. It gained hardware support for more key devices like sound, network, graphics. At around that time, I shifted from RPM-based systems to [Debian][3] for my personal use. - -### Using Linux through the years - -Fast forward some years and I worked at a number of companies that did Linux as hardened appliances, Linux as custom software, and Linux in the data center. By the mid 2000s, I was busy doing consulting for that rather large software company in Redmond around some analysis and verification of Linux compared to their own solutions. My personal use had not changed though—I would still run Debian testing systems on anything I could. - -I really appreciated the flexibility of a distribution that floated and was forever updated. Debian is one of the most fun and well supported distributions and has the best community I've ever been a part of. - -When I look back at my own adoption of Linux, I remember with fondness the numerous Linux Expo trade shows in San Jose, San Francisco, Boston, and New York in the early and mid 2000's. At Linuxcare we always did fun and funky booths, and walking the show floor always resulted in getting re-acquainted with old friends. Rumors of work were always traded, and the entire thing underscored the fun of using Linux in real endeavors. - -The rise of virtualization and cloud has really made the use of Linux even more interesting. When I was with Linuxcare, we partnered with a small 30-person company in Palo Alto. We would drive to their offices and get things ready for a trade show that they would attend with us. Who would have ever known that little startup would become VMware? - -I have so many stories, and there were so many people I was so fortunate to meet and work with. Linux has evolved in so many ways and has become so important. And even with its increasing importance, Linux is still fun to use. I think its openness and the ability to modify it has contributed to a legion of new users, which always astounds me. - -### Today - -I've moved away from doing mainstream Linux things over the past five years. I manage large scale infrastructure projects that include a variety of OSs (both proprietary and open), but my heart has always been with Linux. - -The constant evolution and fun of using Linux has been a driving force for me for over the past 18 years. I started with the 2.0 Linux kernel and have watched it become what it is now. It's a remarkable thing. An organic thing. A cool thing. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/3/my-linux-story-michael-perry - -作者:[Michael Perry][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -[a]: https://opensource.com/users/mpmilestogo -[1]: https://en.wikipedia.org/wiki/OS/2 -[2]: https://archive.org/details/IBMOS2Warp3Collection -[3]: https://en.wikipedia.org/wiki/Linuxcare -[4]: https://www.debian.org/ -[5]: diff --git a/sources/tech/20151104 How to Setup Pfsense Firewall and Basic Configuration.md b/sources/tech/20151104 How to Setup Pfsense Firewall and Basic Configuration.md index 821937390a..f6e1b0aaab 100644 --- a/sources/tech/20151104 How to Setup Pfsense Firewall and Basic Configuration.md +++ b/sources/tech/20151104 How to Setup Pfsense Firewall and Basic Configuration.md @@ -1,3 +1,5 @@ +lujianbo + How to Setup Pfsense Firewall and Basic Configuration ================================================================================ In this article our focus is Pfsense setup, basic configuration and overview of features available in the security distribution of FreeBSD. In this tutorial we will run network wizard for basic setting of firewall and detailed overview of services. After the [installation process][1] following snapshot shows the IP addresses of WAN/LAN and different options for the management of Pfsense firewall. @@ -263,4 +265,4 @@ via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://linoxide.com/author/naveeda/ -[1]:http://linoxide.com/firewall/install-pfsense-firewall/ \ No newline at end of file +[1]:http://linoxide.com/firewall/install-pfsense-firewall/ diff --git a/sources/tech/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md b/sources/tech/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md deleted file mode 100644 index 36c28d25d6..0000000000 --- a/sources/tech/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md +++ /dev/null @@ -1,497 +0,0 @@ -Securi-Pi: Using the Raspberry Pi as a Secure Landing Point -================================================================================ - -Like many LJ readers these days, I've been leading a bit of a techno-nomadic lifestyle as of the past few years—jumping from network to network, access point to access point, as I bounce around the real world while maintaining my connection to the Internet and other networks I use on a daily basis. As of late, I've found that more and more networks are starting to block outbound ports like SMTP (port 25), SSH (port 22) and others. It becomes really frustrating when you drop into a local coffee house expecting to be able to fire up your SSH client and get a few things done, and you can't, because the network's blocking you. - -However, I have yet to run across a network that blocks HTTPS outbound (port 443). After a bit of fiddling with a Raspberry Pi 2 I have at home, I was able to get a nice clean solution that lets me hit various services on the Raspberry Pi via port 443—allowing me to walk around blocked ports and hobbled networks so I can do the things I need to do. In a nutshell, I have set up this Raspberry Pi to act as an OpenVPN endpoint, SSH endpoint and Apache server—with all these services listening on port 443 so networks with restrictive policies aren't an issue. 
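To preview where all of this is heading, the piece that lets one port serve three protocols is sslh, and its configuration ends up looking roughly like the sketch below (an illustration rather than the exact file built later in the article; it assumes the local web server has been moved to a loopback port such as 8443 so it doesn't collide with sslh on 443):

```
# /etc/default/sslh (sketch)
RUN=yes
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 \
             --ssh 127.0.0.1:22 \
             --openvpn 127.0.0.1:1194 \
             --ssl 127.0.0.1:8443 \
             --pidfile /var/run/sslh/sslh.pid"
```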
- -### Notes -This solution will work on most networks, but firewalls that do deep packet inspection on outbound traffic still can block traffic that's tunneled using this method. However, I haven't been on a network that does that...yet. Also, while I use a lot of cryptography-based solutions here (OpenVPN, HTTPS, SSH), I haven't done a strict security audit of this setup. DNS may leak information, for example, and there may be other things I haven't thought of. I'm not recommending this as a way to hide all your traffic—I just use this so that I can connect to the Internet in an unfettered way when I'm out and about. - -### Getting Started -Let's start off with what you need to put this solution together. I'm using this on a Raspberry Pi 2 at home, running the latest Raspbian, but this should work just fine on a Raspberry Pi Model B, as well. It fits within the 512MB of RAM footprint quite easily, although performance may be a bit slower, because the Raspberry Pi Model B has a single-core CPU as opposed to the Pi 2's quad-core. My Raspberry Pi 2 is behind my home's router/firewall, so I get the added benefit of being able to access my machines at home. This also means that any traffic I send to the Internet appears to come from my home router's IP address, so this isn't a solution designed to protect anonymity. If you don't have a Raspberry Pi, or don't want this running out of your home, it's entirely possible to run this out of a small cloud server too. Just make sure that the server's running Debian or Ubuntu, as these instructions are targeted at Debian-based distributions. - -![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1002061/11913f1.jpg) - -Figure 1. The Raspberry Pi, about to become an encrypted network endpoint. - -### Installing and Configuring BIND -Once you have your platform up and running—whether it's a Raspberry Pi or otherwise—next you're going to install BIND, the nameserver that powers a lot of the Internet. You're going to install BIND as a caching nameserver only, and not have it service incoming requests from the Internet. Installing BIND will give you a DNS server to point your OpenVPN clients at, once you get to the OpenVPN step. Installing BIND is easy; it's just a simple `apt-get `command to install it: - -``` -root@test:~# apt-get install bind9 -Reading package lists... Done -Building dependency tree -Reading state information... Done -The following extra packages will be installed: - bind9utils -Suggested packages: - bind9-doc resolvconf ufw -The following NEW packages will be installed: - bind9 bind9utils -0 upgraded, 2 newly installed, 0 to remove and - ↪0 not upgraded. -Need to get 490 kB of archives. -After this operation, 1,128 kB of additional disk - ↪space will be used. -Do you want to continue [Y/n]? y -``` - -There are a couple minor configuration changes that need to be made to one of the config files of BIND before it can operate as a caching nameserver. Both changes are in `/etc/bind/named.conf.options`. First, you're going to uncomment the "forwarders" section of this file, and you're going to add a nameserver on the Internet to which to forward requests. In this case, I'm going to add Google's DNS (8.8.8.8). The "forwarders" section of the file should look like this: - -``` -forwarders { - 8.8.8.8; -}; -``` - -The second change you're going to make allows queries from your internal network and localhost. 
Simply add this line to the bottom of the configuration file, right before the `}`; that ends the file: - -``` -allow-query { 192.168.1.0/24; 127.0.0.0/16; }; -``` - -That line above allows this DNS server to be queried from the network it's on (in this case, my network behind my firewall) and localhost. Next, you just need to restart BIND: - -``` -root@test:~# /etc/init.d/bind9 restart -[....] Stopping domain name service...: bind9waiting - ↪for pid 13209 to die -. ok -[ ok ] Starting domain name service...: bind9. -``` - -Now you can test `nslookup` to make sure your server works: - -``` -root@test:~# nslookup -> server localhost -Default server: localhost -Address: 127.0.0.1#53 -> www.google.com -Server: localhost -Address: 127.0.0.1#53 - -Non-authoritative answer: -Name: www.google.com -Address: 173.194.33.176 -Name: www.google.com -Address: 173.194.33.177 -Name: www.google.com -Address: 173.194.33.178 -Name: www.google.com -Address: 173.194.33.179 -Name: www.google.com -Address: 173.194.33.180 -``` - -That's it! You've got a working nameserver on this machine. Next, let's move on to OpenVPN. - -### Installing and Configuring OpenVPN - -OpenVPN is an open-source VPN solution that relies on SSL/TLS for its key exchange. It's also easy to install and get working under Linux. Configuration of OpenVPN can be a bit daunting, but you're not going to deviate from the default configuration by much. To start, you're going to run an apt-get command and install OpenVPN: - -``` -root@test:~# apt-get install openvpn -Reading package lists... Done -Building dependency tree -Reading state information... Done -The following extra packages will be installed: - liblzo2-2 libpkcs11-helper1 -Suggested packages: - resolvconf -The following NEW packages will be installed: - liblzo2-2 libpkcs11-helper1 openvpn -0 upgraded, 3 newly installed, 0 to remove and - ↪0 not upgraded. -Need to get 621 kB of archives. -After this operation, 1,489 kB of additional disk - ↪space will be used. -Do you want to continue [Y/n]? y -``` - -Now that OpenVPN is installed, you're going to configure it. OpenVPN is SSL-based, and it relies on both server and client certificates to work. To generate these certificates, you need to configure a Certificate Authority (CA) on the machine. Luckily, OpenVPN ships with some wrapper scripts known as "easy-rsa" that help to bootstrap this process. You'll start by making a directory on the filesystem for the easy-rsa scripts to reside in and by copying the scripts from the template directory there: - -``` -root@test:~# mkdir /etc/openvpn/easy-rsa -root@test:~# cp -rpv - ↪/usr/share/doc/openvpn/examples/easy-rsa/2.0/* - ↪/etc/openvpn/easy-rsa/ - ``` - -Next, copy the vars file to a backup copy: - -``` -root@test:/etc/openvpn/easy-rsa# cp vars vars.bak -``` - -Now, edit vars so it's got information pertinent to your installation. I'm going specify only the lines that need to be edited, with sample data, below: - -``` -KEY_SIZE=4096 -KEY_COUNTRY="US" -KEY_PROVINCE="CA" -KEY_CITY="Silicon Valley" -KEY_ORG="Linux Journal" -KEY_EMAIL="bill.childers@linuxjournal.com" -``` - -The next step is to source the vars file, so that the environment variables in the file are in your current environment: - -``` -root@test:/etc/openvpn/easy-rsa# source ./vars -NOTE: If you run ./clean-all, I will be doing a - ↪rm -rf on /etc/openvpn/easy-rsa/keys - ``` - -### Building the Certificate Authority - -You're now going to run clean-all to ensure a clean working environment, and then you're going to build the CA. 
Note that I'm changing changeme prompts to something that's appropriate for this installation: - -``` -root@test:/etc/openvpn/easy-rsa# ./clean-all -root@test:/etc/openvpn/easy-rsa# ./build-ca -Generating a 4096 bit RSA private key -...................................................++ -...................................................++ -writing new private key to 'ca.key' ------ -You are about to be asked to enter information that -will be incorporated into your certificate request. -What you are about to enter is what is called a -Distinguished Name or a DN. -There are quite a few fields but you can leave some -blank. For some fields there will be a default value, -If you enter '.', the field will be left blank. ------ -Country Name (2 letter code) [US]: -State or Province Name (full name) [CA]: -Locality Name (eg, city) [Silicon Valley]: -Organization Name (eg, company) [Linux Journal]: -Organizational Unit Name (eg, section) - ↪[changeme]:SecTeam -Common Name (eg, your name or your server's hostname) - ↪[changeme]:test.linuxjournal.com -Name [changeme]:test.linuxjournal.com -Email Address [bill.childers@linuxjournal.com]: -``` - -### Building the Server Certificate - -Once the CA is created, you need to build the OpenVPN server certificate: - -```root@test:/etc/openvpn/easy-rsa# - ↪./build-key-server test.linuxjournal.com -Generating a 4096 bit RSA private key -...................................................++ -writing new private key to 'test.linuxjournal.com.key' ------ -You are about to be asked to enter information that -will be incorporated into your certificate request. -What you are about to enter is what is called a -Distinguished Name or a DN. -There are quite a few fields but you can leave some -blank. For some fields there will be a default value, -If you enter '.', the field will be left blank. ------ -Country Name (2 letter code) [US]: -State or Province Name (full name) [CA]: -Locality Name (eg, city) [Silicon Valley]: -Organization Name (eg, company) [Linux Journal]: -Organizational Unit Name (eg, section) - ↪[changeme]:SecTeam -Common Name (eg, your name or your server's hostname) - ↪[test.linuxjournal.com]: -Name [changeme]:test.linuxjournal.com -Email Address [bill.childers@linuxjournal.com]: - -Please enter the following 'extra' attributes -to be sent with your certificate request -A challenge password []: -An optional company name []: -Using configuration from - ↪/etc/openvpn/easy-rsa/openssl-1.0.0.cnf -Check that the request matches the signature -Signature ok -The Subject's Distinguished Name is as follows -countryName :PRINTABLE:'US' -stateOrProvinceName :PRINTABLE:'CA' -localityName :PRINTABLE:'Silicon Valley' -organizationName :PRINTABLE:'Linux Journal' -organizationalUnitName:PRINTABLE:'SecTeam' -commonName :PRINTABLE:'test.linuxjournal.com' -name :PRINTABLE:'test.linuxjournal.com' -emailAddress - ↪:IA5STRING:'bill.childers@linuxjournal.com' -Certificate is to be certified until Sep 1 - ↪06:23:59 2025 GMT (3650 days) -Sign the certificate? [y/n]:y - -1 out of 1 certificate requests certified, commit? [y/n]y -Write out database with 1 new entries -Data Base Updated -``` - -The next step may take a while—building the Diffie-Hellman key for the OpenVPN server. This takes several minutes on a conventional desktop-grade CPU, but on the ARM processor of the Raspberry Pi, it can take much, much longer. 
-
-The next step may take a while—building the Diffie-Hellman key for the OpenVPN server. This takes several minutes on a conventional desktop-grade CPU, but on the ARM processor of the Raspberry Pi, it can take much, much longer. Have patience; as long as the dots in the terminal are proceeding, the system is building its Diffie-Hellman key (note that many dots are snipped in these examples):
-
-```
-root@test:/etc/openvpn/easy-rsa# ./build-dh
-Generating DH parameters, 4096 bit long safe prime, generator 2
-This is going to take a long time
-....................................................+
-```
-
-### Building the Client Certificate
-
-Now you're going to generate a client key for your client to use when logging in to the OpenVPN server. OpenVPN is typically configured for certificate-based auth, where the client presents a certificate that was issued by an approved Certificate Authority:
-
-```
-root@test:/etc/openvpn/easy-rsa# ./build-key bills-computer
-Generating a 4096 bit RSA private key
-...................................................++
-...................................................++
-writing new private key to 'bills-computer.key'
------
-You are about to be asked to enter information that
-will be incorporated into your certificate request.
-What you are about to enter is what is called a
-Distinguished Name or a DN. There are quite a few
-fields but you can leave some blank.
-For some fields there will be a default value,
-If you enter '.', the field will be left blank.
------
-Country Name (2 letter code) [US]:
-State or Province Name (full name) [CA]:
-Locality Name (eg, city) [Silicon Valley]:
-Organization Name (eg, company) [Linux Journal]:
-Organizational Unit Name (eg, section) [changeme]:SecTeam
-Common Name (eg, your name or your server's hostname) [bills-computer]:
-Name [changeme]:bills-computer
-Email Address [bill.childers@linuxjournal.com]:
-
-Please enter the following 'extra' attributes
-to be sent with your certificate request
-A challenge password []:
-An optional company name []:
-Using configuration from /etc/openvpn/easy-rsa/openssl-1.0.0.cnf
-Check that the request matches the signature
-Signature ok
-The Subject's Distinguished Name is as follows
-countryName :PRINTABLE:'US'
-stateOrProvinceName :PRINTABLE:'CA'
-localityName :PRINTABLE:'Silicon Valley'
-organizationName :PRINTABLE:'Linux Journal'
-organizationalUnitName:PRINTABLE:'SecTeam'
-commonName :PRINTABLE:'bills-computer'
-name :PRINTABLE:'bills-computer'
-emailAddress :IA5STRING:'bill.childers@linuxjournal.com'
-Certificate is to be certified until Sep 1 07:35:07 2025 GMT (3650 days)
-Sign the certificate? [y/n]:y
-
-1 out of 1 certificate requests certified, commit? [y/n]y
-Write out database with 1 new entries
-Data Base Updated
-root@test:/etc/openvpn/easy-rsa#
-```
-
-Now you're going to generate an HMAC code as a shared key to increase the security of the system further:
-
-```
-root@test:~# openvpn --genkey --secret /etc/openvpn/easy-rsa/keys/ta.key
-```
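-
-One detail worth spelling out about that shared key: the trailing number on OpenVPN's `tls-auth` directive is a key direction, and the server and client must use opposite values. Here's a minimal sketch of the two sides (the server path matches where ta.key was just generated; the client line assumes you copy ta.key alongside the client's own config):
-
-```
-# Server side (in server.conf, shown below): direction 0
-tls-auth easy-rsa/keys/ta.key 0
-
-# Client side (in the client's profile): direction 1
-tls-auth ta.key 1
-```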
-
-### Configuration of the Server
-
-Finally, you're going to get to the meat of configuring the OpenVPN server. You're going to create a new file, /etc/openvpn/server.conf, and stick to a default configuration for the most part. The main change is to set up OpenVPN to use TCP rather than UDP—this is needed for the next major step to work, because without OpenVPN using TCP for its network communication, you can't get things working on port 443. So, create the file and put the following configuration in it:
-
-```
-port 1194
-proto tcp
-dev tun
-ca easy-rsa/keys/ca.crt
-cert easy-rsa/keys/test.linuxjournal.com.crt ## or whatever your hostname was
-key easy-rsa/keys/test.linuxjournal.com.key ## Hostname key - This file should be kept secret
-management localhost 7505
-dh easy-rsa/keys/dh4096.pem
-tls-auth easy-rsa/keys/ta.key 0 ## Path matches where ta.key was generated above
-server 10.8.0.0 255.255.255.0 # The server will use this subnet for clients connecting to it
-ifconfig-pool-persist ipp.txt
-push "redirect-gateway def1 bypass-dhcp" # Forces clients to redirect all traffic through the VPN
-push "dhcp-option DNS 192.168.1.1" # Tells the client to use the DNS server at 192.168.1.1 - replace with the IP address of the OpenVPN machine and clients will use the BIND server set up earlier
-keepalive 30 240
-comp-lzo # Enable compression
-persist-key
-persist-tun
-status openvpn-status.log
-verb 3
-```
-
-And last, you're going to enable IP forwarding on the server, configure OpenVPN to start on boot, and start the OpenVPN service:
-
-```
-root@test:/etc/openvpn/easy-rsa/keys# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
-root@test:/etc/openvpn/easy-rsa/keys# sysctl -p /etc/sysctl.conf
-net.core.wmem_max = 12582912
-net.core.rmem_max = 12582912
-net.ipv4.tcp_rmem = 10240 87380 12582912
-net.ipv4.tcp_wmem = 10240 87380 12582912
-net.core.wmem_max = 12582912
-net.core.rmem_max = 12582912
-net.ipv4.tcp_rmem = 10240 87380 12582912
-net.ipv4.tcp_wmem = 10240 87380 12582912
-net.core.wmem_max = 12582912
-net.core.rmem_max = 12582912
-net.ipv4.tcp_rmem = 10240 87380 12582912
-net.ipv4.tcp_wmem = 10240 87380 12582912
-net.ipv4.ip_forward = 0
-net.ipv4.ip_forward = 1
-
-root@test:/etc/openvpn/easy-rsa/keys# update-rc.d openvpn defaults
-update-rc.d: using dependency based boot sequencing
-
-root@test:/etc/openvpn/easy-rsa/keys# /etc/init.d/openvpn start
-[ ok ] Starting virtual private network daemon:.
-```
-
-### Setting Up OpenVPN Clients
-
-Your client installation depends on the host OS of your client, but you'll need to copy the client certs and keys created above to your client, import those certificates and create a configuration for that client. Each client and client OS does it slightly differently, and documenting each one is beyond the scope of this article, so you'll need to refer to the documentation for that client to get it running. Refer to the Resources section for OpenVPN clients for each major OS.
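-
-As a concrete starting point, here is a minimal, hypothetical client profile for this setup. The filenames match the client certificate built above, but treat it as a sketch to adapt rather than a drop-in config. Note that it already points at port 443, where SSLH (installed in the next section) will be listening:
-
-```
-# bills-computer.ovpn -- an illustrative client profile
-client
-dev tun
-proto tcp                        # must match the server's proto
-remote test.linuxjournal.com 443 # SSLH will hand this off to OpenVPN
-resolv-retry infinite
-nobind
-persist-key
-persist-tun
-ca ca.crt
-cert bills-computer.crt
-key bills-computer.key
-tls-auth ta.key 1                # direction 1 on the client side
-comp-lzo                         # matches comp-lzo on the server
-verb 3
-```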
-
-### Installing SSLH—the "Magic" Protocol Multiplexer
-
-The really interesting piece of this solution is SSLH. SSLH is a protocol multiplexer—it listens on port 443 for traffic, and then it can analyze whether the incoming packet is an SSH packet, HTTPS or OpenVPN, and it can forward that packet onto the proper service. This is what enables this solution to bypass most port blocks—you use the HTTPS port for all of this traffic, since HTTPS is rarely blocked.
-
-To start, `apt-get` install SSLH:
-
-```
-root@test:/etc/openvpn/easy-rsa/keys# apt-get install sslh
-Reading package lists... Done
-Building dependency tree
-Reading state information... Done
-The following extra packages will be installed:
-  apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common
-  libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libconfig9
-Suggested packages:
-  apache2-doc apache2-suexec apache2-suexec-custom openbsd-inetd inet-superserver
-The following NEW packages will be installed:
-  apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common
-  libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libconfig9 sslh
-0 upgraded, 11 newly installed, 0 to remove and 0 not upgraded.
-Need to get 1,568 kB of archives.
-After this operation, 5,822 kB of additional disk space will be used.
-Do you want to continue [Y/n]? y
-```
-
-After SSLH is installed, the package installer will ask you if you want to run it in inetd or standalone mode. Select standalone mode, because you want SSLH to run as its own process. If you don't have Apache installed, the Debian/Raspbian package of SSLH will pull it in automatically, although it's not strictly required. If you already have Apache running and configured, you'll want to make sure it listens only on localhost's interface and not on all interfaces (otherwise, SSLH can't start because it can't bind to port 443). After installation, you'll receive an error that looks like this:
-
-```
-[....] Starting ssl/ssh multiplexer: sslhsslh disabled, please adjust the configuration to your needs
-[FAIL] and then set RUN to 'yes' in /etc/default/sslh to enable it. ... failed!
-failed!
-```
-
-This isn't an error, exactly—it's just SSLH telling you that it's not configured and can't start. Configuring SSLH is pretty simple. Its configuration is stored in `/etc/default/sslh`, and you just need to configure the `RUN` and `DAEMON_OPTS` variables. My SSLH configuration looks like this:
-
-```
-# Default options for sslh initscript
-# sourced by /etc/init.d/sslh
-
-# Disabled by default, to force yourself
-# to read the configuration:
-# - /usr/share/doc/sslh/README.Debian (quick start)
-# - /usr/share/doc/sslh/README, at "Configuration" section
-# - sslh(8) via "man sslh" for more configuration details.
-# Once configuration ready, you *must* set RUN to yes here
-# and try to start sslh (standalone mode only)
-
-RUN=yes
-
-# binary to use: forked (sslh) or single-thread (sslh-select) version
-DAEMON=/usr/sbin/sslh
-
-DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --openvpn 127.0.0.1:1194 --pidfile /var/run/sslh/sslh.pid"
-```
-
-Save the file and start SSLH:
-
-```
-root@test:/etc/openvpn/easy-rsa/keys# /etc/init.d/sslh start
-[ ok ] Starting ssl/ssh multiplexer: sslh.
-```
-
-Now you should be able to SSH to port 443 on your Raspberry Pi and have it forwarded via SSLH:
-
-```
-$ ssh -p 443 root@test.linuxjournal.com
-root@test:~#
-```
-
-SSLH is now listening on port 443 and can direct traffic to SSH, Apache or OpenVPN based on the type of packet that hits it. You should be ready to go!
-
-### Conclusion
-
-Now you can fire up OpenVPN and set your OpenVPN client configuration to port 443, and SSLH will route it to the OpenVPN server on port 1194. But because you're talking to your server on port 443, your VPN traffic won't get blocked. Now you can land at a strange coffee shop, in a strange town, and know that your Internet will just work when you fire up your OpenVPN client and point it at your Raspberry Pi. You'll also gain some encryption on your link, which will improve the privacy of your connection.
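-
-If you want one last sanity check before you hit the road, confirm from the Pi itself that sslh, not Apache, is the process bound to the public port 443. The `ss` invocation here is my own illustrative check (on older images, `netstat -tlnp` shows the same information):
-
-```
-root@test:~# ss -tlnp | grep ':443'
-# expect to see sslh listed as the process listening on 0.0.0.0:443
-```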
Enjoy surfing the Net via your new landing point! - -Resources - -Installing and Configuring OpenVPN: [https://wiki.debian.org/OpenVPN](https://wiki.debian.org/OpenVPN) and [http://cryptotap.com/articles/openvpn](http://cryptotap.com/articles/openvpn) - -OpenVPN client downloads: [https://openvpn.net/index.php/open-source/downloads.html](https://openvpn.net/index.php/open-source/downloads.html) - -OpenVPN Client for iOS: [https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8](https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8) - -OpenVPN Client for Android: [https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en](https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en) - -Tunnelblick for Mac OS X (OpenVPN client): [https://tunnelblick.net](https://tunnelblick.net) - -SSLH—Protocol Multiplexer: [http://www.rutschle.net/tech/sslh.shtml](http://www.rutschle.net/tech/sslh.shtml) and [https://github.com/yrutschle/sslh](https://github.com/yrutschle/sslh) - - ----------- -via: http://www.linuxjournal.com/content/securi-pi-using-raspberry-pi-secure-landing-point?page=0,0 - -作者:[Bill Childers][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxjournal.com/users/bill-childers - - diff --git a/sources/tech/20160104 What is good stock portfolio management software on Linux.md b/sources/tech/20160104 What is good stock portfolio management software on Linux.md index 258cf104fc..b24139b9c5 100644 --- a/sources/tech/20160104 What is good stock portfolio management software on Linux.md +++ b/sources/tech/20160104 What is good stock portfolio management software on Linux.md @@ -1,3 +1,4 @@ +Translating by ivo-wang What is good stock portfolio management software on Linux ================================================================================ If you are investing in the stock market, you probably understand the importance of a sound portfolio management plan. The goal of portfolio management is to come up with the best investment plan tailored for you, considering your risk tolerance, time horizon and financial goals. Given its importance, no wonder there are no shortage of commercial portfolio management apps and stock market monitoring software, each touting various sophisticated portfolio performance tracking and reporting capabilities. diff --git a/sources/tech/20160218 How to Set Nginx as Reverse Proxy on Centos7 CPanel.md b/sources/tech/20160218 How to Set Nginx as Reverse Proxy on Centos7 CPanel.md index dc21bc0b23..7e0c87cd80 100644 --- a/sources/tech/20160218 How to Set Nginx as Reverse Proxy on Centos7 CPanel.md +++ b/sources/tech/20160218 How to Set Nginx as Reverse Proxy on Centos7 CPanel.md @@ -1,4 +1,8 @@ -zky001开始翻译 + + +【flankershen翻译中】 + + How to Set Nginx as Reverse Proxy on Centos7 CPanel ================================================================================ @@ -24,9 +28,9 @@ First of all, we need to install the EPEL repo to start-up with the process. 
--> Running transaction check ---> Package epel-release.noarch 0:7-5 will be installed --> Finished Dependency Resolution - + Dependencies Resolved - + =============================================================================================================================================== Package Arch Version Repository Size =============================================================================================================================================== @@ -44,9 +48,9 @@ First of all, we need to install the EPEL repo to start-up with the process. --> Running transaction check ---> Package nDeploy-release-centos.noarch 0:1.0-1 will be installed --> Finished Dependency Resolution - + Dependencies Resolved - + =============================================================================================================================================== Package Arch Version Repository Size =============================================================================================================================================== @@ -63,9 +67,9 @@ First of all, we need to install the EPEL repo to start-up with the process. (1/4): ndeploy/7/x86_64/primary_db | 14 kB 00:00:00 (2/4): epel/x86_64/group_gz | 169 kB 00:00:00 (3/4): epel/x86_64/primary_db | 3.7 MB 00:00:02 - + Dependencies Resolved - + =============================================================================================================================================== Package Arch Version Repository Size =============================================================================================================================================== @@ -78,7 +82,7 @@ First of all, we need to install the EPEL repo to start-up with the process. memcached x86_64 1.4.15-9.el7 base 84 k python-inotify noarch 0.9.4-4.el7 base 49 k python-lxml x86_64 3.2.1-4.el7 base 758 k - + Transaction Summary =============================================================================================================================================== Install 2 Packages (+5 Dependent packages) @@ -89,7 +93,7 @@ With these steps, we've completed with the installation of Nginx plugin in our s root@server1 [/usr]# /opt/nDeploy/scripts/cpanel-nDeploy-setup.sh enable Modifying apache http and https port in cpanel - + httpd restarted successfully. Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service. Created symlink from /etc/systemd/system/multi-user.target.wants/ndeploy_watcher.service to /usr/lib/systemd/system/ndeploy_watcher.service. @@ -109,7 +113,7 @@ As you can see these script will modify the Apache port from 80 to another port Main PID: 24760 (httpd) CGroup: /system.slice/httpd.service ‣ 24760 /usr/local/apache/bin/httpd -k start - + Jan 18 06:34:23 server1.centos7-test.com systemd[1]: Starting Apache Web Server... Jan 18 06:34:23 server1.centos7-test.com apachectl[25606]: httpd (pid 24760) already running Jan 18 06:34:23 server1.centos7-test.com systemd[1]: Started Apache Web Server. @@ -127,7 +131,7 @@ As you can see these script will modify the Apache port from 80 to another port ├─25473 nginx: worker process ├─25474 nginx: worker process └─25475 nginx: cache manager process - + Jan 17 17:18:29 server1.centos7-test.com systemd[1]: Starting nginx-nDeploy - high performance web server... 
Jan 17 17:18:29 server1.centos7-test.com nginx[3804]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok Jan 17 17:18:29 server1.centos7-test.com nginx[3804]: nginx: configuration file /etc/nginx/nginx.conf test is successful @@ -159,14 +163,14 @@ The virtualhost entries created for the existing users as located in the folder listen 45.79.183.73:80; #CPIPVSIX:80; - + # ServerNames server_name saheetha.com www.saheetha.com; access_log /usr/local/apache/domlogs/saheetha.com main; access_log /usr/local/apache/domlogs/saheetha.com-bytes_log bytes_log; - + include /etc/nginx/sites-enabled/saheetha.com.include; - + } We can confirm the working of the web server status by calling a website in the browser. Please see the web server information on my server after the installation. @@ -200,4 +204,4 @@ via: http://linoxide.com/linux-how-to/set-nginx-reverse-proxy-centos-7-cpanel/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://linoxide.com/author/saheethas/ +[a]: http://linoxide.com/author/saheethas/ diff --git a/sources/tech/20160218 What do Linux developers think of Git and GitHub.md b/sources/tech/20160218 What do Linux developers think of Git and GitHub.md deleted file mode 100644 index b444b0958c..0000000000 --- a/sources/tech/20160218 What do Linux developers think of Git and GitHub.md +++ /dev/null @@ -1,95 +0,0 @@ -@4357 翻译中 - -What do Linux developers think of Git and GitHub? -===================================================== - -**Also in today’s open source roundup: DistroWatch reviews XStream Desktop 153, and Street Fighter V is coming to Linux and SteamOS in the spring** - -## What do Linux developers think of Git and GitHub? - -The popularity of Git and GitHub among Linux developers is well established. But what do developers think of them? And should GitHub really be synonymous with Git itself? A Linux redditor recently asked about this and got some very interesting answers. - -Dontwakemeup46 asked his question: - ->I am learning Git and Github. What I am interested in is how these two are viewed by the community. That git and github are used extensively, is something I know. But are there serious issues with either Git or Github? Something that the community would love to change? - -[More at Reddit](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580413015211&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=https%3A%2F%2Fwww.reddit.com%2Fr%2Flinux%2Fcomments%2F45jy59%2Fthe_popularity_of_git_and_github%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=More%20at%20Reddit) - -His fellow Linux redditors responded with their thoughts about Git and GitHub: - ->Derenir: ”Github is not affliated with Git. - ->Git is made by Linus Torvalds. - ->Github hardly supports Linux. - ->Github is a corporate bordelo that tries to make money from Git. 
- ->[https://desktop.github.com/](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580415025712&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&type=U&out=https%3A%2F%2Fdesktop.github.com%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=https%3A%2F%2Fdesktop.github.com%2F) see here no Linux Support.” - ->**Bilog78**: ”A minor update: git hasn't been “made by Linus Torvalds” for a while. The maintainer is Junio C Hamano and the main contributors after him are Jeff King and Shawn O. Pearce.” - ->**Fearthefuture**: ”I like git but can't understand why people even use github anymore. From my point of view the only thing it does better than bitbucket are user statistics and the larger userbase. Bitbucket has unlimited free private repos, much better UI and very good integration with other services such as Jenkins.” - ->**Thunger**: ”Gitlab.com is also nice, especially since you can host your own instance on your own servers.” - ->**Takluyver**: ”Lots of people are familiar with the UI of Github and associated services like Travis, and lots of people already have Github accounts, so it's a good place for projects to be. People also use their Github profile as a kind of portfolio, so they're motivated to put more projects on there. Github is a de facto standard for hosting open source projects.” - ->**Tdammers**: ”Serious issue with git would be the UI, which is kind of counterintuitive, to the point that many users just stick with a handful of memorized incantations. - -Github: most serious issue here is that it's a proprietary hosted solution; you buy convenience, and the price is that your code is on someone else's server and not under your control anymore. Another common criticism of github is that its workflow isn't in line with the spirit of git itself, particularly the way pull requests work. And finally, github is monopolizing the code hosting landscape, and that's bad for diversity, which in turn is crucial for a thriving free software community.” - ->**Dies**: ”How is that the case? More importantly, if that is the case, then what's done is done and I guess we're stuck with Github since they control so many projects.” - ->**Tdammers**: ”The code is hosted on someone else's server, "someone else" in this case being github. Which, for an open-source project, is not typically a huge problem, but still, you don't control it. If you have a private project on github, then the only assurance you have that it will remain private is github's word for it. If you decide to delete things, then you can never be sure whether it's been deleted, or just hidden. 
- -Github doesn't control the projects themselves (you can always take your code and host it elsewhere, declaring the new location the "official" one), it just has deeper access to the code than the developers themselves.” - ->**Drelos**: ”I have read a lot of praises and bad stuff about Github ([here's an example](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580428524613&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fwww.wired.com%2F2015%2F06%2Fproblem-putting-worlds-code-github%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=here%27s%20an%20example)) but my simple noob question is why aren't efforts towards a free and open "version" ?” - ->**Twizmwazin**: ”GitLab is sorta pushing there.” - -[More at Reddit](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580429720714&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=https%3A%2F%2Fwww.reddit.com%2Fr%2Flinux%2Fcomments%2F45jy59%2Fthe_popularity_of_git_and_github%2F&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=More%20at%20Reddit) - -## DistroWatch reviews XStream Desktop 153 - -XStreamOS is a version of Solaris created by Sonicle. XStream Desktop brings the power of Solaris to desktop users, and distrohoppers might be interested in checking it out. DistroWatch did a full review of XStream Desktop 153 and found that it performed fairly well. - -Jesse Smith reports for DistroWatch: - ->I think XStream Desktop does a lot of things well. Admittedly, my trial got off to a rocky start when the operating system would not boot on my hardware and I could not get the desktop to use my display's full screen resolution when running in VirtualBox. However, after that, XStream performed fairly well. The installer works well, the operating system automatically sets up and uses boot environments, insuring we can recover the system if something goes wrong. The package management tools work well and XStream ships with a useful collection of software. - ->I did run into a few problems playing media, specifically getting audio to work. I am not sure if that is another hardware compatibility issue or a problem with the media software that ships with the operating system. On the other hand, tools such as the web browser, e-mail, productivity suite and configuration tools all worked well. - ->What I appreciate about XStream the most is that the operating system is a branch of the OpenSolaris family that is being kept up to date. Other derivatives of OpenSolaris tend to lag behind, at least with desktop software, but XStream is still shipping recent versions of Firefox and LibreOffice. - ->For me personally, XStream is missing a few components, like a printer manager, multimedia support and drivers for my specific hardware. Other aspects of the operating system are quite attractive. 
I like the way the developers have set up LXDE, I like the default collection of software and I especially like the way file system snapshots and boot environments are enabled out of the box. Most Linux distributions, openSUSE aside, have not caught on to the usefulness of boot environments yet and I hope it is a technology that is picked up by more projects. - -[More at DistroWatch](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580434172315&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fdistrowatch.com%2Fweekly.php%3Fissue%3D20160215%23xstreamos&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=More%20at%20DistroWatch) - -## Street Fighter V and SteamOS - -Street Fighter is one of the most well known game franchises of all time, and now [Capcom has announced](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580435418216&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fsteamcommunity.com%2Fgames%2F310950%2Fannouncements%2Fdetail%2F857177755595160250&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=Capcom%20has%20announced) that Street Fighter V will be coming to Linux and SteamOS in the spring. This is great news for Linux gamers. - -Joe Parlock reports for Destructoid: - ->Are you one of the less than one percent of Steam users who play on a Linux-based system? Are you part of the even smaller percentage of people who play on Linux and are excited for Street Fighter V? Well, I’ve got some good news for you. - ->Capcom has announced via Steam that Street Fighter V will be coming to SteamOS and other Linux operating systems sometime this spring. It’ll come at no extra cost, so those who already own the PC build of the game will just be able to install it on Linux and be good to go. - -[More at Destructoid](http://api.viglink.com/api/click?format=go&jsonp=vglnk_145580435418216&key=0a7039c08493c7c51b759e3d13019dbe&libId=iksc5hc8010113at000DL3yrsuvp7&loc=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3033059%2Flinux%2Fwhat-do-linux-developers-think-of-git-and-github.html&v=1&out=http%3A%2F%2Fsteamcommunity.com%2Fgames%2F310950%2Fannouncements%2Fdetail%2F857177755595160250&ref=http%3A%2F%2Fwww.linux.com%2Fnews%2Fsoftware%2Fapplications%2F886008-what-do-linux-developers-think-of-git-and-github&title=What%20do%20Linux%20developers%20think%20of%20Git%20and%20GitHub%3F%20%7C%20InfoWorld&txt=Capcom%20has%20announced) - -Did you miss a roundup? Check the [Eye On Open home page](http://www.infoworld.com/blog/eye-on-open/) to get caught up with the latest news about open source and Linux. 
- ------------------------------------------------------------------------------- - -via: http://www.infoworld.com/article/3033059/linux/what-do-linux-developers-think-of-git-and-github.html - -作者:[Jim Lynch][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.infoworld.com/author/Jim-Lynch/ - diff --git a/sources/tech/20160222 Achieving Enterprise-Ready Container Tools With Wercker’s Open Source CLI.md b/sources/tech/20160222 Achieving Enterprise-Ready Container Tools With Wercker’s Open Source CLI.md deleted file mode 100644 index 81f7467719..0000000000 --- a/sources/tech/20160222 Achieving Enterprise-Ready Container Tools With Wercker’s Open Source CLI.md +++ /dev/null @@ -1,80 +0,0 @@ -Achieving Enterprise-Ready Container Tools With Wercker’s Open Source CLI -=========================================== - -For enterprises, containers offer more efficient build environments, cloud-native applications and migration from legacy systems to the cloud. But enterprise adoption of the technology -- Docker specifically -- has been hampered by, among other issues, [a lack of mature developer tools][1]. - -Amsterdam-based [Wercker][2] is one of many early-stage companies looking to meet the need for better tools with its cloud platform for automating microservices and application development, based on Docker. - -The company [announced a $4.5 million Series A][3] funding round this month, which will help it ramp up development on an upcoming on-premise enterprise product. Key to its success, however, will be building a community around its newly [open-sourced CLI][4] tool. Wercker must quickly integrate with myriad other container technologies -- open source Kubernetes and Mesos among them -- to remain competitive in the evolving container space. - -“By open sourcing our CLI technology, we hope to get to dev-prod parity faster and turn “build once, ship anywhere” into an automated reality,” said Wercker CEO and founder Micha Hernández van Leuffen. - -I reached out to van Leuffen to learn more about the company, its CLI tool, and how it’s planning to help grow the pool of enterprise customers actually using containers in production. Below is an edited version of the interview. - -### Linux.com: Can you briefly tell us about Wercker? - -van Leuffen: Wercker is a container-centric platform for automating the development of microservices and applications. - -With Wercker’s Docker-based infrastructure, teams can increase developer velocity with custom automation pipelines using steps that produce containers as artifacts. Once the build passes, users can continue to deploy the steps as specified in the wercker.yml. Continuously repeating these steps allows teams to work in small increments, making it easy to debug and ship faster. - -![](https://www.linux.com/images/stories/66866/wercker-cli.png) - -### Linux.com: How does it help developers? - -van Leuffen: The Wercker CLI helps developers attain greater dev-prod parity. They’re able to release faster and more often because they are developing, building and testing in an environment very similar to that in production. We’ve open sourced the exact same program that we execute in the Wercker cloud platform to run your pipelines. - -### Linux.com: Can you point out some of the features and advantages of your tool as compared to competitors? 
- -van Leuffen: Unlike some of our competitors, we’re not just offering Docker support. With Wercker, the Docker container is the unit of work. All jobs run inside containers, and each build artifact can be a Docker container. - -Wercker’s Docker container pipeline is completely customizable. A ‘pipeline’ refers to any automated workflow, for instance, a build or deploy pipeline. In those workflows, you want to execute tasks: install dependencies, test your code, push your container, or create a slack notification when something fails, for example. We call these tasks ‘steps,’ and there is no limit to the types of steps created. In fact, we have a marketplace of steps built by the Wercker community. So if you’ve built a step that fits my workflow, I can use that in my pipeline. - -Our Docker container pipelines adapt to any developer workflow. Users can use any Docker container out there — not just those made by or for Wercker. Whether the container is on Docker Hub or a private registry such as CoreOS’s Quay, it works with Wercker. - -Our competitors range from the classic CI/CD tools to larger-scale DevOps solutions like CloudBees. - -### Linux.com: How does it integrate with other cloud technologies? - -van Leuffen: Wercker is vendor-agnostic and can automate development with any cloud platform or service. We work closely with ecosystem partners like Mesosphere, Kubernetes and CoreOS to make integrations as seamless as possible. We also recently partnered with Atlassian to integrate the Wercker platform with Bitbucket. More than 3 million Bitbucket users can install the Wercker Pipeline Viewer and view build status directly from their dashboard. - -### Linux.com: Why did you open source the Wercker CLI tool? - -van Leuffen: Open sourcing the Wercker CLI will help us stay ahead of the curve and strengthen the developer community. The market landscape is changing fast; developers are expected to release more frequently, using infrastructure of increasing complexity. While Docker has solved a lot of infrastructure problems, developer teams are still looking for the perfect tools to test, build and deploy rapidly. - -The Wercker community is already experimenting with these new tools: Kubernetes, Mesosphere, CoreOS. It makes sense to tap that community to create integrations that work with our technology – and make that process as frictionless as possible. By open sourcing our CLI technology, we hope to get to dev-prod parity faster and turn “build once, ship anywhere” into an automated reality. - -### Linux.com: You recently raised over $4.5 million, so how is this fund being used for product development? - -van Leuffen: We’re focused on building out our commercial team and bringing an enterprise product to market. We’ve had a lot of inbound interest from the enterprise looking for VPC and on-premise solutions. While the enterprise is still largely in the discovery stage, we can see the market shifting toward containers. Enterprise software devs need to release often, just like the small, agile teams with whom they are increasingly competing. We need to prove containers can scale, and that Wercker has the organizational permissions and the automation suite to make that process as efficient as possible. - -In addition to continuing to invest in our product, we’ll be focusing our resources on market education and developer evangelism. Developer teams are still looking for the right mix of tools to test, build and deploy rapidly (including Kubernetes, Mesosphere, CoreOS, etc.). 
As an ecosystem, we need to do more to educate and provide the tutorials and resources to help developers succeed in this changing landscape. - -### Linux.com: What products do you offer and who is your target audience? - -van Leuffen: We currently offer one service level of our product Wercker; however, we’re developing an enterprise offering. Current organizations using Wercker range from startups, such as Open Listings, to larger companies and big agencies, like Pivotal Labs. - - -### Linux.com: What does this recently open-sourced CLI do? - -van Leuffen: Using the Wercker Command Line Interface (CLI), developers can spin up Docker containers on their desktop, automate their build and deploy processes and then deploy them to various cloud providers, like AWS, and scheduler and orchestration platforms, such as Mesosphere and Kubernetes. - -The Wercker Command Line Interface is available as an open source project on GitHub and runs on both OSX and Linux machines. - - --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/enterprise/systems-management/887177-achieving-enterprise-ready-container-tools-with-werckers-open-source-cli - -作者:[Swapnil Bhartiya][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/community/forums/person/61003 -[1]:http://thenewstack.io/adopting-containers-enterprise/ -[2]:http://wercker.com/ -[3]:http://venturebeat.com/2016/01/28/wercker-raises-4-5-million-open-sources-its-command-line-tool/ -[4]:https://github.com/wercker/wercker - - diff --git a/sources/tech/20160301 Viper, the Python IoT Development Suite, is now Zerynth.md b/sources/tech/20160301 Viper, the Python IoT Development Suite, is now Zerynth.md deleted file mode 100644 index f732382bc6..0000000000 --- a/sources/tech/20160301 Viper, the Python IoT Development Suite, is now Zerynth.md +++ /dev/null @@ -1,35 +0,0 @@ -(翻译中 by runningwater) -Viper, the Python IoT Development Suite, is now Zerynth -============================================================ - - -![](http://www.open-electronics.org/wp-content/uploads/2016/02/Logo_Zerynth-636x144.png) - - -The startup that launched the tools to develop embedded solutions in Python language announced the brand change along with the first official release. - ->Exactly one year after the Kickstarter launch of the suite for developing Internet of Things solutions in Python language, **Viper becomes Zerynth**. It is definitely a big day for the startup that created a radically new way to approach the world of microcontrollers and connected devices, making professionals and makers able to design interactive solutions with reduced efforts and shorter time. - ->“We really believe in the uniqueness of our tools, this is why they deserve an adequate recognition. Viper was a great name for a product, but other notable companies had the same feeling many decades ago, with the result that this term was shared with too many other actors out there. We are grown now, and ready to take off fast and light, like the design processes that our tools are enabling”, says the Viper (now Zerynth), co-founders. - ->**Thousands of users** developed amazing connected solutions in just 9 months of life in Beta version. Built to be cross-platform, Zerynth’s tools are meant for high-level design of Internet/cloud-connected devices, interactive objects, artistic installations. 
They are: **Zerynth Studio**, a browser-based IDE for programming embedded devices in Python with cloud sync and board management features; **Zerynth Virtual Machine**: a multithreaded real-time OS that provides real hardware independence allowing code reuse on the entire ARM architecture; **Zerynth App**, a general purpose interface that turns any mobile into the controller and display for smart objects and IoT systems.
-
->This modular set of tools, adaptable to different hardware and cloud infrastructures, can dramatically reduce the time to market and the overall development costs for makers, professionals and companies.
-
->Now Zerynth celebrates its new name launching the **first official release** of the toolkit. Check it here [www.zerynth.com][1]
-
-![](http://www.open-electronics.org/wp-content/uploads/2016/02/Zerynth-Press-Release_Studio-Img-768x432.png)
-
---------------------------------------------------------------------------------
-
-via: http://www.open-electronics.org/viper-the-python-iot-development-suite-is-now-zerynth/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+OpenElectronics+%28Open+Electronics%29
-
-作者:[Staff ][a]
-译者:[runningwater](https://github.com/runningwater)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.open-electronics.org/author/staff/
-[1]: http://www.zerynth.com/
-
-
diff --git a/sources/tech/20160303 Top 5 open source command shells for Linux.md b/sources/tech/20160303 Top 5 open source command shells for Linux.md
deleted file mode 100644
index 8705ad2981..0000000000
--- a/sources/tech/20160303 Top 5 open source command shells for Linux.md
+++ /dev/null
@@ -1,90 +0,0 @@
-翻译中;by ping
-Top 5 open source command shells for Linux
-===============================================
-
-keyword: shell , Linux , bash , zsh , fish , ksh , tcsh , license
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/terminal_blue_smoke_command_line_0.jpg?itok=u2mRRqOa)
-
-There are two kinds of Linux users: the cautious and the adventurous.
-
-On one side is the user who almost reflexively tries out every new option which hits the scene. They’ve tried handfuls of window managers, dozens of distributions, and every new desktop widget they can find.
-
-On the other side is the user who finds something they like and sticks with it. They tend to like their distribution’s defaults. If they’re passionate about a text editor, it’s whichever one they mastered first.
-
-As a Linux user, both on the server and the desktop, for going on fifteen years now, I am definitely more in the second category than the first. I have a tendency to use what’s presented to me, and I like the fact that this means more often than not I can find thorough documentation and examples of most any use case I can dream up. If I used something non-standard, the switch was carefully researched and often predicated by a strong pitch from someone I trust.
-
-But that doesn’t mean I don’t like to sometimes try and see what I’m missing. So recently, after years of using the bash shell without even giving it a thought, I decided to try out four alternative shells: ksh, tcsh, zsh, and fish. All four were easy installs from my default repositories in Fedora, and they’re likely already packaged for your distribution of choice as well.
-
-Here’s a little bit on each option and why you might choose it to be your next Linux command-line interpreter.
-
-### bash
-
-First, let’s take a look back at the familiar. [GNU Bash][1], the Bourne Again Shell, has been the default in pretty much every Linux distribution I’ve used through the years. Originally released in 1989, bash has grown to easily become the most used shell across the Linux world, and it is commonly found in other unix-like operating systems as well.
-
-Bash is a perfectly respectable shell, and as you look for documentation of how to do various things across the Internet, almost invariably you’ll find instructions which assume you are using a bash shell. But bash has some shortcomings, as anyone who has ever written a bash script that’s more than a few lines can attest to. It’s not that you can’t do something, it’s that it’s not always particularly intuitive (or at least elegant) to read and write. For some examples, see this list of [common bash pitfalls][2].
-
-That said, bash is probably here to stay for at least the near future, with its enormous install base and legions of both casual and professional system administrators who are already attuned to its usage, and quirks. The bash project is available under a [GPLv3][3] license.
-
-### ksh
-
-[KornShell][4], also known by its command invocation, ksh, is an alternative shell that grew out of Bell Labs in the 1980s, written by David Korn. While originally proprietary software, later versions were released under the [Eclipse Public License][5].
-
-Proponents of ksh list a number of ways in which they feel it is superior, including having a better loop syntax, cleaner exit codes from pipes, an easier way to repeat commands, and associative arrays. It's also capable of emulating many of the behaviors of vi or emacs, so if you are very partial to a text editor, it may be worth giving a try. Overall, I found it to be very similar to bash for basic input, although for advanced scripting it would surely be a different experience.
-
-### tcsh
-
-[Tcsh][6] is a derivative of csh, the Berkeley Unix C shell, and sports a very long lineage back to the early days of Unix and computing itself.
-
-The big selling point for tcsh is its scripting language, which should look very familiar to anyone who has programmed in C. Tcsh's scripting is loved by some and hated by others. But it has other features as well, including adding arguments to aliases, and various defaults that might appeal to your preferences, including the way autocompletion with tab and history tab completion work.
-
-You can find tcsh under a [BSD license][7].
-
-### zsh
-
-[Zsh][8] is another shell which has similarities to bash and ksh. Originating in the early 90s, zsh sports a number of useful features, including spelling correction, theming, namable directory shortcuts, sharing your command history across multiple terminals, and various other slight tweaks from the original Bourne shell.
-
-The code and binaries for zsh can be distributed under an MIT-like license, though portions are under the GPL; check the [actual license][9] for details.
-
-### fish
-
-I knew I was going to like the Friendly Interactive Shell, [fish][10], when I visited the website and found it described tongue-in-cheek with "Finally, a command line shell for the 90s"—fish was written in 2005.
-
-The authors of fish offer a number of reasons to make the switch, all invoking a bit of humor and poking a bit of fun at shells that don't quite live up. Features include autosuggestions ("Watch out, Netscape Navigator 4.0"), support of the "astonishing" 256 color palette of VGA, but some actually quite helpful features as well including command completion based on the man pages on your machine, clean scripting, and a web-based configuration.
-
-Fish is licensed primarily under the GPL version 2 but with portions under other licenses; check the repository for [complete information][11].
-
-***
-
-Looking for a more detailed rundown on the precise differences between each option? [This site][12] ought to help you out.
-
-So where did I land? Well, ultimately, I’m probably going back to bash, because the differences were subtle enough that someone who mostly used the command line interactively as opposed to writing advanced scripts really wouldn't benefit much from the switch, and I'm already pretty comfortable in bash.
-
-But I’m glad I decided to come out of my shell (ha!) and try some new options. And I know there are many, many others out there. Which shells have you tried, and which one do you prefer? Let us know in the comments!
-
-
-
-
-via: https://opensource.com/business/16/3/top-linux-shells
-
-作者:[Jason Baker][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/jason-baker
-
-[1]: https://www.gnu.org/software/bash/
-[2]: http://mywiki.wooledge.org/BashPitfalls
-[3]: http://www.gnu.org/licenses/gpl.html
-[4]: http://www.kornshell.org/
-[5]: https://www.eclipse.org/legal/epl-v10.html
-[6]: http://www.tcsh.org/Welcome
-[7]: https://en.wikipedia.org/wiki/BSD_licenses
-[8]: http://www.zsh.org/
-[9]: https://sourceforge.net/p/zsh/code/ci/master/tree/LICENCE
-[10]: https://fishshell.com/
-[11]: https://github.com/fish-shell/fish-shell/blob/master/COPYING
-[12]: http://hyperpolyglot.org/unix-shells
-
diff --git a/sources/tech/20160314 15 podcasts for FOSS fans.md b/sources/tech/20160314 15 podcasts for FOSS fans.md
deleted file mode 100644
index eae53102ad..0000000000
--- a/sources/tech/20160314 15 podcasts for FOSS fans.md
+++ /dev/null
@@ -1,80 +0,0 @@
-zpl1025
-15 podcasts for FOSS fans
-=============================
-
-keyword : FOSS , podcast
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/oss_podcasts.png?itok=3KwxsunX)
-
-I listen to a lot of podcasts. A lot. On my phone's podcatcher, I am subscribed to around 60 podcasts... and I think that only eight of those have podfaded (died). Unsurprisingly, a fairly sizeable proportion of those remaining alive-and-well subscriptions are shows with a specific interest or relevance to open source software. As I seek to resurrect my own comatose podcast from the nebulous realm of podfadery, I thought it would be great for us as a community to share what we're listening to.
-
->Quick digression: I understand that there are a lot of "pod"-prefixed words in that first paragraph. Furthermore, I also know that the term itself is related to a proprietary device that, by most accounts, isn't even used for listening to these web-based audio broadcasts. However, the term 'webcast' died in the nineties and 'oggcast' never gathered a substantial foothold among the listening public. As such, in order to ensure that the most people actually know what I'm referring to, I'm essentially forced to use the web-anachronistic, but publicly recognized term, podcast.
-I should also mention that a number of these shows involve grown-ups using grown-up language (i.e. swearing). I've tried to indicate which shows these are by putting a red E next to their names, but please do your own due diligence if you're concerned about listening to these shows at work or with children around.
-
-The following lists are podcasts that I keep in heavy rotation (each sublist is listed in alphabetical order). In the first list are the ones I think of as my "general coverage" shows. They tend to either discuss general topics related to free and open source software, or they give a survey of multiple open source projects from one episode to the next.
-
-- [Bad Voltage][1] E — Regular contributor and community moderator here on Opensource.com, Jono Bacon, shares hosting duties on this podcast with Jeremy Garcia, Stuart Langridge, and Bryan Lunduke, four friends with a variety of diverging and intersecting opinions. That's the most interesting part of the show for me. Of course, they also do product reviews and cover timely news relevant to free and open source software, but it's the banter that I stick around for.
-
-- [FLOSS Weekly][2] — The Twit network of podcasts is a long-time standby in technology broadcasts. Hosted by Randal Schwartz, FLOSS Weekly focuses on covering one open source project each week, typically by interviewing someone relevant in the development of that project. It's a really good show for getting exposed to new open source tools... or learning more about the programs you're already familiar with.
-
-- [Free as in Freedom][3] — Hosted by Bradley Kuhn and Karen Sandler, this show has a specific focus on legal and policy matters as it relates to both specific free and open source projects, as well as open culture in general. The show seems to have gone on a bit of a hiatus since its last episode in November of 2015, but I for one am immensely hopeful that Free as in Freedom emerges victoriously from its battle with being podfaded and returns to its regular bi-weekly schedule.
-
-- [GNU World Order][4] — I think that this show can be best described as a free and open source variety show. Solo host, Klaatu, spends the majority of each show going in-depth at nearly tutorial level with a whole range of specific software tools and workflows. It's a really friendly way to get an open source neophyte up to speed with everything from understanding SSH to playing with digital painting and video. And there's a video component to the show, too, which certainly helps make some of these topics easier to follow.
-
-- [Hacker Public Radio][5] — This is just a well-executed version of a fantastic concept. Hacker Public Radio (HPR) is a community-run daily (well, working-week daily) podcast with a focus on "anything of interest to hackers." Sure there are wide swings in audio quality from show to show, but it's an open platform where anyone can share what they know (or what they think) in that topic space. Show topics include 3D printing, hardware hacking, conference interviews, and more. There are even long-running tutorial series and an audio book club. The monthly recap episodes are particularly useful if you're having trouble picking a place to start. And best of all, you can record your own episode and add it to the schedule. In fact, they actively encourage it.
-
-My next list of open source podcasts is a bit more specific to particular topics or software packages in the free and open source ecosystem.
-
-- [Blender Podcast][6] — Although this podcast is very specific to one particular application—Blender, in case you couldn't guess—many of the topics are relevant to issues faced by users and developers of other open source software programs. Hosts Thomas Dinges and Campbell Barton—both on the core development team for Blender—discuss the latest happenings in the Blender community, sometimes with a guest. The release schedule is a bit sporadic, but one of the things I really like about this particular show is the fact that they talk about both user issues and developer issues... and the various intersections of the two. It's a great way for each part of the community to gain insight from the other.
-
-- [Sunday Morning Linux Review][7] — As its name indicates, SMLR offers a weekly review of topics relevant to Linux. Since around the end of last year, the show has seen a bit of a restructuring. However, that has not detracted from its quality. Tony Bemus, Mary Tomich, and Tom Lawrence deliver a lot of good information, and you can catch them recording their shows live through their website (if you happen to have free time on your Sundays).
-
-- [LinuxLUGcast][8] — The LinuxLUGcast is a community podcast that's really a recording of an online Linux Users Group (LUG) that meets on the first and third Friday of each month. The group meets (and records) via Mumble and discussions range from home builds with single-board computers like the Raspberry Pi to getting help with trying out a new distro. The LUG is open to everyone, but there is a rotating cast of regulars who've made themselves (and their IRC handles) recognizable fixtures on the show. (Full disclosure: I'm a regular on this one)
-
-- [The Open EdTech Podcast][9] — Thaj Sara's Open EdTech Podcast is a fairly new show that so far only has three episodes. However, since there's a really sizeable community of open source users in the field of education (both in teaching and in IT), this show serves an important and underserved segment of our community. I've spoken with Thaj via email and he assures me that new episodes are in the pipe. He just needs to set aside the time to edit them.
-
-- [The Linux Action Show][10] — It would be remiss of me to make a list of open source podcasts and not mention one of the stalwart fixtures in the space: The Linux Action Show. Chris Fisher and Noah Chelliah discuss current news as it pertains to Linux and open source topics while at the same time giving feature attention to specific projects or their own experiences using various open source tools.
-
-This next section is what I'm going to term my "honorable mention" section. These shows are either new or have a more tangential focus on open source software and culture. In any case, I still think readers of Opensource.com would enjoy listening to these shows.
-
-- [Blender Institute Podcast][11] — The Blender Institute—the more commercial creative production spin-off from the Blender Foundation—started hosting their own weekly podcast a few months ago. In the show, artists (and now a developer!) working at the Institute discuss the open content projects they're working on, answer questions about using Blender, and give great insight into how things go (or occasionally don't go) in their day-to-day work.
-
-- [Geek News Radio][12] E — There was a tangible sense of loss about a year ago when the hosts of Linux Outlaws hung up their mics. Well good news! A new show has sprung from its ashes. In episodes of Geek News Radio, Fab Scherschel and Dave Nicholas have a wider focus than Linux Outlaws did. Rather than being an actual news podcast, it's more akin to an informal discussion among friends about video games, movies, technology, and open source (of course).
In episodes of Geek News Radio, Fab Scherschel and Dave Nicholas have a wider focus than Linux Outlaws did. Rather than being an actual news podcast, it's more akin to an informal discussion among friends about video games, movies, technology, and open source (of course). - -- [Geekrant][13] — Formerly known as the Everyday Linux Podcast, this show was rebranded at the start of the year to reflect the kind of content that the hosts Mark Cockrell, Seth Anderson, and Chris Neves were already discussing. They do discuss open source software and culture, but they also give their own spin and opinions on topics of interest in general geek culture. Topics have a range that includes everything from popular media to network security. (P.S. Opensource.com content manager Jen Wike Huger was a guest on Episode 164.) - -- [Open Source Creative][14] E — In case you haven't read my little bio blurb, I also have my own podcast. In this show, I talk about news and topics that are [hopefully] of interest to artists and creatives who use free and open source tools. I record it during my work commute, so episode length varies with traffic, and I haven't quite figured out a good way to do interviews safely, but if you listen while you're on your way to work, it'll be like we're carpooling. The show has been on a bit of a hiatus for almost a year, but I've committed to making sure it comes back... and soon. - -- [Still Untitled][15] E — As you may have noticed from most of the selections on this list, I tend to lean toward the indie side of the spectrum, preferring to listen to shows by people with less of a "name." That said, this show really hits a good place for me. Hosts Adam Savage, Norman Chan, and Will Smith talk about all manner of interesting and geeky things. From Adam's adventures with Mythbusters to maker builds and book reviews, there's rarely ever a show that hasn't been fun for me to listen to. - -So there you go! I'm always looking for more interesting shows to listen to on my commute (as I'm sure many others are). What suggestions or recommendations do you have? - - - -------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/3/open-source-podcasts - -作者:[Jason van Gumster][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jason-van-gumster -[1]: http://badvoltage.org/ -[2]: https://twit.tv/shows/floss-weekly -[3]: http://faif.us/ -[4]: http://gnuworldorder.info/ -[5]: http://hackerpublicradio.org/ -[6]: https://blender-podcast.org/ -[7]: http://smlr.us/ -[8]: http://linuxlugcast.com/ -[9]: http://openedtechpodcast.com/ -[10]: http://www.jupiterbroadcasting.com/tag/linux-action-show/ -[11]: http://podcast.blender.institute/ -[12]: http://sixgun.org/geeknewsradio -[13]: http://elementopie.com/geekrant-episodes -[14]: http://monsterjavaguns.com/podcast -[15]: http://www.tested.com/still-untitled-the-adam-savage-project/ diff --git a/sources/tech/20160314 Healthy Open Source.md b/sources/tech/20160314 Healthy Open Source.md deleted file mode 100644 index 57dd559284..0000000000 --- a/sources/tech/20160314 Healthy Open Source.md +++ /dev/null @@ -1,213 +0,0 @@ -Translating by yuba0604 -Healthy Open Source -============================ - -keyword: Node.js , opensource , project management , software - -*A walkthrough of the Node.js Foundation’s base contribution policy*.
- -A lot has changed since io.js and Node.js merged under the Node.js Foundation. The most impressive change, and probably the change that is most relevant to the rest of the community and to open source in general, is the growth in contributors and committers to the project. - -A few years ago, Node.js had just a few committers (contributors with write access to the repository in order to merge code and triage bugs). The maintenance overhead for the few committers on Node.js Core was overwhelming and the project began to see a decline in both committers and outside contribution. This resulted in a corresponding decline in releases. - -Today, the Node.js project is divided into many components with a full org size of well over 400 members. Node.js Core now has over 50 committers and over 100 contributors per month. - -Through this growth we've found many tools that help scale the human infrastructure around an Open Source project. We also identified a few core values we believe are fundamental to modern Open Source: transparency, participation, and efficacy. As we continue to scale the way we do Open Source, we try to find a balance of these values and adapt the practices we find helpful to fit the needs of each component of the Node.js project. - -Now that Node.js is in a good place, the foundation is looking to promote this kind of sustainability in the ecosystem. Part of this is a new umbrella for additional projects to enter the foundation, of which [Express was recently admitted][1], and the creation of this new contribution policy. - -This contribution policy is not universal. It's meant as a starting point. Additions and alterations to this policy are encouraged so that the process used by each project fits its needs and can continue to change shape as the project grows and faces new challenges. - -The [current version][2] is hosted in the Node.js Foundation. We expect to iterate on this over time and encourage people to [log issues][3] with questions and feedback regarding the policy for future iterations. - -This document describes a very simple process suitable for most projects in the Node.js ecosystem. Projects are encouraged to adopt this whether they are hosted in the Node.js Foundation or not. - -The Node.js project is organized into over a hundred repositories and a few dozen Working Groups. There are large variations in contribution policy between many of these components because each one has different constraints. This document is a minimalist version of the processes and philosophy we've found works best everywhere. - -We believe that contributors should own their projects, and that includes contribution policies like this. While new foundation projects start with this policy, we expect many of them to alter it or possibly diverge from it entirely to suit their own specific needs. - -The goal of this document is to create a contribution process that: - -* Encourages new contributions. - -* Encourages contributors to remain involved. - -* Avoids unnecessary processes and bureaucracy whenever possible. - -* Creates a transparent decision making process which makes it clear how contributors can be involved in decision making. - -Most contribution processes are created by maintainers who feel overwhelmed by outside contributions. These documents have traditionally been about processes that make life easier for a small group of maintainers, often at the cost of attracting new contributors. - -We've gone the opposite direction.
The purpose of this policy is to gain contributors, to retain them as much as possible, and to use a much larger and growing contributor base to manage the corresponding influx of contributions. - -As projects mature, there's a tendency to become top heavy and overly hierarchical as a means of quality control, and this is enforced through process. We use process instead to add transparency that encourages participation, which grows the code review pool, which leads to better quality control. - -This document is based on much prior art in the Node.js community, io.js, and the Node.js project. - -This document is based on what we've learned growing the Node.js project. Not just the core project, which has been a massive undertaking, but also much smaller sub-projects like the website, which have very different needs and, as a result, very different processes. - -When we began these reforms in the Node.js project, we were taking a lot of inspiration from the broader Node.js ecosystem. In particular, Rod Vagg's [OPEN Open Source policy][4]. Rod's work in levelup and nan is the basis for what we now call "liberal contribution policies." - -### Vocabulary - -* A **Contributor** is any individual creating or commenting on an issue or pull request. - -* A **Committer** is a subset of contributors who have been given write access to the repository. - -* A **TC (Technical Committee)** is a group of committers representing the required technical expertise to resolve rare disputes. - -Every person who shows up to comment on an issue or submit code is a member of a project's community. Just being able to see them means that they have crossed the line from being a user to being a contributor. - -Typically, open source projects have had a single distinction for those that have write access to the repository and those empowered with decision making. We've found this to be inadequate and have separated this into two distinctions which we'll dive into more a bit later. - -![](https://www.linux.com/images/stories/66866/healthy_1.png) - -Looking at the community in and around a project as a bunch of concentric circles helps to visualize this. - -In the outermost circle are users; a subset of those users are contributors; a subset of contributors become committers who can merge code and triage issues. Finally, there is a smaller group of trusted experts who only get pulled in for the hard problems and can act as a tie-breaker in disputes. - -This is what a healthy project should look like. As the demands on the project from increased users rise, so do the contributors, and as contributors increase, more are converted into committers. As the committer base grows, more of them rise to the level of expertise where they should be involved in higher level decision making. - -![](https://www.linux.com/images/stories/66866/healthy-2.png) - -If these groups don't grow in proportion to each other, they can't carry the load imposed on them by outward growth. A project's ability to convert people from each of these groups is the only way it can stay healthy if its user base is growing. - -This is what unhealthy projects look like in their earliest stages of dysfunction, but imagine that the committers bubble is so small you can't actually read the word "committers" in it, and imagine this is a logarithmic scale. - -A massive user base is pushing a lot of contributions onto a very small number of maintainers. - -This is when maintainers build processes and barriers to new contributions as a means to manage the workload.
Often the problems the project is facing will be attributed to the tools the project is using, especially GitHub. - -In Node.js we had all the same problems, resolved them without a change in tooling, and today manage a growing workload much larger than most projects, and GitHub has not been a bottleneck. - -We know what happens to unhealthy projects over a long enough time period: more maintainers leave, contributions eventually fall, and **if we're lucky** users leave it. When we aren't so lucky, adoption continues and years later we're plagued with security and stability issues in widely adopted software that can't be effectively maintained. - -The number of users a project has is a poor indicator of the health of the project; often it is the most used software that suffers the biggest contribution crisis. - -### Logging - -Log an issue for any question or problem you might have. When in doubt, log an issue; any additional policies about what to include will be provided in the responses. The only exception is security disclosures, which should be sent privately. - -The first sentence is surprisingly controversial. A lot of maintainers complain that there isn't a more heavy-handed way of forcing people to read a document before they log an issue on GitHub. We have documents all over projects in the Node.js Foundation about writing good bug reports but, first and foremost, we encourage people to log something and try to avoid putting barriers in the way of that. - -Sure, we get bad bugs, but we have a ton of contributors who can immediately work with the people who log them, treating each bad bug as an opportunity to educate on better practices. This is why we have documentation on writing good bugs: in order to educate contributors, not as a barrier to entry. - -Creating barriers to entry just reduces the number of people there's a chance to identify, educate, and potentially grow into greater contributors. - -Of course, never log a public issue about a security disclosure, ever. This is a bit vague about the best private venue because we can't determine that for every project that adopts this policy, but we're working on a responsible disclosure mechanism for the broader community (stay tuned). - -Committers may direct you to another repository, ask for additional clarifications, and add appropriate metadata before the issue is addressed. - -For smaller projects this isn't a big deal, but in Node.js we've had to continually break off work into other, more specific, repositories just to keep the volume on a single repo manageable. But all anyone has to do when someone puts something in the wrong place is direct them to the right one. - -Another benefit of growing the committer base is that there are more people to deal with little things, like redirecting issues to other repos, or adding metadata to issues and PRs. This allows developers who are more specialized to focus on just a narrow subset of work rather than triaging issues. - -Please be courteous and respectful; every participant is expected to follow the project's Code of Conduct. - -One thing that can burn out a project is when people show up with a lot of hostility and entitlement. Most of the time this sentiment comes from a feeling that their input isn't valued. No matter what, a few people will show up who are used to more hostile environments, and it's good to have these kinds of expectations explicit and written down.
- -And each project should have a Code of Conduct, which is an extension of these expectations that makes people feel safe and respected. - -### Contributions - -Any change to resources in this repository must be through pull requests. This applies to all changes to documentation, code, binary files, etc. Even long term committers and TC members must use pull requests. - -No pull request can be merged without being reviewed. - -Every change needs to be a pull request. - -A Pull Request captures the entire discussion and review of a change. Allowing some subset of committers to slip things in without a Pull Request gives potential contributors the impression that they can't be involved in the project because they don't have access to a behind-the-scenes process or culture. - -This isn't just a good practice; it's a necessity in order to be transparent enough to attract new contributors. - -For non-trivial contributions, pull requests should sit for at least 36 hours to ensure that contributors in other timezones have time to review. Consideration should also be given to weekends and other holiday periods to ensure active committers all have reasonable time to become involved in the discussion and review process if they wish. - -Part of being open and inviting to more contributors is making the process accessible to people in timezones all over the world. We don't want to add an artificial delay to small doc changes, but any change that needs a bit of consideration must give people in different parts of the world time to consider it. - -In Node.js we actually have an even longer timeline than this, 48 hours on weekdays and 72 on weekends. That might be too much for smaller projects, so it is shorter in this base policy, but as a project grows it may want to increase this as well. - -The default for each contribution is that it is accepted once no committer has an objection. During review, committers may also request that a specific contributor who is most versed in a particular area gives a "LGTM" before the PR can be merged. There is no additional "sign off" process for contributions to land. Once all issues brought by committers are addressed, it can be landed by any committer. - -A key part of the liberal contribution policies we've been building is an inversion of the typical code review process. Rather than the default mode being for a change to be rejected until enough people sign off, we make the default for every change to land. This puts the onus on reviewers to note exactly what adjustments need to be made in order for it to land. - -For new contributors it's a big leap just to get that initial code up and sent. Viewing the code review process as a series of small adjustments and education, rather than a quality control hierarchy, does a lot to encourage and retain these new contributors. - -It's important not to build processes that encourage a project to be too top heavy, with a few people needing to sign off on every change. Instead, we just mention any committer that we think should weigh in on a specific review. In Node.js we have people who are the experts on OpenSSL; any change to crypto is going to need a LGTM from them. This kind of expertise forms naturally as a project grows, and this is a good way to work with it without burning people out.
- -In the case of an objection being raised in a pull request by another committer, all involved committers should seek to arrive at a consensus by way of addressing the concerns being expressed through discussion, compromise on the proposed change, or withdrawal of the proposed change. - -This is what we call a lazy consensus-seeking process. Most review comments and adjustments are uncontroversial, and the process should optimize for getting them in without unnecessary process. When there is disagreement, try to reach an easy consensus among the committers. More than 90% of the time this is simple, easy, and obvious. - -If a contribution is controversial and committers cannot agree about how to get it to land or whether it should land, then it should be escalated to the TC. TC members should regularly discuss pending contributions in order to find a resolution. It is expected that only a small minority of issues be brought to the TC for resolution and that discussion and compromise among committers be the default resolution mechanism. - -For the minority of changes that are controversial and don't reach an easy consensus, we escalate to the TC. These are rare, but when they do happen it's good to reach a resolution quickly rather than letting things fester. Contentious issues tend to get a lot of attention, especially from those more casually involved in the project or even entirely outside of it, but they account for a relatively small amount of what the project does every day. - -### Becoming a Committer - -All contributors who land a non-trivial contribution should be on-boarded in a timely manner, added as a committer, and given write access to the repository. - -This is where we diverge sharply from open source tradition. - -Projects have historically guarded commit rights to their version control system. This made a lot of sense when we were using version control systems like subversion. A single contributor can inadvertently mess up a project pretty badly in older version control systems, but not so much in git. In git, there isn't a lot that can't be fixed, and so most of the quality controls we put on guarding access are no longer necessary. - -Not every committer has the rights to release or make high level decisions, so we can be much more liberal about giving out commit rights. That increases the committer base for code review and bug triage. As the range of expertise in the committer pool widens, smaller changes are reviewed and adjusted without the intervention of the more technical contributors, who can spend their time on reviews only they can do. - -This is the key to scaling contribution growth: committer growth. - -Committers are expected to follow this policy and continue to send pull requests, go through proper review, and have other committers merge their pull requests. - -This part is entirely redundant, but on purpose. Just a reminder that even once someone is a committer, their changes still flow through the same process they followed before. - -### TC Process - -The TC uses a "consensus seeking" process for issues that are escalated to the TC. The group tries to find a resolution that has no open objections among TC members. If a consensus cannot be reached that has no objections, then a majority-wins vote is called. It is also expected that the majority of decisions made by the TC are via a consensus-seeking process and that voting is only used as a last resort.
- -The best solution tends to be the one everyone can agree to, so you would think that consensus systems would be the norm. However, **pure consensus** systems incentivize obstructionism, which we need to avoid. - -In pure consensus everyone essentially has a veto. So, if I don't want something to happen, I'm in a strong position of power over everyone that wants something to happen. They have to convince me, and I don't have to convince anyone else of anything. - -To avoid this we use a system called "consensus seeking," which has a long history outside of open source. It's quite simple: just attempt to reach a consensus; if a consensus can't be reached, then call for a majority-wins vote. - -Just the fact that a vote **is a possibility** means that people can't be obstructionists; whether someone favors a change or not, they have to convince their peers, and if they aren't willing to put in the work to convince their peers then they probably won't involve themselves in that decision at all. - -The way these incentives play out is pretty impressive. We started using this process in io.js and adopted it in Node.js when we merged into the foundation. In that entire time we've never actually had to call for a vote; just the fact that we could is enough to keep everyone working together to find a solution and move forward. - -Resolution may involve returning the issue to committers with suggestions on how to move forward towards a consensus. It is not expected that a meeting of the TC will resolve all issues on its agenda during that meeting, and it may prefer to let the discussion continue among the committers. - -A TC tries to resolve things in a timely manner so that people can make progress, but often it's better to provide some additional guidance that pushes the greater contributorship towards resolution without being heavy-handed. - -Avoid creating big decision hierarchies. Instead, invest in a broad, growing, and empowered contributorship that can make progress without intervention. We need to view a constant need for intervention by a few people to make any and every tough decision as the biggest obstacle to healthy Open Source. - -Members can be added to the TC at any time. Any committer can nominate another committer to the TC, and the TC uses its standard consensus-seeking process to evaluate whether or not to add this new member. Members who do not participate consistently at the level of a majority of the other members are expected to resign. - -The TC just uses the same consensus-seeking process for adding new members as it uses for everything else. - -It's a good idea to encourage committers to nominate people to the TC and not just wait around for TC members to notice the impact some people are having. Listening to the broader committers about who they see as having a big impact keeps the TC's perspective in line with the rest of the project. - -As a project grows it's important to add people from a variety of skill sets. If people are doing a lot of docs work, or test work, treat the investment they are making as equally valuable as the hard technical stuff. - -Projects should have the same ladder, user -> contributor -> committer -> TC member, for every skill set they want to build into the project to keep it healthy. - -I often see long-time maintainers worry about adding people who don't understand every part of the project, as if they have to be involved in every decision.
The reality is that people do know their limitations and want to defer hard decisions to people they know have more experience. - -Thanks to Greg [Wallace][5] and Ashley [Williams][6]. - -------------------------------------------------------------------------------- - -via: https://www.linux.com/news/biz-os/governance/892141-healthy-open-source - -作者:[Mikeal Rogers][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/community/forums/person/66928 - - -[1]: https://medium.com/@nodejs/node-js-foundation-to-add-express-as-an-incubator-project-225fa3008f70#.mc30mvj4m -[2]: https://github.com/nodejs/TSC/blob/master/BasePolicies/CONTRIBUTING.md -[3]: https://github.com/nodejs/TSC/issues -[4]: https://github.com/Level/community/blob/master/CONTRIBUTING.md -[5]: https://medium.com/@gtewallaceLF -[6]: https://medium.com/@ag_dubs diff --git a/sources/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md b/sources/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md deleted file mode 100644 index 4fa50b0a45..0000000000 --- a/sources/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md +++ /dev/null @@ -1,87 +0,0 @@ -A newcomer's guide to navigating OpenStack Infrastructure -=========================================================== - -New contributors to OpenStack are welcome, but having a road map for navigating within this maturing, fast-paced open source community doesn't hurt. At OpenStack Summit in Austin, [Paul Belanger][1] (Red Hat, Inc.), [Elizabeth K. Joseph][2] (HPE), and [Christopher Aedo][3] (IBM) will lead a session on [OpenStack Infrastructure for Beginners][4]. In this interview, they offer tips and resources to help onboard new OpenStack contributors. - -![](https://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png) - -**Your talk description says you'll be "diving into the heart of infrastructure and explain everything you need to know about the systems that keep OpenStack working." That's a tall order for a 40-minute time slot. What are the top things beginners should know about OpenStack infrastructure?** - -**Elizabeth K. Joseph (EKJ)**: We don't use GitHub for OpenStack patches. This is something that trips up a lot of new contributors because we do maintain mirrors of all our repositories on GitHub for historical reasons. Instead we use a fully open source code review and continuous integration (CI) system maintained by the OpenStack Infrastructure team. Relatedly, since we run a CI system, every change proposed to OpenStack is tested before merging. - -**Paul Belanger (PB)**: There are a lot of passionate people in the project, so don't get discouraged if your patch gets a -1. - -**Christopher Aedo (CA)**: The community wants to help you succeed; don't be afraid to ask questions or ask for pointers to more information to improve your understanding. - -### Which online resources would you recommend for beginners to fill in the holes for what you can't cover in your talk? - -**PB**: Definitely our [OpenStack Project Infrastructure documentation][5]. A lot of effort has been taken to keep it as up to date as possible. Every system used in running OpenStack as a project has a dedicated page, even the OpenStack cloud the Infrastructure team is bringing online.
- -**EKJ**: I'll echo what Paul said about the Infrastructure documentation, and add that we love seeing patches from folks who are learning. We often don't realize what we're missing in terms of documentation until someone asks. So read, learn, and then help us fill in the gaps. You can ask questions on the [openstack-infra mailing list][6] or in our IRC channel at #openstack-infra on Freenode. - -**CA**: I love [this detailed post][7] about building images, by Ian Wienand. - -### Which "gotchas" should new OpenStack contributors look out for? - -**EKJ**: Contributing is not just about submitting new code and new features; the OpenStack community places a very high value on doing code reviews. If you want people to look at a patch you submitted, consider reviewing some of the work of others and providing clear and constructive feedback. The more your fellow contributors know about your work and see you doing reviews, the more likely you'll get your code reviewed in a timely manner. - -**CA**: I see a lot of newcomers getting tripped up with [Gerrit][8]. Read through the [developer workflow][9] in the Developers Guide, and then maybe read through it one more time. If you're not used to Gerrit, it can seem confusing and overwhelming at first, but walking through a few code reviews usually makes it all come together. Also, I'm a big fan of IRC. It can be a great place to get help, but it's best if you can maintain a persistent presence so people can answer your questions even if you're not "there" at that particular moment. (Read [IRC, the secret to success in open source][10].) You don't need to be "always on," but the ability to easily scroll back in a channel and catch up on a conversation can be invaluable. - -**PB**: I agree with both Elizabeth and Chris—Gerrit is what to look out for. It is going to be the hub of your development effort. Not only will you be submitting code for people to review, but you'll also be reviewing other contributors' code. Watch out for the Gerrit UI; it can be confusing at times. I'd recommend trying out [Gertty][11], which is a console-based interface to the Gerrit Code Review system, which happens to be a project driven by OpenStack Infrastructure. - -### What resources do you recommend for beginners to help them network with other OpenStack contributors? - -**PB**: For me, it was using IRC and joining the #openstack-infra channel on Freenode ([IRC logs][12]). There is a lot of fantastic information and people in that channel. You get to see the day-to-day operations of the OpenStack project, and once you know how the project works, you'll have a better understanding on how to contribute to its future. - -**CA**: I want to second that note for IRC; staying on IRC throughout the day made a huge difference for me in terms of feeling informed and connected. It's also such a great way to get help when you're stuck with someone on one of the projects—the ones with active IRC channels always have someone around willing to get your issues sorted out. - -**EKJ**: The [openstack-dev mailing list][13] is quite important for staying up to date with news about projects you're working on inside of OpenStack, so I recommend subscribing to that. The mailing list uses subject tags to separate projects, so you can instruct your email client to use those and focus on threads that impact projects you care about. 
Beyond online resources, many OpenStack groups have popped up all over the world that serve the needs of both users and contributors to OpenStack, and many of them routinely have talks and events with key OpenStack contributors. You can search on Meetup.com in your area, or search on [groups.openstack.org][14] to see if there is an OpenStack group in your area. Finally, there are the [OpenStack Summits][15], which happen every six months, and where we'll be giving our Infrastructure talk. In their current format, the summits consist of both a user conference and a developer conference in one space to talk about everything related to OpenStack, past, present, and future. - -### In which areas does OpenStack need to improve to become more beginner-friendly? - -**PB**: I think our [account-setup][16] process could be made easier for new contributors, especially how many steps are needed to submit your first patch. There is a large cost to enrolling in the OpenStack development model, which may be too much for some contributors; however, once enrolled, the model works fantastically for developers. - -**CA**: We have a very pro-developer community, but the focus is on developing OpenStack itself, with less consideration given to the users of OpenStack clouds. We need to bring in application developers and encourage more people to develop things that run beautifully on OpenStack clouds, and encourage them to share those apps in the [Community App Catalog][17]. We can do this by continuing to improve our API standards and by ensuring different libraries (like libcloud, phpopencloud, and others) continue to work reliably for developers. Oh, also by sponsoring more OpenStack hackathons! All these things can ease entry for newcomers, which will lead to them sticking around. - -**EKJ**: I've worked on open source software for many years, but for a large number of OpenStack developers, this is the first open source project they've ever worked on. I've found that their proprietary software background doesn't prepare them for the open source ideals, methodologies, and collaboration techniques used in an open source project. I'd love to see us do a better job of welcoming people who have this proprietary software background and working with them so they can truly understand the value of what they're working on in the open source software community. - -### I think 2016 is shaping up to be the Year of the Open Source Haiku. Explain OpenStack to beginners via Haiku. - -**PB**: OpenStack runs clouds / If you enjoy free software / Submit your first patch - -**CA**: In the near future / OpenStack will rule the world / Help make it happen! - -**EKJ**: OpenStack is free / Deploy on your own servers / And run your own cloud! - -Paul, Elizabeth, and Christopher will be [speaking at OpenStack Summit][18] in Austin on Monday, April 25, starting at 11:15am.
- - ------------------------------------------------------------------------------ - -via: https://opensource.com/business/16/4/interview-openstack-infrastructure-beginners - -作者:[linux.com][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://rikkiendsley.com/ -[1]: https://twitter.com/pabelanger -[2]: https://twitter.com/pleia2 -[3]: https://twitter.com/docaedo -[4]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337 -[5]: http://docs.openstack.org/infra/system-config/ -[6]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -[7]: https://www.technovelty.org/openstack/image-building-in-openstack-ci.html -[8]: https://code.google.com/p/gerrit/ -[9]: http://docs.openstack.org/infra/manual/developers.html#development-workflow -[10]: https://developer.ibm.com/opentech/2015/12/20/irc-the-secret-to-success-in-open-source/ -[11]: https://pypi.python.org/pypi/gertty -[12]: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/ -[13]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -[14]: https://groups.openstack.org/ -[15]: https://www.openstack.org/summit/ -[16]: http://docs.openstack.org/infra/manual/developers.html#account-setup -[17]: https://apps.openstack.org/ -[18]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337 diff --git a/sources/tech/20160511 4 Container Networking Tools to Know.md b/sources/tech/20160511 4 Container Networking Tools to Know.md index 5b80791c6f..220d41644d 100644 --- a/sources/tech/20160511 4 Container Networking Tools to Know.md +++ b/sources/tech/20160511 4 Container Networking Tools to Know.md @@ -1,3 +1,4 @@ +[Translating by itsang] 4 Container Networking Tools to Know ======================================= diff --git a/sources/tech/20160511 An introduction to data processing with Cassandra and Spark.md b/sources/tech/20160511 An introduction to data processing with Cassandra and Spark.md deleted file mode 100644 index 46331a9ae5..0000000000 --- a/sources/tech/20160511 An introduction to data processing with Cassandra and Spark.md +++ /dev/null @@ -1,51 +0,0 @@ -Translating KevinSJ -An introduction to data processing with Cassandra and Spark -============================================================== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc_520x292_opendata_0613mm.png?itok=mzC0Tb28) - - -There's been a huge surge of interest around the Apache Cassandra database due to the increasing uptime and performance demands of modern cloud applications. - -So, what is Apache Cassandra? A distributed OLTP database built for high availability and linear scalability. When people ask what Cassandra is used for, think about the type of system you want close to the customer. This is ultimately the system that our users interact with. Applications that must always be available: product catalogs, IoT, medical systems, and mobile applications. In these categories, downtime can mean loss of revenue or even more dire outcomes depending on your specific use case. Netflix was one of the earliest adopters of this project, which was open sourced in 2008, and their contributions, along with their successes, put it on the radar of the masses. - -Cassandra became a top level Apache Software Foundation project in 2010 and has been riding the wave of popularity since then.
Now even a working knowledge of Cassandra gets you serious returns in the job market. It's both crazy and awesome to consider that a NoSQL, open source technology could cause this sort of disruption next to the giants of enterprise SQL. This begs the question, what makes it so popular? - -Cassandra has the ability to be always on in spite of massive hardware and network failures by utilizing a design first widely discussed in [the Dynamo paper from Amazon][1]. By using a peer-to-peer model, with no single point of failure, we can survive rack failure and even complete network partitions. We can deal with an entire data center failure without impacting our customers' experience. A distributed system that plans for failure is a properly planned distributed system, because, frankly, failures are just going to happen. With Cassandra, we accept that cruel fact of life and bake it into the database's architecture and functionality. - -We know what you're thinking: "But I'm coming from a relational background; isn't this going to be a daunting transition?" The answer is somewhat yes and no. Data modeling with Cassandra will feel familiar to developers coming from the relational world. We use tables to model our data, and CQL, the Cassandra Query Language, to query the database. However, unlike SQL, Cassandra supports more complex data structures, such as nested and user defined types. For instance, instead of creating a dedicated table to store likes on a cat photo, we can store that data in a collection with the photo itself, enabling faster, sequential lookups. That's expressed very naturally in CQL. In our photo table we may want to track the name, URL, and the people that liked the photo. - -![](https://opensource.com/sites/default/files/resize/screen_shot_2016-05-06_at_7.17.33_am-350x198.png) - -In a high performance system, milliseconds matter for both user experience and customer retention. Expensive JOIN operations limit our ability to scale out by adding unpredictable network calls. By denormalizing our data so it can be fetched in as few requests as possible, we profit from the trend of decreasing costs in disk space and in return get predictable, high performance applications. We embrace the concept of denormalization with Cassandra because it offers a pretty appealing tradeoff. - -We're obviously not just limited to storing likes on cat photos. Cassandra is optimized for high write throughput. This makes it the perfect solution for big data applications where we're constantly ingesting data. Time series and IoT use cases are growing at a steady rate in both demand and appearance in the market, and we're continuously finding ways to utilize the data we collect to improve our technological applications. - -This brings us to the next step: we've talked about storing our data in a modern, cost-effective fashion, but how do we get even more horsepower? Meaning, once we've collected all that data, what do we do with it? How can we analyze hundreds of terabytes efficiently? How can we react to information we're receiving in real-time, making decisions in seconds rather than hours? Enter Apache Spark. - -Spark is the next step in the evolution of big data processing. Hadoop and MapReduce were revolutionary projects, giving the big data world an opportunity to crunch all the data we've collected. Spark takes our big data analysis to the next level by drastically improving performance and massively decreasing code complexity.
Through Spark, we can perform massive batch processing calculations, react quickly to stream processing, make smart decisions through machine learning, and understand complex, recursive relationships through graph traversals. It's not just about offering your customers a fast and reliable connection to their application (which is what Cassandra offers); it's also about being able to leverage insights from the data Cassandra stores to make more intelligent business decisions and better cater to customer needs. - -You can check out the [Spark-Cassandra Connector][2] (open source) and give it a shot. To learn more about both technologies, we highly recommend the free self-paced courses on [DataStax Academy][3]. - -Have fun digging in and learning some killer new technology! If you want to learn more, check out our [OSCON tutorial][4], with a hands-on exploration into the worlds of both Cassandra and Spark. - -We also love taking questions on Twitter, so give us a shout and we'll try to help: [Dani][5] and [Jon][6]. - -------------------------------------------------------------------------------- - -via: https://opensource.com/life/16/5/basics-cassandra-and-spark-data-processing - -作者:[Jon Haddad][a],[Dani Traphagen][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://twitter.com/rustyrazorblade -[b]: https://opensource.com/users/dtrapezoid -[1]: http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf -[2]: https://github.com/datastax/spark-cassandra-connector -[3]: https://academy.datastax.com/ -[4]: http://conferences.oreilly.com/oscon/open-source-us/public/schedule/detail/49162 -[5]: https://twitter.com/dtrapezoid -[6]: https://twitter.com/rustyrazorblade diff --git a/sources/tech/20160512 Bitmap in Linux Kernel.md b/sources/tech/20160512 Bitmap in Linux Kernel.md deleted file mode 100644 index 06297fa204..0000000000 --- a/sources/tech/20160512 Bitmap in Linux Kernel.md +++ /dev/null @@ -1,398 +0,0 @@ -[Translating By cposture 20160520] -Data Structures in the Linux Kernel -================================================================================ - -Bit arrays and bit operations in the Linux kernel --------------------------------------------------------------------------------- - -Besides different [linked](https://en.wikipedia.org/wiki/Linked_data_structure) and [tree](https://en.wikipedia.org/wiki/Tree_%28data_structure%29) based data structures, the Linux kernel provides an [API](https://en.wikipedia.org/wiki/Application_programming_interface) for [bit arrays](https://en.wikipedia.org/wiki/Bit_array), or `bitmap`. Bit arrays are heavily used in the Linux kernel, and the following source code files contain a common `API` for working with such structures: - -* [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c) -* [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) - -Besides these two files, there is also an architecture-specific header file which provides optimized bit operations for a certain architecture. We consider the [x86_64](https://en.wikipedia.org/wiki/X86-64) architecture, so in our case it will be the: - -* [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) - -header file. As I just wrote above, the `bitmap` is heavily used in the Linux kernel.
For example, a `bit array` is used to store the set of online/offline processors for systems which support [hot-plug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt) cpu (you can read more about this in the [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) part), a `bit array` stores the set of allocated [irqs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29) during initialization of the Linux kernel, and so on. - -So, the main goal of this part is to see how `bit arrays` are implemented in the Linux kernel. Let's start. - -Declaration of bit array -================================================================================ - -Before we look at the `API` for bitmap manipulation, we must know how to declare one in the Linux kernel. There are two common methods to declare your own bit array. The first, simple way to declare a bit array is to define an array of `unsigned long`. For example: - -```C -unsigned long my_bitmap[8] -``` - -The second way is to use the `DECLARE_BITMAP` macro which is defined in the [include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h) header file: - -```C -#define DECLARE_BITMAP(name,bits) \ - unsigned long name[BITS_TO_LONGS(bits)] -``` - -We can see that the `DECLARE_BITMAP` macro takes two parameters: - -* `name` - name of bitmap; -* `bits` - amount of bits in bitmap; - -and just expands to the definition of an `unsigned long` array with `BITS_TO_LONGS(bits)` elements, where the `BITS_TO_LONGS` macro converts a given number of bits to a number of `longs`; in other words, it calculates how many 8-byte elements are needed to hold `bits` bits: - -```C -#define BITS_PER_BYTE 8 -#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d)) -#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long)) -``` - -So, for example `DECLARE_BITMAP(my_bitmap, 64)` will produce: - -```python ->>> (((64) + (64) - 1) / (64)) -1 -``` - -and: - -```C -unsigned long my_bitmap[1]; -``` - -After we are able to declare a bit array, we can start to use it. - -Architecture-specific bit operations -================================================================================ - -We already saw above a couple of source code and header files which provide an [API](https://en.wikipedia.org/wiki/Application_programming_interface) for the manipulation of bit arrays. The most important and widely used API of bit arrays is architecture-specific and located, as we already know, in the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file. - -First of all let's look at the two most important functions: - -* `set_bit`; -* `clear_bit`. - -I think that there is no need to explain what these functions do. This must already be clear from their names. Let's look at their implementation. If you look into the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file, you will note that each of these functions is represented by two variants: [atomic](https://en.wikipedia.org/wiki/Linearizability) and not. Before we dive into the implementations of these functions, we must first know a little about `atomic` operations. - -In simple words, atomic operations guarantee that two or more operations will not be performed on the same data concurrently.
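- -To make that guarantee concrete, here is a minimal user-space sketch of my own (an illustration, not kernel code) of why a plain read-modify-write bit set needs such a guarantee: the operation decomposes into a load, an `or`, and a store, and two CPUs interleaving those steps on the same word can lose an update. - -```C -#include <stdio.h> - -/* A non-atomic bit set is three separate steps: */ -static void nonatomic_set_bit(long nr, unsigned long *addr) -{ -    unsigned long old = *addr;             /* 1. load the word */ -    unsigned long val = old | (1UL << nr); /* 2. set the bit in a copy */ -    *addr = val;                           /* 3. store the word back */ -} - -/* - * If CPU0 runs nonatomic_set_bit(0, &word) and CPU1 runs - * nonatomic_set_bit(1, &word), and both execute step 1 before - * either reaches step 3, the second store overwrites the first - * and one of the bits is lost. An atomic operation performs all - * three steps as one indivisible unit. - */ -int main(void) -{ -    unsigned long word = 0; - -    nonatomic_set_bit(0, &word); -    nonatomic_set_bit(1, &word); -    printf("word = 0x%lx\n", word); /* 0x3 when run sequentially */ -    return 0; -} -```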
The `x86` architecture provides a set of atomic instructions, for example the [xchg](http://x86.renejeschke.de/html/file_module_x86_id_328.html) instruction, the [cmpxchg](http://x86.renejeschke.de/html/file_module_x86_id_41.html) instruction, etc. Besides atomic instructions, some non-atomic instructions can be made atomic with the help of the [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) instruction. That is enough to know about atomic operations for now, so we can begin to consider the implementation of the `set_bit` and `clear_bit` functions. - -First of all, let's consider the `non-atomic` variants of these functions. The names of the non-atomic `set_bit` and `clear_bit` start with a double underscore. As we already know, all of these functions are defined in the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file, and the first function is `__set_bit`: - -```C -static inline void __set_bit(long nr, volatile unsigned long *addr) -{ - asm volatile("bts %1,%0" : ADDR : "Ir" (nr) : "memory"); -} -``` - -As we can see, it takes two arguments: - -* `nr` - number of the bit in a bit array. -* `addr` - address of the bit array where we need to set the bit. - -Note that the `addr` parameter is defined with the `volatile` keyword, which tells the compiler that the value at the given address may be changed. The implementation of `__set_bit` is pretty easy. As we can see, it just contains one line of [inline assembler](https://en.wikipedia.org/wiki/Inline_assembler) code. In our case we are using the [bts](http://x86.renejeschke.de/html/file_module_x86_id_25.html) instruction, which selects the bit specified with the first operand (`nr` in our case) from the bit array, stores the value of the selected bit in the [CF](https://en.wikipedia.org/wiki/FLAGS_register) flag of the flags register, and sets this bit. - -Note that we can see usage of `nr`, but there is no `addr` here. You might already guess that the secret is in `ADDR`. `ADDR` is a macro which is defined in the same header file and expands to a string containing the value at the given address and the `+m` constraint: - -```C -#define ADDR BITOP_ADDR(addr) -#define BITOP_ADDR(x) "+m" (*(volatile long *) (x)) -``` - -Besides the `+m`, we can see other constraints in the `__set_bit` function. Let's look at them and try to understand what they mean: - -* `+m` - represents a memory operand, where `+` tells that the given operand is both an input and an output operand; -* `I` - represents an integer constant; -* `r` - represents a register operand. - -Besides these constraints, we can also see the `memory` keyword, which tells the compiler that this code will change a value in memory. That's all. Now let's look at the same function, but in its `atomic` variant. It looks more complex than its `non-atomic` variant: - -```C -static __always_inline void -set_bit(long nr, volatile unsigned long *addr) -{ - if (IS_IMMEDIATE(nr)) { - asm volatile(LOCK_PREFIX "orb %1,%0" - : CONST_MASK_ADDR(nr, addr) - : "iq" ((u8)CONST_MASK(nr)) - : "memory"); - } else { - asm volatile(LOCK_PREFIX "bts %1,%0" - : BITOP_ADDR(addr) : "Ir" (nr) : "memory"); - } -} -``` - -First of all, note that this function takes the same set of parameters as `__set_bit`, but is additionally marked with the `__always_inline` attribute.
`__always_inline` is a macro defined in [include/linux/compiler-gcc.h](https://github.com/torvalds/linux/blob/master/include/linux/compiler-gcc.h) which just expands to the `always_inline` attribute: - -```C -#define __always_inline inline __attribute__((always_inline)) -``` - -which means that this function will always be inlined to reduce the size of the Linux kernel image. Now let's try to understand the implementation of the `set_bit` function. First of all we check the given bit number at the beginning of the `set_bit` function. The `IS_IMMEDIATE` macro is defined in the same [header](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) file and expands to a call of the builtin [gcc](https://en.wikipedia.org/wiki/GNU_Compiler_Collection) function: - -```C -#define IS_IMMEDIATE(nr) (__builtin_constant_p(nr)) -``` - -The `__builtin_constant_p` builtin function returns `1` if the given parameter is known to be constant at compile-time and returns `0` otherwise. We don't need to use the slow `bts` instruction to set a bit if the given bit number is a compile-time constant. We can just apply a [bitwise or](https://en.wikipedia.org/wiki/Bitwise_operation#OR) to the byte at the given address which contains the given bit, using a mask where the target bit is `1` and the other bits are zero. Otherwise, if the given bit number is not known to be a constant at compile-time, we do the same as we did in the `__set_bit` function. The `CONST_MASK_ADDR` macro: - -```C -#define CONST_MASK_ADDR(nr, addr) BITOP_ADDR((void *)(addr) + ((nr)>>3)) -``` - -expands to the given address plus an offset to the byte which contains the given bit. For example, we have the address `0x1000` and the bit number `0x9`. So, as `0x9` is `one byte + one bit`, our address will be `addr + 1`: - -```python ->>> hex(0x1000 + (0x9 >> 3)) -'0x1001' -``` - -The `CONST_MASK` macro represents the given bit number as a byte where the corresponding bit is `1` and the other bits are `0`: - -```C -#define CONST_MASK(nr) (1 << ((nr) & 7)) -``` - -```python ->>> bin(1 << (0x9 & 7)) -'0b10' -``` - -In the end we just apply a bitwise `or` to these values. So, for example, if our value is `0x4097` and we need to set bit `0x9`: - -```python ->>> bin(0x4097) -'0b100000010010111' ->>> bin((0x4097 >> 0x9) | (1 << (0x9 & 7))) -'0b100010' -``` - -the `ninth` bit will be set. - -Note that all of these operations are marked with `LOCK_PREFIX`, which expands to the [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) instruction and guarantees the atomicity of the operation. - -As we already know, besides the `set_bit` and `__set_bit` operations, the Linux kernel provides two inverse functions to clear a bit in atomic and non-atomic contexts. They are `clear_bit` and `__clear_bit`. Both of these functions are defined in the same [header file](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) and take the same set of arguments. But it is not only the arguments that are similar; generally these functions are very similar to `set_bit` and `__set_bit`. Let's look at the implementation of the non-atomic `__clear_bit` function: - -```C -static inline void __clear_bit(long nr, volatile unsigned long *addr) -{ - asm volatile("btr %1,%0" : ADDR : "Ir" (nr)); -} -``` - -Yes. As we see, it takes the same set of arguments and contains a very similar block of inline assembler. It just uses the [btr](http://x86.renejeschke.de/html/file_module_x86_id_24.html) instruction instead of `bts`.
As we can understand from the function's name, it clears the given bit at the given address. The `btr` instruction acts like `bts`: this instruction also selects the bit specified in the first operand and stores its value in the `CF` flag register, but it clears this bit in the bit array specified with the second operand. - -The atomic variant of `__clear_bit` is `clear_bit`: - -```C -static __always_inline void -clear_bit(long nr, volatile unsigned long *addr) -{ - if (IS_IMMEDIATE(nr)) { - asm volatile(LOCK_PREFIX "andb %1,%0" - : CONST_MASK_ADDR(nr, addr) - : "iq" ((u8)~CONST_MASK(nr))); - } else { - asm volatile(LOCK_PREFIX "btr %1,%0" - : BITOP_ADDR(addr) - : "Ir" (nr)); - } -} -``` - -and as we can see it is very similar to `set_bit`, containing just two differences. The first difference: it uses the `btr` instruction to clear a bit where `set_bit` uses the `bts` instruction to set one. The second difference: it uses a negated mask and the `and` instruction to clear a bit in the given byte where `set_bit` uses the `or` instruction. - -That's all. Now we can set and clear a bit in any bit array, and we can go on to other operations on bitmasks. - -The most widely used operations on bit arrays in the Linux kernel are setting and clearing a bit, but besides these it is useful to perform additional operations on a bit array. One more widely used operation in the Linux kernel is knowing whether a given bit in a bit array is set or not. We can achieve this with the help of the `test_bit` macro. This macro is defined in the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file and expands to a call of the `constant_test_bit` or `variable_test_bit` function, depending on the bit number: - -```C -#define test_bit(nr, addr) \ - (__builtin_constant_p((nr)) \ - ? constant_test_bit((nr), (addr)) \ - : variable_test_bit((nr), (addr))) -``` - -So, if `nr` is known at compile time to be constant, `test_bit` will expand to a call of the `constant_test_bit` function, or to `variable_test_bit` otherwise. Now let's look at the implementations of these functions. Let's start with `variable_test_bit`: - -```C -static inline int variable_test_bit(long nr, volatile const unsigned long *addr) -{ - int oldbit; - - asm volatile("bt %2,%1\n\t" - "sbb %0,%0" - : "=r" (oldbit) - : "m" (*(unsigned long *)addr), "Ir" (nr)); - - return oldbit; -} -``` - -The `variable_test_bit` function takes a similar set of arguments as `set_bit` and the other functions take. We also see inline assembly code here which executes the [bt](http://x86.renejeschke.de/html/file_module_x86_id_22.html) and [sbb](http://x86.renejeschke.de/html/file_module_x86_id_286.html) instructions. The `bt`, or `bit test`, instruction selects the bit specified with the first operand from the bit array specified with the second operand and stores its value in the [CF](https://en.wikipedia.org/wiki/FLAGS_register) bit of the flags register. The `sbb` instruction subtracts the first operand from the second and also subtracts the value of `CF`. So, here we write the value of the given bit from the given bit array to the `CF` bit of the flags register and execute the `sbb` instruction, which calculates `00000000 - CF` and writes the result to `oldbit`.
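- -In other words, here is a rough C rendering of my own (not kernel code) of what this inline assembly computes; the `sbb %0,%0` trick yields `-CF`, so the function returns `0` when the bit is clear and `-1` (non-zero, i.e. true) when the bit is set: - -```C -/* C equivalent of the bt + sbb pair in variable_test_bit(): */ -static int variable_test_bit_c(long nr, const unsigned long *addr) -{ -    const unsigned long bits_per_long = 8 * sizeof(unsigned long); -    /* bt: copy the selected bit into CF */ -    int cf = (addr[nr / bits_per_long] >> (nr % bits_per_long)) & 1; -    /* sbb %0,%0: oldbit = oldbit - oldbit - CF = -CF */ -    return -cf; -} -```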
-
-The `constant_test_bit` function does something similar to what we saw in `set_bit`:
-
-```C
-static __always_inline int constant_test_bit(long nr, const volatile unsigned long *addr)
-{
-    return ((1UL << (nr & (BITS_PER_LONG-1))) &
-        (addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
-}
-```
-
-It generates a mask in which only the bit at the given position is `1` and all other bits are `0` (as we saw with `CONST_MASK`) and applies a bitwise [and](https://en.wikipedia.org/wiki/Bitwise_operation#AND) to the word which contains the given bit.
-
-The next widely used bit-array-related operation is to change a bit in a bit array. The Linux kernel provides two helpers for this:
-
-* `__change_bit`;
-* `change_bit`.
-
-As you can already guess, these two variants are atomic and non-atomic, just like `set_bit` and `__set_bit`. To start, let's look at the implementation of the `__change_bit` function:
-
-```C
-static inline void __change_bit(long nr, volatile unsigned long *addr)
-{
-    asm volatile("btc %1,%0" : ADDR : "Ir" (nr));
-}
-```
-
-Pretty easy, isn't it? The implementation of `__change_bit` is the same as `__set_bit`, but instead of the `bts` instruction we use [btc](http://x86.renejeschke.de/html/file_module_x86_id_23.html). This instruction selects the given bit from the given bit array, stores its value in `CF` and inverts it by applying a complement operation. So, a bit with value `1` becomes `0` and vice versa:
-
-```python
->>> int(not 1)
-0
->>> int(not 0)
-1
-```
-
-The atomic version of `__change_bit` is the `change_bit` function:
-
-```C
-static inline void change_bit(long nr, volatile unsigned long *addr)
-{
-    if (IS_IMMEDIATE(nr)) {
-        asm volatile(LOCK_PREFIX "xorb %1,%0"
-            : CONST_MASK_ADDR(nr, addr)
-            : "iq" ((u8)CONST_MASK(nr)));
-    } else {
-        asm volatile(LOCK_PREFIX "btc %1,%0"
-            : BITOP_ADDR(addr)
-            : "Ir" (nr));
-    }
-}
-```
-
-It is similar to the `set_bit` function, but again has two differences: the first is the `xor` operation instead of `or`, and the second is `btc` instead of `bts`.
-
-At this point we know the most important architecture-specific operations on bit arrays. Time to look at the generic bitmap API.
-
-Common bit operations
-================================================================================
-
-Besides the architecture-specific API from the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file, the Linux kernel provides a common API for the manipulation of bit arrays. As we know from the beginning of this part, we can find it in the [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) header file and additionally in the [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c) source code file. But before those source code files, let's look into the [include/linux/bitops.h](https://github.com/torvalds/linux/blob/master/include/linux/bitops.h) header file which provides a set of useful macros. Let's look at some of them.
-
-First of all, let's look at the following four macros:
-
-* `for_each_set_bit`
-* `for_each_set_bit_from`
-* `for_each_clear_bit`
-* `for_each_clear_bit_from`
-
-All of these macros provide iterators over a certain set of bits in a bit array. The first macro iterates over the bits which are set, the second does the same but starts from a certain bit. The last two macros do the same, but iterate over clear bits.
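-
-In Python terms, `for_each_set_bit` behaves roughly like this generator (an analogue for intuition, not the kernel code):
-
-```python
-def for_each_set_bit(bits, size):
-    """Yield the indices of the set bits in `bits`, from bit 0 up to `size`."""
-    for nr in range(size):
-        if (bits >> nr) & 1:
-            yield nr
-
-print(list(for_each_set_bit(0b10110, 5)))  # [1, 2, 4]
-```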
-Now let's look at the actual implementation of the `for_each_set_bit` macro:
-
-```C
-#define for_each_set_bit(bit, addr, size) \
-	for ((bit) = find_first_bit((addr), (size));		\
-	     (bit) < (size);					\
-	     (bit) = find_next_bit((addr), (size), (bit) + 1))
-```
-
-As we can see, it takes three arguments and expands to a loop which starts from the first set bit, returned by the `find_first_bit` function, and continues via `find_next_bit` while the bit number is less than the given size.
-
-Besides these four macros, the [include/linux/bitops.h](https://github.com/torvalds/linux/blob/master/include/linux/bitops.h) header also provides an API for the rotation of `64-bit` or `32-bit` values and so on.
-
-The next header file, [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h), provides an API for the manipulation of bit arrays. For example, it provides two functions:
-
-* `bitmap_zero`;
-* `bitmap_fill`.
-
-to clear a bit array and to fill it with `1`s. Let's look at the implementation of the `bitmap_zero` function:
-
-```C
-static inline void bitmap_zero(unsigned long *dst, unsigned int nbits)
-{
-    if (small_const_nbits(nbits))
-        *dst = 0UL;
-    else {
-        unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
-        memset(dst, 0, len);
-    }
-}
-```
-
-First of all we can see the check on `nbits`. `small_const_nbits` is a macro defined in the same header [file](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) and looks like this:
-
-```C
-#define small_const_nbits(nbits) \
-	(__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG)
-```
-
-As we can see, it checks that `nbits` is a compile-time constant and that its value does not exceed `BITS_PER_LONG`, i.e. `64`. If the number of bits fits into a single `long` value, we can just set that `long` to zero. Otherwise we need to calculate how many `long` values our bit array occupies and clear them with [memset](http://man7.org/linux/man-pages/man3/memset.3.html).
-
-The implementation of the `bitmap_fill` function is similar to the implementation of the `bitmap_zero` function, except that we fill the given bit array with `0xff` values, i.e. `0b11111111`:
-
-```C
-static inline void bitmap_fill(unsigned long *dst, unsigned int nbits)
-{
-    unsigned int nlongs = BITS_TO_LONGS(nbits);
-    if (!small_const_nbits(nbits)) {
-        unsigned int len = (nlongs - 1) * sizeof(unsigned long);
-        memset(dst, 0xff, len);
-    }
-    dst[nlongs - 1] = BITMAP_LAST_WORD_MASK(nbits);
-}
-```
-
-Besides the `bitmap_fill` and `bitmap_zero` functions, the [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) header file provides `bitmap_copy`, which is similar to `bitmap_zero` but uses [memcpy](http://man7.org/linux/man-pages/man3/memcpy.3.html) instead of [memset](http://man7.org/linux/man-pages/man3/memset.3.html). It also provides bitwise operations on bit arrays like `bitmap_and`, `bitmap_or`, `bitmap_xor` and so on. We will not consider the implementation of these functions, because they are easy to understand if you have followed everything in this part. If you are interested in how these functions are implemented, open the [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) header file and start to research.
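-
-As a final illustration, here is a small Python model (an analogue, not the kernel code) of how `bitmap_fill` masks the last word so that bits beyond `nbits` stay zero:
-
-```python
-BITS_PER_LONG = 64
-
-def last_word_mask(nbits):
-    """Model of BITMAP_LAST_WORD_MASK: ones in the used part of the last word."""
-    return (1 << (nbits % BITS_PER_LONG or BITS_PER_LONG)) - 1
-
-print(hex(last_word_mask(16)))  # 0xffff: only 16 bits of the last word are used
-print(hex(last_word_mask(64)))  # 0xffffffffffffffff: the whole word is used
-```
-
-That's all.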
-
-Links
-================================================================================
-
-* [bitmap](https://en.wikipedia.org/wiki/Bit_array)
-* [linked data structures](https://en.wikipedia.org/wiki/Linked_data_structure)
-* [tree data structures](https://en.wikipedia.org/wiki/Tree_%28data_structure%29)
-* [hot-plug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt)
-* [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html)
-* [IRQs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29)
-* [API](https://en.wikipedia.org/wiki/Application_programming_interface)
-* [atomic operations](https://en.wikipedia.org/wiki/Linearizability)
-* [xchg instruction](http://x86.renejeschke.de/html/file_module_x86_id_328.html)
-* [cmpxchg instruction](http://x86.renejeschke.de/html/file_module_x86_id_41.html)
-* [lock instruction](http://x86.renejeschke.de/html/file_module_x86_id_159.html)
-* [bts instruction](http://x86.renejeschke.de/html/file_module_x86_id_25.html)
-* [btr instruction](http://x86.renejeschke.de/html/file_module_x86_id_24.html)
-* [bt instruction](http://x86.renejeschke.de/html/file_module_x86_id_22.html)
-* [sbb instruction](http://x86.renejeschke.de/html/file_module_x86_id_286.html)
-* [btc instruction](http://x86.renejeschke.de/html/file_module_x86_id_23.html)
-* [man memcpy](http://man7.org/linux/man-pages/man3/memcpy.3.html)
-* [man memset](http://man7.org/linux/man-pages/man3/memset.3.html)
-* [CF](https://en.wikipedia.org/wiki/FLAGS_register)
-* [inline assembler](https://en.wikipedia.org/wiki/Inline_assembler)
-* [gcc](https://en.wikipedia.org/wiki/GNU_Compiler_Collection)
-
-
--------------------------------------------------------------------------------
-
-via: https://github.com/0xAX/linux-insides/blob/master/DataStructures/bitmap.md
-
-作者:[0xAX][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://twitter.com/0xAX
diff --git a/sources/tech/20160516 Scaling Collaboration in DevOps.md b/sources/tech/20160516 Scaling Collaboration in DevOps.md
deleted file mode 100644
index bee9ab5415..0000000000
--- a/sources/tech/20160516 Scaling Collaboration in DevOps.md
+++ /dev/null
@@ -1,68 +0,0 @@
-Translating by Bestony
-Scaling Collaboration in DevOps
-=================================
-
-![](http://devops.com/wp-content/uploads/2016/05/ScalingCollaboration.jpg)
-
-Those familiar with DevOps generally agree that it is equally as much about culture as it is about technology. There are certainly tools and practices involved in the effective implementation of DevOps, but the foundation of DevOps success is how well [teams and individuals collaborate][1] across the enterprise to get things done more rapidly, efficiently and effectively.
-
-Most DevOps platforms and tools are designed with scalability in mind. DevOps environments often run in the cloud and tend to be volatile. It’s important for the software that supports DevOps to be able to scale in real time to address spikes and lulls in demand. The same thing is true for the human element as well, but scaling collaboration is a whole different story.
-
-Collaboration across the enterprise is critical for DevOps success. Great code and development needs to make it over the finish line to production to benefit customers.
-The challenge organizations face is how to do that seamlessly and with as much speed and automation as possible without sacrificing quality or performance. How can businesses streamline code development and deployment, while maintaining visibility, governance and compliance?
-
-### Emerging Trends
-
-First, I want to provide some background and share some data gathered by 451 Research on DevOps and DevOps adoption in general. Cloud, agile and DevOps capabilities are important for organizations today—both in perception and reality. 451 sees enterprise adoption of these things, as well as container technologies, growing—including increased usage in production environments.
-
-There are a number of advantages to embracing these technologies and methodologies, such as increased flexibility and speed, reduction of costs, improvements in resilience and reliability, and fitness for new or emerging applications. According to 451 Research, organizations also face some barriers including a lack of familiarity and required skills internally, the immaturity of these emerging technologies, and cost and security concerns.
-
-In the “[Voice of the Enterprise: SDI Q4 2015 survey][2],” 451 Research found that more than half of the respondents (51.7 percent) consider themselves to be late adopters, or even the last adopters of new technology. The flip side of that is that almost half (48.3 percent) label themselves as first or early adopters.
-
-Those general sentiments are reflected in the survey responses to other questions. When asked about implementation of containers, 50.3 percent stated it is not in their plans at all, while the remaining 49.7 percent are in some state of planning, pilot or active use of container technologies. Nearly two-thirds (65.1 percent) indicated that they use agile development methodologies for application development, but only 39.6 percent responded that they’ve embraced DevOps approaches. Nevertheless, while agile software development has been in the industry for years, 451 notes the impressive adoption of containers and DevOps, given they are emergent trends.
-
-When asked what the top three IT pain points are, the leading responses were cost or budget, insufficient staff and legacy software issues. As organizations move to cloud, DevOps, and containers, issues such as these will need to be addressed, along with how to scale both technologies and collaboration effectively.
-
-### The Current State
-
-The industry—driven in large part by the DevOps revolution—is in the midst of a sea change, where software development is becoming more highly integrated across the entire business. The creation of software is less segregated and is more and more a function of collaboration and socialization.
-
-Concepts and methodologies that were novel or niche just a few years ago have matured quickly to become the mainstream technologies and frameworks that are driving value today. Businesses rely on concepts such as agile, lean, virtualization, cloud, automation and microservices to streamline development and enable them to work more effectively and efficiently at the same time.
-
-To adapt and evolve, enterprises need to accomplish a number of key tasks. The challenge companies face today is how to accelerate development while reducing costs. Organizations need to eliminate the barriers that exist between IT and the rest of the business, and work cooperatively toward a strategy that provides more effectiveness in a technology-driven, competitive environment.
-
-Agile, cloud, DevOps and containers all play a role in that process, but the one thing that binds it all is effective collaboration. Each of these technologies and methodologies provides unique benefits, but the real value comes from the organization as a whole—and the tools and platforms used by the organization—being able to collaborate at scale. Successful DevOps implementations also require participation from other stakeholders beyond development and IT operations teams, including security, database, storage and line-of-business teams.
-
-### Collaboration-as-a-Platform
-
-There are services and platforms online—such as GitHub—that facilitate and streamline collaboration. The online platform functions as a code repository, but the value extends beyond just providing a place to store code.
-
-Such a [collaboration platform][4] helps developers and teams collaborate more effectively because it provides a community where the code and process can be shared and discussed. Managers can monitor progress and track what code is shipping next. Developers can experiment with new ideas in a safe environment before taking those experiments to a live production environment, and new ideas and experiments can be effectively communicated to the appropriate teams.
-
-One of the keys to more agile development and DevOps is to allow developers to test things and gather relevant feedback quickly. The goal is to produce quality code and features faster, not to waste time setting up and managing infrastructure or scheduling more meetings to talk about it. The GitHub platform, for example, enables more effective and scalable collaboration because code review can occur when it is most convenient for the participants. There is no need to try and coordinate and schedule code review meetings, so the developers can continue to work uninterrupted, resulting in greater productivity and job satisfaction.
-
-Steven Anderson of Sendachi noted that GitHub is a collaboration platform, but it’s also a place for your tools to work with you. This means it can help not only with collaboration and continuous integration, but also with code quality.
-
-One of the benefits of a collaboration platform is that large teams of developers can be broken down into smaller teams that can focus more efficiently on specific components. It also allows things such as document sharing alongside code development to blur the lines between technical and non-technical contributions and enable increased collaboration and visibility.
-
-### Collaboration is Key
-
-The importance of collaboration can’t be stressed enough. It is a key tenet of DevOps culture, and it’s vital to agile development and maintaining a competitive edge in today’s world. Executive or management support and internal evangelism are important. Organizations also need to embrace the culture shift—blending skills across functional areas toward a common goal.
-
-With that culture established, though, effective collaboration is crucial. A collaboration platform is an essential element of collaborating at scale because it streamlines productivity and reduces redundancy and effort, and yields higher quality results at the same time.
-
--------------------------------------------------------------------------------
-
-via: http://devops.com/2016/05/16/scaling-collaboration-devops/
-
-作者:[TONY BRADLEY][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: http://devops.com/author/tonybsg/
-[1]: http://devops.com/2014/12/15/four-strategies-supporting-devops-collaboration/
-[2]: https://451research.com/
-[3]: https://451research.com/customer-insight-voice-of-the-enterprise-overview
-[4]: http://devops.com/events/analytics-of-collaboration-on-github/
diff --git a/sources/tech/20160518 Python 3: An Intro to Encryption.md b/sources/tech/20160518 Python 3: An Intro to Encryption.md
deleted file mode 100644
index f80702a771..0000000000
--- a/sources/tech/20160518 Python 3: An Intro to Encryption.md
+++ /dev/null
@@ -1,279 +0,0 @@
-[Translating by cposture]
-Python 3: An Intro to Encryption
-===================================
-
-Python 3 doesn’t have very much in its standard library that deals with encryption. Instead, you get hashing libraries. We’ll take a brief look at those in this chapter, but the primary focus will be on the following 3rd party packages: PyCrypto and cryptography. We will learn how to encrypt and decrypt strings with both of these libraries.
-
----
-
-### Hashing
-
-If you need secure hashes or message digest algorithms, then Python’s standard library has you covered in the **hashlib** module. It includes the FIPS secure hash algorithms SHA1, SHA224, SHA256, SHA384, and SHA512 as well as RSA’s MD5 algorithm. Python also supports the adler32 and crc32 hash functions, but those are in the **zlib** module.
-
-One of the most popular uses of hashes is storing the hash of a password instead of the password itself. Of course, the hash has to be a good one or it can be cracked. Another popular use case for hashes is to hash a file and then send the file and its hash separately. Then the person receiving the file can run a hash on the file to see if it matches the hash that was sent. If it does, then that means no one has changed the file in transit.
-
-
-Let’s try creating an md5 hash:
-
-```
->>> import hashlib
->>> md5 = hashlib.md5()
->>> md5.update('Python rocks!')
-Traceback (most recent call last):
-  File "", line 1, in 
-    md5.update('Python rocks!')
-TypeError: Unicode-objects must be encoded before hashing
->>> md5.update(b'Python rocks!')
->>> md5.digest()
-b'\x14\x82\xec\x1b#d\xf6N}\x16*+[\x16\xf4w'
-```
-
-Let’s take a moment to break this down a bit. First off, we import **hashlib** and then we create an instance of an md5 HASH object. Next we add some text to the hash object and we get a traceback. It turns out that to use the md5 hash, you have to pass it a byte string instead of a regular string. So we try that and then call its **digest** method to get our hash. If you prefer the hex digest, we can do that too:
-
-```
->>> md5.hexdigest()
-'1482ec1b2364f64e7d162a2b5b16f477'
-```
-
-There’s actually a shortcut method of creating a hash, so we’ll look at that next when we create a sha1 hash:
-
-```
->>> sha = hashlib.sha1(b'Hello Python').hexdigest()
->>> sha
-'422fbfbc67fe17c86642c5eaaa48f8b670cbed1b'
-```
-
-As you can see, we can create our hash instance and call its digest method at the same time. Then we print out the hash to see what it is. I chose to use the sha1 hash as it has a nice short hash that will fit the page better.
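-
-If you want a longer, stronger digest, the other constructors work the same way. A quick sketch (the full digest is omitted since it would not fit the page):
-
-```
->>> sha256 = hashlib.sha256(b'Hello Python').hexdigest()
->>> len(sha256)
-64
-```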
-Keep in mind that SHA-1 is also less secure than the longer variants, so feel free to try one of the others.
-
----
-
-### Key Derivation
-
-Python has pretty limited support for key derivation built into the standard library. In fact, the only method that hashlib provides is the **pbkdf2_hmac** method, which is the PKCS#5 password-based key derivation function 2. It uses HMAC as its pseudorandom function. You might use something like this for hashing your password as it supports a salt and iterations. For example, if you were to use SHA-256 you would need a salt of at least 16 bytes and a minimum of 100,000 iterations.
-
-As a quick aside, a salt is just random data that you use as additional input into your hash to make it harder to “unhash” your password. Basically it protects your password from dictionary attacks and pre-computed rainbow tables.
-
-Let’s look at a simple example:
-
-```
->>> import binascii
->>> dk = hashlib.pbkdf2_hmac(hash_name='sha256',
-        password=b'bad_password34',
-        salt=b'bad_salt',
-        iterations=100000)
->>> binascii.hexlify(dk)
-b'6e97bad21f6200f9087036a71e7ca9fa01a59e1d697f7e0284cd7f9b897d7c02'
-```
-
-Here we create a SHA256 hash on a password using a lousy salt but with 100,000 iterations. Of course, SHA is not actually recommended for deriving keys from passwords. You should use something like **scrypt** instead. Another good option would be the 3rd party package, bcrypt. It is designed specifically with password hashing in mind.
-
----
-
-### PyCryptodome
-
-The PyCrypto package is probably the most well known 3rd party cryptography package for Python. Sadly PyCrypto’s development stopped in 2012. Others have continued to release the latest version of PyCrypto so you can still get it for Python 3.5 if you don’t mind using a 3rd party’s binary. For example, I found some binary Python 3.5 wheels for PyCrypto on Github (https://github.com/sfbahr/PyCrypto-Wheels).
-
-Fortunately there is a fork of the project called PyCryptodome that is a drop-in replacement for PyCrypto. To install it for Linux, you can use the following pip command:
-
-
-```
-pip install pycryptodome
-```
-
-Windows is a bit different:
-
-```
-pip install pycryptodomex
-```
-
-If you run into issues, it’s probably because you don’t have the right dependencies installed or you need a compiler for Windows. Check out the PyCryptodome [website][1] for additional installation help or to contact support.
-
-Also worth noting is that PyCryptodome has many enhancements over the last version of PyCrypto. It is well worth your time to visit their home page and see what new features exist.
-
-### Encrypting a String
-
-Once you’re done checking their website out, we can move on to some examples. For our first trick, we’ll use DES to encrypt a string:
-
-```
->>> from Crypto.Cipher import DES
->>> key = 'abcdefgh'
->>> def pad(text):
-        while len(text) % 8 != 0:
-            text += ' '
-        return text
->>> des = DES.new(key, DES.MODE_ECB)
->>> text = 'Python rocks!'
->>> padded_text = pad(text)
->>> encrypted_text = des.encrypt(text)
-Traceback (most recent call last):
-  File "", line 1, in 
-    encrypted_text = des.encrypt(text)
-  File "C:\Programs\Python\Python35-32\lib\site-packages\Crypto\Cipher\blockalgo.py", line 244, in encrypt
-    return self._cipher.encrypt(plaintext)
-ValueError: Input strings must be a multiple of 8 in length
->>> encrypted_text = des.encrypt(padded_text)
->>> encrypted_text
-b'>\xfc\x1f\x16x\x87\xb2\x93\x0e\xfcH\x02\xd59VQ'
-```
-
-This code is a little confusing, so let’s spend some time breaking it down.
-First off, it should be noted that the key size for DES encryption is 8 bytes, which is why we set our key variable to an eight-character string. The string that we will be encrypting must be a multiple of 8 in length, so we create a function called **pad** that can pad any string out with spaces until it’s a multiple of 8. Next we create an instance of DES and some text that we want to encrypt. We also create a padded version of the text. Just for fun, we attempt to encrypt the original unpadded variant of the string, which raises a **ValueError**. Here we learn that we need that padded string after all, so we pass that one in instead. As you can see, we now have an encrypted string!
-
-Of course the example wouldn’t be complete if we didn’t know how to decrypt our string:
-
-```
->>> des.decrypt(encrypted_text)
-b'Python rocks!   '
-```
-
-Fortunately, that is very easy to accomplish as all we need to do is call the **decrypt** method on our des object to get our decrypted byte string back. Our next task is to learn how to encrypt and decrypt a file with PyCrypto using RSA. But first we need to create some RSA keys!
-
-### Create an RSA Key
-
-If you want to encrypt your data with RSA, then you’ll need to either have access to a public / private RSA key pair or you will need to generate your own. For this example, we will just generate our own. Since it’s fairly easy to do, we will do it in Python’s interpreter:
-
-```
->>> from Crypto.PublicKey import RSA
->>> code = 'nooneknows'
->>> key = RSA.generate(2048)
->>> encrypted_key = key.exportKey(passphrase=code, pkcs=8,
-        protection="scryptAndAES128-CBC")
->>> with open('/path_to_private_key/my_private_rsa_key.bin', 'wb') as f:
-        f.write(encrypted_key)
->>> with open('/path_to_public_key/my_rsa_public.pem', 'wb') as f:
-        f.write(key.publickey().exportKey())
-```
-
-First we import **RSA** from **Crypto.PublicKey**. Then we create a silly passcode. Next we generate an RSA key of 2048 bits. Now we get to the good stuff. To generate a private key, we need to call our RSA key instance’s **exportKey** method and give it our passcode, which PKCS standard to use and which encryption scheme to use to protect our private key. Then we write the file out to disk.
-
-Next we create our public key via our RSA key instance’s **publickey** method. We used a shortcut in this piece of code by just chaining the call to exportKey with the publickey method call to write it to disk as well.
-
-### Encrypting a File
-
-Now that we have both a private and a public key, we can encrypt some data and write it to a file. Here’s a pretty standard example:
-
-```
-from Crypto.PublicKey import RSA
-from Crypto.Random import get_random_bytes
-from Crypto.Cipher import AES, PKCS1_OAEP
-
-with open('/path/to/encrypted_data.bin', 'wb') as out_file:
-    recipient_key = RSA.import_key(
-        open('/path_to_public_key/my_rsa_public.pem').read())
-    session_key = get_random_bytes(16)
-
-    cipher_rsa = PKCS1_OAEP.new(recipient_key)
-    out_file.write(cipher_rsa.encrypt(session_key))
-
-    cipher_aes = AES.new(session_key, AES.MODE_EAX)
-    data = b'blah blah blah Python blah blah'
-    ciphertext, tag = cipher_aes.encrypt_and_digest(data)
-
-    out_file.write(cipher_aes.nonce)
-    out_file.write(tag)
-    out_file.write(ciphertext)
-```
-
-The first three lines cover our imports from PyCryptodome. Next we open up a file to write to. Then we import our public key into a variable and create a 16-byte session key.
-For this example we are going to be using a hybrid encryption method, so we use PKCS#1 OAEP, which is Optimal asymmetric encryption padding. This allows us to write data of arbitrary length to the file. Then we create our AES cipher, create some data and encrypt the data. This will return the encrypted text and the MAC. Finally we write out the nonce, MAC (or tag) and the encrypted text.
-
-As an aside, a nonce is an arbitrary number that is only used once in cryptographic communication. Nonces are usually random or pseudorandom numbers. For AES, the nonce must be at least 16 bytes in length. Feel free to try opening the encrypted file in your favorite text editor. You should just see gibberish.
-
-Now let’s learn how to decrypt our data:
-
-```
-from Crypto.PublicKey import RSA
-from Crypto.Cipher import AES, PKCS1_OAEP
-
-code = 'nooneknows'
-
-with open('/path/to/encrypted_data.bin', 'rb') as fobj:
-    private_key = RSA.import_key(
-        open('/path_to_private_key/my_private_rsa_key.bin').read(),
-        passphrase=code)
-
-    enc_session_key, nonce, tag, ciphertext = [ fobj.read(x)
-                                                for x in (private_key.size_in_bytes(),
-                                                16, 16, -1) ]
-
-    cipher_rsa = PKCS1_OAEP.new(private_key)
-    session_key = cipher_rsa.decrypt(enc_session_key)
-
-    cipher_aes = AES.new(session_key, AES.MODE_EAX, nonce)
-    data = cipher_aes.decrypt_and_verify(ciphertext, tag)
-
-print(data)
-```
-
-If you followed the previous example, this code should be pretty easy to parse. In this case, we are opening our encrypted file for reading in binary mode. Then we import our private key. Note that when you import the private key, you must give it your passcode. Otherwise you will get an error. Next we read in our file. You will note that we read in the encrypted session key first, then the next 16 bytes for the nonce, which is followed by the next 16 bytes which is the tag, and finally the rest of the file, which is our data.
-
-Then we need to decrypt our session key, recreate our AES key and decrypt the data.
-
-You can use PyCryptodome to do much, much more. However we need to move on and see what else we can use for our cryptographic needs in Python.
-
----
-
-### The cryptography package
-
-The **cryptography** package aims to be “cryptography for humans” much like the **requests** library is “HTTP for Humans”. The idea is that you will be able to create simple cryptographic recipes that are safe and easy-to-use. If you need to, you can drop down to low-level cryptographic primitives, which require you to know what you’re doing or you might end up creating something that’s not very secure.
-
-If you are using Python 3.5, you can install it with pip, like so:
-
-```
-pip install cryptography
-```
-
-You will see that cryptography installs a few dependencies along with itself. Assuming that they all completed successfully, we can try encrypting some text. Let’s give the **Fernet** symmetric encryption algorithm a try. The Fernet algorithm guarantees that any message you encrypt with it cannot be manipulated or read without the key you define. Fernet also supports key rotation via **MultiFernet**; a sketch of rotation follows below.
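-
-As a rough sketch of what key rotation looks like with the MultiFernet class (check the package documentation for the version you have installed), you wrap several Fernet keys: the first key in the list is used to encrypt new messages, while older keys remain valid for decryption:
-
-```
-from cryptography.fernet import Fernet, MultiFernet
-
-old_key = Fernet(Fernet.generate_key())
-new_key = Fernet(Fernet.generate_key())
-
-# The first key in the list encrypts new messages; every key may decrypt.
-cipher = MultiFernet([new_key, old_key])
-
-token = old_key.encrypt(b'My old message')
-print(cipher.decrypt(token))  # b'My old message'
-```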
-Now let’s take a look at a simple example of basic Fernet usage:
-
-```
->>> from cryptography.fernet import Fernet
->>> cipher_key = Fernet.generate_key()
->>> cipher_key
-b'APM1JDVgT8WDGOWBgQv6EIhvxl4vDYvUnVdg-Vjdt0o='
->>> cipher = Fernet(cipher_key)
->>> text = b'My super secret message'
->>> encrypted_text = cipher.encrypt(text)
->>> encrypted_text
-(b'gAAAAABXOnV86aeUGADA6mTe9xEL92y_m0_TlC9vcqaF6NzHqRKkjEqh4d21PInEP3C9HuiUkS9f'
- b'6bdHsSlRiCNWbSkPuRd_62zfEv3eaZjJvLAm3omnya8=')
->>> decrypted_text = cipher.decrypt(encrypted_text)
->>> decrypted_text
-b'My super secret message'
-```
-
-First off we need to import Fernet. Next we generate a key. We print out the key to see what it looks like. As you can see, it’s a random byte string. If you want, you can try running the **generate_key** method a few times. The result will always be different. Next we create our Fernet cipher instance using our key.
-
-Now we have a cipher we can use to encrypt and decrypt our message. The next step is to create a message worth encrypting and then encrypt it using the **encrypt** method. I went ahead and printed out the encrypted text so you can see that you can no longer read the text. To **decrypt** our super secret message, we just call decrypt on our cipher and pass it the encrypted text. The result is we get a plain text byte string of our message.
-
----
-
-### Wrapping Up
-
-This chapter barely scratched the surface of what you can do with PyCryptodome and the cryptography packages. However it does give you a decent overview of what can be done with Python in regards to encrypting and decrypting strings and files. Be sure to read the documentation and start experimenting to see what else you can do!
-
----
-
-### Related Reading
-
-PyCrypto Wheels for Python 3 on [github][2]
-
-PyCryptodome [documentation][3]
-
-Python’s Cryptographic [Services][4]
-
-The cryptography package’s [website][5]
-
-------------------------------------------------------------------------------
-
-via: http://www.blog.pythonlibrary.org/2016/05/18/python-3-an-intro-to-encryption/
-
-作者:[Mike][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.blog.pythonlibrary.org/author/mld/
-[1]: http://pycryptodome.readthedocs.io/en/latest/
-[2]: https://github.com/sfbahr/PyCrypto-Wheels
-[3]: http://pycryptodome.readthedocs.io/en/latest/src/introduction.html
-[4]: https://docs.python.org/3/library/crypto.html
-[5]: https://cryptography.io/en/latest/
diff --git a/sources/tech/20160519 The future of sharing: integrating Pydio and ownCloud.md b/sources/tech/20160519 The future of sharing: integrating Pydio and ownCloud.md
deleted file mode 100644
index 0461fda34d..0000000000
--- a/sources/tech/20160519 The future of sharing: integrating Pydio and ownCloud.md
+++ /dev/null
@@ -1,65 +0,0 @@
-The future of sharing: integrating Pydio and ownCloud
-=========================================================
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BIZ_darwincloud_520x292_0311LL.png?itok=5yWIaEDe)
->Image by :
-opensource.com
-
-The open source file sharing ecosystem accommodates a large variety of projects, each supplying their own solution, and each with a different approach. There are a lot of reasons to choose an open source solution rather than commercial solutions like Dropbox, Google Drive, iCloud, or OneDrive.
-These solutions offer to take away worries about managing your data but come with certain limitations, including a lack of control and integration into existing infrastructure.
-
-There are quite a few file sharing and sync alternatives available to users, including ownCloud and Pydio.
-
-### Pydio
-
-The Pydio (Put your data in orbit) project was founded by musician Charles du Jeu, who needed a way to share large audio files with his bandmates. [Pydio][1] is a file sharing and sync solution, with multiple storage backends, designed with developers and system administrators in mind. It has over one million downloads worldwide and has been translated into 27 languages.
-
-Open source from the very start, the project grew organically on [SourceForge][2] and now finds its home on [GitHub][3].
-
-The user interface is based on Google's [Material Design][4]. Users can use an existing legacy file infrastructure or set up Pydio with an on-premise approach, and use web, desktop, and mobile applications to manage their assets everywhere. For administrators, the fine-grained access rights are a powerful tool for configuring access to assets.
-
-On the [Pydio community page][5], you will find several resources to get you up to speed quickly. The Pydio website gives some clear guidelines on [how to contribute][6] to the Pydio repositories on GitHub. The [forum][7] includes sections for developers and community.
-
-### ownCloud
-
-[ownCloud][8] has over 8 million users worldwide and is an open source, self-hosted file sync and sharing technology. There are sync clients for all major platforms as well as WebDAV through a web interface. ownCloud has an easy-to-use interface, powerful administrator tools, and extensive sharing and collaboration features—designed to give users control over their data.
-
-ownCloud's open architecture is extensible via an API and offers a platform for apps. Over 300 applications have been written, featuring capabilities like handling calendar, contacts, mail, music, passwords, notes, and many other types of data. ownCloud provides security, scales from a Raspberry Pi to a cluster with petabytes of storage and millions of users, and is developed by an international community of hundreds of contributors.
-
-### Federated sharing
-
-File sharing is starting to shift toward teamwork, and standardization provides a solid basis for such collaboration.
-
-Federated sharing, a new open standard supported by the [OpenCloudMesh][9] project, is a step in that direction. Among other things, it allows for the sharing of files and folders between servers that support this, like Pydio and ownCloud instances.
-
-First introduced in ownCloud 7, this server-to-server sharing allows you to mount file shares from remote servers, in effect creating your own cloud of clouds. You can create direct share links with users on other servers that support federated cloud sharing.
-
-Implementing this new API allows for deeper integration between storage solutions while maintaining the security, control, and attributes of the original platforms.
-
-"Exchanging and sharing files is something that is essential today and tomorrow," ownCloud founder Frank Karlitschek said. "Because of that, it is important to do this in a federated and distributed way without centralized data silos. The number one design goal [of federated sharing] is to enable sharing in the most seamless and easiest way while protecting the security and privacy of the users."
-
-### What's next?
-
-An initiative like OpenCloudMesh will extend this new open standard of file sharing through cooperation of institutions and companies like Pydio and ownCloud. ownCloud 9 has already introduced the ability for federated servers to exchange user lists, enabling the same seamless auto-complete experience you have with users on your own server. In the future, the idea of having a (federated!) set of central address book servers that can be used to search for others' federated cloud IDs might bring inter-cloud collaboration to an even higher level.
-
-The initiative will undoubtedly contribute to the already growing open technical community within which members can easily discuss, develop, and contribute to the "OCM sharing API" as a vendor-neutral protocol. All leading partners of the OCM project are fully committed to the open API design principle and welcome other open source file share and sync communities to participate and join the connected cloud.
-
--------------------------------------------------------------------------------
-
-via: https://opensource.com/business/16/5/sharing-files-pydio-owncloud
-
-作者:[ben van 't ende][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/benvantende
-[1]: https://pydio.com/
-[2]: https://sourceforge.net/projects/ajaxplorer/
-[3]: https://github.com/pydio/
-[4]: https://www.google.com/design/spec/material-design/introduction.html
-[5]: https://pydio.com/en/community
-[6]: https://pydio.com/en/community/contribute
-[7]: https://pydio.com/forum/f
-[8]: https://owncloud.org/
-[9]: https://wiki.geant.org/display/OCM/Open+Cloud+Mesh
diff --git a/sources/tech/20160524 Test Fedora 24 Beta in an OpenStack cloud.md b/sources/tech/20160524 Test Fedora 24 Beta in an OpenStack cloud.md
deleted file mode 100644
index c550880223..0000000000
--- a/sources/tech/20160524 Test Fedora 24 Beta in an OpenStack cloud.md
+++ /dev/null
@@ -1,77 +0,0 @@
-Test Fedora 24 Beta in an OpenStack cloud
-===========================================
-
-![](https://major.io/wp-content/uploads/2012/01/fedorainfinity.png)
-
-Although there are a few weeks remaining before [Fedora 24][1] is released, you can test out the Fedora 24 Beta release today! This is a great way to get [a sneak peek at new features][2] and help find bugs that still need a fix.
-
-The [Fedora Cloud][3] image is available for download from your favorite [local mirror][4] or directly from [Fedora’s servers][5]. In this post, I’ll show you how to import this image into an OpenStack environment and begin testing Fedora 24 Beta.
-
-One last thing: this is beta software. It has been reliable for me so far, but your experience may vary. I would recommend waiting for the final release before deploying any mission critical applications on it.
-
-### Importing the image
-
-The older glance client (version 1) allows you to import an image from a URL that is reachable from your OpenStack environment. This is helpful since my OpenStack cloud has a much faster connection to the internet (1 Gbps) than my home does (~ 20 mbps upload speed). However, the functionality to import from a URL was [removed in version 2 of the glance client][6]. The [OpenStackClient][7] doesn’t offer the feature either.
-
-There are two options here:
-
-- Install an older version of the glance client
-- Use Horizon (the web dashboard)
-
-Getting an older version of glance client installed is challenging. The OpenStack requirements file for the liberty release [leaves the version of glance client without a maximum version cap][8] and it’s difficult to get all of the dependencies in order to make the older glance client work.
-
-Let’s use Horizon instead so we can get back to the reason for the post.
-
-### Adding an image in Horizon
-
-Log into the Horizon panel and click Compute > Images. Click + Create Image at the top right of the page and a new window should appear. Add this information in the window:
-
-- **Name**: Fedora 24 Cloud Beta
-- **Image Source**: Image Location
-- **Image Location**: http://mirrors.kernel.org/fedora/releases/test/24_Beta/CloudImages/x86_64/images/Fedora-Cloud-Base-24_Beta-1.6.x86_64.qcow2
-- **Format**: QCOW2 – QEMU Emulator
-- **Copy Data**: ensure the box is checked
-
-When you’re finished, the window should look like this:
-
-![](https://major.io/wp-content/uploads/2016/05/horizon_image.png)
-
-Click Create Image and the images listing should show Saving for a short period of time. Once it switches to Active, you’re ready to build an instance.
-
-### Building the instance
-
-Since we’re already in Horizon, we can finish out the build process there.
-
-On the image listing page, find the row with the image we just uploaded and click Launch Instance on the right side. A new window will appear. The Image Name drop down should already have the Fedora 24 Beta image selected. From here, just choose an instance name, select a security group and keypair (on the Access & Security tab), and a network (on the Networking tab). Be sure to choose a flavor that has some available storage as well (m1.tiny is not enough).
-
-Click Launch and wait for the instance to boot.
-
-Once the instance build has finished, you can connect to the instance over ssh as the fedora user. If your [security group allows the connection][9] and your keypair was configured correctly, you should be inside your new Fedora 24 Beta instance!
-
-Not sure what to do next? Here are some suggestions:
-
-- Update all packages and reboot (to ensure that you are testing the latest updates)
-- Install some familiar applications and verify that they work properly
-- Test out your existing automation or configuration management tools
-- Open bug tickets!
-
--------------------------------------------------------------------------------
-
-via: https://major.io/2016/05/24/test-fedora-24-beta-openstack-cloud/
-
-作者:[major.io][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://major.io/about-the-racker-hacker/
-[1]: https://fedoraproject.org/wiki/Releases/24/Schedule
-[2]: https://fedoraproject.org/wiki/Releases/24/ChangeSet
-[3]: https://getfedora.org/en/cloud/
-[4]: https://admin.fedoraproject.org/mirrormanager/mirrors/Fedora/24/x86_64
-[5]: https://getfedora.org/en/cloud/download/
-[6]: https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability
-[7]: http://docs.openstack.org/developer/python-openstackclient/
-[8]: https://github.com/openstack/requirements/blob/stable/liberty/global-requirements.txt#L159
-[9]: https://major.io/2016/05/16/troubleshooting-openstack-network-connectivity/
diff --git a/sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md b/sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md
new file mode 100644
index 0000000000..95122771ac
--- /dev/null
+++ b/sources/tech/20160524 Writing online multiplayer game with python and asyncio - part 1.md
@@ -0,0 +1,73 @@
+xinglianfly translate
+Writing online multiplayer game with python and asyncio - part 1
+===================================================================
+
+Have you ever combined async with Python? Here I’ll tell you how to do it and show it on a [working example][1] - a popular Snake game, designed for multiple players.
+
+[Play game][2]
+
+### 1. Introduction
+
+Massively multiplayer online games are undoubtedly one of the main trends of our century, in both tech and cultural domains. And while for a long time writing a server for an MMO game was associated with massive budgets and complex low-level programming techniques, things have been changing rapidly in recent years. Modern frameworks based on dynamic languages allow handling thousands of parallel user connections on moderate hardware. At the same time, HTML 5 and WebSockets standards enabled the creation of real-time graphics-based game clients that run directly in a web browser, without any extensions.
+
+Python may not be the most popular tool for creating scalable non-blocking servers, especially compared to node.js's popularity in this area. But the latest versions of Python aim to change this. The introduction of the [asyncio][3] library and the special [async/await][4] syntax makes asynchronous code look as straightforward as regular blocking code, which now makes Python a worthy choice for asynchronous programming. So I will try to utilize these new features to demonstrate a way to create an online multiplayer game.
+
+### 2. Getting asynchronous
+
+A game server should handle the maximum possible number of parallel user connections and process them all in real time. And the typical solution - creating threads - doesn't solve the problem in this case. Running thousands of threads requires the CPU to switch between them all the time (this is called context switching), which creates big overhead, making it very ineffective. It is even worse with processes because, in addition, they occupy too much memory. In Python there is one more problem: the regular Python interpreter (CPython) is not designed to be multithreaded; it aims to achieve maximum performance for single-threaded apps instead.
+That's why it uses the GIL (global interpreter lock), a mechanism which doesn't allow multiple threads to run Python code at the same time, to prevent uncontrolled usage of the same shared objects. Normally the interpreter switches to another thread when the currently running thread is waiting for something, usually a response from I/O (like a response from a web server, for example). This allows having non-blocking I/O operations in your app, because every operation blocks only one thread instead of blocking the whole server. However, it also makes the general multithreading idea nearly useless, because it doesn't allow you to execute Python code in parallel, even on a multi-core CPU. At the same time, it is completely possible to have non-blocking I/O in one single thread, thus eliminating the need for heavy context switching.
+
+Actually, single-threaded non-blocking I/O is a thing you can do in pure Python. All you need is the standard [select][5] module, which allows you to write an event loop waiting for I/O from non-blocking sockets. However, this approach requires you to define all the app logic in one place, and soon your app becomes a very complex state machine. There are frameworks that simplify this task; the most popular are [tornado][6] and [twisted][7]. They are utilized to implement complex protocols using callback methods (and this is similar to node.js). The framework runs its own event loop, invoking your callbacks on the defined events. And while this may be a way to go for some, it still requires programming in callback style, which makes your code fragmented. Compare this to just writing synchronous code and running multiple copies concurrently, like we would do with normal threads. Why wouldn't this be possible in one thread?
+
+And this is where the concept of microthreads comes in. The idea is to have concurrently running tasks in one thread. When you call a blocking function in one task, behind the scenes it calls a "manager" (or "scheduler") that runs an event loop. And when there is some event ready to process, a manager passes execution to a task waiting for it. That task will also run until it reaches a blocking call, and then it will return execution to the manager again.
+
+>Microthreads are also called lightweight threads or green threads (a term which came from the Java world). Tasks which are running concurrently in pseudo-threads are called tasklets, greenlets or coroutines.
+
+One of the first implementations of microthreads in Python was [Stackless Python][8]. It got famous because it is used in the very successful online game [EVE online][9]. This MMO game boasts a persistent universe, where thousands of players are involved in different activities, all happening in real time. Stackless is a standalone Python interpreter which replaces the standard function call stack and controls the flow directly to allow minimum possible context-switching expenses. Though very effective, this solution remained less popular than "soft" libraries that work with a standard interpreter. Packages like [eventlet][10] and [gevent][11] come with patching of the standard I/O library in such a way that I/O functions pass execution to their internal event loop. This allows turning normal blocking code into non-blocking code in a very simple way. The downside of this approach is that it is not obvious from the code which calls are non-blocking. A newer version of Python introduced native coroutines as an advanced form of generators.
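+Even plain generators are enough to model this idea. Here is a toy round-robin scheduler (a sketch for illustration, not how asyncio is implemented), where each generator is a cooperative task that hands control back on every `yield`:
+
+```
+def task(name, steps):
+    for i in range(steps):
+        print(name, "step", i)
+        yield  # give control back to the scheduler
+
+def run(tasks):
+    while tasks:
+        current = tasks.pop(0)
+        try:
+            next(current)        # run the task until its next yield
+            tasks.append(current)
+        except StopIteration:
+            pass                 # the task has finished
+
+run([task("A", 2), task("B", 3)])
+```
+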
+Later, Python 3.4 included the asyncio library, which relies on native coroutines to provide single-thread concurrency. But only in Python 3.5 did coroutines become an integral part of the Python language, described with the new keywords async and await. Here is a simple example, which illustrates using asyncio to run concurrent tasks:
+
+```
+import asyncio
+
+async def my_task(seconds):
+    print("start sleeping for {} seconds".format(seconds))
+    await asyncio.sleep(seconds)
+    print("end sleeping for {} seconds".format(seconds))
+
+all_tasks = asyncio.gather(my_task(1), my_task(2))
+loop = asyncio.get_event_loop()
+loop.run_until_complete(all_tasks)
+loop.close()
+```
+
+We launch two tasks, one sleeps for 1 second, the other - for 2 seconds. The output is:
+
+```
+start sleeping for 1 seconds
+start sleeping for 2 seconds
+end sleeping for 1 seconds
+end sleeping for 2 seconds
+```
+
+As you can see, coroutines do not block each other - the second task starts before the first is finished. This is happening because asyncio.sleep is a coroutine which returns execution to the scheduler until the time has passed. In the next section, we will use coroutine-based tasks to create a game loop.
+
+--------------------------------------------------------------------------------
+
+via: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-asyncio-getting-asynchronous/
+
+作者:[Kyrylo Subbotin][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-asyncio-getting-asynchronous/
+[1]: http://snakepit-game.com/
+[2]: http://snakepit-game.com/
+[3]: https://docs.python.org/3/library/asyncio.html
+[4]: https://docs.python.org/3/whatsnew/3.5.html#whatsnew-pep-492
+[5]: https://docs.python.org/2/library/select.html
+[6]: http://www.tornadoweb.org/
+[7]: http://twistedmatrix.com/
+[8]: http://www.stackless.com/
+[9]: http://www.eveonline.com/
+[10]: http://eventlet.net/
+[11]: http://www.gevent.org/
diff --git a/sources/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md b/sources/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md
deleted file mode 100644
index ef713fff0f..0000000000
--- a/sources/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md
+++ /dev/null
@@ -1,304 +0,0 @@
-Translating by strugglingyouth
-
-Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04
-=====================================================================================
-
-
-The LEMP stack is an acronym which represents a group of packages (Linux OS, Nginx web server, MySQL\MariaDB database and PHP server-side dynamic programming language) which are used to deploy dynamic web applications and web pages.
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-Nginx-with-FastCGI-on-Ubuntu-16.04.png)
->Install Nginx with MariaDB 10, PHP 7 and HTTP 2.0 Support on Ubuntu 16.04
-
-This tutorial will guide you on how to install a LEMP stack (Nginx with MariaDB and PHP7) on an Ubuntu 16.04 server.
-
-Requirements
-
-[Installation of Ubuntu 16.04 Server Edition][1]
-
-### Step 1: Install the Nginx Web Server
-
-#### 1. Nginx is a modern and resource-efficient web server used to display web pages to visitors on the internet.
-We’ll start by installing the Nginx web server from the Ubuntu official repositories by using the [apt command line][2].
-
-```
-$ sudo apt-get install nginx
-```
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-Nginx-on-Ubuntu-16.04.png)
->Install Nginx on Ubuntu 16.04
-
-#### 2. Next, issue the [netstat][3] and [systemctl][4] commands in order to confirm that Nginx is started and binds on port 80.
-
-```
-$ netstat -tlpn
-```
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Network-Port-Connection.png)
->Check Nginx Network Port Connection
-
-```
-$ sudo systemctl status nginx.service
-```
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Service-Status.png)
->Check Nginx Service Status
-
-Once you have the confirmation that the server is started, you can open a browser and navigate to your server IP address or DNS record using the HTTP protocol in order to visit the Nginx default web page.
-
-```
-http://IP-Address
-```
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Verify-Nginx-Webpage.png)
->Verify Nginx Webpage
-
-### Step 2: Enable Nginx HTTP/2.0 Protocol
-
-#### 3. The HTTP/2.0 protocol, which is built by default into the latest release of the Nginx binaries on Ubuntu 16.04, works only in conjunction with SSL and promises a huge speed improvement in loading SSL web pages.
-
-To enable the protocol in Nginx on Ubuntu 16.04, first navigate to the Nginx available sites configuration files and back up the default configuration file by issuing the below command.
-
-```
-$ cd /etc/nginx/sites-available/
-$ sudo mv default default.backup
-```
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Backup-Nginx-Sites-Configuration-File.png)
->Backup Nginx Sites Configuration File
-
-#### 4. Then, using a text editor, create a new default page with the below instructions:
-
-```
-server {
-    listen 443 ssl http2 default_server;
-    listen [::]:443 ssl http2 default_server;
-
-    root /var/www/html;
-
-    index index.html index.htm index.php;
-
-    server_name 192.168.1.13;
-
-    location / {
-        try_files $uri $uri/ =404;
-    }
-
-    ssl_certificate /etc/nginx/ssl/nginx.crt;
-    ssl_certificate_key /etc/nginx/ssl/nginx.key;
-
-    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
-    ssl_prefer_server_ciphers on;
-    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
-    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
-    ssl_session_cache shared:SSL:20m;
-    ssl_session_timeout 180m;
-    resolver 8.8.8.8 8.8.4.4;
-    add_header Strict-Transport-Security "max-age=31536000;
-    #includeSubDomains" always;
-
-
-    location ~ \.php$ {
-        include snippets/fastcgi-php.conf;
-        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
-    }
-
-    location ~ /\.ht {
-        deny all;
-    }
-
-}
-
-server {
-    listen 80;
-    listen [::]:80;
-    server_name 192.168.1.13;
-    return 301 https://$server_name$request_uri;
-}
-```
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Enable-Nginx-HTTP-2-Protocol.png)
->Enable Nginx HTTP 2 Protocol
-
-The above configuration snippet enables the use of `HTTP/2.0` by adding the http2 parameter to all SSL listen directives.
-
-Also, the last part of the excerpt enclosed in a server directive is used to redirect all non-SSL traffic to the SSL/TLS default host. Finally, replace the `server_name` directive to match your own IP address or DNS record (FQDN preferably).
-
-#### 5. Once you have finished editing the Nginx default configuration file with the above settings, generate and list the SSL certificate file and key by executing the below commands.
- -Fill the certificate with your own custom settings and pay attention to Common Name setting to match your DNS FQDN record or your server IP address that will be used to access the web page. - -``` -$ sudo mkdir /etc/nginx/ssl -$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt -$ ls /etc/nginx/ssl/ -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Generate-SSL-Certificate-and-Key.png) ->Generate SSL Certificate and Key for Nginx - -#### 6. Also, create a strong DH cypher, which was changed on the above configuration file on `ssl_dhparam` instruction line, by issuing the below command: - -``` -$ sudo openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Create-Diffie-Hellman-Key.png) ->Create Diffie-Hellman Key - -#### 7. Once the `Diffie-Hellman` key has been created, verify if Nginx configuration file is correctly written and can be applied by Nginx web server and restart the daemon to reflect changes by running the below commands. - -``` -$ sudo nginx -t -$ sudo systemctl restart nginx.service -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Configuration.png) ->Check Nginx Configuration - -#### 8. In order to test if Nginx uses HTTP/2.0 protocol issue the below command. The presence of `h2` advertised protocol confirms that Nginx has been successfully configured to use HTTP/2.0 protocol. All modern up-to-date browsers should support this protocol by default. - -``` -$ openssl s_client -connect localhost:443 -nextprotoneg '' -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Test-Nginx-HTTP-2-Protocol.png) ->Test Nginx HTTP 2.0 Protocol - -### Step 3: Install PHP 7 Interpreter - -Nginx can be used with PHP dynamic processing language interpreter to generate dynamic web content with the help of FastCGI process manager obtained by installing the php-fpm binary package from Ubuntu official repositories. - -#### 9. In order to grab PHP7.0 and the additional packages that will allow PHP to communicate with Nginx web server issue the below command on your server console: - -``` -$ sudo apt install php7.0 php7.0-fpm -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-PHP-7-PHP-FPM-for-Ngin.png) ->Install PHP 7 and PHP-FPM for Ngin - -#### 10. Once the PHP7.0 interpreter has been successfully installed on your machine, start and check php7.0-fpm daemon by issuing the below command: - -``` -$ sudo systemctl start php7.0-fpm -$ sudo systemctl status php7.0-fpm -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Start-Verify-php-fpm-Service.png) ->Start and Verify php-fpm Service - -#### 11. The current configuration file of Nginx is already configured to use PHP FastCGI process manager in order to server dynamic content. - -The server block that enables Nginx to use PHP interpreter is presented on the below excerpt, so no further modifications of default Nginx configuration file are required. - -``` -location ~ \.php$ { - include snippets/fastcgi-php.conf; - fastcgi_pass unix:/run/php/php7.0-fpm.sock; - } -``` - -Below is a screenshot of what instructions you need to uncomment and modify is case of an original Nginx default configuration file. - - -![](http://www.tecmint.com/wp-content/uploads/2016/05/Enable-PHP-FastCGI-for-Nginx.png) ->Enable PHP FastCGI for Nginx - -#### 12. 
To test the Nginx web server's integration with the PHP FastCGI process manager, create a PHP `info.php` test file by issuing the below command and verify the settings by visiting this file using the below address: `http://IP_or_domain/info.php`.
-
-```
-$ sudo su -c 'echo "<?php phpinfo(); ?>" |tee /var/www/html/info.php'
-```
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Create-PHP-Info-File.png)
->Create PHP Info File
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Verify-PHP-FastCGI-Info.png)
->Verify PHP FastCGI Info
-
-Also check if the HTTP/2.0 protocol is advertised by the server by locating the line `$_SERVER['SERVER_PROTOCOL']` in the PHP Variables block as illustrated in the below screenshot.
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-HTTP-2.0-Protocol-Info.png)
->Check HTTP 2.0 Protocol Info
-
-#### 13. In order to install extra PHP7.0 modules use the `apt search php7.0` command to find a PHP module and install it.
-
-Also, try to install the following PHP modules which can come in handy in case you are planning to [install WordPress][5] or other CMS.
-
-```
-$ sudo apt install php7.0-mcrypt php7.0-mbstring
-```
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-PHP-7-Modules.png)
->Install PHP 7 Modules
-
-#### 14. To register the PHP extra modules just restart the PHP-FPM daemon by issuing the below command.
-
-```
-$ sudo systemctl restart php7.0-fpm.service
-```
-
-### Step 4: Install MariaDB Database
-
-#### 15. Finally, in order to complete our LEMP stack we need the MariaDB database component to store and manage website data.
-
-Install the MariaDB database management system by running the below command and restart the PHP-FPM service in order to use the MySQL module to access the database.
-
-```
-$ sudo apt install mariadb-server mariadb-client php7.0-mysql
-$ sudo systemctl restart php7.0-fpm.service
-```
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-MariaDB-for-Nginx.png)
->Install MariaDB for Nginx
-
-#### 16. To secure the MariaDB installation, run the security script provided by the binary package from Ubuntu repositories, which will ask you to set a root password, remove anonymous users, disable remote root login and remove the test database.
-
-Run the script by issuing the below command and answer all questions with yes. Use the below screenshot as a guide.
-
-```
-$ sudo mysql_secure_installation
-```
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Secure-MariaDB-Installation-for-Nginx.png)
->Secure MariaDB Installation for Nginx
-
-#### 17. To configure MariaDB so that ordinary users can access the database without system sudo privileges, go to the MySQL command line interface with root privileges and run the below commands on the MySQL interpreter:
-
-```
-$ sudo mysql
-MariaDB> use mysql;
-MariaDB> update user set plugin='' where User='root';
-MariaDB> flush privileges;
-MariaDB> exit
-```
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/MariaDB-User-Permissions.png)
->MariaDB User Permissions
-
-Finally, log in to the MariaDB database and run an arbitrary command without root privileges by executing the below command:
-
-```
-$ mysql -u root -p -e 'show databases'
-```
-
-![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-MariaDB-Databases.png)
->Check MariaDB Databases
-
-That's all! Now you have a **LEMP** stack configured on an **Ubuntu 16.04** server that allows you to deploy complex dynamic web applications that can interact with databases.
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/install-nginx-mariadb-php7-http2-on-ubuntu-16-04/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 - -作者:[Matei Cezar ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.tecmint.com/author/cezarmatei/ -[1]: http://www.tecmint.com/installation-of-ubuntu-16-04-server-edition/ -[2]: http://www.tecmint.com/apt-advanced-package-command-examples-in-ubuntu/ -[3]: http://www.tecmint.com/20-netstat-commands-for-linux-network-management/ -[4]: http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux/ -[5]: http://www.tecmint.com/install-wordpress-using-lamp-or-lemp-on-rhel-centos-fedora/ diff --git a/sources/tech/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md b/sources/tech/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md deleted file mode 100644 index 45cc53157e..0000000000 --- a/sources/tech/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md +++ /dev/null @@ -1,180 +0,0 @@ -HOW TO USE WEBP IMAGES IN UBUNTU LINUX -========================================= - -![](http://itsfoss.com/wp-content/uploads/2016/05/support-webp-ubuntu-linux.jpg) ->Brief: This guide shows you how to view WebP images in Linux and how to convert WebP images to JPEG or PNG format. - -###WHAT IS WEBP? - -It’s been over five years since Google introduced [WebP file format][0] for images. WebP provides lossy and lossless compression and WebP compressed files are around 25% smaller in size when compared to JPEG compression, Google claims. - -Google aimed WebP to become the new standard for images on the web but I don’t see it happening. It’s over five years and it’s still not adopted as a standard except in Google’s ecosystem. But as we know, Google is pushy about its technologies. Few months back Google changed all the images on Google Plus to WebP. - -If you download those images from Google Plus using Google Chrome, you’ll have WebP images, no matter if you had uploaded PNG or JPEG. And that’s not the problem. The actual problem is when you try to open that files in Ubuntu using the default GNOME Image Viewer and you see this error: - ->**Could not find XYZ.webp** ->**Unrecognized image file format** - -![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-1.png) ->GNOME Image Viewer doesn’t support WebP images - -In this tutorial, we shall see - -- how to add WebP support in Linux -- list of programs that support WebP images -- how to convert WebP images to PNG or JPEG -- how to download WebP images directly as PNG images - -### HOW TO VIEW WEBP IMAGES IN UBUNTU AND OTHER LINUX - -[GNOME Image Viewer][3], the default image viewer in many Linux distributions including Ubuntu, doesn’t support WebP images. There are no plugins available at present that could enable GNOME Image Viewer to add WebP support. - -This means that we simply cannot use GNOME Image Viewer to open WebP files in Linux. A better alternative is [gThumb][4] that supports WebP images by default. - -To install gThumb in Ubuntu and other Ubuntu based Linux distributions, use the command below: - -``` -sudo apt-get install gthumb -``` - -Once installed, you can simply rightly click on the WebP image and select gThumb to open it. 
You should be able to see it now: - -![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-2.jpeg) ->WebP image in gThumb - -### MAKE GTHUMB THE DEFAULT APPLICATION FOR WEBP IMAGES IN UBUNTU - -For Ubuntu beginners, if you like to make gThumb the default application for opening WebP files, just follow the steps below: - -#### Step 1: Right click on the WebP image and select Properties. - -![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-3.png) ->Select Properties from Right Click menu - -#### Step 2: Go to Open With tab, select gThumb and click on Set as default. - -![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-4.png) ->Make gThumb the default application for WebP images in Ubuntu - -### MAKE GTHUMB THE DEFAULT APPLICATIONS FOR ALL IMAGES - -gThumb has a lot more to offer than Image Viewer. For example, you can do simple editing, add color filters to the images etc. Adding the filter is not as effective as XnRetro, the dedicated tool for [adding Instagram like effects on Linux][5], but the basic filters are available. - -I liked gThumb a lot and decided to make it the default image viewer. If you also want to make gThumb the default application for all kind of images in Ubuntu, follow the steps below: - -####Step 1: Open System Settings - -![](http://itsfoss.com/wp-content/uploads/2014/04/System_Settings_ubuntu_1404.jpeg) - -#### Step 2: Go to Details. - -![](http://itsfoss.com/wp-content/uploads/2013/11/System_settings_Ubuntu_1.jpeg) - -#### Step 3: Select gThumb as the default applications for images here. - -![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-5.png) - -### ALTERNATIVE PROGRAMS TO OPEN WEBP FILES IN LINUX - -It is possible that you might not like gThumb. If that’s the case, you can choose one of the following applications to view WebP images in Linux: - -- [XnView][6] (Not open source) -- GIMP with unofficial WebP plugin that can be installed via this [PPA][7] that is available until Ubuntu 15.10. I’ll cover this part in another article. -- [Gwenview][8] - -### CONVERT WEBP IMAGES TO PNG AND JPEG IN LINUX - -There are two ways to convert WebP images in Linux: - -- Command line -- GUI - -#### 1. USING COMMAND LINE TO CONVERT WEBP IMAGES IN LINUX - -You need to install WebP tools first. Open a terminal and use the following command: - -``` -sudo apt-get install webp -``` - -##### CONVERT JPEG/PNG TO WEBP - -We’ll use cwebp command (does it mean compress to WebP?) to convert JPEG or PNG files to WebP. The command format is like: - -``` -cwebp -q [image_quality] [JPEG/PNG_filename] -o [WebP_filename] -``` - -For example, you can use the following command: - -``` -cwebp -q 90 example.jpeg -o example.webp -``` - -##### CONVERT WEBP TO JPEG/PNG - -To convert WebP images to JPEG or PNG, we’ll use dwebp command. The command format is: - -``` -dwebp [WebP_filename] -o [PNG_filename] -``` - -An example of this command could be: - -``` -dwebp example.webp -o example.png -``` - -#### 2. USING GUI TOOL TO CONVERT WEBP TO JPEG/PNG - -For this purpose, we will use XnConvert which is a free but not open source application. You can download the installer files from their website: - -[Download XnConvert][1] - -Note that XnConvert is a powerful tool that you can use for batch resizing images. However, in this tutorial, we shall only see how to convert a single WebP image to PNG/JPEG. 
- -Open XnConvert and select the input file: - -![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-6.jpeg) - -In the Output tab, select the output format you want it to be converted. Once you have selected the output format, click on Convert. - -![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-7.jpeg) - -That’s all you need to do to convert WebP images to PNG, JPEg or any other image format of your choice. - -### DOWNLOAD WEBP IMAGES AS PNG DIRECTLY IN CHROME WEB BROWSER - -Probably you don’t like WebP image format at all and you don’t want to install a new software just to view WebP images in Linux. It will be a bigger pain if you have to convert the WebP file for future use. - -An easier and less painful way to deal with is to install a Chrome extension Save Image as PNG. With this extension, you can simply right click on a WebP image and save it as PNG directly. - -![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-8.png) ->Saving WebP image as PNG in Google Chrome - -[Get Save Image as PNG extension][2] - -### WHAT’S YOUR PICK? - -I hope this detailed tutorial helped you to get WebP support on Linux and helped you to convert WebP images. How do you handle WebP images in Linux? Which tool do you use? From the above described methods, which one did you like the most? - - ----------------------- -via: http://itsfoss.com/webp-ubuntu-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29 - -作者:[Abhishek Prakash][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://itsfoss.com/author/abhishek/ -[0]: https://developers.google.com/speed/webp/ -[1]: http://www.xnview.com/en/xnconvert/#downloads -[2]: https://chrome.google.com/webstore/detail/save-image-as-png/nkokmeaibnajheohncaamjggkanfbphi?utm_source=chrome-ntp-icon -[3]: https://wiki.gnome.org/Apps/EyeOfGnome -[4]: https://wiki.gnome.org/Apps/gthumb -[5]: http://itsfoss.com/add-instagram-effects-xnretro-ubuntu-linux/ -[6]: http://www.xnview.com/en/xnviewmp/#downloads -[7]: https://launchpad.net/~george-edison55/+archive/ubuntu/webp -[8]: https://userbase.kde.org/Gwenview diff --git a/sources/tech/20160531 Writing online multiplayer game with python and asyncio - part 2.md b/sources/tech/20160531 Writing online multiplayer game with python and asyncio - part 2.md new file mode 100644 index 0000000000..75df13d9b5 --- /dev/null +++ b/sources/tech/20160531 Writing online multiplayer game with python and asyncio - part 2.md @@ -0,0 +1,234 @@ +chunyang-wen translating +Writing online multiplayer game with python and asyncio - Part 2 +================================================================== + +![](https://7webpages.com/media/cache/fd/d1/fdd1f8f8bbbf4166de5f715e6ed0ac00.gif) + +Have you ever made an asynchronous Python app? Here I’ll tell you how to do it and in the next part, show it on a [working example][1] - a popular Snake game, designed for multiple players. + +see the intro and theory about how to [Get Asynchronous [part 1]][2] + +[Play the game][3] + +### 3. Writing game loop + +The game loop is a heart of every game. It runs continuously to get player's input, update state of the game and render the result on the screen. In online games the loop is divided into client and server parts, so basically there are two loops which communicate over the network. 
Usually, the client's role is to get the player's input, such as keypresses or mouse movement, pass this data to the server and get back the data to render. The server side processes all the data coming from players, updates the game's state, does the necessary calculations to render the next frame and passes back the result, such as the new placement of game objects. It is very important not to mix client and server roles without a solid reason. If you start doing game logic calculations on the client side, you can easily go out of sync with other clients, and your game can also be cheated by simply passing arbitrary data from the client side.
+
+A game loop iteration is often called a tick. A tick is an event meaning that the current game loop iteration is over and the data for the next frame(s) is ready.
+In the next examples we will use the same client, which connects to a server from a web page using WebSocket. It runs a simple loop which passes pressed keys' codes to the server and displays all messages that come from the server. [Client source code is located here][4].
+
+#### Example 3.1: Basic game loop
+
+[Example 3.1 source code][5]
+
+We will use the [aiohttp][6] library to create a game server. It allows creating web servers and clients based on asyncio. A good thing about this library is that it supports normal http requests and websockets at the same time. So we don't need other web servers to render the game's html page.
+
+Here is how we run the server:
+
+```
+app = web.Application()
+app["sockets"] = []
+
+asyncio.ensure_future(game_loop(app))
+
+app.router.add_route('GET', '/connect', wshandler)
+app.router.add_route('GET', '/', handle)
+
+web.run_app(app)
+```
+
+web.run_app is a handy shortcut to create the server's main task and to run the asyncio event loop with its run_forever() method. I suggest you check the source code of this method to see how the server is actually created and terminated.
+
+An app is a dict-like object which can be used to share data between connected clients. We will use it to store a list of connected sockets. This list is then used to send notification messages to all connected clients. A call to asyncio.ensure_future() will schedule our main game_loop task, which sends a 'tick' message to clients every 2 seconds. This task will run concurrently in the same asyncio event loop along with our web server.
+
+There are 2 web request handlers: handle just serves an html page, and wshandler is our main websocket server's task which handles interaction with game clients. With every connected client a new wshandler task is launched in the event loop. This task adds the client's socket to the list, so that the game_loop task may send messages to all the clients. Then it echoes every keypress back to the client with a message.
+
+In the launched tasks we are running worker loops over the main event loop of asyncio. A switch between tasks happens when one of them uses the await statement to wait for a coroutine to finish. For instance, asyncio.sleep just passes execution back to the scheduler for a given amount of time, and ws.receive() waits for a message from the websocket, while the scheduler may switch to some other task.
+
+After you open the main page in a browser and connect to the server, just try to press some keys. Their codes will be echoed back from the server, and every 2 seconds this message will be overwritten by the game loop's 'tick' message which is sent to all clients.
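+
+The full server listing lives in the linked example source. As a rough sketch only (not the author's exact code, and written against current aiohttp names such as WSMsgType rather than the 2016 API), the two tasks described above might look like this:
+
+```
+import asyncio
+from aiohttp import web, WSMsgType
+
+async def game_loop(app):
+    # broadcast a 'tick' to every connected client every 2 seconds
+    while True:
+        for ws in app["sockets"]:
+            await ws.send_str("tick")
+        await asyncio.sleep(2)
+
+async def wshandler(request):
+    # one instance of this task runs per connected client
+    ws = web.WebSocketResponse()
+    await ws.prepare(request)
+    request.app["sockets"].append(ws)
+    try:
+        async for msg in ws:
+            if msg.type == WSMsgType.TEXT:
+                # echo the pressed key's code back to this client
+                await ws.send_str("pressed: {}".format(msg.data))
+    finally:
+        request.app["sockets"].remove(ws)
+    return ws
+```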
+ +So we have just created a server which is processing client's keypresses, while the main game loop is doing some work in the background and updates all clients periodically. + +#### Example 3.2: Starting game loop by request + +[Example 3.2 source code][7] + +In the previous example a game loop was running continuously all the time during the life of the server. But in practice, there is usually no sense to run game loop when no one is connected. Also, there may be different game "rooms" running on one server. In this concept one player "creates" a game session (a match in a multiplayer game or a raid in MMO for example) so other players may join it. Then a game loop runs while the game session continues. + +In this example we use a global flag to check if a game loop is running, and we start it when the first player connects. In the beginning, a game loop is not running, so the flag is set to False. A game loop is launched from the client's handler: + +``` + if app["game_is_running"] == False: + asyncio.ensure_future(game_loop(app)) +``` + +This flag is then set to True at the start of game loop() and then back to False in the end, when all clients are disconnected. + +#### Example 3.3: Managing tasks + +[Example 3.3 source code][8] + +This example illustrates working with task objects. Instead of storing a flag, we store game loop's task directly in our application's global dict. This may be not an optimal thing to do in a simple case like this, but sometimes you may need to control already launched tasks. +``` + if app["game_loop"] is None or \ + app["game_loop"].cancelled(): + app["game_loop"] = asyncio.ensure_future(game_loop(app)) +``` + +Here ensure_future() returns a task object that we store in a global dict; and when all users disconnect, we cancel it with + +``` + app["game_loop"].cancel() +``` + +This cancel() call tells scheduler not to pass execution to this coroutine anymore and sets its state to cancelled which then can be checked by cancelled() method. And here is one caveat worth to mention: when you have external references to a task object and exception happens in this task, this exception will not be raised. Instead, an exception is set to this task and may be checked by exception() method. Such silent fails are not useful when debugging a code. Thus, you may want to raise all exceptions instead. To do so you need to call result() method of unfinished task explicitly. This can be done in a callback: + +``` + app["game_loop"].add_done_callback(lambda t: t.result()) +``` + +Also if we are going to cancel this task in our code and we don't want to have CancelledError exception, it has a point checking its "cancelled" state: +``` + app["game_loop"].add_done_callback(lambda t: t.result() + if not t.cancelled() else None) +``` + +Note that this is required only if you store a reference to your task objects. In the previous examples all exceptions are raised directly without additional callbacks. + +#### Example 3.4: Waiting for multiple events + +[Example 3.4 source code][9] + +In many cases, you need to wait for multiple events inside client's handler. Beside a message from a client, you may need to wait for different types of things to happen. For instance, if your game's time is limited, you may wait for a signal from timer. Or, you may wait for a message from other process using pipes. Or, for a message from a different server in the network, using a distributed messaging system. + +This example is based on example 3.1 for simplicity. 
But in this case we use Condition object to synchronize game loop with connected clients. We do not keep a global list of sockets here as we are using sockets only within the handler. When game loop iteration ends, we notify all clients using Condition.notify_all() method. This method allows implementing publish/subscribe pattern within asyncio event loop. + +To wait for two events in the handler, first, we wrap awaitable objects in a task using ensure_future() + +``` + if not recv_task: + recv_task = asyncio.ensure_future(ws.receive()) + if not tick_task: + await tick.acquire() + tick_task = asyncio.ensure_future(tick.wait()) +``` + +Before we can call Condition.wait(), we need to acquire a lock behind it. That is why, we call tick.acquire() first. This lock is then released after calling tick.wait(), so other coroutines may use it too. But when we get a notification, a lock will be acquired again, so we need to release it calling tick.release() after received notification. + +We are using asyncio.wait() coroutine to wait for two tasks. + +``` + done, pending = await asyncio.wait( + [recv_task, + tick_task], + return_when=asyncio.FIRST_COMPLETED) +``` + +It blocks until either of tasks from the list is completed. Then it returns 2 lists: tasks which are done and tasks which are still running. If the task is done, we set it to None so it may be created again on the next iteration. + +#### Example 3.5: Combining with threads + +[Example 3.5 source code][10] + +In this example we combine asyncio loop with threads by running the main game loop in a separate thread. As I mentioned before, it's not possible to perform real parallel execution of python code with threads because of GIL. So it is not a good idea to use other thread to do heavy calculations. However, there is one reason to use threads with asyncio: this is the case when you need to use other libraries which do not support asyncio. Using these libraries in the main thread will simply block execution of the loop, so the only way to use them asynchronously is to run in a different thread. + +We run game loop using run_in_executor() method of asyncio loop and ThreadPoolExecutor. Note that game_loop() is not a coroutine anymore. It is a function that is executed in another thread. However, we need to interact with the main thread to notify clients on the game events. And while asyncio itself is not threadsafe, it has methods which allow running your code from another thread. These are call_soon_threadsafe() for normal functions and run_coroutine_threadsafe() for coroutines. We will put a code which notifies clients about game's tick to notify() coroutine and runs it in the main event loop from another thread. + +``` +def game_loop(asyncio_loop): + print("Game loop thread id {}".format(threading.get_ident())) + async def notify(): + print("Notify thread id {}".format(threading.get_ident())) + await tick.acquire() + tick.notify_all() + tick.release() + + while 1: + task = asyncio.run_coroutine_threadsafe(notify(), asyncio_loop) + # blocking the thread + sleep(1) + # make sure the task has finished + task.result() +``` + +When you launch this example, you will see that "Notify thread id" is equal to "Main thread id", this is because notify() coroutine is executed in the main thread. While sleep(1) call is executed in another thread, and, as a result, it will not block the main event loop. 
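+
+The listing above shows only the thread function itself. A minimal sketch of the launch code, assuming the game_loop function from the listing and the ThreadPoolExecutor mentioned above, might look like this:
+
+```
+import asyncio
+from concurrent.futures import ThreadPoolExecutor
+
+# hand the blocking game_loop over to a worker thread so the asyncio
+# event loop in the main thread stays free to serve websocket clients
+asyncio_loop = asyncio.get_event_loop()
+executor = ThreadPoolExecutor(max_workers=1)
+asyncio_loop.run_in_executor(executor, game_loop, asyncio_loop)
+asyncio_loop.run_forever()
+```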
+ +#### Example 3.6: Multiple processes and scaling up + +[Example 3.6 source code][11] + +One threaded server may work well, but it is limited to one CPU core. To scale the server beyond one core, we need to run multiple processes containing their own event loops. So we need a way for processes to interact with each other by exchanging messages or sharing game's data. Also in games, it is often required to perform heavy calculations, such as path finding and alike. These tasks are sometimes not possible to complete quickly within one game tick. It is not recommended to perform time-consuming calculations in coroutines, as it will block event processing, so in this case, it may be reasonable to pass the heavy task to other process running in parallel. + +The easiest way to utilize multiple cores is to launch multiple single core servers, like in the previous examples, each on a different port. You can do this with supervisord or similar process-controller system. Then, you may use a load balancer, such as HAProxy, to distribute connecting clients between the processes. There are different ways for processes to interact wich each other. One is to use network-based systems, which allows you to scale to multiple servers as well. There are already existing adapters to use popular messaging and storage systems with asyncio. Here are some examples: + +- [aiomcache][12] for memcached client +- [aiozmq][13] for zeroMQ +- [aioredis][14] for Redis storage and pub/sub + +You can find many other packages like this on github and pypi, most of them have "aio" prefix. + +Using network services may be effective to store persistent data and exchange some kind of messages. But its performance may be not enough if you need to perform real-time data processing that involves inter-process communications. In this case, a more appropriate way may be using standard unix pipes. asyncio has support for pipes and there is a [very low-level example of the server which uses pipes][15] in aiohttp repository. + +In the current example, we will use python's high-level [multiprocessing][16] library to instantiate new process to perform heavy calculations on a different core and to exchange messages with this process using multiprocessing.Queue. Unfortunately, the current implementation of multiprocessing is not compatible with asyncio. So every blocking call will block the event loop. But this is exactly the case where threads will be helpful because if we run multiprocessing code in a different thread, it will not block our main thread. All we need is to put all inter-process communications to another thread. This example illustrates this technique. It is very similar to multi-threading example above, but we create a new process from a thread. + +``` +def game_loop(asyncio_loop): + # coroutine to run in main thread + async def notify(): + await tick.acquire() + tick.notify_all() + tick.release() + + queue = Queue() + + # function to run in a different process + def worker(): + while 1: + print("doing heavy calculation in process {}".format(os.getpid())) + sleep(1) + queue.put("calculation result") + + Process(target=worker).start() + + while 1: + # blocks this thread but not main thread with event loop + result = queue.get() + print("getting {} in process {}".format(result, os.getpid())) + task = asyncio.run_coroutine_threadsafe(notify(), asyncio_loop) + task.result() +``` + +Here we run worker() function in another process. 
It contains a loop doing heavy calculations and putting results to the queue, which is an instance of multiprocessing.Queue. Then we get the results and notify clients in the main event loop from a different thread, exactly as in the example 3.5. This example is very simplified, it doesn't have a proper termination of the process. Also, in a real game, we would probably use the second queue to pass data to the worker. + +There is a project called [aioprocessing][17], which is a wrapper around multiprocessing that makes it compatible with asyncio. However, it uses exactly the same approach as described in this example - creating processes from threads. It will not give you any advantage, other than hiding these tricks behind a simple interface. Hopefully, in the next versions of Python, we will get a multiprocessing library based on coroutines and supports asyncio. + +>Important! If you are going to run another asyncio event loop in a different thread or sub-process created from main thread/process, you need to create a loop explicitly, using asyncio.new_event_loop(), otherwise, it will not work. + +-------------------------------------------------------------------------------- + +via: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-writing-game-loop/ + +作者:[Kyrylo Subbotin][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-writing-game-loop/ +[1]: http://snakepit-game.com/ +[2]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-asyncio-getting-asynchronous/ +[3]: http://snakepit-game.com/ +[4]: https://github.com/7WebPages/snakepit-game/blob/master/simple/index.html +[5]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_basic.py +[6]: http://aiohttp.readthedocs.org/ +[7]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_handler.py +[8]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_global.py +[9]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_wait.py +[10]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_thread.py +[11]: https://github.com/7WebPages/snakepit-game/blob/master/simple/game_loop_process.py +[12]: https://github.com/aio-libs/aiomcache +[13]: https://github.com/aio-libs/aiozmq +[14]: https://github.com/aio-libs/aioredis +[15]: https://github.com/KeepSafe/aiohttp/blob/master/examples/mpsrv.py +[16]: https://docs.python.org/3.5/library/multiprocessing.html +[17]: https://github.com/dano/aioprocessing diff --git a/sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md b/sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md new file mode 100644 index 0000000000..6ab74e3527 --- /dev/null +++ b/sources/tech/20160602 How to build and deploy a Facebook Messenger bot with Python and Flask.md @@ -0,0 +1,320 @@ +wyangsun translating +How to build and deploy a Facebook Messenger bot with Python and Flask, a tutorial +========================================================================== + +This is my log of how I built a simple Facebook Messenger bot. The functionality is really simple, it’s an echo bot that will just print back to the user what they write. 
+ +This is something akin to the Hello World example for servers, the echo server. + +The goal of the project is not to build the best Messenger bot, but rather to get a feel for what it takes to build a minimal bot and how everything comes together. + +- [Tech Stack][1] +- [Bot Architecture][2] +- [The Bot Server][3] +- [Deploying to Heroku][4] +- [Creating the Facebook App][5] +- [Conclusion][6] + +### Tech Stack + +The tech stack that was used is: + +- [Heroku][7] for back end hosting. The free-tier is more than enough for a tutorial of this level. The echo bot does not require any sort of data persistence so a database was not used. +- [Python][8] was the language of choice. The version that was used is 2.7 however it can easily be ported to Python 3 with minor alterations. +- [Flask][9] as the web development framework. It’s a very lightweight framework that’s perfect for small scale projects/microservices. +- Finally the [Git][10] version control system was used for code maintenance and to deploy to Heroku. +- Worth mentioning: [Virtualenv][11]. This python tool is used to create “environments” clean of python libraries so you can only install the necessary requirements and minimize the app footprint. + +### Bot Architecture + +Messenger bots are constituted by a server that responds to two types of requests: + +- GET requests are being used for authentication. They are sent by Messenger with an authentication code that you register on FB. +- POST requests are being used for the actual communication. The typical workflow is that the bot will initiate the communication by sending the POST request with the data of the message sent by the user, we will handle it, send a POST request of our own back. If that one is completed successfully (a 200 OK status is returned) we also respond with a 200 OK code to the initial Messenger request. +For this tutorial the app will be hosted on Heroku, which provides a nice and easy interface to deploy apps. As mentioned the free tier will suffice for this tutorial. + +After the app has been deployed and is running, we’ll create a Facebook app and link it to our app so that messenger knows where to send the requests that are meant for our bot. + +### The Bot Server +The basic server code was taken from the following [Chatbot][12] project by Github user [hult (Magnus Hult)][13], with a few modifications to the code to only echo messages and a couple bugfixes I came across. This is the final version of the server code: + +``` +from flask import Flask, request +import json +import requests + +app = Flask(__name__) + +# This needs to be filled with the Page Access Token that will be provided +# by the Facebook App that will be created. +PAT = '' + +@app.route('/', methods=['GET']) +def handle_verification(): + print "Handling Verification." + if request.args.get('hub.verify_token', '') == 'my_voice_is_my_password_verify_me': + print "Verification successful!" + return request.args.get('hub.challenge', '') + else: + print "Verification failed!" + return 'Error, wrong validation token' + +@app.route('/', methods=['POST']) +def handle_messages(): + print "Handling Messages" + payload = request.get_data() + print payload + for sender, message in messaging_events(payload): + print "Incoming from %s: %s" % (sender, message) + send_message(PAT, sender, message) + return "ok" + +def messaging_events(payload): + """Generate tuples of (sender_id, message_text) from the + provided payload. 
+ """ + data = json.loads(payload) + messaging_events = data["entry"][0]["messaging"] + for event in messaging_events: + if "message" in event and "text" in event["message"]: + yield event["sender"]["id"], event["message"]["text"].encode('unicode_escape') + else: + yield event["sender"]["id"], "I can't echo this" + + +def send_message(token, recipient, text): + """Send the message text to recipient with id recipient. + """ + + r = requests.post("https://graph.facebook.com/v2.6/me/messages", + params={"access_token": token}, + data=json.dumps({ + "recipient": {"id": recipient}, + "message": {"text": text.decode('unicode_escape')} + }), + headers={'Content-type': 'application/json'}) + if r.status_code != requests.codes.ok: + print r.text + +if __name__ == '__main__': + app.run() +``` + +Let’s break down the code. The first part is the imports that will be needed: + +``` +from flask import Flask, request +import json +import requests +``` + +Next we define the two functions (using the Flask specific app.route decorators) that will handle the GET and POST requests to our bot. + +``` +@app.route('/', methods=['GET']) +def handle_verification(): + print "Handling Verification." + if request.args.get('hub.verify_token', '') == 'my_voice_is_my_password_verify_me': + print "Verification successful!" + return request.args.get('hub.challenge', '') + else: + print "Verification failed!" + return 'Error, wrong validation token' +``` + +The verify_token object that is being sent by Messenger will be declared by us when we create the Facebook app. We have to validate the one we are being have against itself. Finally we return the “hub.challenge” back to Messenger. + +The function that handles the POST requests is a bit more interesting. + +``` +@app.route('/', methods=['POST']) +def handle_messages(): + print "Handling Messages" + payload = request.get_data() + print payload + for sender, message in messaging_events(payload): + print "Incoming from %s: %s" % (sender, message) + send_message(PAT, sender, message) + return "ok" +``` + +When called we grab the massage payload, use function messaging_events to break it down and extract the sender user id and the actual message sent, generating a python iterator that we can loop over. Notice that in each request sent by Messenger it is possible to have more than one messages. + +``` +def messaging_events(payload): + """Generate tuples of (sender_id, message_text) from the + provided payload. + """ + data = json.loads(payload) + messaging_events = data["entry"][0]["messaging"] + for event in messaging_events: + if "message" in event and "text" in event["message"]: + yield event["sender"]["id"], event["message"]["text"].encode('unicode_escape') + else: + yield event["sender"]["id"], "I can't echo this" +``` + +While iterating over each message we call the send_message function and we perform the POST request back to Messnger using the Facebook Graph messages API. During this time we still have not responded to the original Messenger request which we are blocking. This can lead to timeouts and 5XX errors. + +The above was spotted during an outage due to a bug I came across, which was occurred when the user was sending emojis which are actual unicode ids, however Python was miss-encoding. We ended up sending back garbage. + +This POST request back to Messenger would never finish, and that in turn would cause 5XX status codes to be returned to the original request, rendering the service unusable. 
+
+This was fixed by escaping the messages with `encode('unicode_escape')` and then, just before sending the message back, decoding it with `decode('unicode_escape')`.
+
+```
+def send_message(token, recipient, text):
+    """Send the message text to recipient with id recipient.
+    """
+
+    r = requests.post("https://graph.facebook.com/v2.6/me/messages",
+        params={"access_token": token},
+        data=json.dumps({
+          "recipient": {"id": recipient},
+          "message": {"text": text.decode('unicode_escape')}
+        }),
+        headers={'Content-type': 'application/json'})
+    if r.status_code != requests.codes.ok:
+        print r.text
+```
+
+### Deploying to Heroku
+
+Once the code was built to my liking it was time for the next step.
+Deploy the app.
+
+Sure, but how?
+
+I have deployed apps to Heroku before (mainly Rails), however I was always following a tutorial of some sort, so the configuration had already been created. In this case though I had to start from scratch.
+
+Fortunately it was the official [Heroku documentation][14] to the rescue. The article explains nicely the bare minimum required for running an app.
+
+Long story short, what we need besides our code are two files. The first file is the "requirements.txt" file, which is a list of the library dependencies required to run the application.
+
+The second file required is the "Procfile". This file is there to inform Heroku how to run our service. Again the bare minimum needed for this file is the following:
+
+>web: gunicorn echoserver:app
+
+The way this will be interpreted by Heroku is that our app is started by running the echoserver.py file and the app will be using gunicorn as the web server. The reason we are using an additional web server is performance-related and is explained in the above Heroku documentation:
+
+>Web applications that process incoming HTTP requests concurrently make much more efficient use of dyno resources than web applications that only process one request at a time. Because of this, we recommend using web servers that support concurrent request processing whenever developing and running production services.
+
+>The Django and Flask web frameworks feature convenient built-in web servers, but these blocking servers only process a single request at a time. If you deploy with one of these servers on Heroku, your dyno resources will be underutilized and your application will feel unresponsive.
+
+>Gunicorn is a pure-Python HTTP server for WSGI applications. It allows you to run any Python application concurrently by running multiple Python processes within a single dyno. It provides a perfect balance of performance, flexibility, and configuration simplicity.
+
+Going back to our "requirements.txt" file, let's see how it binds with the Virtualenv tool that was mentioned.
+
+At any time, your development machine may have a number of Python libraries installed. When deploying applications you don't want to have these libraries loaded as it makes it hard to make out which ones you actually use.
+
+What Virtualenv does is create a new blank virtual environment so that you can only install the libraries that your app requires.
You can check which libraries are currently installed by running the following command:
+
+```
+kostis@KostisMBP ~ $ pip freeze
+cycler==0.10.0
+Flask==0.10.1
+gunicorn==19.6.0
+itsdangerous==0.24
+Jinja2==2.8
+MarkupSafe==0.23
+matplotlib==1.5.1
+numpy==1.10.4
+pyparsing==2.1.0
+python-dateutil==2.5.0
+pytz==2015.7
+requests==2.10.0
+scipy==0.17.0
+six==1.10.0
+virtualenv==15.0.1
+Werkzeug==0.11.10
+```
+
+Note: The pip tool should already be installed on your machine along with Python.
+
+If not, check the [official site][15] for how to install it.
+
+Now let's use Virtualenv to create a new blank environment. First we create a new folder for our project, and change directory into it:
+
+```
+kostis@KostisMBP projects $ mkdir echoserver
+kostis@KostisMBP projects $ cd echoserver/
+kostis@KostisMBP echoserver $
+```
+
+Now let's create a new environment called echobot. To activate it you run the following source command, and checking with pip freeze we can see that it's now empty.
+
+```
+kostis@KostisMBP echoserver $ virtualenv echobot
+kostis@KostisMBP echoserver $ source echobot/bin/activate
+(echobot) kostis@KostisMBP echoserver $ pip freeze
+(echobot) kostis@KostisMBP echoserver $
+```
+
+We can start installing the libraries required. The ones we'll need are flask, gunicorn, and requests, and with them installed we create the requirements.txt file:
+
+```
+(echobot) kostis@KostisMBP echoserver $ pip install flask
+(echobot) kostis@KostisMBP echoserver $ pip install gunicorn
+(echobot) kostis@KostisMBP echoserver $ pip install requests
+(echobot) kostis@KostisMBP echoserver $ pip freeze
+click==6.6
+Flask==0.11
+gunicorn==19.6.0
+itsdangerous==0.24
+Jinja2==2.8
+MarkupSafe==0.23
+requests==2.10.0
+Werkzeug==0.11.10
+(echobot) kostis@KostisMBP echoserver $ pip freeze > requirements.txt
+```
+
+After all the above have been run, we create the echoserver.py file with the Python code and the Procfile with the command that was mentioned, and we should end up with the following files/folders:
+
+```
+(echobot) kostis@KostisMBP echoserver $ ls
+Procfile echobot echoserver.py requirements.txt
+```
+
+We are now ready to upload to Heroku. We need to do two things. The first is to install the Heroku toolbelt if it's not already installed on your system (go to [Heroku][16] for details). The second is to create a new Heroku app through the [web interface][17].
+
+Click on the big plus sign on the top right and select "Create new app".
+ + + + + + + + +-------------------------------------------------------------------------------- + +via: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/ + +作者:[Konstantinos Tsaprailis][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://github.com/kostistsaprailis +[1]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#tech-stack +[2]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#bot-architecture +[3]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#the-bot-server +[4]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#deploying-to-heroku +[5]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#creating-the-facebook-app +[6]: http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/#conclusion +[7]: https://www.heroku.com +[8]: https://www.python.org +[9]: http://flask.pocoo.org +[10]: https://git-scm.com +[11]: https://virtualenv.pypa.io/en/stable +[12]: https://github.com/hult/facebook-chatbot-python +[13]: https://github.com/hult +[14]: https://devcenter.heroku.com/articles/python-gunicorn +[15]: https://pip.pypa.io/en/stable/installing +[16]: https://toolbelt.heroku.com +[17]: https://dashboard.heroku.com/apps + + diff --git a/sources/tech/20160604 Microfluidic cooling may prevent the demise of Moore's Law.md b/sources/tech/20160604 Microfluidic cooling may prevent the demise of Moore's Law.md new file mode 100644 index 0000000000..66548921e2 --- /dev/null +++ b/sources/tech/20160604 Microfluidic cooling may prevent the demise of Moore's Law.md @@ -0,0 +1,67 @@ +Microfluidic cooling may prevent the demise of Moore's Law +============================================================ + +![](http://tr1.cbsistatic.com/hub/i/r/2015/12/09/a7cb82d1-96e8-43b5-bfbd-d4593869b230/resize/620x/9607388a284e3a61a39f4399a9202bd7/networkingistock000042544852agsandrew.jpg) +>Image: iStock/agsandrew + +Existing technology's inability to keep microchips cool is fast becoming the number one reason why [Moore's Law][1] may soon meet its demise. + +In the ongoing need for digital speed, scientists and engineers are working hard to squeeze more transistors and support circuitry onto an already-crowded piece of silicon. However, as complex as that seems, it pales in comparison to the [problem of heat buildup][2]. + +"Right now, we're limited in the power we can put into microchips," says John Ditri, principal investigator at Lockheed Martin in [this press release][3]. "One of the biggest challenges is managing the heat. If you can manage the heat, you can use fewer chips, and that means using less material, which results in cost savings as well as reduced system size and weight. If you manage the heat and use the same number of chips, you'll get even greater performance in your system." + +Resistance to the flow of electrons through silicon causes the heat, and packing so many transistors in such a small space creates enough heat to destroy components. 
One way to eliminate heat buildup is to reduce the flow of electrons by [using photonics at the chip level][4]. However, photonic technology is not without its set of problems. + +SEE: [Silicon photonics will revolutionize data centers in 2015][5] + +### Microfluid cooling might be the answer + +To seek out other solutions, the Defense Advanced Research Projects Agency (DARPA) has initiated a program called [ICECool Applications][6] (Intra/Interchip Enhanced Cooling). "ICECool is exploring disruptive thermal technologies that will mitigate thermal limitations on the operation of military electronic systems while significantly reducing the size, weight, and power consumption," explains the [GSA website FedBizOpps.gov][7]. + +What is unique about this method of cooling is the push to use a combination of intra- and/or inter-chip microfluidic cooling and on-chip thermal interconnects. + +![](http://tr4.cbsistatic.com/hub/i/r/2016/05/25/fd3d0d17-bd86-4d25-a89a-a7050c4d59c4/resize/300x/e9c18034bde66526310c667aac92fbf5/microcooling-1.png) +>MicroCooling 1 Image: DARPA + +The [DARPA ICECool Application announcement][8] notes, "Such miniature intra- and/or inter-chip passages (see right) may take the form of axial micro-channels, radial passages, and/or cross-flow passages, and may involve micro-pores and manifolded structures to distribute and re-direct liquid flow, including in the form of localized liquid jets, in the most favorable manner to meet the specified heat flux and heat density metrics." + +Using the above technology, engineers at Lockheed Martin have experimentally demonstrated how on-chip cooling is a significant improvement. "Phase I of the ICECool program verified the effectiveness of Lockheed's embedded microfluidic cooling approach by showing a four-times reduction in thermal resistance while cooling a thermal demonstration die dissipating 1 kW/cm2 die-level heat flux with multiple local 30 kW/cm2 hot spots," mentions the Lockheed Martin press release. + +In phase II of the Lockheed Martin project, the engineers focused on RF amplifiers. The press release continues, "Utilizing its ICECool technology, the team has been able to demonstrate greater than six times increase in RF output power from a given amplifier while still running cooler than its conventionally cooled counterpart." + +### Moving to production + +Confident of the technology, Lockheed Martin is already designing and building a functional microfluidic cooled transmit antenna. Lockheed Martin is also collaborating with Qorvo to integrate its thermal solution with Qorvo's high-performance [GaN process][9]. + +The authors of the research paper [DARPA's Intra/Interchip Enhanced Cooling (ICECool) Program][10] suggest ICECool Applications will produce a paradigm shift in the thermal management of electronic systems. "ICECool Apps performers will define and demonstrate intra-chip and inter-chip thermal management approaches that are tailored to specific applications and this approach will be consistent with the materials sets, fabrication processes, and operating environment of the intended application." + +If this microfluidic technology is as successful as scientists and engineers suggest, it seems Moore's Law does have a fighting chance. + +For more about networking, subscribe to our Data Centers newsletter. 
+
+[SUBSCRIBE](https://secure.techrepublic.com/user/login/?regSource=newsletter-button&position=newsletter-button&appId=true&redirectUrl=http%3A%2F%2Fwww.techrepublic.com%2Farticle%2Fmicrofluidic-cooling-may-prevent-the-demise-of-moores-law%2F&)
+
+--------------------------------------------------------------------------------
+
+via: http://www.techrepublic.com/article/microfluidic-cooling-may-prevent-the-demise-of-moores-law/
+
+作者:[Michael Kassner][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.techrepublic.com/search/?a=michael+kassner
+[1]: http://www.intel.com/content/www/us/en/history/museum-gordon-moore-law.html
+[2]: https://books.google.com/books?id=mfec2Zw_b7wC&pg=PA154&lpg=PA154&dq=does+heat+destroy+transistors&source=bl&ots=-aNdbMD7FD&sig=XUUiaYG_6rcxHncx4cI4Cqe3t20&hl=en&sa=X&ved=0ahUKEwif4M_Yu_PMAhVL7oMKHW3GC3cQ6AEITTAH#v=onepage&q=does%20heat%20destroy%20transis
+[3]: http://www.lockheedmartin.com/us/news/press-releases/2016/march/160308-mst-cool-technology-turns-down-the-heat-on-high-tech-equipment.html
+[4]: http://www.techrepublic.com/article/silicon-photonics-will-revolutionize-data-centers-in-2015/
+[5]: http://www.techrepublic.com/article/silicon-photonics-will-revolutionize-data-centers-in-2015/
+[6]: https://www.fbo.gov/index?s=opportunity&mode=form&id=0be99f61fbac0501828a9d3160883b97&tab=core&_cview=1
+[7]: https://www.fbo.gov/index?s=opportunity&mode=form&id=0be99f61fbac0501828a9d3160883b97&tab=core&_cview=1
+[8]: https://www.fbo.gov/index?s=opportunity&mode=form&id=0be99f61fbac0501828a9d3160883b97&tab=core&_cview=1
+[9]: http://electronicdesign.com/communications/what-s-difference-between-gaas-and-gan-rf-power-amplifiers
+[10]: http://www.csmantech.org/Digests/2013/papers/050.pdf
diff --git a/sources/tech/20160606 Writing online multiplayer game with python and asyncio - part 3.md b/sources/tech/20160606 Writing online multiplayer game with python and asyncio - part 3.md
new file mode 100644
index 0000000000..3c38af72ac
--- /dev/null
+++ b/sources/tech/20160606 Writing online multiplayer game with python and asyncio - part 3.md
@@ -0,0 +1,138 @@
+chunyang-wen translating
+Writing online multiplayer game with python and asyncio - Part 3
+=================================================================
+
+![](https://7webpages.com/media/cache/17/81/178135a6db5074c72a1394d31774c658.gif)
+
+In this series, we are making an asynchronous Python app on the example of a multiplayer [Snake game][1]. The previous article focused on [Writing Game Loop][2], and Part 1 covered how to [Get Asynchronous][3].
+
+You can find the code [here][4].
+
+### 4. Making a complete game
+
+![](https://7webpages.com/static/img/14chs7.gif)
+
+#### 4.1 Project's overview
+
+In this part, we will review the design of a complete online game. It is a classic snake game with added multiplayer. You can try it yourself at (http://snakepit-game.com/). The source code is located in the [github repository][5]. The game consists of the following files:
+
+- [server.py][6] - the server handling the main game loop and connections.
+- [game.py][7] - the main Game class, which implements the game's logic and most of the game's network protocol.
+- [player.py][8] - the Player class, containing an individual player's data and the snake's representation. It is responsible for getting the player's input and moving the snake accordingly.
+- [datatypes.py][9] - basic data structures.
+- [settings.py][10] - game settings, with descriptions in the comments.
+- [index.html][11] - the entire HTML and JavaScript client part in one file.
+
+#### 4.2 Inside a game loop
+
+The multiplayer snake game is a good example to study because of its simplicity. All snakes move by one position every frame, and frames change at a very slow rate, letting you actually watch how the game engine works. Because of the slow speed, there is no instant reaction to a player's keypresses. A pressed key is remembered and then taken into account when the next frame is calculated, at the end of the game loop's iteration.
+
+> Modern action games run at much higher frame rates, and the frame rates of the server and client are often not equal. The client frame rate usually depends on the client hardware performance, while the server frame rate is fixed. A client may render several frames after getting the data corresponding to one "game tick". This allows for smooth animations, which are limited only by the client's performance. In this case, the server should pass not only the current positions of the objects but also their moving directions, speeds and velocities. And while the client frame rate is called FPS (frames per second), the server frame rate is called TPS (ticks per second). In this snake game example, both values are equal, and one frame displayed by the client is calculated within one server tick.
+
+We will use a text-mode-like play field, which is, in fact, an HTML table with one-character cells. All objects of the game are displayed with characters of different colors placed in the table's cells. Most of the time, the client passes pressed key codes to the server and gets back play field updates with every "tick". An update from the server consists of messages representing the characters to render, along with their coordinates and colors. So we keep all the game logic on the server and send the clients only rendering data. In addition, we reduce the opportunities to hack the game by tampering with the information sent over the network.
+
+#### 4.3 How does it work?
+
+For simplicity, the server in this game is close to Example 3.2. But instead of having a global list of connected websockets, we have one server-wide Game object. A Game instance contains a list of Player objects (inside the self._players attribute) representing the players connected to this game, their personal data and websocket objects. Having all game-related data in a Game object also allows us to have multiple game rooms if we want to add such a feature. In that case, we need to maintain multiple Game objects, one per game started.
+
+All interactions between the server and clients are done with messages encoded in JSON. A message from the client containing only a number is interpreted as the code of the key pressed by the player. Other messages from the client are sent in the following format:
+
+```
+[command, arg1, arg2, ... argN ]
+```
+
+Messages from the server are sent as a list because there is often a bunch of messages to send at once (mostly rendering data):
+
+```
+[[command, arg1, arg2, ... argN ], ... ]
+```
+
+At the end of every game loop iteration, the next frame is calculated and sent to all the clients. Of course, we are not sending the complete frame every time, but only a list of changes for the next frame.
+
+Note that players do not join the game immediately after connecting to the server. The connection starts in "spectator" mode, so a newcomer can watch how others are playing
if the game is already started, or see the "game over" screen from the previous game session. A player may then press the "Join" button to join the existing game, or to create a new game if no game is currently running (no other active players). In the latter case, the play field is cleared before the start.
+
+The play field is stored in the Game._world attribute, which is a 2D array made of nested lists. It is used to keep the game field's state internally. Each element of the array represents a field cell, which is then rendered into an HTML table cell. The cells have the type Char, a namedtuple consisting of a character and a color. It is important to keep the play field in sync with all the connected clients, so all updates to the play field should be made only along with sending the corresponding messages to the clients. This is performed by the Game.apply_render() method. It receives a list of Draw objects, which is used to update the play field internally and also to send render messages to the clients.
+
+We use a namedtuple not only because it is a good way to represent simple data structures, but also because it takes less space compared to a dict when sent in a JSON message. If you are sending complex data structures in a real game app, it is recommended to serialize them into a plain and shorter format, or even pack them in a binary format (such as BSON instead of JSON) to minimize network traffic.
+
+The Player object contains the snake's representation in a deque object. This data type is similar to a list, but is more efficient for adding and removing elements at its ends, so it is ideal for representing a moving snake. The main method of the class is Player.render_move(); it returns the rendering data needed to move the player's snake to the next position. Basically, it renders the snake's head in the new position and removes the last element, where the tail was in the previous frame. In case the snake has eaten a digit and has to grow, the tail does not move for a corresponding number of frames. The snake rendering data is used in the Game.next_frame() method of the main class, which implements all the game logic. This method renders all snake moves, checks for obstacles in front of every snake, and also spawns digits and "stones". It is called directly from game_loop() to generate the next frame at every "tick".
+
+In case there is an obstacle in front of a snake's head, the Game.game_over() method is called from Game.next_frame(). It notifies all connected clients about the dead snake (which is turned into stones by the player.render_game_over() method) and updates the top scores table. The Player object's alive flag is set to False, so this player will be skipped when rendering the next frames, until they join the game once again. If there are no more snakes alive, a "game over" message is rendered on the game field. Also, the main game loop stops and sets the game.running flag to False, which causes the game field to be cleared the next time a player presses the "Join" button.
+
+Spawning of digits and stones also happens while rendering every next frame, and it is determined by random values. The chance to spawn a digit or a stone can be changed in settings.py, along with some other values. Note that digit spawning happens for every live snake in the play field, so the more snakes there are, the more digits will appear, and all of them will have enough food to consume.
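+
+As a side note, here is a small, hypothetical sketch of how the Char/Draw namedtuples and a deque-based snake described above might fit together. It follows the descriptions of datatypes.py and Player.render_move() given in this section, but the names and details are assumptions for illustration, not the project's actual code:
+
+```
+from collections import namedtuple, deque
+
+# Assumed shapes, modeled on the article's description of datatypes.py;
+# the real project may define these differently.
+Char = namedtuple("Char", ("char", "color"))
+Draw = namedtuple("Draw", ("x", "y", "char", "color"))
+
+class SnakeSketch:
+    """A toy snake: a deque of (x, y) cells, head at the left end."""
+
+    def __init__(self, start, color):
+        self.color = color
+        self.body = deque([start])
+        self.grow = 0  # frames during which the tail stays in place
+
+    def render_move(self, new_head):
+        """Return Draw commands that advance the snake by one cell."""
+        x, y = new_head
+        draws = [Draw(x, y, "@", self.color)]  # draw the new head
+        self.body.appendleft(new_head)         # O(1) insertion at the front
+        if self.grow:
+            self.grow -= 1                     # a digit was eaten: keep the tail
+        else:
+            tx, ty = self.body.pop()           # O(1) removal at the back
+            draws.append(Draw(tx, ty, " ", 0)) # erase the old tail cell
+        return draws
+```
+
+The deque is what makes this pattern cheap: appends and pops at both ends are O(1), which is exactly the access pattern of a moving snake, whereas a plain list would pay O(n) for insertions at the front.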
+
+#### 4.4 Network protocol
+
+List of messages sent from the client:
+
+Command | Parameters |Description
+:-- |:-- |:--
+new_player | [name] |Set the player's nickname
+join | |The player is joining the game
+
+
+List of messages sent from the server:
+
+Command | Parameters |Description
+:-- |:-- |:--
+handshake |[id] |Assign an id to a player
+world |[[(char, color), ...], ...] |Initial play field (world) map
+reset_world | |Clean up the world map, replacing all characters with spaces
+render |[x, y, char, color] |Display a character at a position
+p_joined |[id, name, color, score] |A new player joined the game
+p_gameover |[id] |The game ended for a player
+p_score |[id, score] |Set the score for a player
+top_scores |[[name, score, color], ...] |Update the top scores table
+
+Typical message exchange order:
+
+Client -> Server |Server -> Client |Server -> All clients |Comments
+:-- |:-- |:-- |:--
+new_player | | |Name passed to the server
+ |handshake | |ID assigned
+ |world | |Initial world map passed
+ |top_scores | |Recent top scores table passed
+join | | |Player pressed "Join", game loop started
+ | |reset_world |Command the clients to clean up the play field
+ | |render, render, ... |First game tick, first frame rendered
+(key code) | | |Player pressed a key
+ | |render, render, ... |Second frame rendered
+ | |p_score |Snake has eaten a digit
+ | |render, render, ... |Third frame rendered
+ | | |... Repeat for a number of frames ...
+ | |p_gameover |Snake died after hitting an obstacle
+ | |top_scores |Updated top scores table (if changed)
+
+### 5. Conclusion
+
+To tell the truth, I really enjoy using the latest asynchronous capabilities of Python. The new syntax really makes a difference, and async code is now easily readable. It is obvious which calls are non-blocking and when green-thread switching happens. So now I can claim with confidence that Python is a good tool for asynchronous programming.
+
+SnakePit has become very popular with the 7WebPages team, and if you decide to take a break and play it at your company, please don't forget to leave feedback for us, say, on [Twitter][12] or [Facebook][13].
+
+Get to know more from:
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-part-3/
+
+作者:[Saheetha Shameer][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-part-3/
+[1]: http://snakepit-game.com/
+[2]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-and-asyncio-writing-game-loop/
+[3]: https://7webpages.com/blog/writing-online-multiplayer-game-with-python-asyncio-getting-asynchronous/
+[4]: https://github.com/7WebPages/snakepit-game
+[5]: https://github.com/7WebPages/snakepit-game
+[6]: https://github.com/7WebPages/snakepit-game/blob/master/server.py
+[7]: https://github.com/7WebPages/snakepit-game/blob/master/game.py
+[8]: https://github.com/7WebPages/snakepit-game/blob/master/player.py
+[9]: https://github.com/7WebPages/snakepit-game/blob/master/datatypes.py
+[10]: https://github.com/7WebPages/snakepit-game/blob/master/settings.py
+[11]: https://github.com/7WebPages/snakepit-game/blob/master/index.html
+[12]: https://twitter.com/7WebPages
+[13]: https://www.facebook.com/7WebPages/
diff --git a/sources/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md b/sources/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md
new file mode 100644
index 0000000000..65c8657dff
--- /dev/null
+++ b/sources/tech/20160608 Implementing Mandatory Access Control with SELinux or AppArmor in Linux.md
@@ -0,0 +1,248 @@
+Implementing Mandatory Access Control with SELinux or AppArmor in Linux
+===========================================================================
+
+To overcome the limitations of, and to augment, the security mechanisms provided by standard ugo/rwx permissions and [access control lists][1], the United States National Security Agency (NSA) devised a flexible Mandatory Access Control (MAC) method known as SELinux (short for Security Enhanced Linux). Among other things, it restricts the ability of processes to access or perform operations on system objects (such as files, directories, and network ports) to the least permission possible, while still allowing for later modifications to this model.
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/SELinux-AppArmor-Security-Hardening-Linux.png)
+>SELinux and AppArmor Security Hardening Linux
+
+Another popular and widely-used MAC is AppArmor, which, in addition to the features provided by SELinux, includes a learning mode that allows the system to "learn" how a specific application behaves, and to set limits by configuring profiles for safe application usage.
+
+In CentOS 7, SELinux is incorporated into the kernel itself and is enabled in Enforcing mode by default (more on this in the next section), as opposed to openSUSE and Ubuntu, which use AppArmor.
+
+In this article we will explain the essentials of SELinux and AppArmor and how to use one of these tools for your benefit depending on your chosen distribution.
+
+### Introduction to SELinux and How to Use it on CentOS 7
+
+Security Enhanced Linux can operate in two different ways:
+
+- Enforcing: SELinux denies access based on SELinux policy rules, a set of guidelines that control the security engine.
+- Permissive: SELinux does not deny access, but denials are logged for actions that would have been denied if running in enforcing mode.
+
+SELinux can also be disabled. Although it is not an operation mode itself, it is still an option. However, learning how to use this tool is better than just ignoring it. Keep it in mind!
+
+To display the current mode of SELinux, use getenforce. If you want to toggle the operation mode, use setenforce 0 (to set it to Permissive) or setenforce 1 (Enforcing).
+
+Since this change will not survive a reboot, you will need to edit the /etc/selinux/config file and set the SELINUX variable to either enforcing, permissive, or disabled in order to achieve persistence across reboots:
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Enable-Disable-SELinux-Mode.png)
+>How to Enable and Disable SELinux Mode
+
+On a side note, if getenforce returns Disabled, you will have to edit /etc/selinux/config with the desired operation mode and reboot. Otherwise, you will not be able to set (or toggle) the operation mode with setenforce.
+
+One of the typical uses of setenforce consists of toggling between SELinux modes (from enforcing to permissive or the other way around) to troubleshoot an application that is misbehaving or not working as expected. If it works after you set SELinux to Permissive mode, you can be confident you're looking at a SELinux permissions issue.
+
+Two classic cases where we will most likely have to deal with SELinux are:
+
+- Changing the default port that a daemon listens on.
+- Setting the DocumentRoot directive for a virtual host outside of /var/www/html.
+
+Let's take a look at these two cases using the following examples.
+
+#### EXAMPLE 1: Changing the default port for the sshd daemon
+
+One of the first things most system administrators do in order to secure their servers is change the port the SSH daemon listens on, mostly to discourage port scanners and external attackers. To do this, we use the Port directive in `/etc/ssh/sshd_config` followed by the new port number as follows (we will use port 9999 in this case):
+
+```
+Port 9999
+```
+
+After attempting to restart the service and checking its status, we will see that it failed to start:
+
+```
+# systemctl restart sshd
+# systemctl status sshd
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Check-sshd-Service-Status.png)
+>Check SSH Service Status
+
+If we take a look at /var/log/audit/audit.log, we will see that sshd was prevented from starting on port 9999 by SELinux because that is a reserved port for the JBoss Management service (SELinux log messages include the word "AVC" so that they can be easily distinguished from other messages):
+
+```
+# cat /var/log/audit/audit.log | grep AVC | tail -1
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Check-Linux-Audit-Logs.png)
+>Check Linux Audit Logs
+
+At this point most people would probably disable SELinux, but we won't. We will see that there's a way for SELinux, and sshd listening on a different port, to live together in harmony. Make sure you have the policycoreutils-python package installed, and run:
+
+```
+# yum install policycoreutils-python
+```
+
+To view the list of ports on which SELinux allows sshd to listen, use semanage as shown below.
In the following image, we can also see that port 9999 was reserved for another service, and thus we can't use it for the time being:
+
+```
+# semanage port -l | grep ssh
+```
+
+Of course we could choose another port for SSH, but if we are certain that we will not need to use this specific machine for any JBoss-related services, we can then modify the existing SELinux rule and assign that port to SSH instead:
+
+```
+# semanage port -m -t ssh_port_t -p tcp 9999
+```
+
+After that, we can use the first semanage command to check whether the port was correctly assigned, or use the -lC option (short for list custom):
+
+```
+# semanage port -lC
+# semanage port -l | grep ssh
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Assign-Port-to-SSH.png)
+>Assign Port to SSH
+
+We can now restart SSH and connect to the service using port 9999. Note that this change WILL survive a reboot.
+
+#### EXAMPLE 2: Choosing a DocumentRoot outside /var/www/html for a virtual host
+
+If you need to [set up an Apache virtual host][2] using a directory other than /var/www/html as DocumentRoot (say, for example, `/websrv/sites/gabriel/public_html`):
+
+```
+DocumentRoot "/websrv/sites/gabriel/public_html"
+```
+
+Apache will refuse to serve the content because the index.html file has been labeled with the default_t SELinux type, which Apache can't access:
+
+```
+# wget http://localhost/index.html
+# ls -lZ /websrv/sites/gabriel/public_html/index.html
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Labeled-default_t-SELinux-Type.png)
+>Labeled as default_t SELinux Type
+
+As with the previous example, you can use the following command to verify that this is indeed a SELinux-related issue:
+
+```
+# cat /var/log/audit/audit.log | grep AVC | tail -1
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Check-Logs-for-SELinux-Issues.png)
+>Check Logs for SELinux Issues
+
+To change the label of /websrv/sites/gabriel/public_html recursively to httpd_sys_content_t, do:
+
+```
+# semanage fcontext -a -t httpd_sys_content_t "/websrv/sites/gabriel/public_html(/.*)?"
+```
+
+The above command will grant Apache read-only access to that directory and its contents.
+
+Finally, to apply the policy (and make the label change effective immediately), do:
+
+```
+# restorecon -R -v /websrv/sites/gabriel/public_html
+```
+
+Now you should be able to access the directory:
+
+```
+# wget http://localhost/index.html
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Access-Apache-Directory.png)
+>Access Apache Directory
+
+For more information on SELinux, refer to the Fedora 22 [SELinux Users and Administrators Guide][3].
+
+
+### Introduction to AppArmor and How to Use it on OpenSUSE and Ubuntu
+
+The operation of AppArmor is based on profiles defined in plain text files where the allowed permissions and access control rules are set. Profiles are then used to place limits on how applications interact with processes and files in the system.
+
+A set of profiles is provided out-of-the-box with the operating system, whereas others can be put in place either automatically by applications when they are installed, or manually by the system administrator.
+
+Like SELinux, AppArmor runs profiles in two modes.
In enforce mode, applications are given the minimum permissions that are necessary for them to run, whereas in complain mode AppArmor allows an application to take restricted actions and saves the "complaints" resulting from that operation to a log (/var/log/kern.log, /var/log/audit/audit.log, and other logs inside /var/log/apparmor).
+
+In lines containing the word audit, these logs show the errors that would occur if the profile were run in enforce mode. Thus, you can try out an application in complain mode and adjust its behavior before running it under AppArmor in enforce mode.
+
+The current status of AppArmor can be shown using:
+
+```
+$ sudo apparmor_status
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Check-AppArmor-Status.png)
+>Check AppArmor Status
+
+The image above indicates that the profiles /sbin/dhclient, /usr/sbin/, and /usr/sbin/tcpdump are in enforce mode (that is true by default in Ubuntu).
+
+Since not all applications include associated AppArmor profiles, you can install the apparmor-profiles package, which provides profiles that have not been shipped by the packages they provide confinement for. By default, they are configured to run in complain mode so that system administrators can test them and choose which ones are desired.
+
+We will make use of apparmor-profiles, since writing our own profiles is out of the scope of the LFCS [certification][4]. However, since profiles are plain text files, you can view them and study them in preparation for creating your own profiles in the future.
+
+AppArmor profiles are stored inside /etc/apparmor.d. Let's take a look at the contents of that directory before and after installing apparmor-profiles:
+
+```
+$ ls /etc/apparmor.d
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/View-AppArmor-Directory-Content.png)
+>View AppArmor Directory Content
+
+If you execute sudo apparmor_status again, you will see a longer list of profiles in complain mode. You can now perform the following operations.
+
+To switch a profile currently in enforce mode to complain mode:
+
+```
+$ sudo aa-complain /path/to/file
+```
+
+and the other way around (complain –> enforce):
+
+```
+$ sudo aa-enforce /path/to/file
+```
+
+Wildcards are allowed in the above cases. For example,
+
+```
+$ sudo aa-complain /etc/apparmor.d/*
+```
+
+will place all profiles inside /etc/apparmor.d into complain mode, whereas
+
+```
+$ sudo aa-enforce /etc/apparmor.d/*
+```
+
+will switch all profiles to enforce mode.
+
+To entirely disable a profile, create a symbolic link to it in the /etc/apparmor.d/disable directory:
+
+```
+$ sudo ln -s /etc/apparmor.d/profile.name /etc/apparmor.d/disable/
+```
+
+For more information on AppArmor, please refer to the [official AppArmor wiki][5] and to the documentation [provided by Ubuntu][6].
+
+### Summary
+
+In this article we have gone through the basics of SELinux and AppArmor, two well-known MACs. When to use one or the other? To avoid difficulties, you may want to consider sticking with the one that comes with your chosen distribution. In any event, they will help you place restrictions on processes and access to system resources to increase the security in your servers.
+
+Do you have any questions, comments, or suggestions about this article? Feel free to let us know using the form below.
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/
+
+作者:[Gabriel Cánepa][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/gacanepa/
+[1]: http://www.tecmint.com/secure-files-using-acls-in-linux/
+[2]: http://www.tecmint.com/apache-virtual-hosting-in-centos/
+[3]: https://docs.fedoraproject.org/en-US/Fedora/22/html/SELinux_Users_and_Administrators_Guide/index.html
+[4]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
+[5]: http://wiki.apparmor.net/index.php/Main_Page
+[6]: https://help.ubuntu.com/community/AppArmor
+
+
diff --git a/sources/tech/20160618 An Introduction to Mocking in Python.md b/sources/tech/20160618 An Introduction to Mocking in Python.md
new file mode 100644
index 0000000000..e9c5c847cb
--- /dev/null
+++ b/sources/tech/20160618 An Introduction to Mocking in Python.md
@@ -0,0 +1,490 @@
+An Introduction to Mocking in Python
+=====================================
+
+This article is about mocking in Python.
+
+**How to Run Unit Tests Without Testing Your Patience**
+
+More often than not, the software we write directly interacts with what we would label as "dirty" services. In layman's terms: services that are crucial to our application, but whose interactions have intended but undesired side-effects—that is, undesired in the context of an autonomous test run. For example: perhaps we're writing a social app and want to test out our new 'Post to Facebook' feature, but don't want to actually post to Facebook every time we run our test suite.
+
+The Python unittest library includes a subpackage named unittest.mock—or if you declare it as a dependency, simply mock—which provides extremely powerful and useful means by which to mock and stub out these undesired side-effects.
+
+>Source |
+
+Note: mock is [newly included][1] in the standard library as of Python 3.3; prior distributions will have to use the Mock library downloadable via [PyPI][2].
+
+### Fear System Calls
+
+To give you another example, and one that we'll run with for the rest of the article, consider system calls. It's not difficult to see that these are prime candidates for mocking: whether you're writing a script to eject a CD drive, a web server which removes antiquated cache files from /tmp, or a socket server which binds to a TCP port, these calls all feature undesired side-effects in the context of your unit-tests.
+
+>As a developer, you care more that your library successfully called the system function for ejecting a CD as opposed to experiencing your CD tray open every time a test is run.
+
+As a developer, you care more that your library successfully called the system function for ejecting a CD (with the correct arguments, etc.) as opposed to actually experiencing your CD tray open every time a test is run. (Or worse, multiple times, as multiple tests reference the eject code during a single unit-test run!)
+
+Likewise, keeping your unit-tests efficient and performant means keeping as much "slow code" as possible out of the automated test runs, namely filesystem and network access.
+
+For our first example, we'll refactor a standard Python test case from its original form to one using mock.
We’ll demonstrate how writing a test case with mocks will make our tests smarter, faster, and able to reveal more about how the software works. + +### A Simple Delete Function + +We all need to delete files from our filesystem from time to time, so let’s write a function in Python which will make it a bit easier for our scripts to do so. + +``` +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +import os + +def rm(filename): + os.remove(filename) +``` + +Obviously, our rm method at this point in time doesn’t provide much more than the underlying os.remove method, but our codebase will improve, allowing us to add more functionality here. + +Let’s write a traditional test case, i.e., without mocks: + +``` +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from mymodule import rm + +import os.path +import tempfile +import unittest + +class RmTestCase(unittest.TestCase): + + tmpfilepath = os.path.join(tempfile.gettempdir(), "tmp-testfile") + + def setUp(self): + with open(self.tmpfilepath, "wb") as f: + f.write("Delete me!") + + def test_rm(self): + # remove the file + rm(self.tmpfilepath) + # test that it was actually removed + self.assertFalse(os.path.isfile(self.tmpfilepath), "Failed to remove the file.") +``` + +Our test case is pretty simple, but every time it is run, a temporary file is created and then deleted. Additionally, we have no way of testing whether our rm method properly passes the argument down to the os.remove call. We can assume that it does based on the test above, but much is left to be desired. + +Refactoring with MocksLet’s refactor our test case using mock: + +``` +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from mymodule import rm + +import mock +import unittest + +class RmTestCase(unittest.TestCase): + + @mock.patch('mymodule.os') + def test_rm(self, mock_os): + rm("any path") + # test that rm called os.remove with the right parameters + mock_os.remove.assert_called_with("any path") +``` + +With these refactors, we have fundamentally changed the way that the test operates. Now, we have an insider, an object we can use to verify the functionality of another. + +### Potential Pitfalls + +One of the first things that should stick out is that we’re using the mock.patch method decorator to mock an object located at mymodule.os, and injecting that mock into our test case method. Wouldn’t it make more sense to just mock os itself, rather than the reference to it at mymodule.os? + +Well, Python is somewhat of a sneaky snake when it comes to imports and managing modules. At runtime, the mymodule module has its own os which is imported into its own local scope in the module. Thus, if we mock os, we won’t see the effects of the mock in the mymodule module. + +The mantra to keep repeating is this: + +> Mock an item where it is used, not where it came from. + +If you need to mock the tempfile module for myproject.app.MyElaborateClass, you probably need to apply the mock to myproject.app.tempfile, as each module keeps its own imports. + +With that pitfall out of the way, let’s keep mocking. + +### Adding Validation to ‘rm’ + +The rm method defined earlier is quite oversimplified. We’d like to have it validate that a path exists and is a file before just blindly attempting to remove it. Let’s refactor rm to be a bit smarter: + +``` +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +import os +import os.path + +def rm(filename): + if os.path.isfile(filename): + os.remove(filename) +``` + +Great. Now, let’s adjust our test case to keep coverage up. 
+
+```
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from mymodule import rm
+
+import mock
+import unittest
+
+class RmTestCase(unittest.TestCase):
+
+    @mock.patch('mymodule.os.path')
+    @mock.patch('mymodule.os')
+    def test_rm(self, mock_os, mock_path):
+        # set up the mock
+        mock_path.isfile.return_value = False
+
+        rm("any path")
+
+        # test that the remove call was NOT called.
+        self.assertFalse(mock_os.remove.called, "Failed to not remove the file if not present.")
+
+        # make the file 'exist'
+        mock_path.isfile.return_value = True
+
+        rm("any path")
+
+        mock_os.remove.assert_called_with("any path")
+```
+
+Our testing paradigm has completely changed. We now can verify and validate internal functionality of methods without any side-effects.
+
+### File-Removal as a Service
+
+So far, we've only been working with supplying mocks for functions, but not for methods on objects or cases where mocking is necessary for sending parameters. Let's cover object methods first.
+
+We'll begin with a refactor of the rm method into a service class. There really isn't a justifiable need, per se, to encapsulate such a simple function into an object, but it will at the very least help us demonstrate key concepts in mock. Let's refactor:
+
+```
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+import os
+import os.path
+
+class RemovalService(object):
+    """A service for removing objects from the filesystem."""
+
+    def rm(self, filename):
+        if os.path.isfile(filename):
+            os.remove(filename)
+```
+
+You'll notice that not much has changed in our test case:
+
+```
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from mymodule import RemovalService
+
+import mock
+import unittest
+
+class RemovalServiceTestCase(unittest.TestCase):
+
+    @mock.patch('mymodule.os.path')
+    @mock.patch('mymodule.os')
+    def test_rm(self, mock_os, mock_path):
+        # instantiate our service
+        reference = RemovalService()
+
+        # set up the mock
+        mock_path.isfile.return_value = False
+
+        reference.rm("any path")
+
+        # test that the remove call was NOT called.
+        self.assertFalse(mock_os.remove.called, "Failed to not remove the file if not present.")
+
+        # make the file 'exist'
+        mock_path.isfile.return_value = True
+
+        reference.rm("any path")
+
+        mock_os.remove.assert_called_with("any path")
+```
+
+Great, so we now know that the RemovalService works as planned. Let's create another service which declares it as a dependency:
+
+```
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+import os
+import os.path
+
+class RemovalService(object):
+    """A service for removing objects from the filesystem."""
+
+    def rm(self, filename):
+        if os.path.isfile(filename):
+            os.remove(filename)
+
+
+class UploadService(object):
+
+    def __init__(self, removal_service):
+        self.removal_service = removal_service
+
+    def upload_complete(self, filename):
+        self.removal_service.rm(filename)
+```
+
+Since we already have test coverage on the RemovalService, we're not going to validate the internal functionality of the rm method in our tests of UploadService. Rather, we'll simply test (without side-effects, of course) that UploadService calls the RemovalService.rm method, which we know "just works™" from our previous test case.
+
+There are two ways to go about this:
+
+1. Mock out the RemovalService.rm method itself.
+2. Supply a mocked instance in the constructor of UploadService.
+
+As both methods are often important in unit-testing, we'll review both.
+
+### Option 1: Mocking Instance Methods
+
+The mock library has a special method decorator for mocking object instance methods and properties, the @mock.patch.object decorator:
+
+```
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from mymodule import RemovalService, UploadService
+
+import mock
+import unittest
+
+class RemovalServiceTestCase(unittest.TestCase):
+
+    @mock.patch('mymodule.os.path')
+    @mock.patch('mymodule.os')
+    def test_rm(self, mock_os, mock_path):
+        # instantiate our service
+        reference = RemovalService()
+
+        # set up the mock
+        mock_path.isfile.return_value = False
+
+        reference.rm("any path")
+
+        # test that the remove call was NOT called.
+        self.assertFalse(mock_os.remove.called, "Failed to not remove the file if not present.")
+
+        # make the file 'exist'
+        mock_path.isfile.return_value = True
+
+        reference.rm("any path")
+
+        mock_os.remove.assert_called_with("any path")
+
+
+class UploadServiceTestCase(unittest.TestCase):
+
+    @mock.patch.object(RemovalService, 'rm')
+    def test_upload_complete(self, mock_rm):
+        # build our dependencies
+        removal_service = RemovalService()
+        reference = UploadService(removal_service)
+
+        # call upload_complete, which should, in turn, call `rm`:
+        reference.upload_complete("my uploaded file")
+
+        # check that it called the rm method of any RemovalService
+        mock_rm.assert_called_with("my uploaded file")
+
+        # check that it called the rm method of _our_ removal_service
+        removal_service.rm.assert_called_with("my uploaded file")
+```
+
+Great! We've validated that the UploadService successfully calls our instance's rm method. Notice anything interesting in there? The patching mechanism actually replaced the rm method of all RemovalService instances in our test method. That means that we can actually inspect the instances themselves. If you want to see more, try dropping in a breakpoint in your mocking code to get a good feel for how the patching mechanism works.
+
+### Pitfall: Decorator Order
+
+When using multiple decorators on your test methods, order is important, and it's kind of confusing. Basically, when mapping decorators to method parameters, [work backwards][3]. Consider this example:
+
+```
+@mock.patch('mymodule.sys')
+@mock.patch('mymodule.os')
+@mock.patch('mymodule.os.path')
+def test_something(self, mock_os_path, mock_os, mock_sys):
+    pass
+```
+
+Notice how our parameters are matched to the reverse order of the decorators? That's partly because of [the way that Python works][4]. With multiple method decorators, here's the order of execution in pseudocode:
+
+```
+patch_sys(patch_os(patch_os_path(test_something)))
+```
+
+Since the patch to sys is the outermost patch, it will be executed last, making it the last parameter in the actual test method arguments. Take note of this well and use a debugger when running your tests to make sure that the right parameters are being injected in the right order.
+
+### Option 2: Creating Mock Instances
+
+Instead of mocking the specific instance method, we could instead just supply a mocked instance to UploadService with its constructor. I prefer option 1 above, as it's a lot more precise, but there are many cases where option 2 might be efficient or necessary.
Let’s refactor our test again: + +``` +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from mymodule import RemovalService, UploadService + +import mock +import unittest + +class RemovalServiceTestCase(unittest.TestCase): + + @mock.patch('mymodule.os.path') + @mock.patch('mymodule.os') + def test_rm(self, mock_os, mock_path): + # instantiate our service + reference = RemovalService() + + # set up the mock + mock_path.isfile.return_value = False + + reference.rm("any path") + + # test that the remove call was NOT called. + self.assertFalse(mock_os.remove.called, "Failed to not remove the file if not present.") + + # make the file 'exist' + mock_path.isfile.return_value = True + + reference.rm("any path") + + mock_os.remove.assert_called_with("any path") + + +class UploadServiceTestCase(unittest.TestCase): + + def test_upload_complete(self, mock_rm): + # build our dependencies + mock_removal_service = mock.create_autospec(RemovalService) + reference = UploadService(mock_removal_service) + + # call upload_complete, which should, in turn, call `rm`: + reference.upload_complete("my uploaded file") + + # test that it called the rm method + mock_removal_service.rm.assert_called_with("my uploaded file") +``` + +In this example, we haven’t even had to patch any functionality, we simply create an auto-spec for the RemovalService class, and then inject this instance into our UploadService to validate the functionality. + +The [mock.create_autospec][5] method creates a functionally equivalent instance to the provided class. What this means, practically speaking, is that when the returned instance is interacted with, it will raise exceptions if used in illegal ways. More specifically, if a method is called with the wrong number of arguments, an exception will be raised. This is extremely important as refactors happen. As a library changes, tests break and that is expected. Without using an auto-spec, our tests will still pass even though the underlying implementation is broken. + +### Pitfall: The mock.Mock and mock.MagicMock Classes + +The mock library also includes two important classes upon which most of the internal functionality is built upon: [mock.Mock][6] and mock.MagicMock. When given a choice to use a mock.Mock instance, a mock.MagicMock instance, or an auto-spec, always favor using an auto-spec, as it helps keep your tests sane for future changes. This is because mock.Mock and mock.MagicMock accept all method calls and property assignments regardless of the underlying API. Consider the following use case: + +``` +class Target(object): + def apply(value): + return value + +def method(target, value): + return target.apply(value) +``` + +We can test this with a mock.Mock instance like this: + +``` +class MethodTestCase(unittest.TestCase): + + def test_method(self): + target = mock.Mock() + + method(target, "value") + + target.apply.assert_called_with("value") +``` + +This logic seems sane, but let’s modify the Target.apply method to take more parameters: + +``` +class Target(object): + def apply(value, are_you_sure): + if are_you_sure: + return value + else: + return None +``` + +Re-run your test, and you’ll find that it still passes. That’s because it isn’t built against your actual API. This is why you should always use the create_autospec method and the autospec parameter with the @patch and @patch.object decorators. 
+
+### Real-World Example: Mocking a Facebook API Call
+
+To finish up, let's write a more applicable real-world example, one which we mentioned in the introduction: posting a message to Facebook. We'll write a nice wrapper class and a corresponding test case.
+
+```
+import facebook
+
+class SimpleFacebook(object):
+
+    def __init__(self, oauth_token):
+        self.graph = facebook.GraphAPI(oauth_token)
+
+    def post_message(self, message):
+        """Posts a message to the Facebook wall."""
+        self.graph.put_object("me", "feed", message=message)
+```
+
+Here's our test case, which checks that we post the message without actually posting the message:
+
+```
+import facebook
+import simple_facebook
+import mock
+import unittest
+
+class SimpleFacebookTestCase(unittest.TestCase):
+
+    @mock.patch.object(facebook.GraphAPI, 'put_object', autospec=True)
+    def test_post_message(self, mock_put_object):
+        sf = simple_facebook.SimpleFacebook("fake oauth token")
+        sf.post_message("Hello World!")
+
+        # verify (with autospec=True, the mock also receives `self`)
+        mock_put_object.assert_called_with(sf.graph, "me", "feed", message="Hello World!")
+```
+
+As we've seen so far, it's really simple to start writing smarter tests with mock in Python.
+
+### Mocking in Python: Conclusion
+
+Python's mock library, if a little confusing to work with, is a game-changer for [unit-testing][7]. We've demonstrated common use-cases for getting started using mock in unit-testing, and hopefully this article will help [Python developers][8] overcome the initial hurdles and write excellent, tested code.
+
+--------------------------------------------------------------------------------
+
+via: http://slviki.com/index.php/2016/06/18/introduction-to-mocking-in-python/
+
+作者:[Dasun Sucharith][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.slviki.com/
+[1]: http://www.python.org/dev/peps/pep-0417/
+[2]: https://pypi.python.org/pypi/mock
+[3]: http://www.voidspace.org.uk/python/mock/patch.html#nesting-patch-decorators
+[4]: http://docs.python.org/2/reference/compound_stmts.html#function-definitions
+[5]: http://www.voidspace.org.uk/python/mock/helpers.html#autospeccing
+[6]: http://www.voidspace.org.uk/python/mock/mock.html
+[7]: http://www.toptal.com/qa/how-to-write-testable-code-and-why-it-matters
+[8]: http://www.toptal.com/python
+
+
+
diff --git a/sources/tech/20160621 Basic Linux Networking Commands You Should Know.md b/sources/tech/20160621 Basic Linux Networking Commands You Should Know.md
new file mode 100644
index 0000000000..48d12dd363
--- /dev/null
+++ b/sources/tech/20160621 Basic Linux Networking Commands You Should Know.md
@@ -0,0 +1,146 @@
+translating by hkurj
+Basic Linux Networking Commands You Should Know
+==================================================
+
+![](https://itsfoss.com/wp-content/uploads/2016/06/Basic-Networking-Commands-Linux.jpg)
+
+Brief: A collection of the most important yet basic Linux networking commands that aspiring Linux SysAdmins and Linux enthusiasts must know.
+
+It's not every day at It's FOSS that we talk about the "command line side" of Linux. Basically, I focus more on the desktop side of Linux. But some of you readers pointed out in the internal survey (exclusive to It's FOSS newsletter subscribers) that you would like to learn some command line tricks as well. Cheat sheets were also liked and encouraged by most readers.
+
+For this purpose, I have compiled a list of the basic networking commands in Linux.
It’s not a tutorial that teaches you how to use these commands, rather, it’s a collection of commands and their short explanation. So if you already have some experience with these commands, you can use it for quickly remembering the commands. + +You can bookmark this page for quick reference or even download all the commands in PDF for offline access. + +I had this list of Linux networking commands when I was a student of Communication System Engineering. It helped me to get the top score in Computer Networks course. I hope it helps you in the same way. + +>Exclusive bonus: [Download Linux Networking Commands Cheat Sheet][1] for future reference. You can print it or save it for offline viewing. + +### List of basic networking commands in Linux + +I used FreeBSD in the computer networking course but the UNIX commands should work the same in Linux also. + +#### Connectivity: + +- ping —- sends an ICMP echo message (one packet) to a host. This may go continually until you hit Control-C. Ping means a packet was sent from your machine via ICMP, and echoed at the IP level. ping tells you if the other Host is Up. + +- telnet host —- talk to “hosts” at the given port number. By default, the telnet port is port 23. Few other famous ports are: +``` + 7 – echo port, + 25 – SMTP, use to send mail + 79 – Finger, provides information on other users of the network +``` + +Use control-] to get out of telnet. + +#### Arp: + +Arp is used to translate IP addresses into Ethernet addresses. Root can add and delete arp entries. Deleting them can be useful if an arp entry is malformed or just wrong. Arp entries explicitly added by root are permanent — they can also be by proxy. The arp table is stored in the kernel and manipulated dynamically. Arp entries are cached and will time out and are deleted normally in 20 minutes. + +- arp –a : Prints the arp table +- arp –s [pub] to add an entry in the table +- arp –a –d to delete all the entries in the ARP table + +#### Routing: + +- netstat –r —- Print routing tables. The routing tables are stored in the kernel and used by ip to route packets to non-local networks. +- route add —- The route command is used for setting a static (non-dynamic by hand route) route path in the route tables. All the traffic from this PC to that IP/SubNet will go through the given Gateway IP. It can also be used for setting a default route; i.e., send all packets to a particular gateway, by using 0.0.0.0 in the pace of IP/SubNet. +- routed —– The BSD daemon that does dynamic routing. Started at boot. This runs the RIP routing protocol. ROOT ONLY. You won’t be able to run this without root access. +- gated —– Gated is an alternative routing daemon to RIP. It uses the OSPF, EGP, and RIP protocols in one place. ROOT ONLY. +- traceroute —- Useful for tracing the route of IP packets. The packet causes messages to be sent back from all gateways in between the source and destination by increasing the number of hopes by 1 each time. +- netstat –rnf inet : it displays the routing tables of IPv4 +- sysctl net.inet.ip.forwarding=1 : to enable packets forwarding (to turn a host into a router) +- route add|delete [-net|-host] (ex. 
route add 192.168.20.0/24 192.168.30.4) to add a route
+- route flush : removes all routes
+- route add -net 0.0.0.0 192.168.10.2 : adds a default route
+- routed -Pripv2 -Pno_rdisc -d [-s|-q] : runs the routed daemon with the RIPv2 protocol, without ICMP auto-discovery, in the foreground, in supply or quiet mode
+- route add 224.0.0.0/4 127.0.0.1 : defines the route used by RIPv2
+- rtquery -n : queries the RIP daemon on a specific host (to manually update the routing table)
+
+#### Others:
+
+- nslookup —- Makes queries to the DNS server to translate an IP to a name, or vice versa. E.g., nslookup facebook.com will give you the IP of facebook.com.
+- ftp water —– Transfers files to the host. You can often use login="anonymous", p/w="guest".
+- rlogin -l —– Logs into the host with a virtual terminal like telnet.
+
+#### Important Files:
+
+```
+/etc/hosts —- names to ip addresses
+/etc/networks —- network names to ip addresses
+/etc/protocols —– protocol names to protocol numbers
+/etc/services —- tcp/udp service names to port numbers
+```
+
+#### Tools and network performance analysis
+
+- ifconfig
[up] : starts the interface
+- ifconfig [down|delete] : stops the interface
+- ethereal & : runs ethereal in the background rather than the foreground
+- tcpdump -i -vvv : tool to capture and analyze packets
+- netstat -w [seconds] -I [interface] : displays network settings and statistics
+- udpmt -p [port] -s [bytes] target_host : generates UDP traffic
+- udptarget -p [port] : receives UDP traffic
+- tcpmt -p [port] -s [bytes] target_host : generates TCP traffic
+- tcptarget -p [port] : receives TCP traffic
+- ifconfig netmask [up] : allows subnetting the sub-networks
+
+
+
+#### Switching:
+
+- ifconfig sl0 srcIP dstIP : configures a serial interface (do "slattach -l /dev/ttyd0" before, and "sysctl net.inet.ip.forwarding=1" after)
+- telnet 192.168.0.254 : accesses the switch from a host in its subnetwork
+- sh ru or show running-configuration : shows the current configuration
+- configure terminal : enters configuration mode
+- exit : goes down to the lower configuration mode
+
+#### VLAN:
+
+- vlan n : creates a VLAN with ID n
+- no vlan N : deletes the VLAN with ID N
+- untagged Y : adds port Y to VLAN N
+- ifconfig vlan0 create : creates the vlan0 interface
+- ifconfig vlan0 vlan ID vlandev em0 : associates the vlan0 interface on top of em0, and sets the tag to ID
+- ifconfig vlan0 [up] : turns on the virtual interface
+- tagged Y : adds support for tagged frames of the current VLAN to port Y
+
+#### UDP/TCP
+
+- socklab udp – runs socklab with the UDP protocol
+- sock – creates a UDP socket; equivalent to typing sock udp and then bind
+- sendto – sends data packets
+- recvfrom – receives data from the socket
+- socklab tcp – runs socklab with the TCP protocol
+- passive – creates a socket in passive mode; equivalent to socklab, sock tcp, bind, listen
+- accept – accepts an incoming connection (it can be done before or after creating the incoming connection)
+- connect – these two commands are equivalent to socklab, sock tcp, bind, connect
+- close – closes the connection
+- read – reads bytes from the socket
+- write (ex. write ciao, ex.
write #10) to write "ciao" or to write 10 bytes to the socket
+
+#### NAT/Firewall
+
+- rm /etc/resolv.conf – prevents address resolution and makes sure your filtering and firewall rules work properly
+- ipnat -f file_name – loads the NAT rules from file_name
+- ipnat -l – lists the active rules
+- ipnat -C -F – re-initializes the rules table
+- map em0 192.168.1.0/24 -> 195.221.227.57/32 em0 : maps IP addresses to the interface
+- map em0 192.168.1.0/24 -> 195.221.227.57/32 portmap tcp/udp 20000:50000 : maps with a port range
+- ipf -f file_name : loads filtering rules from file_name
+- ipf -F -a : resets the rule table
+- ipfstat -I : shows some information on filtered packets, as well as the active filtering rules
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/basic-linux-networking-commands/
+
+作者:[Abhishek Prakash][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[1]: https://drive.google.com/open?id=0By49_3Av9sT1cDdaZnh4cHB4aEk
diff --git a/sources/tech/20160621 Container technologies in Fedora - systemd-nspawn.md b/sources/tech/20160621 Container technologies in Fedora - systemd-nspawn.md
new file mode 100644
index 0000000000..1ec72a3c90
--- /dev/null
+++ b/sources/tech/20160621 Container technologies in Fedora - systemd-nspawn.md
@@ -0,0 +1,106 @@
+Container technologies in Fedora: systemd-nspawn
+===
+
+Welcome to the "Container technologies in Fedora" series! This is the first article in a series of articles that will explain how you can use the various container technologies available in Fedora. This first article will deal with `systemd-nspawn`.
+
+### What is a container?
+
+A container is a user-space instance which can be used to run a program or an operating system in isolation from the system hosting the container (called the host system). The idea is very similar to a `chroot` or a [virtual machine][1]. The processes running in a container are managed by the same kernel as the host operating system, but they are isolated from the host file system and from the other processes.
+
+
+### What is systemd-nspawn?
+
+The systemd project considers container technologies as something that should fundamentally be part of the desktop and that should integrate with the rest of the user's systems. To this end, systemd provides `systemd-nspawn`, a tool which is able to create containers using various Linux technologies. It also provides some container management tools.
+
+In many ways, `systemd-nspawn` is similar to `chroot`, but is much more powerful. It virtualizes the file system, process tree, and inter-process communication of the guest system. Much of its appeal lies in the fact that it provides a number of tools, such as `machinectl`, for managing containers. Containers run by `systemd-nspawn` will integrate with the systemd components running on the host system. As an example, journal entries can be logged from a container in the host system's journal.
+
+In Fedora 24, `systemd-nspawn` has been split out from the systemd package, so you'll need to install the `systemd-container` package. As usual, you can do that with a `dnf install systemd-container`.
+
+### Creating the container
+
+Creating a container with `systemd-nspawn` is easy. Let's say you have an application made for Debian, and it doesn't run well anywhere else. That's not a problem: we can make a container! To set up a container with the latest version of Debian (at this point in time, Jessie), you need to pick a directory to set up your system in. I'll be using `~/DebianJessie` for now.
+
+Once the directory has been created, you need to run `debootstrap`, which you can install from the Fedora repositories. For Debian Jessie, you run the following command to initialize a Debian file system.
+
+```
+$ debootstrap --arch=amd64 stable ~/DebianJessie
+```
+
+This assumes your architecture is x86_64. If it isn't, you must change `amd64` to the name of your architecture. You can find your machine's architecture with `uname -m`.
+
+Once your root directory is set up, you will start your container with the following command.
+
+```
+$ systemd-nspawn -bD ~/DebianJessie
+```
+
+You'll be up and running within seconds. You'll notice something as soon as you try to log in: you can't use any accounts on your system. This is because systemd-nspawn virtualizes users. The fix is simple: remove -b from the previous command. You'll boot directly to the root shell in the container. From there, you can just use passwd to set a password for root, or you can use adduser to add a new user. As soon as you're done with that, go ahead and put the -b flag back. You'll boot to the familiar login console, and you can log in with the credentials you set.
+
+All of this applies to any distribution you would want to run in the container, but you need to create the system using the correct package manager. For Fedora, you would use DNF instead of debootstrap. To set up a minimal Fedora system, you can run the following command, replacing the absolute path with wherever you want the container to be.
+
+```
+$ sudo dnf --releasever=24 --installroot=/absolute/path/ install systemd passwd dnf fedora-release
+```
+
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot-from-2016-06-17-15-04-14.png)
+
+### Setting up the network
+
+You'll notice an issue if you attempt to start a service that binds to a port currently in use on your host system. Your container is using the same network interface. Luckily, `systemd-nspawn` provides several ways to achieve separate networking from the host machine.
+
+#### Local networking
+
+The first method uses the `--private-network` flag, which only creates a loopback device by default. This is ideal for environments where you don't need networking, such as build systems and other continuous integration systems.
+
+#### Multiple networking interfaces
+
+If you have multiple network devices, you can give one to the container with the `--network-interface` flag. To give `eno1` to my container, I would add the flag `--network-interface=eno1`. While an interface is assigned to a container, the host can't use it at the same time. When the container is completely shut down, it will be available to the host again.
+
+#### Sharing network interfaces
+
+For those of us who don't have spare network devices, there are other options for providing access to the container. One of those is the `--port` flag. This forwards a port on the container to the host. The format is `protocol:host:container`, where protocol is either `tcp` or `udp`, `host` is a valid port number on the host, and `container` is a valid port on the container.
You can omit the protocol and specify only `host:container`. I often use something similar to `--port=2222:22`.
+
+You can enable complete, host-only networking with the `--network-veth` flag, which creates a virtual Ethernet interface between the host and the container. You can also bridge two connections with `--network-bridge`.
+
+### Using systemd components
+
+If the system in your container has D-Bus, you can use systemd's provided utilities to control and monitor your container. Debian doesn't include dbus in the base install. If you want to use it with Debian Jessie, you'll want to run `apt install dbus`.
+
+#### machinectl
+
+To easily manage containers, systemd provides the machinectl utility. Using machinectl, you can log in to a container with `machinectl login name`, check the status with `machinectl status name`, reboot with `machinectl reboot name`, or power it off with `machinectl poweroff name`.
+
+### Other systemd commands
+
+Most systemd commands, such as journalctl, systemd-analyze, and systemctl, support containers with the `--machine` option. For example, if you want to see the journals of a container named "foobar", you can use `journalctl --machine=foobar`. You can also see the status of a service running in this container with `systemctl --machine=foobar status service`.
+
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot-from-2016-06-17-15-09-25.png)
+
+### Working with SELinux
+
+If you're running with SELinux enforcing (the default in Fedora), you'll need to set the SELinux context for your container. To do that, you need to run the following two commands on the host system.
+
+```
+$ semanage fcontext -a -t svirt_sandbox_file_t "/path/to/container(/.*)?"
+$ restorecon -R /path/to/container/
+```
+
+Make sure you replace "/path/to/container" with the path to your container. For my container, "DebianJessie", I would run the following:
+
+```
+$ semanage fcontext -a -t svirt_sandbox_file_t "/home/johnmh/DebianJessie(/.*)?"
+$ restorecon -R /home/johnmh/DebianJessie/
+```
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/linux-how-to/set-nginx-reverse-proxy-centos-7-cpanel/
+
+作者:[John M. Harris, Jr.][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://linoxide.com/linux-how-to/set-nginx-reverse-proxy-centos-7-cpanel/
+[1]: https://en.wikipedia.org/wiki/Virtual_machine
diff --git a/sources/tech/20160621 Docker Datacenter in AWS and Azure in Few Clicks.md b/sources/tech/20160621 Docker Datacenter in AWS and Azure in Few Clicks.md
new file mode 100644
index 0000000000..d9431155ff
--- /dev/null
+++ b/sources/tech/20160621 Docker Datacenter in AWS and Azure in Few Clicks.md
@@ -0,0 +1,139 @@
+translated by pspkforever
+DOCKER DATACENTER IN AWS AND AZURE IN A FEW CLICKS
+===================================================
+
+Introducing the Docker Datacenter AWS Quickstart and Azure Marketplace templates: production-ready, high-availability deployments in just a few clicks.
+
+The Docker Datacenter AWS Quickstart uses CloudFormation templates, and pre-built templates on the Azure Marketplace, to make it easier than ever to deploy an enterprise CaaS Docker environment on public cloud infrastructure.
+
+The Docker Datacenter Container as a Service (CaaS) platform for agile application development provides container and cluster orchestration and management that is simple, secure and scalable for enterprises of any size. With our new cloud templates pre-built for Docker Datacenter, developers and IT operations can frictionlessly move dockerized applications to an Amazon EC2 or Microsoft Azure environment without any code changes. Now businesses can quickly realize greater efficiency of computing and operations resources and Docker-supported container management and orchestration in just a few steps.
+
+### What is Docker Datacenter?
+
+Docker Datacenter includes Docker Universal Control Plane, Docker Trusted Registry (DTR), and the CS Docker Engine with commercial support and a subscription to align to your application SLAs:
+
+- Docker Universal Control Plane (UCP), an enterprise-grade cluster management solution that helps you manage your whole cluster from a single pane of glass.
+- Docker Trusted Registry (DTR), an image storage solution that helps securely store and manage the Docker images.
+- Commercially Supported (CS) Docker Engines
+
+![](http://img.scoop.it/lVraAJgJbjAKqfWCLtLuZLnTzqrqzN7Y9aBZTaXoQ8Q=)
+
+### Deploy on AWS in a single click with the Docker Datacenter AWS Quick Start
+
+With AWS Quick Start reference deployments you can rapidly deploy Docker containers on the AWS cloud, adhering to Docker and AWS best practices. The Docker Datacenter Quick Start uses CloudFormation templates that are modular and customizable so you can layer additional functionality on top or modify them for your own Docker deployments.
+
+[Docker Datacenter for AWS Quickstart](https://youtu.be/aUx7ZdFSkXU)
+
+#### Architecture
+
+![](http://img.scoop.it/sZ3_TxLba42QB-r_6vuApLnTzqrqzN7Y9aBZTaXoQ8Q=)
+
+The AWS CloudFormation template starts the installation process by creating all the required AWS resources such as the VPC, security groups, public and private subnets, internet gateways, NAT gateways, and S3 bucket.
+
+It then launches the first UCP controller instances and goes through the installation process for the Docker engine and UCP containers. It backs up the Root CAs created by the first UCP controllers to S3. Once the first UCP controller is up and running, the process of creating the other UCP controllers, the UCP cluster nodes, and the first DTR replica is triggered. Similar to the first UCP controller node, all other nodes are started by installing the Docker Commercially Supported engine, followed by running the UCP and DTR containers to join the cluster. Two ELBs, one for UCP and one for DTR, are launched and automatically configured to provide resilient load balancing across the two AZs.
+
+Additionally, UCP controllers and nodes are launched in an ASG to provide scaling functionality if needed. This architecture ensures that both UCP and DTR instances are spread across both AZs to ensure resiliency and high availability. Route53 is used to dynamically register and configure UCP and DTR in your private or public HostedZone. 
+
+![](http://img.scoop.it/HM7Ag6RFvMXvZ_iBxRgKo7nTzqrqzN7Y9aBZTaXoQ8Q=)
+
+### Key functionality of this Quickstart template includes the following:
+
+- Creates a new VPC, private and public subnets in different AZs, ELBs, NAT Gateways, Internet Gateways, and Auto Scaling Groups, all based on AWS best practices
+- Creates an S3 bucket for DDC to be used for cert backup and DTR image storage (requires additional configuration in DTR)
+- Deploys 3 UCP Controllers across multiple AZs within your VPC
+- Creates a UCP ELB with preconfigured health checks
+- Creates a DNS record and attaches it to the UCP ELB
+- Deploys a scalable cluster of UCP nodes
+- Backs up UCP Root CAs to S3
+- Creates 3 DTR replicas across multiple AZs within your VPC
+- Creates a DTR ELB with preconfigured health checks
+- Creates a DNS record and attaches it to the DTR ELB
+
+[Download the AWS Quick Start Guide to Learn More](https://s3.amazonaws.com/quickstart-reference/docker/latest/doc/docker-datacenter-on-the-aws-cloud.pdf)
+
+### Getting Started with Docker Datacenter for AWS
+
+1. Go to [Docker Store][1] to get your [30 day free trial][2] or [contact sales][4].
+2. At confirmation, you'll be prompted to "Launch Stack" and you'll be directed to the AWS CloudFormation portal.
+3. Confirm the AWS Region that you'd like to launch this stack in.
+4. Provide the required parameters.
+5. Confirm and Launch.
+6. Once complete, click on the Outputs tab to see the URLs for UCP/DTR, the default username and password, and the S3 bucket name.
+
+[Request up to $2000 AWS credit for Docker Datacenter](https://aws.amazon.com/mp/contactdocker/)
+
+### Deploy on Azure with pre-built templates on Azure Marketplace
+
+Docker Datacenter is available as a pre-built template on Azure Marketplace for you to run instantly on Azure across various datacenters globally. Customers can choose to deploy Docker Datacenter from the various VM choices offered on Azure as fits their needs.
+
+#### Architecture
+
+![](http://img.scoop.it/V9SpuBCoAnUnkRL3J-FRFLnTzqrqzN7Y9aBZTaXoQ8Q=)
+
+The Azure deployment process begins by entering some basic information about the user, including the admin username for SSH-ing into all the nodes (OS level admin user) and the name of the resource group. You can think of the resource group as a collection of resources that has a lifecycle and deployment boundary. You can read more about resource groups here:
+
+Next you will enter the details of the cluster, including: VM size for UCP controllers, number of controllers (default is 3), VM size for UCP nodes, number of UCP nodes (default is 1, max of 10), VM size for DTR nodes, number of DTR nodes (default is 3), and the virtual network name and address (ex. 10.0.0.1/19). Regarding networking, you will have 2 subnets: the first subnet is for the UCP controller nodes and the second subnet is for the DTR and UCP nodes.
+
+Lastly you will click OK to complete the deployment. Provisioning should take about 15-19 minutes for small clusters, with a few additional minutes for larger ones.
+
+![](http://img.scoop.it/DXPM5-GXP0j2kEhno0kdRLnTzqrqzN7Y9aBZTaXoQ8Q=)
+
+![](http://img.scoop.it/321ElkCf6rqb7u_-nlGPtrnTzqrqzN7Y9aBZTaXoQ8Q=)
+
+#### How to Deploy in Azure
+
+1. Register for [a 30 day free trial][5] license of Docker Datacenter or [contact sales][6].
+2. [Go to Docker Datacenter on the Microsoft Azure Marketplace][7]
+3. [Review Deployment Documents][8]
+
+Get started by registering for a Docker Datacenter license, and you'll be prompted with the ability to launch either the AWS or Azure templates. 
+
+- [Get a 30 day trial license][9]
+- [Understand Docker Datacenter architecture with this video][10]
+- [Watch demo videos][11]
+- [Get $75 AWS credit towards your deployment][12]
+
+### Learn More about Docker
+
+- New to Docker? Try our 10 min [online tutorial][20]
+- Share images, automate builds, and more with [a free Docker Hub account][21]
+- Read the Docker [1.12 Release Notes][22]
+- Subscribe to [Docker Weekly][23]
+- Sign up for upcoming [Docker Online Meetups][24]
+- Attend upcoming [Docker Meetups][25]
+- Watch [DockerCon EU 2015 videos][26]
+- Start [contributing to Docker][27]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://blog.docker.com/2016/06/docker-datacenter-aws-azure-cloud/
+
+作者:[Trisha McCanna][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://blog.docker.com/author/trisha/
+[1]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
+[2]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
+[4]: https://goto.docker.com/contact-us.html
+[5]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
+[6]: https://goto.docker.com/contact-us.html
+[7]: https://azure.microsoft.com/en-us/marketplace/partners/docker/dockerdatacenterdocker-datacenter/
+[8]: https://success.docker.com/Datacenter/Apply/Docker_Datacenter_on_Azure
+[9]: http://www.docker.com/trial
+[10]: https://www.youtube.com/playlist?list=PLkA60AVN3hh8tFH7xzI5Y-vP48wUiuXfH
+[11]: https://www.youtube.com/playlist?list=PLkA60AVN3hh8a8JaIOA5Q757KiqEjPKWr
+[12]: https://aws.amazon.com/quickstart/promo/
+[20]: https://docs.docker.com/engine/understanding-docker/
+[21]: https://hub.docker.com/
+[22]: https://docs.docker.com/release-notes/
+[23]: https://www.docker.com/subscribe_newsletter/
+[24]: http://www.meetup.com/Docker-Online-Meetup/
+[25]: https://www.docker.com/community/meetup-groups
+[26]: https://www.youtube.com/playlist?list=PLkA60AVN3hh87OoVra6MHf2L4UR9xwJkv
+[27]: https://docs.docker.com/contributing/contributing/
+
diff --git a/sources/tech/20160621 Flatpak brings standalone apps to Linux.md b/sources/tech/20160621 Flatpak brings standalone apps to Linux.md
new file mode 100644
index 0000000000..c2c1b51e7b
--- /dev/null
+++ b/sources/tech/20160621 Flatpak brings standalone apps to Linux.md
@@ -0,0 +1,35 @@
+翻译中:by zky001
+Flatpak brings standalone apps to Linux
+===
+
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/flatpak-945x400.jpg)
+
+The development team behind [Flatpak][1] has [just announced the general availability][2] of the Flatpak desktop application framework. Flatpak (which was also known during development as xdg-app) provides the ability for an application — bundled as a Flatpak — to be installed and run easily and consistently on many different Linux distributions. Applications bundled as Flatpaks also have the ability to be sandboxed for security, isolating them from your operating system and other applications. Check out the [Flatpak website][3] and the [press release][4] for more information on the tech that makes up the Flatpak framework.
+
+### Installing Flatpak on Fedora
+
+For users wanting to run applications bundled as Flatpaks, installation on Fedora is easy, with Flatpak already available in the official Fedora 23 and Fedora 24 repositories. 
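+
+In practice, that means a single command is enough to get started (shown here as a quick sketch; `flatpak` is the package name in the official repositories):
+
+```
+# Install the Flatpak framework from the official Fedora repositories.
+$ sudo dnf install flatpak
+```
+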
The Flatpak website has [full details on installation on Fedora][5], as well as how to install on Arch, Debian, Mageia, and Ubuntu. [Many applications][6] have builds already bundled with Flatpak — including LibreOffice, and nightly builds of popular graphics applications Inkscape and GIMP.
+
+### For Application Developers
+
+If you are an application developer, the Flatpak website also contains some great resources on getting started [bundling and distributing your applications with Flatpak][7]. These resources contain information on using Flatpak SDKs to build standalone, sandboxed Flatpak applications.
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/introducing-flatpak/
+
+作者:[Ryan Lerch][a]
+译者:[zky001](https://github.com/zky001)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/introducing-flatpak/
+[1]: http://flatpak.org/
+[2]: http://flatpak.org/press/2016-06-21-flatpak-released.html
+[3]: http://flatpak.org/
+[4]: http://flatpak.org/press/2016-06-21-flatpak-released.html
+[5]: http://flatpak.org/getting.html
+[6]: http://flatpak.org/apps.html
+[7]: http://flatpak.org/developer.html
diff --git a/sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md b/sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md
new file mode 100644
index 0000000000..71a23511e9
--- /dev/null
+++ b/sources/tech/20160623 72% Of The People I Follow On Twitter Are Men.md
@@ -0,0 +1,88 @@
+Translating by Flowsnow!
+72% Of The People I Follow On Twitter Are Men
+===============================================
+
+![](https://emptysqua.re/blog/gender-of-twitter-users-i-follow/abacus.jpg)
+
+At least, that's my estimate. Twitter does not ask users their gender, so I [have written a program that guesses][1] based on their names. Among those who follow me, the distribution is even worse: 83% are men. None are gender-nonbinary as far as I can tell.
+
+The way to fix the first number is not mysterious: I should notice and seek more women experts tweeting about my interests, and follow them.
+
+The second number, on the other hand, I can merely influence, but I intend to improve it as well. My network on Twitter should represent the software industry's diverse future, not its unfair present.
+
+### How Did I Measure It?
+
+I set out to estimate the gender distribution of the people I follow—my "friends" in Twitter's jargon—and found it surprisingly hard. [Twitter analytics][2] readily shows me the converse, an estimate of my followers' gender:
+
+![](https://emptysqua.re/blog/gender-of-twitter-users-i-follow/twitter-analytics.png)
+
+So, Twitter analytics divides my followers' accounts among male, female, and unknown, and tells me the ratio of the first two groups. (Gender-nonbinary folk are absent here—they're lumped in with the Twitter accounts of organizations, and those whose gender is simply unknown.) But Twitter doesn't tell me the ratio of my friends. That [which is measured improves][3], so I searched for a service that would measure this number for me, and found [FollowerWonk][4].
+
+FollowerWonk guesses my friends are 71% men. Is this a good guess? 
For the sake of validation, I compare FollowerWonk's estimate of my followers to Twitter's estimate: + +**Twitter analytics** + + |men |women +:-- |:-- |:-- +Followers |83% |17% + +**FollowerWonk** + + |men |women +:-- |:-- |:-- +Followers |81% |19% +Friends I follow |72% |28% + +My followers show up 81% male here, close to the Twitter analytics number. So far so good. If FollowerWonk and Twitter agree on the gender ratio of my followers, that suggests FollowerWonk's estimate of the people I follow (which Twitter doesn't analyze) is reasonably good. With it, I can make a habit of measuring my numbers, and improve them. + +At $30 a month, however, checking my friends' gender distribution with FollowerWonk is a pricey habit. I don't need all its features anyhow. Can I solve only the gender-distribution problem economically? + +Since FollowerWonk's numbers seem reasonable, I tried to reproduce them. Using Python and [some nice Philadelphians' Twitter][5] API wrapper, I began downloading the profiles of all my friends and followers. I immediately found that Twitter's rate limits are miserly, so I randomly sampled only a subset of users instead. + +I wrote a rudimentary program that searches for a pronoun announcement in each of my friends' profiles. For example, a profile description that includes "she/her" probably belongs to a woman, a description with "they/them" is probably nonbinary. But most don't state their pronouns: for these, the best gender-correlated information is the "name" field: for example, @gvanrossum's name field is "Guido van Rossum", and the first name "Guido" suggests that @gvanrossum is male. Where pronouns were not announced, I decided to use first names to estimate my numbers. + +My script passes parts of each name to the SexMachine library to guess gender. [SexMachine][6] has predictable downfalls, like mistaking "Brooklyn Zen Center" for a woman named "Brooklyn", but its estimates are as good as FollowerWonk's and Twitter's: + + + + |nonbinary |men |women |no gender,unknown +:-- |:-- |:-- |:-- |:-- +Friends I follow |1 |168 |66 |173 + |0% |72% |28% | +Followers |0 |459 |108 |433 + |0% |81% |19% | + +(Based on all 408 friends and a sample of 1000 followers.) + +### Know Your Number + +I want you to check your Twitter network's gender distribution, too. So I've deployed "Proportional" to PythonAnywhere's handy service for $10 a month: + +> + +The application may rate-limit you or otherwise fail, so use it gently. The [code is on GitHub][7]. It includes a command-line tool, as well. + +Who is represented in your network on Twitter? Are you speaking and listening to the same unfairly distributed group who have been talking about software for the last few decades, or does your network look like the software industry of the future? Let's know our numbers and improve them. + + + + + +-------------------------------------------------------------------------------- + +via: https://emptysqua.re/blog/gender-of-twitter-users-i-follow/ + +作者:[A. 
Jesse Jiryu Davis][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://disqus.com/by/AJesseJiryuDavis/
+[1]: https://www.proporti.onl/
+[2]: https://analytics.twitter.com/
+[3]: http://english.stackexchange.com/questions/14952/that-which-is-measured-improves
+[4]: https://moz.com/followerwonk/
+[5]: https://github.com/bear/python-twitter/graphs/contributors
+[6]: https://pypi.python.org/pypi/SexMachine/
+[7]: https://github.com/ajdavis/twitter-gender-distribution
diff --git a/sources/tech/20160623 Advanced Image Processing with Python.md b/sources/tech/20160623 Advanced Image Processing with Python.md
new file mode 100644
index 0000000000..0a3a722845
--- /dev/null
+++ b/sources/tech/20160623 Advanced Image Processing with Python.md
@@ -0,0 +1,124 @@
+Johnny-Liao translating...
+
+Advanced Image Processing with Python
+======================================
+
+![](http://www.cuelogic.com/blog/wp-content/uploads/2016/06/Image-Search-Engine.png)
+
+Building an image processing search engine is no easy task. There are several concepts, tools, ideas and technologies that go into it. One of the major image-processing concepts is reverse image querying (RIQ) or reverse image search. Google, Cloudera, Sumo Logic and Birst are among the top organizations to use reverse image search. Great for analyzing images and making use of mined data, RIQ provides very good insight into analytics.
+
+### Top Companies and Reverse Image Search
+
+There are many top tech companies that are using RIQ to best effect. For example, Pinterest first brought in visual search in 2014. It subsequently released a white paper in 2015, revealing the architecture. Reverse image search enabled Pinterest to obtain visual features from fashion objects and display similar product recommendations.
+
+As is generally known, Google Images uses reverse image search, allowing users to upload an image and then search for connected images. The submitted image is analyzed and a mathematical model is made of it using advanced algorithms. The image is then compared with innumerable others in the Google databases before matches are found and similar results are returned.
+
+**Here is a graph representation from the OpenCV 2.4.9 Features Comparison Report:**
+
+![](http://www.cuelogic.com/blog/wp-content/uploads/2016/06/search-engine-graph.jpg)
+
+### Algorithms & Python Libraries
+
+Before we get down to the workings of it, let us run through the main elements that make building an image processing search engine with Python possible:
+
+### Patented Algorithms
+
+#### SIFT (Scale-Invariant Feature Transform) Algorithm
+
+1. A patented technology with nonfree functionality that uses image identifiers to identify similar images, even those taken from different angles, sizes, depths and scales, so that they are included in the search results. Check the detailed video on SIFT here.
+2. SIFT correctly matches the search criteria with a large database of features from many images.
+3. Matching the same images across different viewpoints and matching invariant features to obtain search results is another SIFT feature. Read more about scale-invariant keypoints here.
+
+#### SURF (Speeded Up Robust Features) Algorithm
+
+1. [SURF][1] is also patented, with nonfree functionality, and is a "speeded up" version of SIFT. Unlike SIFT, SURF approximates the Laplacian of Gaussian with a box filter.
+
+2. 
SURF relies on the determinant of the Hessian matrix for both its location and scale.
+
+3. Rotation invariance is not a requisite in many applications, so skipping the orientation computation speeds up the process.
+
+4. SURF includes several features that improve speed at each step. Three times faster than SIFT, SURF is great with rotation and blurring. It is not as great with illumination and viewpoint changes, though.
+
+5. OpenCV, a programming function library, provides SURF functionalities. SURF.compute() and SURF.detect() can be used to find descriptors and keypoints. Read more about SURF [here][2].
+
+### Open Source Algorithms
+
+#### KAZE Algorithm
+
+1. KAZE is an open source, novel 2D multiscale feature detection and description algorithm in nonlinear scale spaces. Efficient techniques in Additive Operator Splitting (AOS) and variable conductance diffusion are used to build the nonlinear scale space.
+
+2. Multiscale image processing basics are simple – creating an image's scale space while filtering the original image with the right function over increasing time or scale.
+
+#### AKAZE (Accelerated-KAZE) Algorithm
+
+1. As the name suggests, this is a faster method of image search, finding matching keypoints between two images. AKAZE uses a binary descriptor and a nonlinear scale space that balances accuracy and speed.
+
+#### BRISK (Binary Robust Invariant Scalable Keypoints) Algorithm
+
+1. BRISK is great for description, keypoint detection and matching.
+
+2. A highly adaptive algorithm, with a scale-space FAST-based detector along with a bit-string descriptor, helps speed up the search significantly.
+
+3. Scale-space keypoint detection and keypoint description help optimize performance in relation to the task at hand.
+
+#### FREAK (Fast Retina Keypoint)
+
+1. This is a novel keypoint descriptor inspired by the human eye. A cascade of binary strings is computed efficiently by comparing image intensities. The FREAK algorithm allows faster computing with lower memory load as compared to BRISK, SURF and SIFT.
+
+#### ORB (Oriented FAST and Rotated BRIEF)
+
+1. A fast binary descriptor, ORB is resistant to noise and rotation invariant. ORB builds on the FAST keypoint detector and the BRIEF descriptor, elements attributed to its low cost and good performance.
+
+2. Apart from the fast and precise orientation component, efficiently computing the oriented BRIEF and analyzing the variance and correlation of oriented BRIEF features are other ORB features.
+
+### Python Libraries
+
+#### OpenCV
+
+1. OpenCV is available for both academic and commercial use. An open source machine learning and computer vision library, OpenCV makes it easy for organizations to utilize and modify code.
+
+2. Over 2500 optimized algorithms, including state-of-the-art machine learning and computer vision algorithms, serve various image search purposes – face detection, object identification, camera movement tracking, finding similar images from an image database, following eye movements, scenery recognition, etc.
+
+3. Top companies like Google, IBM, Yahoo, Sony, Honda, Microsoft and Intel make wide use of OpenCV.
+
+4. OpenCV uses Python, Java, C, C++ and MATLAB interfaces while supporting Windows, Linux, Mac OS and Android.
+
+#### Python Imaging Library (PIL)
+
+1. The Python Imaging Library (PIL) supports several file formats while providing image processing and graphics solutions. The open source PIL adds image processing capabilities to your Python interpreter.
+2. 
The standard procedure for image manipulation includes image enhancement, transparency and masking handling, image filtering, per-pixel manipulation, etc.
+
+For detailed statistics and graphs, view the OpenCV 2.4.9 Features Comparison Report [here][3].
+
+### Building an Image Search Engine
+
+An image search engine helps pick similar images from a prepopulated image base. The most popular among these is Google's well known image search engine. For starters, there are various approaches to build a system like this. To mention a few:
+
+1. Using image extraction, image description extraction, metadata extraction and search result extraction to build an image search engine.
+2. Define your image descriptor, index your dataset, define your similarity metric, and then search and rank.
+3. Select the image to be searched, select the directory for carrying out the search, search the directory for all pictures, create a picture feature index, evaluate the same features for the search picture, match the pictures in the search and obtain the matched pictures.
+
+Our approach basically began with comparing grayscaled versions of the images, gradually moving on to complex feature matching algorithms like SIFT and SURF, and then finally settling on an open source solution called BRISK. All these algorithms give efficient results with minor changes in performance and latency. An engine built on these algorithms has numerous applications, like analyzing graphic data for popularity statistics, identification of objects in graphic contents, and many more.
+
+**Example**: An image search engine needs to be built by an IT company for a client. So if a brand logo image is submitted in the search, all related brand image searches show up as results. The obtained results can also be used for analytics by the client, allowing them to estimate the brand's popularity as per geographic location. It's still early days though; RIQ or reverse image search has not been exploited to its full extent yet.
+
+This concludes our article on building an image search engine using Python. Check our blog section out for the latest on technology and programming.
+
+Statistics Source: OpenCV 2.4.9 Features Comparison Report (computer-vision-talks.com)
+
+(Guidance and additional inputs by Ananthu Nair.) 
+
+--------------------------------------------------------------------------------
+
+via: http://www.cuelogic.com/blog/advanced-image-processing-with-python/
+
+作者:[Snehith Kumbla][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.cuelogic.com/blog/author/snehith-kumbla/
+[1]: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.html
+[2]: http://www.vision.ee.ethz.ch/~surf/eccv06.pdf
+[3]: https://docs.google.com/spreadsheets/d/1gYJsy2ROtqvIVvOKretfxQG_0OsaiFvb7uFRDu5P8hw/edit#gid=10
diff --git a/sources/tech/20160624 DAISY A Linux-compatible text format for the visually impaired.md b/sources/tech/20160624 DAISY A Linux-compatible text format for the visually impaired.md
new file mode 100644
index 0000000000..4bf5ccaf42
--- /dev/null
+++ b/sources/tech/20160624 DAISY A Linux-compatible text format for the visually impaired.md
@@ -0,0 +1,46 @@
+DAISY: A Linux-compatible text format for the visually impaired
+=================================================================
+
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/osdc-lead_books.png?itok=K8wqfPT5)
+>Image by Kate Ter Haar. Modified by opensource.com. CC BY-SA 2.0.
+
+If you're blind or visually impaired like I am, you usually require various levels of hardware or software to do things that people who can see take for granted. Among these are specialized formats for reading print books: Braille (if you know how to read it) or specialized text formats such as DAISY.
+
+### What is DAISY?
+
+DAISY stands for Digital Accessible Information System. It's an open standard used almost exclusively by the blind to read textbooks, periodicals, newspapers, fiction, you name it. It was created in the mid '90s by [The DAISY Consortium][1], a group of organizations dedicated to producing a set of standards that would allow text to be marked up in a way that would make it easy to read, skip around in, annotate, and otherwise manipulate text in much the same way a sighted user would.
+
+The current version, DAISY 3.0, was released in mid-2005 and is a complete rewrite of the standard. It was created with the goal of making it much easier to write books complying with it. It's worth noting that DAISY can support plain text only, audio recordings (in PCM Wave or MPEG Layer III format) only, or a combination of text and audio. Specialized software can read these books and allow users to set bookmarks and navigate a book as easily as a sighted person would with a print book.
+
+### How does DAISY work?
+
+DAISY, regardless of the specific version, works a bit like this: You have your main navigation file (ncc.html in DAISY 2.02) that contains metadata about the book, such as the author's name, copyright date, how many pages the book has, etc. This file is a valid XML document in the case of DAISY 3.0, with DTD (document type definition) files being highly recommended to be included with each book.
+
+In the navigation control file is markup describing precise positions—either text caret offsets in the case of text navigation or time down to the millisecond in the case of audio recordings—that allows the software to skip to that exact point in the book much as a sighted person would turn to a chapter page. 
It's worth noting that this navigation control file only contains positions for the main, and largest, elements of a book.
+
+The smaller elements are handled by SMIL (synchronized multimedia integration language) files. These files contain position points for each chapter in the book. The level of navigation depends heavily on how well the book was marked up. Think of it like this: If a print book has no chapter headings, you will have a hard time figuring out which chapter you're in. If a DAISY book is badly marked up, you might only be able to navigate to the start of the book, or possibly only to the table of contents. If a book is marked up badly enough (or missing markup entirely), your DAISY reading software is likely to simply ignore it.
+
+### Why the need for specialized software?
+
+You may be wondering why, if DAISY is little more than HTML, XML, and audio files, you would need specialized software to read and manipulate it. Technically speaking, you don't. The specialized software is mostly for convenience. In Linux, for example, a simple web browser can be used to open the books and read them. If you click on the XML file in a DAISY 3 book, all the software will generally do is read the spines of the books you give it access to and create a list of them that you click on to open. If a book is badly marked up, it won't show up in this list.
+
+Producing DAISY is another matter entirely, and usually requires either specialized software or enough knowledge of the specifications to modify general-purpose software to parse it.
+
+### Conclusion
+
+Fortunately, DAISY is a dying standard. While it is very good at what it does, the need for specialized software to produce it has set us apart from the normal sighted world, where readers use a variety of formats to read their books electronically. This is why the DAISY Consortium has succeeded DAISY with EPUB version 3, which supports what are called media overlays. This is basically an EPUB book with optional audio or video. Since EPUB shares a lot of DAISY's XML markup, some software that can read DAISY can see EPUB books but usually cannot read them. This means that once the websites that provide books for us switch over to this open format, we will have a much larger selection of software to read our books.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/life/16/5/daisy-linux-compatible-text-format-visually-impaired
+
+作者:[Kendell Clark][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/kendell-clark
+[1]: http://www.daisy.org
diff --git a/sources/tech/20160626 Backup Photos While Traveling With an Ipad Pro and a Raspberry Pi.md b/sources/tech/20160626 Backup Photos While Traveling With an Ipad Pro and a Raspberry Pi.md
new file mode 100644
index 0000000000..b7b3e8d5f1
--- /dev/null
+++ b/sources/tech/20160626 Backup Photos While Traveling With an Ipad Pro and a Raspberry Pi.md
@@ -0,0 +1,407 @@
+Backup Photos While Traveling With an iPad Pro and a Raspberry Pi
+===================================================================
+
+![](http://www.movingelectrons.net/images/bkup_photos_main.jpg)
+>Backup Photos While Traveling - Gear.
+
+### Introduction
+
+I've been on a quest to find the ideal travel photo backup solution for a long time. 
Relying on just tossing your SD cards in your camera bag after they are full during a trip is a risky move that leaves you too exposed: SD cards can be lost or stolen, data can get corrupted or cards can get damaged in transit. Backing up to another medium - even if it's just another SD card - and leaving that in a safe(r) place while traveling is the best practice. Ideally, backing up to a remote location would be the way to go, but that may not be practical depending on where you are traveling to and Internet availability in the region.
+
+My requirements for the ideal backup procedure are:
+
+1. Use an iPad to manage the process instead of a laptop. I like to travel light and since most of my trips are business related (i.e. non-photography related), I'd hate to bring my personal laptop along with my business laptop. My iPad, however, is always with me, so using it as a tool just makes sense.
+2. Use as few hardware devices as practically possible.
+3. Connection between devices should be secure. I'll be using this setup in hotels and airports, so a closed and encrypted connection between devices is ideal.
+4. The whole process should be sturdy and reliable. I've tried other options using router/combo devices and [it didn't end up well][1].
+
+### The Setup
+
+I came up with a setup that meets the above criteria and is also flexible enough to expand on in the future. It involves the use of the following gear:
+
+1. [iPad Pro 9.7][2] inches. It's the most powerful, small and lightweight iOS device at the time of writing. The Apple Pencil is not really needed, but it's part of my gear as I do some editing on the iPad Pro while on the road. All the heavy lifting will be done by the Raspberry Pi, so any other device capable of connecting through SSH would fit the bill.
+2. [Raspberry Pi 3][3] with Raspbian installed.
+3. [Micro SD card][4] for the Raspberry Pi and a Raspberry Pi [box/case][5].
+4. [128 GB Pen Drive][6]. You can go bigger, but 128 GB is enough for my use case. You can also get a portable external hard drive like [this one][7], but the Raspberry Pi may not provide enough power through its USB port, which means you would have to get a [powered USB hub][8], along with the needed cables, defeating the purpose of having a lightweight and minimalistic setup.
+5. [SD card reader][9]
+6. [SD Cards][10]. I use several as I don't wait for one to fill up before using a different one. That allows me to spread photos I take on a single trip amongst several cards.
+
+The following diagram shows how these devices will be interacting with each other.
+
+![](http://www.movingelectrons.net/images/bkup_photos_diag.jpg)
+>Backup Photos While Traveling - Process Diagram.
+
+The Raspberry Pi will be configured to act as a secured Hot Spot. It will create its own WPA2-encrypted WiFi network to which the iPad Pro will connect. Although there are many online tutorials to create an Ad Hoc (i.e. computer-to-computer) connection with the Raspberry Pi, which is easier to set up, that connection is not encrypted and it's relatively easy for other devices near you to connect to it. Therefore, I decided to go with the WiFi option.
+
+The camera's SD card will be connected to one of the Raspberry Pi's USB ports through an SD card reader. Additionally, a high capacity Pen Drive (128 GB in my case) will be permanently inserted in one of the USB ports on the Raspberry Pi. I picked the [Sandisk Ultra Fit][11] because of its tiny size. 
The main idea is to have the Raspberry Pi back up the photos from the SD Card to the Pen Drive with the help of a Python script. The backup process will be incremental, meaning that only changes (i.e. new photos taken) will be added to the backup folder each time the script runs, making the process really fast. This is a huge advantage if you take a lot of photos or if you shoot in RAW format. The iPad will be used to trigger the Python script and to browse the SD Card and Pen Drive as needed. As an added benefit, if the Raspberry Pi is connected to the Internet through a wired connection (i.e. through the Ethernet port), it will be able to share the Internet connection with the devices connected to its WiFi network.
+
+### 1. Raspberry Pi Configuration
+
+This is the part where we roll up our sleeves and get busy, as we'll be using Raspbian's command-line interface (CLI). I'll try to be as descriptive as possible so it's easy to go through the process.
+
+#### Install and Configure Raspbian
+
+Connect a keyboard, mouse and an LCD monitor to the Raspberry Pi. Insert the Micro SD in the Raspberry Pi's slot and proceed to install Raspbian per the instructions on the [official site][12].
+
+After the installation is done, go to the CLI (Terminal in Raspbian) and type:
+
+```
+sudo apt-get update
+sudo apt-get upgrade
+```
+
+This will upgrade all software on the machine. I configured the Raspberry Pi to connect to the local network and changed the default password as a safety measure.
+
+By default SSH is enabled on Raspbian, so all sections below can be done from a remote machine. I also configured RSA authentication, but that's optional. More info about it [here][13].
+
+This is a screenshot of the SSH connection to the Raspberry Pi from [iTerm][14] on Mac:
+
+##### Creating Encrypted (WPA2) Access Point
+
+The installation is based on [this][15] article, optimized for my use case.
+
+##### 1. Install Packages
+
+We need to type the following to install the required packages:
+
+```
+sudo apt-get install hostapd
+sudo apt-get install dnsmasq
+```
+
+hostapd allows us to use the built-in WiFi as an access point. dnsmasq is a combined DHCP and DNS server that's easy to configure.
+
+##### 2. Edit dhcpcd.conf
+
+Connect to the Raspberry Pi through Ethernet. Interface configuration on the Raspberry Pi is handled by dhcpcd, so first we tell it to ignore wlan0 as it will be configured with a static IP address.
+
+Open up the dhcpcd configuration file with `sudo nano /etc/dhcpcd.conf` and add the following line to the bottom of the file:
+
+```
+denyinterfaces wlan0
+```
+
+Note: This must be above any interface lines that may have been added.
+
+##### 3. Edit interfaces
+
+Now we need to configure our static IP. To do this, open up the interface configuration file with `sudo nano /etc/network/interfaces` and edit the wlan0 section so that it looks like this:
+
+```
+allow-hotplug wlan0
+iface wlan0 inet static
+    address 192.168.1.1
+    netmask 255.255.255.0
+    network 192.168.1.0
+    broadcast 192.168.1.255
+#    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
+```
+
+Also, the wlan1 section was edited to be:
+
+```
+#allow-hotplug wlan1
+#iface wlan1 inet manual
+#    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
+```
+
+Important: Restart dhcpcd with `sudo service dhcpcd restart` and then reload the configuration for wlan0 with `sudo ifdown wlan0; sudo ifup wlan0`.
+
+##### 4. Configure Hostapd
+
+Next, we need to configure hostapd. 
Create a new configuration file with `sudo nano /etc/hostapd/hostapd.conf` with the following contents:
+
+```
+interface=wlan0
+
+# Use the nl80211 driver with the brcmfmac driver
+driver=nl80211
+
+# This is the name of the network
+ssid=YOUR_NETWORK_NAME_HERE
+
+# Use the 2.4GHz band
+hw_mode=g
+
+# Use channel 6
+channel=6
+
+# Enable 802.11n
+ieee80211n=1
+
+# Enable QoS Support
+wmm_enabled=1
+
+# Enable 40MHz channels with 20ns guard interval
+ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]
+
+# Accept all MAC addresses
+macaddr_acl=0
+
+# Use WPA authentication
+auth_algs=1
+
+# Require clients to know the network name
+ignore_broadcast_ssid=0
+
+# Use WPA2
+wpa=2
+
+# Use a pre-shared key
+wpa_key_mgmt=WPA-PSK
+
+# The network passphrase
+wpa_passphrase=YOUR_NEW_WIFI_PASSWORD_HERE
+
+# Use AES, instead of TKIP
+rsn_pairwise=CCMP
+```
+
+Now, we also need to tell hostapd where to look for the config file when it starts up on boot. Open up the default configuration file with `sudo nano /etc/default/hostapd` and find the line `#DAEMON_CONF=""` and replace it with `DAEMON_CONF="/etc/hostapd/hostapd.conf"`.
+
+##### 5. Configure Dnsmasq
+
+The shipped dnsmasq config file contains tons of information on how to use it, but we won't be using all the options. I'd recommend moving it (rather than deleting it), and creating a new one with
+
+```
+sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
+sudo nano /etc/dnsmasq.conf
+```
+
+Paste the following into the new file:
+
+```
+interface=wlan0 # Use interface wlan0
+listen-address=192.168.1.1 # Explicitly specify the address to listen on
+bind-interfaces # Bind to the interface to make sure we aren't sending things elsewhere
+server=8.8.8.8 # Forward DNS requests to Google DNS
+domain-needed # Don't forward short names
+bogus-priv # Never forward addresses in the non-routed address spaces.
+dhcp-range=192.168.1.50,192.168.1.100,12h # Assign IP addresses in that range with a 12 hour lease time
+```
+
+##### 6. Set up IPv4 forwarding
+
+One of the last things that we need to do is to enable packet forwarding. To do this, open up the sysctl.conf file with `sudo nano /etc/sysctl.conf`, and remove the # from the beginning of the line containing `net.ipv4.ip_forward=1`. This will enable it on the next reboot.
+
+We also need to share our Raspberry Pi's internet connection with our devices connected over WiFi by configuring a NAT between our wlan0 interface and our eth0 interface. We can do this by writing a script with the following lines.
+
+```
+sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
+sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
+sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
+```
+
+I named the script `hotspot-boot.sh` and made it executable with:
+
+```
+sudo chmod 755 hotspot-boot.sh
+```
+
+The script should be executed when the Raspberry Pi boots. There are many ways to accomplish this, and this is the way I went with:
+
+1. Put the file in `/home/pi/scripts`.
+2. Edit the rc.local file by typing `sudo nano /etc/rc.local` and place the call to the script before the line that reads `exit 0` (more information [here][16]).
+
+This is what the rc.local file looks like after editing it.
+
+```
+#!/bin/sh -e
+#
+# rc.local
+#
+# This script is executed at the end of each multiuser runlevel.
+# Make sure that the script will "exit 0" on success or any other
+# value on error.
+#
+# In order to enable or disable this script just change the execution
+# bits. 
+#
+# By default this script does nothing.
+
+# Print the IP address
+_IP=$(hostname -I) || true
+if [ "$_IP" ]; then
+  printf "My IP address is %s\n" "$_IP"
+fi
+
+sudo /home/pi/scripts/hotspot-boot.sh &
+
+exit 0
+
+```
+
+#### Installing Samba and NTFS Compatibility
+
+We also need to install the following packages to enable the Samba protocol and allow the File Browser App to see the devices connected to the Raspberry Pi as shared folders. Also, ntfs-3g provides NTFS compatibility in case we decide to connect a portable hard drive to the Raspberry Pi.
+
+```
+sudo apt-get install ntfs-3g
+sudo apt-get install samba samba-common-bin
+```
+
+You can follow [this][17] article for details on how to configure Samba.
+
+Important Note: The referenced article also goes through the process of mounting external hard drives on the Raspberry Pi. We won't be doing that because, at the time of writing, the current version of Raspbian (Jessie) auto-mounts both the SD Card and the Pendrive to `/media/pi/` when the device is turned on. The article also goes over some redundancy features that we won't be using.
+
+### 2. Python Script
+
+Now that the Raspberry Pi has been configured, we need to work on the script that will actually back up/copy our photos. Note that this script just provides a certain degree of automation to the backup process. If you have a basic knowledge of using the Linux/Raspbian CLI, you can just SSH into the Raspberry Pi and copy all the photos yourself from one device to the other by creating the needed folders and using either the `cp` or the `rsync` command. We'll be using the rsync method in the script as it's very reliable and allows for incremental backups.
+
+This process relies on two files: the script itself and the configuration file `backup_photos.conf`. The latter just has a couple of lines indicating where the drives are mounted and what folder the destination drive (Pendrive) has been mounted to. This is what it looks like:
+
+```
+mount folder=/media/pi/
+destination folder=PDRIVE128GB
+```
+
+Important: Do not add any additional spaces between the `=` symbol and the words to both sides of it as the script will break (definitely an opportunity for improvement).
+
+Below is the Python script, which I named `backup_photos.py` and placed in `/home/pi/scripts/`. I included comments in between the lines of code to make it easier to follow.
+
+```
+#!/usr/bin/python3
+
+import os
+import sys
+from sh import rsync
+
+'''
+The script copies an SD Card mounted on /media/pi/ to a folder with the same name
+created in the destination drive. The destination drive's name is defined in
+the .conf file.
+
+
+Argument: label/name of the mounted SD Card.
+'''
+
+CONFIG_FILE = '/home/pi/scripts/backup_photos.conf'
+ORIGIN_DEV = sys.argv[1]
+
+def create_folder(path):
+
+    print ('attempting to create destination folder: ', path)
+    if not os.path.exists(path):
+        try:
+            os.mkdir(path)
+            print ('Folder created.')
+        except:
+            print ('Folder could not be created. Stopping.')
+            return
+    else:
+        print ('Folder already in path. Using that instead.')
+
+
+
+confFile = open(CONFIG_FILE,'rU')
+#IMPORTANT: rU opens the file with Universal Newline Support,
+#so \n and/or \r is recognized as a new line.
+
+confList = confFile.readlines()
+confFile.close()
+
+
+for line in confList:
+    line = line.strip('\n')
+
+    try:
+        name, value = line.split('=')
+
+        if name == 'mount folder':
+            mountFolder = value
+        elif name == 'destination folder':
+            destDevice = value
+
+
+    except ValueError:
+        print ('Incorrect line format. 
Passing.')
+        pass
+
+
+destFolder = mountFolder+destDevice+'/'+ORIGIN_DEV
+create_folder(destFolder)
+
+print ('Copying files...')
+
+# Uncomment the following line to delete files in the destination that are not in the origin:
+# rsync("-av", "--delete", mountFolder+ORIGIN_DEV, destFolder)
+rsync("-av", mountFolder+ORIGIN_DEV+'/', destFolder)
+
+print ('Done.')
+```
+
+### 3. iPad Pro Configuration
+
+Since all the heavy lifting will be done on the Raspberry Pi and no files will be transferred through the iPad Pro, which was a huge disadvantage in [one of the workflows I tried before][18], we just need to install [Prompt 2][19] on the iPad to access the Raspberry Pi through SSH. Once connected, you can either run the Python script or copy the files manually.
+
+![](http://www.movingelectrons.net/images/bkup_photos_ipad&rpi_prompt.jpg)
+>SSH Connection to Raspberry Pi From iPad Using Prompt.
+
+Since we installed Samba, we can access USB devices connected to the Raspberry Pi in a more graphical way. You can stream videos, copy and move files between devices. [File Browser][20] is perfect for that.
+
+### 4. Putting it All Together
+
+Let's suppose that `SD32GB-03` is the label of an SD card connected to one of the USB ports on the Raspberry Pi. Also, let's suppose that `PDRIVE128GB` is the label of the Pendrive, also connected to the device and defined in the `.conf` file as indicated above. If we wanted to back up the photos on the SD Card, we would need to go through the following steps:
+
+1. Turn on the Raspberry Pi so that drives are mounted automatically.
+2. Connect to the WiFi network generated by the Raspberry Pi.
+3. Connect to the Raspberry Pi through SSH using the [Prompt][21] App.
+4. Type the following once you are connected:
+
+```
+python3 backup_photos.py SD32GB-03
+```
+
+The first backup may take some minutes depending on how much of the card is used. That means you need to keep the connection to the Raspberry Pi alive from the iPad. You can get around this by using the [nohup][22] command when running the script.
+
+```
+nohup python3 backup_photos.py SD32GB-03 &
+```
+
+![](http://www.movingelectrons.net/images/bkup_photos_ipad&rpi_finished.png)
+>iTerm Screenshot After Running Python Script.
+
+### Further Customization
+
+I installed a VNC server to access Raspbian's graphical interface from another computer or the iPad through [Remoter App][23]. I'm looking into installing [BitTorrent Sync][24] for backing up photos to a remote location while on the road, which would be the ideal setup. I'll expand this post once I have a workable solution.
+
+Feel free to either include your comments/questions below or reach out to me. My contact info is at the footer of this page. 
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.movingelectrons.net/blog/2016/06/26/backup-photos-while-traveling-with-a-raspberry-pi.html
+
+作者:[Editor][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.movingelectrons.net/blog/2016/06/26/backup-photos-while-traveling-with-a-raspberry-pi.html
+[1]: http://bit.ly/1MVVtZi
+[2]: http://www.amazon.com/dp/B01D3NZIMA/?tag=movinelect0e-20
+[3]: http://www.amazon.com/dp/B01CD5VC92/?tag=movinelect0e-20
+[4]: http://www.amazon.com/dp/B010Q57T02/?tag=movinelect0e-20
+[5]: http://www.amazon.com/dp/B01F1PSFY6/?tag=movinelect0e-20
+[6]: http://amzn.to/293kPqX
+[7]: http://amzn.to/290syFY
+[8]: http://amzn.to/290syFY
+[9]: http://amzn.to/290syFY
+[10]: http://amzn.to/290syFY
+[11]: http://amzn.to/293kPqX
+[12]: https://www.raspberrypi.org/downloads/noobs/
+[13]: https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md
+[14]: https://www.iterm2.com/
+[15]: https://frillip.com/using-your-raspberry-pi-3-as-a-wifi-access-point-with-hostapd/
+[16]: https://www.raspberrypi.org/documentation/linux/usage/rc-local.md
+[17]: http://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/
+[18]: http://bit.ly/1MVVtZi
+[19]: https://itunes.apple.com/us/app/prompt-2/id917437289?mt=8&uo=4&at=11lqkH
+[20]: https://itunes.apple.com/us/app/filebrowser-access-files-on/id364738545?mt=8&uo=4&at=11lqkH
+[21]: https://itunes.apple.com/us/app/prompt-2/id917437289?mt=8&uo=4&at=11lqkH
+[22]: https://en.m.wikipedia.org/wiki/Nohup
+[23]: https://itunes.apple.com/us/app/remoter-pro-vnc-ssh-rdp/id519768191?mt=8&uo=4&at=11lqkH
+[24]: https://getsync.com/
diff --git a/sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md b/sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md
new file mode 100644
index 0000000000..a2521673c9
--- /dev/null
+++ b/sources/tech/20160628 How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04.md
@@ -0,0 +1,101 @@
+How To Setup Open Source Discussion Platform Discourse On Ubuntu Linux 16.04
+===============================================================================
+
+Discourse is an open source discussion platform that can work as a mailing list, a chat room and a forum as well. It is a popular, modern-day implementation of a discussion platform. On the server side, it is built using Ruby on Rails with Postgres on the backend; it also makes use of Redis caching to reduce loading times. On the client side, it runs in the browser using JavaScript. It is a pretty well optimized and well structured tool. It also offers converter plugins to migrate your existing discussion boards/forums, such as vBulletin, phpBB, Drupal, SMF, etc., to Discourse. In this article, we will be learning how to install Discourse on the Ubuntu operating system.
+
+It is developed with security in mind, so spammers and hackers might not be lucky with this application. It works well with all modern devices, and adjusts its display settings accordingly for mobile devices and tablets.
+
+### Installing Discourse on Ubuntu 16.04
+
+Let's get started! 
The minimum system RAM to run Discourse is 1 GB, and the officially supported installation process for Discourse requires Docker to be installed on our Linux system. Besides Docker, it also requires Git. We can fulfill these two requirements by simply running the following command on our system's terminal.
+
+```
+wget -qO- https://get.docker.com/ | sh
+```
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/124.png)
+
+It shouldn't take long to complete the installation of Docker and Git. As soon as the installation process is complete, create a directory for Discourse inside the /var partition of your system (you can choose any other partition here too).
+
+```
+mkdir /var/discourse
+```
+
+Now clone Discourse's GitHub repository to this newly created directory.
+
+```
+git clone https://github.com/discourse/discourse_docker.git /var/discourse
+```
+
+Go into the cloned directory.
+
+```
+cd /var/discourse
+```
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/314.png)
+
+You should be able to locate the "discourse-setup" script file here; simply run this script to initiate the installation wizard for Discourse.
+
+```
+./discourse-setup
+```
+
+**Side note: Please make sure you have a working email server set up before attempting to install Discourse.**
+
+The installation wizard will ask you the following six questions.
+
+```
+Hostname for your Discourse?
+Email address for admin account?
+SMTP server address?
+SMTP user name?
+SMTP port [587]:
+SMTP password? []:
+```
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/411.png)
+
+Once you supply this information, it will ask for confirmation. If everything is fine, hit "Enter" and the installation process will take off.
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/511.png)
+
+Sit back and relax! It will take a fair amount of time to complete the installation; grab a cup of coffee, and keep an eye out for any error messages.
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/610.png)
+
+Here is what the successful completion of the installation process should look like.
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/710.png)
+
+Now launch your web browser. If the hostname for the Discourse installation resolves properly to the IP address, you can use your hostname in the browser; otherwise, use your IP address to launch the Discourse page. Here is what you should see:
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/85.png)
+
+That's it! Create a new account using the "Sign Up" option and you should be good to go with your Discourse setup.
+
+![](http://linuxpitstop.com/wp-content/uploads/2016/06/106.png)
+
+### Conclusion
+
+It is an easy-to-set-up application that works flawlessly. It is equipped with all the required features of a modern-day discussion board. It is available under the General Public License and is a 100% open source product. The simplicity, ease of use, and powerful, long feature list are the most important qualities of this tool. Hope you enjoyed this article. Questions? Do let us know in the comments please. 
+
+--------------------------------------------------------------------------------
+
+via: http://linuxpitstop.com/install-discourse-on-ubuntu-linux-16-04/
+
+作者:[Aun][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://linuxpitstop.com/author/aun/
+
+
+
+
+
+
+
+
diff --git a/sources/tech/20160630 What makes up the Fedora kernel.md b/sources/tech/20160630 What makes up the Fedora kernel.md
new file mode 100644
index 0000000000..95b61a201a
--- /dev/null
+++ b/sources/tech/20160630 What makes up the Fedora kernel.md
@@ -0,0 +1,33 @@
+What makes up the Fedora kernel?
+====================================
+
+![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/kernel-945x400.png)
+
+Every Fedora system runs a kernel. Many pieces of code come together to make this a reality.
+
+Each release of the Fedora kernel starts with a baseline release from the [upstream community][1]. This is often called a 'vanilla' kernel. The upstream kernel is the standard. The goal is to have as much code upstream as possible. This makes it easier for bug fixes and API updates to happen, as well as having more people review the code. In an ideal world, Fedora would be able to take the kernel straight from kernel.org and send that out to all users.
+
+Realistically, using the vanilla kernel isn't complete enough for Fedora. Some features Fedora users want may not be available. The [Fedora kernel][2] that users actually receive contains a number of patches on top of the vanilla kernel. These patches are considered 'out of tree'. Many of these patches will not exist out of tree very long. If patches are available to fix an issue, the patches may be pulled into the Fedora tree so the fix can go out to users faster. When the kernel is rebased to a new version, the patches will be removed if they are in the new version. 
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/makes-fedora-kernel/
+
+作者:[Laura Abbott][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/makes-fedora-kernel/
+[1]: http://www.kernel.org/
+[2]: http://pkgs.fedoraproject.org/cgit/rpms/kernel.git/
+[3]: http://www.labbott.name/blog/2015/10/02/the-art-of-communicating-with-lkml/
diff --git a/sources/tech/20160701 CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU.md b/sources/tech/20160701 CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU.md
new file mode 100644
index 0000000000..ad51164ed4
--- /dev/null
+++ b/sources/tech/20160701 CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU.md
@@ -0,0 +1,43 @@
+(翻译中 by runningwater)
+CANONICAL CONSIDERING TO DROP 32 BIT SUPPORT IN UBUNTU
+========================================================
+
+![](https://itsfoss.com/wp-content/uploads/2016/06/Ubuntu-32-bit-goes-for-a-toss-.jpg)
+
+Yesterday, developer [Dimitri John Ledkov][1] wrote a message on the [Ubuntu mailing list][2] calling for the end of i386 support by Ubuntu 18.10. Ledkov argues that more software is being developed with 64-bit support. He is also concerned that it will be difficult to provide security support for the aging i386 architecture.
+
+Ledkov also argues that building i386 images is not free, but takes quite a bit of Canonical’s resources.
+
+>Building i386 images is not “for free”, it comes at the cost of utilizing our build farm, QA and validation time. Whilst we have scalable build-farms, i386 still requires all packages, autopackage tests, and ISOs to be revalidated across our infrastructure. As well as take up mirror space & bandwidth.
+
+Ledkov offers a plan where the 16.10, 17.04, and 17.10 versions of Ubuntu will continue to have i386 kernels, netboot installers, and cloud images, but drop the i386 ISOs for desktop and server. The 18.04 LTS would then drop support for i386 kernels, netboot installers, and cloud images, but still provide the ability for i386 programs to run on the 64-bit architecture. Then, 18.10 would end the i386 port and limit legacy 32-bit applications to snaps, containers, and virtual machines.
+
+Ledkov’s plan has not been accepted yet, but it shows a definite push toward eliminating 32-bit support.
+
+### GOOD NEWS
+
+Don’t despair yet. This will not affect the distros used to resurrect your old system. [Martin Wimpress][3], the creator of [Ubuntu MATE][4], revealed during a discussion on Google+ that these changes will only affect mainline Ubuntu.
+
+>The i386 archive will continue to exist into 18.04 and flavours can continue to elect to build i386 isos. There is however a security concern, in that some larger applications (Firefox, Chromium, LibreOffice) are already presenting challenges in terms of applying some security patches to older LTS releases. So flavours are being asked to be mindful of the support period they can reasonably be expected to support i386 versions for.
+
+### THOUGHTS
+
+I understand why they need to make this move from a security standpoint, but it’s going to make people move away from mainline Ubuntu to either one of the flavors or a different architecture. Thankfully, we have alternative [lightweight Linux distributions][5]. 
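+
+Not sure whether this affects you? You can check which architecture your own Ubuntu install runs with a single command (uname is part of coreutils and present on every install):
+
+```
+$ uname -m
+x86_64
+```
+
+Output of `x86_64` means a 64-bit system; `i686` or `i586` would indicate a 32-bit one.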
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/ubuntu-32-bit-support-drop/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29
+
+作者:[John Paul][a]
+译者:[runningwater](https://github.com/runningwater)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/john/
+[1]: https://plus.google.com/+DimitriJohnLedkov
+[2]: https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2016-June/016661.html
+[3]: https://twitter.com/m_wimpress
+[4]: http://ubuntu-mate.org/
+[5]: https://itsfoss.com/lightweight-linux-beginners/
diff --git a/sources/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md b/sources/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md
new file mode 100644
index 0000000000..b3d40a94c2
--- /dev/null
+++ b/sources/tech/20160701 How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS.md
@@ -0,0 +1,143 @@
+How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS
+=======================================================================
+
+> I am a new Ubuntu Linux 16.04 LTS user. How do I set up a network bridge on a host server powered by the Ubuntu 14.04 LTS or 16.04 LTS operating system?
+
+![](http://s0.cyberciti.org/images/category/old/ubuntu-logo.jpg)
+
+Bridged networking is nothing but a simple technique to connect to the outside network through a physical interface. It is useful for LXC/KVM/Xen/container virtualization and other virtual interfaces. The virtual interfaces appear as regular hosts to the rest of the network. In this tutorial I will explain how to configure a Linux bridge with the bridge-utils (brctl) command line utility on an Ubuntu server.
+
+### Our sample bridged networking
+
+![](http://s0.cyberciti.org/uploads/faq/2016/07/my-br0-br1-setup.jpg)
+>Fig.01: Sample Ubuntu Bridged Networking Setup For Kvm/Xen/LXC Containers (br0)
+
+In this example eth0 and eth1 are the physical network interfaces. eth0 is connected to the LAN and eth1 is attached to the upstream ISP router/Internet. 
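+
+Before editing anything, it is worth confirming the interface names on your own server, since newer Ubuntu releases may use predictable names such as enp0s3 instead of eth0:
+
+```
+$ ip link show
+```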
+
+### Install bridge-utils
+
+Type the following [apt-get command][1] to install the bridge-utils:
+
+```
+$ sudo apt-get install bridge-utils
+```
+
+OR
+
+```
+$ sudo apt install bridge-utils
+```
+
+Sample outputs:
+
+![](http://s0.cyberciti.org/uploads/faq/2016/07/ubuntu-install-bridge-utils.jpg)
+>Fig.02: Ubuntu Linux install bridge-utils package
+
+### Creating a network bridge on the Ubuntu server
+
+Edit `/etc/network/interfaces` using a text editor such as nano or vi, enter:
+
+```
+$ sudo cp /etc/network/interfaces /etc/network/interfaces.bakup-1-july-2016
+$ sudo vi /etc/network/interfaces
+```
+
+Let us set up eth1 and map it to br1; enter the following (delete or comment out all existing eth1 entries):
+
+```
+# br1 setup with static wan IPv4 with ISP router as gateway
+auto br1
+iface br1 inet static
+    address 208.43.222.51
+    network 208.43.222.48
+    netmask 255.255.255.248
+    broadcast 208.43.222.55
+    gateway 208.43.222.49
+    bridge_ports eth1
+    bridge_stp off
+    bridge_fd 0
+    bridge_maxwait 0
+```
+
+To set up eth0 and map it to br0, enter the following (delete or comment out all existing eth0 entries):
+
+```
+auto br0
+iface br0 inet static
+    address 10.18.44.26
+    netmask 255.255.255.192
+    broadcast 10.18.44.63
+    dns-nameservers 10.0.80.11 10.0.80.12
+    # set static route for LAN
+    post-up route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.18.44.1
+    post-up route add -net 161.26.0.0 netmask 255.255.0.0 gw 10.18.44.1
+    bridge_ports eth0
+    bridge_stp off
+    bridge_fd 0
+    bridge_maxwait 0
+```
+
+### A note about br0 and DHCP
+
+DHCP config options:
+
+```
+auto br0
+iface br0 inet dhcp
+    bridge_ports eth0
+    bridge_stp off
+    bridge_fd 0
+    bridge_maxwait 0
+```
+
+Save and close the file.
+
+### Restart the server or networking service
+
+You need to reboot the server or type the following command to restart the networking service (this may not work over an SSH-based session):
+
+```
+$ sudo systemctl restart networking
+```
+
+If you are using Ubuntu 14.04 LTS or an older release that is not based on systemd, enter:
+
+```
+$ sudo /etc/init.d/networking restart
+```
+
+### Verify connectivity
+
+Use the ping/ip commands to verify that both LAN and WAN interfaces are reachable:
+
+```
+# See br0 and br1
+ip a show
+# See routing info
+ip r
+# ping public site
+ping -c 2 cyberciti.biz
+# ping lan server
+ping -c 2 10.0.80.12
+```
+
+Sample outputs:
+
+![](http://s0.cyberciti.org/uploads/faq/2016/07/br0-br1-eth0-eth1-configured-on-ubuntu.jpg)
+>Fig.03: Verify Bridging Ethernet Connections
+
+Now you can configure XEN/KVM/LXC containers to use br0 and br1 to reach the internet or the private LAN directly. There is no need to set up special routing or iptables SNAT rules. 
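+
+Since bridge-utils is installed, you can also confirm which physical port is enslaved to each bridge with brctl (sample output; the bridge IDs on your system will differ):
+
+```
+$ brctl show
+bridge name     bridge id               STP enabled     interfaces
+br0             8000.001122334455       no              eth0
+br1             8000.001122334466       no              eth1
+```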
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/how-to-create-bridge-interface-ubuntu-linux/
+
+作者:[VIVEK GITE][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://twitter.com/nixcraft
+[1]: http://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html
+
diff --git a/sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md b/sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md
new file mode 100644
index 0000000000..2dd2ae024f
--- /dev/null
+++ b/sources/tech/20160705 How to Encrypt a Flash Drive Using VeraCrypt.md
@@ -0,0 +1,100 @@
+How to Encrypt a Flash Drive Using VeraCrypt
+============================================
+
+Many security experts prefer open source software like VeraCrypt, which can be used to encrypt flash drives, because of its readily available source code.
+
+Encryption is a smart idea for protecting data on a USB flash drive, as we covered in our piece that described [how to encrypt a flash drive][1] using Microsoft BitLocker.
+
+But what if you do not want to use BitLocker?
+
+You may be concerned that because Microsoft's source code is not available for inspection, it could be susceptible to security "backdoors" used by the government or others. Because source code for open source software is widely shared, many security experts feel open source software is far less likely to have any backdoors.
+
+Fortunately, there are several open source encryption alternatives to BitLocker.
+
+If you need to be able to encrypt and access files on any Windows machine, as well as computers running Apple OS X or Linux, the open source [VeraCrypt][2] offers an excellent alternative.
+
+VeraCrypt is derived from TrueCrypt, a well-regarded open source encryption software product that has now been discontinued. But the code for TrueCrypt was audited and no major security flaws were found. In addition, it has since been improved in VeraCrypt.
+
+Versions exist for Windows, OS X and Linux.
+
+Encrypting a USB flash drive with VeraCrypt is not as straightforward as it is with BitLocker, but it still only takes a few minutes.
+
+### Encrypting Flash Drive with VeraCrypt in 8 Steps
+
+After [downloading VeraCrypt][3] for your operating system:
+
+Start VeraCrypt, and click on Create Volume to start the VeraCrypt Volume Creation Wizard.
+
+![](http://www.esecurityplanet.com/imagesvr_ce/6246/Vera0.jpg)
+
+The VeraCrypt Volume Creation Wizard allows you to create an encrypted file container on the flash drive which sits along with other unencrypted files, or you can choose to encrypt the entire flash drive. For the moment, we will choose to encrypt the entire flash drive.
+
+![](http://www.esecurityplanet.com/imagesvr_ce/6703/Vera1.jpg)
+
+On the next screen, choose Standard VeraCrypt Volume.
+
+![](http://www.esecurityplanet.com/imagesvr_ce/835/Vera2.jpg)
+
+Select the drive letter of the flash drive you want to encrypt (in this case O:).
+
+![](http://www.esecurityplanet.com/imagesvr_ce/9427/Vera3.jpg)
+
+Choose the Volume Creation Mode. If your flash drive is empty or you want to delete everything it contains, choose the first option. If you want to keep any existing files, choose the second option.
+
+![](http://www.esecurityplanet.com/imagesvr_ce/7828/Vera4.jpg)
+
+This screen allows you to choose your encryption options. 
If you are unsure of which to choose, leave the default settings of AES and SHA-512.
+
+![](http://www.esecurityplanet.com/imagesvr_ce/5918/Vera5.jpg)
+
+After confirming the Volume Size screen, enter and re-enter the password you want to use to encrypt your data.
+
+![](http://www.esecurityplanet.com/imagesvr_ce/3850/Vera6.jpg)
+
+To work effectively, VeraCrypt must draw from a pool of entropy or "randomness." To generate this pool, you'll be asked to move your mouse around in a random fashion for about a minute. Once the bar has turned green, or preferably when it reaches the far right of the screen, click Format to finish creating your encrypted drive.
+
+![](http://www.esecurityplanet.com/imagesvr_ce/7468/Vera8.jpg)
+
+### Using a Flash Drive Encrypted with VeraCrypt
+
+When you want to use an encrypted flash drive, first insert the drive in the computer and start VeraCrypt.
+
+Then select an unused drive letter (such as z:) and click Auto-Mount Devices.
+
+![](http://www.esecurityplanet.com/imagesvr_ce/2016/Vera10.jpg)
+
+Enter your password and click OK.
+
+![](http://www.esecurityplanet.com/imagesvr_ce/8222/Vera11.jpg)
+
+The mounting process may take a few minutes, after which the decrypted contents of your drive will become available under the drive letter you selected previously.
+
+### VeraCrypt Traveler Disk Setup
+
+If you set up a flash drive with an encrypted container rather than encrypting the whole drive, you also have the option to create what VeraCrypt calls a traveler disk. This installs a copy of VeraCrypt on the USB flash drive itself, so when you insert the drive in another Windows computer you can run VeraCrypt automatically from the flash drive; there is no need to install it on the computer.
+
+You can set up a flash drive to be a Traveler Disk by choosing Traveler Disk SetUp from the Tools menu of VeraCrypt.
+
+![](http://www.esecurityplanet.com/imagesvr_ce/5812/Vera12.jpg)
+
+It is worth noting that in order to run VeraCrypt from a Traveler Disk on a computer, you must have administrator privileges on that computer. While that may seem to be a limitation, no confidential files can be opened safely on a computer that you do not control, such as one in a business center.
+
+>Paul Rubens has been covering enterprise technology for over 20 years. In that time he has written for leading UK and international publications including The Economist, The Times, Financial Times, the BBC, Computing and ServerWatch.
+
+--------------------------------------------------------------------------------
+
+via: http://www.esecurityplanet.com/open-source-security/how-to-encrypt-flash-drive-using-veracrypt.html
+
+作者:[Paul Rubens][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.esecurityplanet.com/author/3700/Paul-Rubens
+[1]: http://www.esecurityplanet.com/views/article.php/3880616/How-to-Encrypt-a-USB-Flash-Drive.htm
+[2]: http://www.esecurityplanet.com/open-source-security/veracrypt-a-worthy-truecrypt-alternative.html
+[3]: https://veracrypt.codeplex.com/releases/view/619351
+
+
+
+diff --git a/sources/tech/20160706 What is Git.md b/sources/tech/20160706 What is Git.md
new file mode 100644
index 0000000000..f98de42fd3
--- /dev/null
+++ b/sources/tech/20160706 What is Git.md
@@ -0,0 +1,123 @@
+translating by cvsher
+What is Git
+===========
+
+Welcome to my series on learning how to use the Git version control system! 
In this introduction to the series, you will learn what Git is for and who should use it.
+
+If you're just starting out in the open source world, you're likely to come across a software project that keeps its code in, and possibly releases it for use, by way of Git. In fact, whether you know it or not, you're certainly using software right now that is developed using Git: the Linux kernel (which drives the website you're on right now, if not the desktop or mobile phone you're accessing it on), Firefox, Chrome, and many more projects share their codebase with the world in a Git repository.
+
+On the other hand, all the excitement and hype over Git tends to make things a little muddy. Can you only use Git to share your code with others, or can you use Git in the privacy of your own home or business? Do you have to have a GitHub account to use Git? Why use Git at all? What are the benefits of Git? Is Git the only option?
+
+So forget what you know or what you think you know about Git, and let's take it from the beginning.
+
+### What is version control?
+
+Git is, first and foremost, a version control system (VCS). There are many version control systems out there: CVS, SVN, Mercurial, Fossil, and, of course, Git.
+
+Git serves as the foundation for many services, like GitHub and GitLab, but you can use Git without using any other service. This means that you can use Git privately or publicly.
+
+If you have ever collaborated on anything digital with anyone, then you know how it goes. It starts out simple: you have your version, and you send it to your partner. They make some changes, so now there are two versions, and they send their suggestions back to you. You integrate their changes into your version, and now there is one version again.
+
+Then it gets worse: while you change your version further, your partner makes more changes to their version. Now you have three versions; the merged copy that you both worked on, the version you changed, and the version your partner has changed.
+
+As Jason van Gumster points out in his article, [Even artists need version control][1], this syndrome tends to happen in individual settings as well. In both art and science, it's not uncommon to develop a trial version of something; a version of your project that might make it a lot better, or that might fail miserably. So you create file names like project_justTesting.kdenlive and project_betterVersion.kdenlive, and then project_best_FINAL.kdenlive, but with the inevitable allowance for project_FINAL-alternateVersion.kdenlive, and so on.
+
+Whether it's a change to a for loop or an editing change, it happens to the best of us. That is where a good version control system makes life easier.
+
+### Git snapshots
+
+Git takes snapshots of a project, and stores those snapshots as unique versions.
+
+If you go off in a direction with your project that you decide was the wrong direction, you can just roll back to the last good version and continue along an alternate path.
+
+If you're collaborating, then when someone sends you changes, you can merge those changes into your working branch, and then your collaborator can grab the merged version of the project and continue working from the new current version.
+
+Git isn't magic, so conflicts do occur ("You changed the last line of the book, but I deleted that line entirely; how do we resolve that?"), but on the whole, Git enables you to manage the many potential variants of a single work, retaining the history of all the changes, and even allows for parallel versions. 
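+
+To make the snapshot idea concrete, here is roughly what rolling back looks like on the command line (a sketch; the abbreviated commit hash is a placeholder for one shown by your own project's log):
+
+```
+$ git log --oneline     # list the project's snapshots (commits)
+$ git checkout a1b2c3d  # revisit the project as it was at that snapshot
+```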
+
+### Git distributes
+
+Working on a project on separate machines is complex, because you want to have the latest version of a project while you work, make your own changes, and share your changes with your collaborators. The default method of doing this tends to be clunky online file sharing services, or old school email attachments, both of which are inefficient and error-prone.
+
+Git is designed for distributed development. If you're involved with a project, you can clone the project's Git repository, and then work on it as if it was the only copy in existence. Then, with a few simple commands, you can pull in any changes from other contributors, and you can also push your changes over to someone else. Now there is no confusion about who has what version of a project, or whose changes exist where. It is all locally developed, and pushed and pulled toward a common target (or not, depending on how the project chooses to develop).
+
+### Git interfaces
+
+In its natural state, Git is an application that runs in the Linux terminal. However, as it is well-designed and open source, developers all over the world have designed other ways to access it.
+
+It is free, available to anyone for $0, and comes in packages on Linux, BSD, Illumos, and other Unix-like operating systems. It looks like this:
+
+```
+$ git --version
+git version 2.5.3
+```
+
+Probably the most well-known Git interfaces are web-based: sites like GitHub, the open source GitLab, Savannah, BitBucket, and SourceForge all offer online code hosting to maximise the public and social aspect of open source along with, in varying degrees, browser-based GUIs to minimise the learning curve of using Git. This is what the GitLab interface looks like:
+
+![](https://opensource.com/sites/default/files/0_gitlab.png)
+
+Additionally, it is possible that a Git service or independent developer may even have a custom Git frontend that is not HTML-based, which is particularly handy if you don't live with a browser eternally open. The most transparent integration comes in the form of file manager support. The KDE file manager, Dolphin, can show the Git status of a directory, and even generate commits, pushes, and pulls.
+
+![](https://opensource.com/sites/default/files/0_dolphin.jpg)
+
+[Sparkleshare][2] uses Git as a foundation for its own Dropbox-style file sharing interface.
+
+![](https://opensource.com/sites/default/files/0_sparkleshare_1.jpg)
+
+For more, see the (long) page on the official [Git wiki][3] listing projects with graphical interfaces to Git.
+
+### Who should use Git?
+
+You should! The real question is when? And what for?
+
+### When should I use Git, and what should I use it for?
+
+To get the most out of Git, you need to think a little bit more than usual about file formats.
+
+Git is designed to manage source code, which in most languages consists of lines of text. Of course, Git doesn't know if you're feeding it source code or the next Great American Novel, so as long as it breaks down to text, Git is a great option for managing and tracking versions.
+
+But what is text? If you write something in an office application like Libre Office, then you're probably not generating raw text. There is usually a wrapper around complex applications like that, which encapsulates the raw text in XML markup and then in a zip container, as a way to ensure that all of the assets for your office file are available when you send that file to someone else. 
Strangely, though, some files that you might expect to be very complex, like the save files for a [Kdenlive][4] project or an SVG from [Inkscape][5], are actually raw XML files that can easily be managed by Git.
+
+If you use Unix, you can check to see what a file is made of with the file command:
+
+```
+$ file ~/path/to/my-file.blah
+my-file.blah: ASCII text
+$ file ~/path/to/different-file.kra
+different-file.kra: Zip data (MIME type "application/x-krita")
+```
+
+If unsure, you can view the contents of a file with the head command:
+
+```
+$ head ~/path/to/my-file.blah
+```
+
+If you see text that is mostly readable by you, then it is probably a file made of text. If you see garbage with some familiar text characters here and there, it is probably not made of text.
+
+Make no mistake: Git can manage other formats of files, but it treats them as blobs. The difference is that in a text file, two Git snapshots (or commits, as we call them) might be, say, three lines different from each other. If you have a photo that has been altered between two different commits, how can Git express that change? It can't, really, because photographs are not made of any kind of sensible text that can just be inserted or removed. I wish photo editing were as easy as just changing some text from "ugly greenish-blue" to "blue-with-fluffy-clouds" but it truly is not.
+
+People check in blobs, like PNG icons or a spreadsheet or a flowchart, to Git all the time, so if you're working in Git then don't be afraid to do that. Know that it's not sensible to do that with huge files, though. If you are working on a project that does generate both text files and large blobs (a common scenario with video games, which have equal parts source code to graphical and audio assets), then you can do one of two things: either invent your own solution, such as pointers to a shared network drive, or use a Git add-on like Joey Hess's excellent [git annex][6], or the [Git-Media][7] project.
+
+So you see, Git really is for everyone. It is a great way to manage versions of your files, it is a powerful tool, and it is not as scary as it first seems.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/resources/what-is-git
+
+作者:[Seth Kenlon][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[1]: https://opensource.com/life/16/2/version-control-isnt-just-programmers
+[2]: http://sparkleshare.org/
+[3]: https://git.wiki.kernel.org/index.php/InterfacesFrontendsAndTools#Graphical_Interfaces
+[4]: https://opensource.com/life/11/11/introduction-kdenlive
+[5]: http://inkscape.org/
+[6]: https://git-annex.branchable.com/
+[7]: https://github.com/alebedev/git-media
+
+
+
+
diff --git a/sources/tech/20160711 Getting started with Git.md b/sources/tech/20160711 Getting started with Git.md
new file mode 100644
index 0000000000..032e5e3510
--- /dev/null
+++ b/sources/tech/20160711 Getting started with Git.md
@@ -0,0 +1,139 @@
+Getting started with Git
+=========================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/get_started_lead.jpeg?itok=r22AKc6P)
+>Image by : opensource.com
+
+
+In the introduction to this series we learned who should use Git, and what it is for. 
Today we will learn how to clone public Git repositories, and how to extract individual files without cloning the whole works.
+
+Since Git is so popular, it makes life a lot easier if you're at least familiar with it at a basic level. If you can grasp the basics (and you can, I promise!), then you'll be able to download whatever you need, and maybe even contribute stuff back. And that, after all, is what open source is all about: having access to the code that makes up the software you run, the freedom to share it with others, and the right to change it as you please. Git makes this whole process easy, as long as you're comfortable with Git.
+
+So let's get comfortable with Git.
+
+### Read and write
+
+Broadly speaking, there are two ways to interact with a Git repository: you can read from it, or you can write to it. It's just like a file: sometimes you open a document just to read it, and other times you open a document because you need to make changes.
+
+In this article, we'll cover reading from a Git repository. We'll tackle the subject of writing back to a Git repository in a later article.
+
+### Git or GitHub?
+
+A word of clarification: Git is not the same as GitHub (or GitLab, or Bitbucket). Git is a command-line program, so it looks like this:
+
+```
+$ git
+usage: git [--version] [--help] [-C <path>]
+           [-p | --paginate | --no-pager] [--bare]
+           [--git-dir=<path>] [<command> [<args>]]
+```
+
+As Git is open source, lots of smart people have built infrastructures around it which, in themselves, have become very popular.
+
+My articles about Git teach pure Git first, because if you understand what Git is doing then you can maintain an indifference to what front end you are using. However, my articles also include common ways of accomplishing each task through popular Git services, since that's probably what you'll encounter first.
+
+### Installing Git
+
+To install Git on Linux, grab it from your distribution's software repository. BSD users should find Git in the Ports tree, in the devel section.
+
+For non-open source operating systems, go to the [project site][1] and follow the instructions. Once installed, there should be no difference between Linux, BSD, and Mac OS X commands. Windows users will have to adapt Git commands to match the Windows file system, or install Cygwin to run Git natively, without getting tripped up by Windows file system conventions.
+
+### Afternoon tea with Git
+
+Not every one of us needs to adopt Git into our daily lives right away. Sometimes, the most interaction you have with Git is to visit a repository of code, download a file or two, and then leave. On the spectrum of getting to know Git, this is more like afternoon tea than a proper dinner party. You make some polite conversation, you get the information you need, and then you part ways without the intention of speaking again for at least another three months.
+
+And that's OK.
+
+Generally speaking, there are two ways to access Git: via command line, or by any one of the fancy Internet technologies providing quick and easy access through the web browser.
+
+Say you want to install a trash bin for use in your terminal because you've been burned one too many times by the rm command. You've heard about Trashy, which calls itself "a sane intermediary to the rm command", and you want to look over its documentation before you install it. Lucky for you, [Trashy is hosted publicly on GitLab.com][2]. 
+
+### Landgrab
+
+The first way we'll work with this Git repository is a sort of landgrab method: we'll clone the entire thing, and then sort through the contents later. Since the repository is hosted with a public Git service, there are two ways to do this: on the command line, or through a web interface.
+
+To grab an entire repository with Git, use the git clone command with the URL of the Git repository. If you're not clear on what the right URL is, the repository should tell you. GitLab gives you a copy-and-paste repository URL [for Trashy][3].
+
+![](https://opensource.com/sites/default/files/1_gitlab-url.jpg)
+
+You might notice that on some services, both SSH and HTTPS links are provided. You can use SSH only if you have write permissions to the repository. Otherwise, you must use the HTTPS URL.
+
+Once you have the right URL, cloning the repository is pretty simple. Just git clone the URL, and optionally name the directory to clone it into. The default behaviour is to clone the git directory to your current directory; for example, 'trashy.git' gets put in your current location as 'trashy'. I use the .clone extension as a shorthand for repositories that are read-only, and the .git extension as shorthand for repositories I can read and write, but that's not by any means an official mandate.
+
+```
+$ git clone https://gitlab.com/trashy/trashy.git trashy.clone
+Cloning into 'trashy.clone'...
+remote: Counting objects: 142, done.
+remote: Compressing objects: 100% (91/91), done.
+remote: Total 142 (delta 70), reused 103 (delta 47)
+Receiving objects: 100% (142/142), 25.99 KiB | 0 bytes/s, done.
+Resolving deltas: 100% (70/70), done.
+Checking connectivity... done.
+```
+
+Once the repository has been cloned successfully, you can browse files in it just as you would any other directory on your computer.
+
+The other way to get a copy of the repository is through the web interface. Both GitLab and GitHub provide a snapshot of any repository in a .zip file. GitHub has a big green download button, but on GitLab, look for an inconspicuous download button on the far right of your browser window:
+
+![](https://opensource.com/sites/default/files/1_gitlab-zip.jpg)
+
+### Pick and choose
+
+An alternate method of obtaining a file from a Git repository is to find the file you're after and pluck it right out of the repository. This method is only supported via web interfaces, which is essentially you looking at someone else's clone of a repository; you can think of it as a sort of HTTP shared directory.
+
+The problem with using this method is that you might find that certain files don't actually exist in a raw Git repository, as a file might only exist in its complete form after a make command builds the file, which won't happen until you download the repository, read the README or INSTALL file, and run the command. Assuming, however, that you are sure a file does exist and you just want to go into the repository, grab it, and walk away, you can do that.
+
+In GitLab and GitHub, click the Files link for a file view, view the file in Raw mode, and use your web browser's save function, e.g. in Firefox, File > Save Page As. In a GitWeb repository (a web view of personal Git repositories used by some who prefer to host Git themselves), the Raw view link is in the file listing view.
+
+![](https://opensource.com/sites/default/files/1_webgit-file.jpg)
+
+### Best practices
+
+Generally, cloning an entire Git repository is considered the right way of interacting with Git. 
There are a few reasons for this. Firstly, a clone is easy to keep updated with the git pull command, so you won't have to keep going back to some web site for a new copy of a file each time an improvement has been made. Secondly, should you happen to make an improvement yourself, then it is easier to submit those changes to the original author if it is all nice and tidy in a Git repository.
+
+For now, it's probably enough to just practice going out and finding interesting Git repositories and cloning them to your drive. As long as you know the basics of using a terminal, then it's not hard to do. Don't know the basics of terminal usage? Give me five more minutes of your time.
+
+### Terminal basics
+
+The first thing to understand is that all files have a path. That makes sense; if I told you to open a file for me on a regular non-terminal day, you'd have to get to where that file is on your drive, and you'd do that by navigating a bunch of computer windows until you reached that file. For example, maybe you'd click your home directory > Pictures > InktoberSketches > monkey.kra.
+
+In that scenario, we could say that the file monkey.kra has the path $HOME/Pictures/InktoberSketches/monkey.kra.
+
+In the terminal, unless you're doing special sysadmin work, your file paths are generally going to start with $HOME (or, if you're lazy, just the ~ character) followed by a list of folders up to the filename itself. This is analogous to whatever icons you click in your GUI to reach the file or folder.
+
+If you want to clone a Git repository into your Documents directory, then you could open a terminal and run this command:
+
+```
+$ git clone https://gitlab.com/foo/bar.git $HOME/Documents/bar.clone
+```
+
+Once that is complete, you can open a file manager window, navigate to your Documents folder, and you'll find the bar.clone directory waiting for you.
+
+If you want to get a little more advanced, you might revisit that repository at some later date, and try a git pull to see if there have been updates to the project:
+
+```
+$ cd $HOME/Documents/bar.clone
+$ pwd
+/home/you/Documents/bar.clone
+$ git pull
+```
+
+For now, that's all the terminal commands you need to get started, so go out and explore. The more you do it, the better you get at it, and that is, at least give or take a vowel, the name of the game. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/16/7/stumbling-git + +作者:[Seth Kenlon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[1]: https://git-scm.com/download +[2]: https://gitlab.com/trashy/trashy +[3]: https://gitlab.com/trashy/trashy.git + diff --git a/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md b/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md deleted file mode 100644 index 199d65957e..0000000000 --- a/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md +++ /dev/null @@ -1,319 +0,0 @@ -LFCS 第十讲:学习简单的 Shell 脚本编程和文件系统故障排除 - -================================================================================ -Linux 基金会发起了 LFCS 认证 (Linux Foundation Certified Sysadmin, Linux 基金会认证系统管理员),这是一个全新的认证体系,主要目标是让全世界任何人都有机会考取认证。认证内容为 Linux 中间系统的管理,主要包括:系统运行和服务的维护、全面监控和分析的能力以及问题来临时何时想上游团队请求帮助的决策能力 - -![Basic Shell Scripting and Filesystem Troubleshooting](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-10.png) - -LFCS 系列第十讲 - -请看以下视频,这里边介绍了 Linux 基金会认证程序。 - -注:youtube 视频 - - - -本讲是系列教程中的第十讲,主要集中讲解简单的 Shell 脚本编程和文件系统故障排除。这两块内容都是 LFCS 认证中的必备考点。 - -### 理解终端 (Terminals) 和 Shell ### - -首先要声明一些概念。 - -- Shell 是一个程序,它将命令传递给操作系统来执行。 -- Terminal 也是一个程序,作为最终用户,我们需要使用它与 Shell 来交互。比如,下边的图片是 GNOME Terminal。 - -![Gnome Terminal](http://www.tecmint.com/wp-content/uploads/2014/11/Gnome-Terminal.png) - -Gnome Terminal - -启动 Shell 之后,会呈现一个命令提示符 (也称为命令行) 提示我们 Shell 已经做好了准备,接受标准输入设备输入的命令,这个标准输入设备通常是键盘。 - -你可以参考该系列文章的 [第一讲 使用命令创建、编辑和操作文件][1] 来温习一些常用的命令。 - -Linux 为提供了许多可以选用的 Shell,下面列出一些常用的: - -**bash Shell** - -Bash 代表 Bourne Again Shell,它是 GNU 项目默认的 Shell。它借鉴了 Korn shell (ksh) 和 C shell (csh) 中有用的特性,并同时对性能进行了提升。它同时也是 LFCS 认证中所涵盖的风发行版中默认 Shell,也是本系列教程将使用的 Shell。 - -**sh Shell** - -Bash Shell 是一个比较古老的 shell,一次多年来都是多数类 Unix 系统的默认 shell。 - -**ksh Shell** - -Korn SHell (ksh shell) 也是一个 Unix shell,是贝尔实验室 (Bell Labs) 的 David Korn 在 19 世纪 80 年代初的时候开发的。它兼容 Bourne shell ,并同时包含了 C shell 中的多数特性。 - - -一个 shell 脚本仅仅只是一个可执行的文本文件,里边包含一条条可执行命令。 - -### 简单的 Shell 脚本编程 ### - -As mentioned earlier, a shell script is born as a plain text file. Thus, can be created and edited using our preferred text editor. You may want to consider using vi/m (refer to [Usage of vi Editor – Part 2][2] of this series), which features syntax highlighting for your convenience. - -Type the following command to create a file named myscript.sh and press Enter. - - # vim myscript.sh - -The very first line of a shell script must be as follows (also known as a shebang). - - #!/bin/bash - -It “tells” the operating system the name of the interpreter that should be used to run the text that follows. - -Now it’s time to add our commands. We can clarify the purpose of each command, or the entire script, by adding comments as well. Note that the shell ignores those lines beginning with a pound sign # (explanatory comments). - - #!/bin/bash - echo This is Part 10 of the 10-article series about the LFCS certification - echo Today is $(date +%Y-%m-%d) - -Once the script has been written and saved, we need to make it executable. 
- - # chmod 755 myscript.sh - -Before running our script, we need to say a few words about the $PATH environment variable. If we run, - - echo $PATH - -from the command line, we will see the contents of $PATH: a colon-separated list of directories that are searched when we enter the name of a executable program. It is called an environment variable because it is part of the shell environment – a set of information that becomes available for the shell and its child processes when the shell is first started. - -When we type a command and press Enter, the shell searches in all the directories listed in the $PATH variable and executes the first instance that is found. Let’s see an example, - -![Linux Environment Variables](http://www.tecmint.com/wp-content/uploads/2014/11/Environment-Variable.png) - -Environment Variables - -If there are two executable files with the same name, one in /usr/local/bin and another in /usr/bin, the one in the first directory will be executed first, whereas the other will be disregarded. - -If we haven’t saved our script inside one of the directories listed in the $PATH variable, we need to append ./ to the file name in order to execute it. Otherwise, we can run it just as we would do with a regular command. - - # pwd - # ./myscript.sh - # cp myscript.sh ../bin - # cd ../bin - # pwd - # myscript.sh - -![Execute Script in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Execute-Script.png) - -Execute Script - -#### Conditionals #### - -Whenever you need to specify different courses of action to be taken in a shell script, as result of the success or failure of a command, you will use the if construct to define such conditions. Its basic syntax is: - - if CONDITION; then - COMMANDS; - else - OTHER-COMMANDS - fi - -Where CONDITION can be one of the following (only the most frequent conditions are cited here) and evaluates to true when: - -- [ -a file ] → file exists. -- [ -d file ] → file exists and is a directory. -- [ -f file ] →file exists and is a regular file. -- [ -u file ] →file exists and its SUID (set user ID) bit is set. -- [ -g file ] →file exists and its SGID bit is set. -- [ -k file ] →file exists and its sticky bit is set. -- [ -r file ] →file exists and is readable. -- [ -s file ]→ file exists and is not empty. -- [ -w file ]→file exists and is writable. -- [ -x file ] is true if file exists and is executable. -- [ string1 = string2 ] → the strings are equal. -- [ string1 != string2 ] →the strings are not equal. - -[ int1 op int2 ] should be part of the preceding list, while the items that follow (for example, -eq –> is true if int1 is equal to int2.) should be a “children” list of [ int1 op int2 ] where op is one of the following comparison operators. - -- -eq –> is true if int1 is equal to int2. -- -ne –> true if int1 is not equal to int2. -- -lt –> true if int1 is less than int2. -- -le –> true if int1 is less than or equal to int2. -- -gt –> true if int1 is greater than int2. -- -ge –> true if int1 is greater than or equal to int2. - -#### For Loops #### - -This loop allows to execute one or more commands for each value in a list of values. Its basic syntax is: - - for item in SEQUENCE; do - COMMANDS; - done - -Where item is a generic variable that represents each value in SEQUENCE during each iteration. - -#### While Loops #### - -This loop allows to execute a series of repetitive commands as long as the control command executes with an exit status equal to zero (successfully). 
Its basic syntax is: - - while EVALUATION_COMMAND; do - EXECUTE_COMMANDS; - done - -Where EVALUATION_COMMAND can be any command(s) that can exit with a success (0) or failure (other than 0) status, and EXECUTE_COMMANDS can be any program, script or shell construct, including other nested loops. - -#### Putting It All Together #### - -We will demonstrate the use of the if construct and the for loop with the following example. - -**Determining if a service is running in a systemd-based distro** - -Let’s create a file with a list of services that we want to monitor at a glance. - - # cat myservices.txt - - sshd - mariadb - httpd - crond - firewalld - -![Script to Monitor Linux Services](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Services.png) - -Script to Monitor Linux Services - -Our shell script should look like. - - #!/bin/bash - - # This script iterates over a list of services and - # is used to determine whether they are running or not. - - for service in $(cat myservices.txt); do - systemctl status $service | grep --quiet "running" - if [ $? -eq 0 ]; then - echo $service "is [ACTIVE]" - else - echo $service "is [INACTIVE or NOT INSTALLED]" - fi - done - -![Linux Service Monitoring Script](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Script.png) - -Linux Service Monitoring Script - -**Let’s explain how the script works.** - -1). The for loop reads the myservices.txt file one element of LIST at a time. That single element is denoted by the generic variable named service. The LIST is populated with the output of, - - # cat myservices.txt - -2). The above command is enclosed in parentheses and preceded by a dollar sign to indicate that it should be evaluated to populate the LIST that we will iterate over. - -3). For each element of LIST (meaning every instance of the service variable), the following command will be executed. - - # systemctl status $service | grep --quiet "running" - -This time we need to precede our generic variable (which represents each element in LIST) with a dollar sign to indicate it’s a variable and thus its value in each iteration should be used. The output is then piped to grep. - -The –quiet flag is used to prevent grep from displaying to the screen the lines where the word running appears. When that happens, the above command returns an exit status of 0 (represented by $? in the if construct), thus verifying that the service is running. - -An exit status different than 0 (meaning the word running was not found in the output of systemctl status $service) indicates that the service is not running. - -![Services Monitoring Script](http://www.tecmint.com/wp-content/uploads/2014/11/Services-Monitoring-Script.png) - -Services Monitoring Script - -We could go one step further and check for the existence of myservices.txt before even attempting to enter the for loop. - - #!/bin/bash - - # This script iterates over a list of services and - # is used to determine whether they are running or not. - - if [ -f myservices.txt ]; then - for service in $(cat myservices.txt); do - systemctl status $service | grep --quiet "running" - if [ $? 
-eq 0 ]; then - echo $service "is [ACTIVE]" - else - echo $service "is [INACTIVE or NOT INSTALLED]" - fi - done - else - echo "myservices.txt is missing" - fi - -**Pinging a series of network or internet hosts for reply statistics** - -You may want to maintain a list of hosts in a text file and use a script to determine every now and then whether they’re pingable or not (feel free to replace the contents of myhosts and try for yourself). - -The read shell built-in command tells the while loop to read myhosts line by line and assigns the content of each line to variable host, which is then passed to the ping command. - - #!/bin/bash - - # This script is used to demonstrate the use of a while loop - - while read host; do - ping -c 2 $host - done < myhosts - -![Script to Ping Servers](http://www.tecmint.com/wp-content/uploads/2014/11/Script-to-Ping-Servers.png) - -Script to Ping Servers - -Read Also: - -- [Learn Shell Scripting: A Guide from Newbies to System Administrator][3] -- [5 Shell Scripts to Learn Shell Programming][4] - -### Filesystem Troubleshooting ### - -Although Linux is a very stable operating system, if it crashes for some reason (for example, due to a power outage), one (or more) of your file systems will not be unmounted properly and thus will be automatically checked for errors when Linux is restarted. - -In addition, each time the system boots during a normal boot, it always checks the integrity of the filesystems before mounting them. In both cases this is performed using a tool named fsck (“file system check”). - -fsck will not only check the integrity of file systems, but also attempt to repair corrupt file systems if instructed to do so. Depending on the severity of damage, fsck may succeed or not; when it does, recovered portions of files are placed in the lost+found directory, located in the root of each file system. - -Last but not least, we must note that inconsistencies may also happen if we try to remove an USB drive when the operating system is still writing to it, and may even result in hardware damage. - -The basic syntax of fsck is as follows: - - # fsck [options] filesystem - -**Checking a filesystem for errors and attempting to repair automatically** - -In order to check a filesystem with fsck, we must first unmount it. - - # mount | grep sdg1 - # umount /mnt - # fsck -y /dev/sdg1 - -![Scan Linux Filesystem for Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Filesystem-Errors.png) - -Check Filesystem Errors - -Besides the -y flag, we can use the -a option to automatically repair the file systems without asking any questions, and force the check even when the filesystem looks clean. - - # fsck -af /dev/sdg1 - -If we’re only interested in finding out what’s wrong (without trying to fix anything for the time being) we can run fsck with the -n option, which will output the filesystem issues to standard output. - - # fsck -n /dev/sdg1 - -Depending on the error messages in the output of fsck, we will know whether we can try to solve the issue ourselves or escalate it to engineering teams to perform further checks on the hardware. - -### 总结 ### - -We have arrived at the end of this 10-article series where have tried to cover the basic domain competencies required to pass the LFCS exam. - -For obvious reasons, it is not possible to cover every single aspect of these topics in any single tutorial, and that’s why we hope that these articles have put you on the right track to try new stuff yourself and continue learning. 
- -If you have any questions or comments, they are always welcome – so don’t hesitate to drop us a line via the form below! - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/ - -作者:[Gabriel Cánepa][a] -译者:[GHLandy](https://github.com/GHLandy) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ -[2]:http://www.tecmint.com/vi-editor-usage/ -[3]:http://www.tecmint.com/learning-shell-scripting-language-a-guide-from-newbies-to-system-administrator/ -[4]:http://www.tecmint.com/basic-shell-programming-part-ii/ diff --git a/sources/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md b/sources/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md deleted file mode 100644 index 4d2a9d7a13..0000000000 --- a/sources/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md +++ /dev/null @@ -1,206 +0,0 @@ -Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands -============================================================================================ - -Because of the changes in the LFCS exam requirements effective Feb. 2, 2016, we are adding the necessary topics to the [LFCS series][1] published here. To prepare for this exam, your are highly encouraged to use the [LFCE series][2] as well. - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Manage-LVM-and-Create-LVM-Partition-in-Linux.png) ->LFCS: Manage LVM and Create LVM Partition – Part 11 - -One of the most important decisions while installing a Linux system is the amount of storage space to be allocated for system files, home directories, and others. If you make a mistake at that point, growing a partition that has run out of space can be burdensome and somewhat risky. - -**Logical Volumes Management** (also known as **LVM**), which have become a default for the installation of most (if not all) Linux distributions, have numerous advantages over traditional partitioning management. Perhaps the most distinguishing feature of LVM is that it allows logical divisions to be resized (reduced or increased) at will without much hassle. - -The structure of the LVM consists of: - -* One or more entire hard disks or partitions are configured as physical volumes (PVs). -* A volume group (**VG**) is created using one or more physical volumes. You can think of a volume group as a single storage unit. -* Multiple logical volumes can then be created in a volume group. Each logical volume is somewhat equivalent to a traditional partition – with the advantage that it can be resized at will as we mentioned earlier. - -In this article we will use three disks of **8 GB** each (**/dev/sdb**, **/dev/sdc**, and **/dev/sdd**) to create three physical volumes. You can either create the PVs directly on top of the device, or partition it first. - -Although we have chosen to go with the first method, if you decide to go with the second (as explained in [Part 4 – Create Partitions and File Systems in Linux][3] of this series) make sure to configure each partition as type `8e`. 
- -### Creating Physical Volumes, Volume Groups, and Logical Volumes - -To create physical volumes on top of **/dev/sdb**, **/dev/sdc**, and **/dev/sdd**, do: - -``` -# pvcreate /dev/sdb /dev/sdc /dev/sdd -``` - -You can list the newly created PVs with: - -``` -# pvs -``` - -and get detailed information about each PV with: - -``` -# pvdisplay /dev/sdX -``` - -(where **X** is b, c, or d) - -If you omit `/dev/sdX` as parameter, you will get information about all the PVs. - -To create a volume group named `vg00` using `/dev/sdb` and `/dev/sdc` (we will save `/dev/sdd` for later to illustrate the possibility of adding other devices to expand storage capacity when needed): - -``` -# vgcreate vg00 /dev/sdb /dev/sdc -``` - -As it was the case with physical volumes, you can also view information about this volume group by issuing: - -``` -# vgdisplay vg00 -``` - -Since `vg00` is formed with two **8 GB** disks, it will appear as a single **16 GB** drive: - -![](http://www.tecmint.com/wp-content/uploads/2016/03/List-LVM-Volume-Groups.png) ->List LVM Volume Groups - -When it comes to creating logical volumes, the distribution of space must take into consideration both current and future needs. It is considered good practice to name each logical volume according to its intended use. - -For example, let’s create two LVs named `vol_projects` (**10 GB**) and `vol_backups` (remaining space), which we can use later to store project documentation and system backups, respectively. - -The `-n` option is used to indicate a name for the LV, whereas `-L` sets a fixed size and `-l` (lowercase L) is used to indicate a percentage of the remaining space in the container VG. - -``` -# lvcreate -n vol_projects -L 10G vg00 -# lvcreate -n vol_backups -l 100%FREE vg00 -``` - -As before, you can view the list of LVs and basic information with: - -``` -# lvs -``` - -and detailed information with - -``` -# lvdisplay -``` - -To view information about a single **LV**, use **lvdisplay** with the **VG** and **LV** as parameters, as follows: - -``` -# lvdisplay vg00/vol_projects -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Logical-Volume.png) ->List Logical Volume - -In the image above we can see that the LVs were created as storage devices (refer to the LV Path line). Before each logical volume can be used, we need to create a filesystem on top of it. - -We’ll use ext4 as an example here since it allows us both to increase and reduce the size of each LV (as opposed to xfs that only allows to increase the size): - -``` -# mkfs.ext4 /dev/vg00/vol_projects -# mkfs.ext4 /dev/vg00/vol_backups -``` - -In the next section we will explain how to resize logical volumes and add extra physical storage space when the need arises to do so. - -### Resizing Logical Volumes and Extending Volume Groups - -Now picture the following scenario. You are starting to run out of space in `vol_backups`, while you have plenty of space available in `vol_projects`. Due to the nature of LVM, we can easily reduce the size of the latter (say **2.5 GB**) and allocate it for the former, while resizing each filesystem at the same time. - -Fortunately, this is as easy as doing: - -``` -# lvreduce -L -2.5G -r /dev/vg00/vol_projects -# lvextend -l +100%FREE -r /dev/vg00/vol_backups -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Resize-Reduce-Logical-Volume-and-Volume-Group.png) ->Resize Reduce Logical Volume and Volume Group - -It is important to include the minus `(-)` or plus `(+)` signs while resizing a logical volume. 
Otherwise, you’re setting a fixed size for the LV instead of resizing it. - -It can happen that you arrive at a point when resizing logical volumes cannot solve your storage needs anymore and you need to buy an extra storage device. Keeping it simple, you will need another disk. We are going to simulate this situation by adding the remaining PV from our initial setup (`/dev/sdd`). - -To add `/dev/sdd` to `vg00`, do - -``` -# vgextend vg00 /dev/sdd -``` - -If you run vgdisplay `vg00` before and after the previous command, you will see the increase in the size of the VG: - -``` -# vgdisplay vg00 -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Volume-Group-Size.png) ->Check Volume Group Disk Size - -Now you can use the newly added space to resize the existing LVs according to your needs, or to create additional ones as needed. - -### Mounting Logical Volumes on Boot and on Demand - -Of course there would be no point in creating logical volumes if we are not going to actually use them! To better identify a logical volume we will need to find out what its `UUID` (a non-changing attribute that uniquely identifies a formatted storage device) is. - -To do that, use blkid followed by the path to each device: - -``` -# blkid /dev/vg00/vol_projects -# blkid /dev/vg00/vol_backups -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-Logical-Volume-UUID.png) ->Find Logical Volume UUID - -Create mount points for each LV: - -``` -# mkdir /home/projects -# mkdir /home/backups -``` - -and insert the corresponding entries in `/etc/fstab` (make sure to use the UUIDs obtained before): - -``` -UUID=b85df913-580f-461c-844f-546d8cde4646 /home/projects ext4 defaults 0 0 -UUID=e1929239-5087-44b1-9396-53e09db6eb9e /home/backups ext4 defaults 0 0 -``` - -Then save the changes and mount the LVs: - -``` -# mount -a -# mount | grep home -``` - -![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-Logical-Volume-UUID.png) ->Find Logical Volume UUID - -When it comes to actually using the LVs, you will need to assign proper `ugo+rwx` permissions as explained in [Part 8 – Manage Users and Groups in Linux][4] of this series. - -### Summary - -In this article we have introduced [Logical Volume Management][5], a versatile tool to manage storage devices that provides scalability. When combined with RAID (which we explained in [Part 6 – Create and Manage RAID in Linux][6] of this series), you can enjoy not only scalability (provided by LVM) but also redundancy (offered by RAID). - -In this type of setup, you will typically find `LVM` on top of `RAID`, that is, configure RAID first and then configure LVM on top of it. - -If you have questions about this article, or suggestions to improve it, feel free to reach us using the comment form below. 
-
--------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/
-
-作者:[Gabriel Cánepa][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: http://www.tecmint.com/author/gacanepa/
-[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
-[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/
-[3]: http://www.tecmint.com/create-partitions-and-filesystems-in-linux/
-[4]: http://www.tecmint.com/manage-users-and-groups-in-linux/
-[5]: http://www.tecmint.com/create-lvm-storage-in-linux/
-[6]: http://www.tecmint.com/creating-and-managing-raid-backups-in-linux/
diff --git a/sources/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md b/sources/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md
new file mode 100644
index 0000000000..cf24c51b58
--- /dev/null
+++ b/sources/tech/LFCS/Part 13 - How to Configure and Troubleshoot Grand Unified Bootloader (GRUB).md
@@ -0,0 +1,185 @@
+Part 13 - LFCS: How to Configure and Troubleshoot Grand Unified Bootloader (GRUB)
+=====================================================================================
+
+Because of the changes in the LFCS exam requirements effective Feb. 2, 2016, we are adding the necessary topics to the [LFCS series][1] published here. To prepare for this exam, you are highly encouraged to use the [LFCE series][2] as well.
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Configure-Troubleshoot-Grub-Boot-Loader.png)
+>LFCS: Configure and Troubleshoot Grub Boot Loader – Part 13
+
+In this article we will introduce you to GRUB and explain why a boot loader is necessary, and how it adds versatility to the system.
+
+The [Linux boot process][3], from the time you press the power button of your computer until you get a fully-functional system, follows this high-level sequence:
+
+* 1. A process known as **POST** (**Power-On Self Test**) performs an overall check on the hardware components of your computer.
+* 2. When **POST** completes, it passes control to the boot loader, which in turn loads the Linux kernel into memory (along with **initramfs**) and executes it; a quick check right after this list shows what was handed over. The most widely used boot loader in Linux is the **GRand Unified Boot loader**, or **GRUB** for short.
+* 3. The kernel checks and accesses the hardware, and then runs the initial process (mostly known by its generic name “**init**”), which in turn completes the system boot by starting services.
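+
+You can verify part of this hand-off on a running system. A minimal sketch (the output varies by distribution, and the `/sys/firmware/efi` check is only meaningful on kernels built with EFI support):
+
+```
+# cat /proc/cmdline
+# ls /sys/firmware/efi
+```
+
+The first command prints the options the boot loader passed to the kernel for the current boot; if the second one lists anything, the firmware stage ran through UEFI rather than legacy BIOS.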
+
+In Part 7 of this series (“[SysVinit, Upstart, and Systemd][4]”) we introduced the [service management systems and tools][5] used by modern Linux distributions. You may want to review that article before proceeding further.
+
+### Introducing GRUB Boot Loader
+
+Two major **GRUB** versions (**v1**, sometimes called **GRUB Legacy**, and **v2**) can be found in modern systems, although most distributions use **v2** by default in their latest versions. Only **Red Hat Enterprise Linux 6** and its derivatives still use **v1** today.
+
+Thus, we will focus primarily on the features of **v2** in this guide.
+
+Regardless of the **GRUB** version, a boot loader allows the user to:
+
+* 1). modify the way the system behaves by specifying different kernels to use,
+* 2). choose between alternate operating systems to boot, and
+* 3). add or edit configuration stanzas to change boot options, among other things.
+
+Today, **GRUB** is maintained by the **GNU** project and is well documented on their website. You are encouraged to use the [GNU official documentation][6] while going through this guide.
+
+When the system boots you are presented with the following **GRUB** screen in the main console. Initially, you are prompted to choose between alternate kernels (by default, the system will boot using the latest kernel) and are allowed to enter a **GRUB** command line (with `c`) or edit the boot options (by pressing the `e` key).
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/GRUB-Boot-Screen.png)
+>GRUB Boot Screen
+
+One of the reasons why you would consider booting with an older kernel is a hardware device that used to work properly and has started “acting up” after an upgrade (refer to [this link][7] in the AskUbuntu forums for an example).
+
+The **GRUB v2** configuration is read on boot from `/boot/grub/grub.cfg` or `/boot/grub2/grub.cfg`, whereas `/boot/grub/grub.conf` or `/boot/grub/menu.lst` are used in **v1**. These files are NOT to be edited by hand, but are modified based on the contents of `/etc/default/grub` and the files found inside `/etc/grub.d`.
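+
+Since `grub.cfg` is generated, it is wise to keep a copy of the last known-good version and to peek at what produces it before changing anything. A small sketch of that routine (the paths assume a CentOS-style layout; on Ubuntu, substitute `/boot/grub/grub.cfg`):
+
+```
+# cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.bak
+# ls /etc/grub.d
+# grep -v '^#' /etc/default/grub
+```
+
+The numbered scripts in `/etc/grub.d` run in order to produce `grub.cfg`, and the `grep` filters out comments so that only the active settings of `/etc/default/grub` are shown.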
+
+In **CentOS 7**, here’s the configuration file that is created when the system is first installed:
+
+```
+GRUB_TIMEOUT=5
+GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
+GRUB_DEFAULT=saved
+GRUB_DISABLE_SUBMENU=true
+GRUB_TERMINAL_OUTPUT="console"
+GRUB_CMDLINE_LINUX="vconsole.keymap=la-latin1 rd.lvm.lv=centos_centos7-2/swap crashkernel=auto vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos_centos7-2/root rhgb quiet"
+GRUB_DISABLE_RECOVERY="true"
+```
+
+In addition to the online documentation, you can also find the GNU GRUB manual using info as follows:
+
+```
+# info grub
+```
+
+If you’re interested specifically in the options available for `/etc/default/grub`, you can invoke the configuration section directly:
+
+```
+# info -f grub -n 'Simple configuration'
+```
+
+Using the command above you will find out that `GRUB_TIMEOUT` sets the time between the moment the initial screen appears and the moment automatic booting begins, unless interrupted by the user. When this variable is set to `-1`, boot will not be started until the user makes a selection.
+
+When multiple operating systems or kernels are installed on the same machine, `GRUB_DEFAULT` requires an integer value that indicates which OS or kernel entry in the GRUB initial screen should be selected to boot by default. The list of entries can be viewed not only in the splash screen shown above, but also using the following command:
+
+### In CentOS and openSUSE:
+
+```
+# awk -F\' '$1=="menuentry " {print $2}' /boot/grub2/grub.cfg
+```
+
+### In Ubuntu:
+
+```
+# awk -F\' '$1=="menuentry " {print $2}' /boot/grub/grub.cfg
+```
+
+In the example shown in the image below, if we wish to boot with the kernel version **3.10.0-123.el7.x86_64** (the 4th entry), we need to set `GRUB_DEFAULT` to `3` (entries are internally numbered beginning with zero) as follows:
+
+```
+GRUB_DEFAULT=3
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Boot-System-with-Old-Kernel-Version.png)
+>Boot System with Old Kernel Version
+
+One final GRUB configuration variable that is of special interest is `GRUB_CMDLINE_LINUX`, which is used to pass options to the kernel. The options that can be passed through GRUB to the kernel are well documented in the [Kernel Parameters file][8] and in [man 7 bootparam][9].
+
+Current options in my **CentOS 7** server are:
+
+```
+GRUB_CMDLINE_LINUX="vconsole.keymap=la-latin1 rd.lvm.lv=centos_centos7-2/swap crashkernel=auto vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos_centos7-2/root rhgb quiet"
+```
+
+Why would you want to modify the default kernel parameters or pass extra options? In simple terms, there may be times when you need to tell the kernel certain hardware parameters that it may not be able to determine on its own, or to override the values that it would detect.
+
+This happened to me not too long ago when I tried **Vector Linux**, a derivative of **Slackware**, on my 10-year-old laptop. After installation it did not detect the right settings for my video card, so I had to modify the kernel options passed through GRUB in order to make it work.
+
+Another example is when you need to bring the system to single-user mode to perform maintenance tasks. You can do this by appending the word `single` to `GRUB_CMDLINE_LINUX` and rebooting:
+
+```
+GRUB_CMDLINE_LINUX="vconsole.keymap=la-latin1 rd.lvm.lv=centos_centos7-2/swap crashkernel=auto vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos_centos7-2/root rhgb quiet single"
+```
+
+After editing `/etc/default/grub`, you will need to run `update-grub` (Ubuntu) or `grub2-mkconfig -o /boot/grub2/grub.cfg` (**CentOS** and **openSUSE**) to update `grub.cfg` (otherwise, changes will be lost upon boot).
+
+This command will process the boot configuration files mentioned earlier to update `grub.cfg`. This method ensures changes are permanent, while options passed through GRUB at boot time will only last during the current session.
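+
+Putting the pieces together, a typical change cycle on the **CentOS 7** system above might look like the following sketch, which raises the timeout from the default value shown earlier and then confirms that the regenerated file picked it up (the backup file name is just a suggestion):
+
+```
+# cp /etc/default/grub /etc/default/grub.bak
+# sed -i 's/^GRUB_TIMEOUT=5/GRUB_TIMEOUT=10/' /etc/default/grub
+# grub2-mkconfig -o /boot/grub2/grub.cfg
+# grep 'set timeout' /boot/grub2/grub.cfg
+```
+
+If the final `grep` shows the new value, the change is permanent and will be honored at the next boot.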
+
+### Fixing Linux GRUB Issues
+
+If you install a second operating system or if your GRUB configuration file gets corrupted due to human error, there are ways you can get your system back on its feet and be able to boot again.
+
+In the initial screen, press `c` to get a GRUB command line (remember that you can also press `e` to edit the default boot options), and type `help` to bring up the available commands at the GRUB prompt:
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Fix-Grub-Issues-in-Linux.png)
+>Fix Grub Configuration Issues in Linux
+
+We will focus on **ls**, which will list the installed devices and filesystems, and we will examine what it finds. In the image below we can see that there are 4 hard drives (`hd0` through `hd3`).
+
+Only `hd0` seems to have been partitioned (as evidenced by `msdos1` and `msdos2`, where 1 and 2 are the partition numbers and msdos is the partitioning scheme).
+
+Let’s now examine the first partition on `hd0` (**msdos1**) to see if we can find GRUB there. This approach will allow us to boot Linux and then use other high-level tools to repair the configuration file, or reinstall GRUB altogether if needed:
+
+```
+# ls (hd0,msdos1)/
+```
+
+As we can see in the highlighted area, we found the `grub2` directory in this partition:
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-Grub-Configuration.png)
+>Find Grub Configuration
+
+Once we are sure that GRUB resides in (**hd0,msdos1**), let’s tell GRUB where to find its configuration file and then instruct it to attempt to launch its menu:
+
+```
+set prefix=(hd0,msdos1)/grub2
+set root=(hd0,msdos1)
+insmod normal
+normal
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-and-Launch-Grub-Menu.png)
+>Find and Launch Grub Menu
+
+Then in the GRUB menu, choose an entry and press **Enter** to boot using it. Once the system has booted, you can issue the `grub2-install /dev/sdX` command (replace `sdX` with the device you want to install GRUB on). The boot information will then be updated and all related files restored.
+
+```
+# grub2-install /dev/sdX
+```
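+
+If the system cannot be brought up from the GRUB prompt at all, the same repair can be performed from a live or rescue medium. The following is only a sketch of the usual sequence; it assumes the root filesystem lives on `/dev/sda1` and that GRUB belongs on `/dev/sda`, so adjust the device names to your own layout:
+
+```
+# mount /dev/sda1 /mnt
+# mount --bind /dev /mnt/dev
+# mount --bind /proc /mnt/proc
+# mount --bind /sys /mnt/sys
+# chroot /mnt
+# grub2-install /dev/sda
+# grub2-mkconfig -o /boot/grub2/grub.cfg
+# exit
+```
+
+After leaving the chroot and rebooting, the machine should come up with a freshly installed boot loader and a newly generated configuration file.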
+
+Other more complex scenarios are documented, along with their suggested fixes, in the [Ubuntu GRUB2 Troubleshooting guide][10]. The concepts explained there are valid for other distributions as well.
+
+### Summary
+
+In this article we have introduced you to GRUB, indicated where you can find documentation both online and offline, and explained how to approach a scenario where a system has stopped booting properly due to a bootloader-related issue.
+
+Fortunately, GRUB is one of the best-documented tools, and you can easily find help either in the installed docs or online using the resources we have shared in this article.
+
+Do you have questions or comments? Don’t hesitate to let us know using the comment form below. We look forward to hearing from you!
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/
+
+作者:[Gabriel Cánepa][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/gacanepa/
+[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
+[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/
+[3]: http://www.tecmint.com/linux-boot-process/
+[4]: http://www.tecmint.com/linux-boot-process-and-manage-services/
+[5]: http://www.tecmint.com/best-linux-log-monitoring-and-management-tools/
+[6]: http://www.gnu.org/software/grub/manual/
+[7]: http://askubuntu.com/questions/82140/how-can-i-boot-with-an-older-kernel-version
+[8]: https://www.kernel.org/doc/Documentation/kernel-parameters.txt
+[9]: http://man7.org/linux/man-pages/man7/bootparam.7.html
+[10]: https://help.ubuntu.com/community/Grub2/Troubleshooting
diff --git a/translated/news/README.md b/translated/news/README.md
deleted file mode 100644
index 98d53847b1..0000000000
--- a/translated/news/README.md
+++ /dev/null
@@ -1 +0,0 @@
-这里放新闻类文章,要求时效性
diff --git a/translated/share/README.md b/translated/share/README.md
deleted file mode 100644
index e5e225858e..0000000000
--- a/translated/share/README.md
+++ /dev/null
@@ -1 +0,0 @@
-这里放分享类文章,包括各种软件的简单介绍、有用的书籍和网站等。
diff --git a/translated/talk/20160505 Confessions of a cross-platform developer.md b/translated/talk/20160505 Confessions of a cross-platform developer.md
new file mode 100644
index 0000000000..611ccf506c
--- /dev/null
+++ b/translated/talk/20160505 Confessions of a cross-platform developer.md
@@ -0,0 +1,103 @@
+
+一位跨平台开发者的自白
+=============================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/business_clouds.png?itok=cucHuJnU)
+
+[Andreia Gaita][1] 将会在 OSCON 开源大会上发表一个题为[跨平台开发者的自白][2]的演讲。她长期从事开源工作,并为 [Mono][3]【译者注:一个致力于让 .NET 能在 Linux 上使用的开源工程】做着贡献。Andreia 任职于 GitHub,专注于为 Visual Studio 构建 GitHub 扩展管理器。
+
+我在她发表演讲之前就迫不及待地想问她一些关于跨平台开发的事;作为一名跨平台开发者,她已经积累了 16 年的经验。
+
+![](https://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png)
+
+**在你的跨平台工作中,你使用过的最简单的和最难的编程语言是什么?**
+
+问题很少在于某种语言本身的好坏,更多在于这种语言的库和工具的易用性。编译器、解释器以及构建系统决定了用一种语言做跨平台开发的难易程度(不论它是否可能),而能够实现 UI 和本地系统访问的函数库的可用性,则决定了对各个操作系统的兼容程度。按照这个标准,我认为 C# 最适合完成跨平台开发工作。这种语言自身的特色是允许快速的本地调用和精确的内存映射;如果你希望你的代码能够与系统和本地函数库进行交互,你就会需要这些特性。当我需要特殊的系统集成的时候,我就会切换到 C 或者 C++。
+
+**你使用过哪些跨平台开发的工具或者抽象层?**
+
+我的大部分跨平台工作都是为其他人开发工具和库,然后把它们组合起来开发跨平台的应用程序,其中大多用到 Mono/C#。说真的,我不太懂抽象。大多数时候,我依赖 Mono 去完成跨平台的应用开发及其 UI,或者偶尔在游戏开发中用到 Unity3D。我也经常使用 Electron【译者注:Atom 编辑器的兄弟项目,可以用 Electron 开发桌面应用】。
+
+**你接触过哪些构建系统?它们之间的区别是由语言还是平台的不同导致的?**
+
+我会试着选择适合我所用语言的构建系统,那样(但愿)就会很少遇到让我头疼的问题。构建系统需要支持平台和体系结构的选择、合理的构建产物位置(支持多个并行构建)以及易配置性等。大多数时候,我的项目会同时用到 C/C++ 和 C#,而且我需要为多种目标配置(Debug、Release、Windows、OSX、Linux、Android、iOS 等等)构建相应的产物,通常还需要为每种构建产物选择带有不同标志的编译器。所以我不得不做很多工作,而这些工作并没有多少回报。我时常尝试不同的构建系统,仅仅是想了解最新的进展,但最终我还是会回到 makefile,并结合 shell 脚本、批处理脚本或者 Perl 脚本来达到目的(因为如果我想让用户去做很多我所做的事情,最好还是选择一种在哪里都能用的命令行脚本语言)。
+
+**对于在各个平台上提供统一的用户界面视觉体验的强烈需求,你是怎样权衡的呢?**
+
+用户接口的跨平台实现很困难。在过去几年中我已经使用了一些跨平台GUI,并且我认为一些问题没有最优解。那些问题都基于两种操作,你也可以选择一个跨平台工具去做一个UI,去用你所喜欢的所有的平台,这样感觉不是太对,用小的代码库和比较低的维护费用。或者你可以选择去开发一个特有平台的UI,那样看起来就会是很本地化并且能很好的使其与一个大型的代码库结合,也会有很高的维护费用。这种决定完全取决于APP的类型。它的特色在哪里呢?你有什么资源?你可以把它运行在多少平台上? + + +最后,我认为用户对于这种框架性的“一个UI统治所有”的UI的容忍度更大了,就比如Electron。我曾经有个Chromium+C+C#的框架侧项目,并且希望我在一天内用C#构建Electron型的app,这样的话我就可以做到两全其美了。 + + + **你对构建或者打包有依赖性吗 ?** + + + 我很少谈及依赖性问题。很多次我都被ABI【译者注:应用程序二进制接口】的崩溃、存在冲突的征兆以及包的丢失问题所困扰。我决定我要使用的操作系统的版本,都是选择最低的依赖去使问题最小化。通常这就意味着有五种不同的Xcode的副本和OSX框架库 ,在同样的机器上有五种不同的Visual Studio版本需要相应的被安装,多种clang【译者注:C语言、C++、Object-C、C++语言的轻量级编译器】和gcc版本,一系列的可以运行的其他的VM版本。如果我不能确定我要使用的操作系统的包的规定,我时常会连接静态库与子模块之间的依赖确保它们一直可用。大多时候,我会避免这些很棘手的问题,除非我非常需要使用他们。 + + + **你使用能够持续集成的、代码重读的相关工具吗?** + + 基本每天都用。这是保持高效的唯一方式。我在一个项目中做的第一件事情是配置跨平台构建脚本,保证每件事尽可能自动化完成。当你想要使用多平台的时候,CI【译者注:持续集成】是至关重要的。在一个机器上,没有人能结合所有的不同的平台。并且一旦你的构建过程没有包含所有的平台,你就不会注意到你搞砸的事情。在一个共享的多平台代码库中 ,不同的人拥有不同的平台和特征,所以仅有的方法是保证跨团队浏览代码时结合CI和其他分析工具的公平性。这不同于其他的软件项目,如果不使用相关的工具就只有失败了。 + + + **你依赖于自动构建测试或者趋向于在每个平台上构建并且进行局部测试吗?** + + + 对于不包括UI的工具和库,我通常能够侥幸完成自动构建测试。如果那是一个UI,两种方法我都会用到——做一个可靠的,可自动编写脚本的UI,因为基本没有GUI工具,所以我不得不在去创建UI自动化工具,这种工具可以工作在所有的我用到的平台上,或者我也可以手动完成。如果一个项目使用一个定制的UI工具(一个像Unity3D那样做的OpenGL UI ),开发自动化的可编写脚本工具和更多的自动化工具就相当容易。不过,没有什么东西会像人类一样通过双击而搞砸事情。 + + + + **如果你要做跨平台开发,你想要在不同的平台上使用不同的编辑器,比如在Windows上使用Visual Studio,在Linux上使用Qt Creator,在Mac上使用XCode吗?还是你更趋向于使用Eclipse这样的可以在所有平台上使用的编辑器?** + + + + 我喜欢使用不同的编辑器构建系统。我更喜欢在不同的带有构建脚本的IDE上保存项目文件(可以使增加IDE变得更容易),这些脚本可以为他们支持的平台去驱动IDE开始工作。对于一个开发者来说编辑器是最重要的工具,开发者花费时间和精力去学习使用各种编辑器,但是它们都不能相互替代。我使用我最喜欢的编辑器和工具,每个人也应该能使用他们最喜爱的工具。 + + + + **在跨平台开发的时候,你更喜欢使用什么开发环境和IDE呢?** + + + 跨平台开发者好像被诅咒一样,他们不得不选择小众化的编辑器去完成大多数跨平台的工作。我爱用Visual Studio,但是我不能依赖它完成除windows平台外的工作(你可能不想让windows成为你的初级交叉编译平台,所以我不会使用它作为我的初级集成开发环境。即使我这么做了,跨平台开发者的潜意识也知道有可能会用到很多平台。这就意味着必须很熟悉他们——使用一种平台上的编辑器就必须知道这种操作系统的设置,运行方式以及它的局限性等。做这些事情就需要头脑清醒(我的捷径是加强记忆),我不得不依赖于跨平台编辑器。所以我使用Emacs 和Sublime。 + + + + **你最喜欢的过去跨平台项目是什么**? + + + 我一直很喜欢Mono,并且得心应手,在一些开发中大多数的项目都是用一些方法围绕着它进行的。Gluezilla曾经是我在多年前开发的一个Mozilla【译者注:Mozilla基金会,为支持和领导开源Mozilla项目而设立的非营利组织】结合器,可以C#开发的app嵌入到web浏览器试图中,并且看起来很明显。在这一点上,我开发过一个窗体app,它是在linux上开发的,它运行在带有一个嵌入GTK试图的windows系统上,并且这个系统将会运行一个Mozilla浏览器试图。CppSharp项目(以前叫做Cxxi,更早时叫做CppInterop)是一个我开始结合绑定有C#的C++库的项目,这样就可以唤醒和创建实例来把C#结合到C++中。这样做的话,它在运行的时候就能够发现所使用的平台以及用来创建本地运行库的编译器;而且还为它生成正确的C#绑定。这多么有趣啊。 + + + **你怎样看跨平台开发的未来趋势呢?** + + + 我们构建本地应用程序的方式已经改变了,我觉得在各种桌面操作系统之间存在差异,而且这种差异将会变得更加微妙;所以构建跨平台的应用程序将会更加容易,而且这种应用程序即使没有在本平台也可以完全兼容。不好的是,这可能意味着应用程序很难获得,并且当在操作系统上使用的时候功能得不到最好的发挥。我们知道怎样把库和工具以及运行环境的跨平台开发做的更好,但是跨平台应用程序的开发仍然需要我们的努力。 + + + -------------------------------------------------------------------------------- + + via: https://opensource.com/business/16/5/oscon-interview-andreia-gaita + + 作者:[Marcus D. 
Hanwell ][a] + 译者:[vim-kakali](https://github.com/vim-kakali) + 校对:[校对者ID](https://github.com/校对者ID) + + 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + + [a]: https://opensource.com/users/mhanwell + [1]: https://twitter.com/sh4na + [2]: http://conferences.oreilly.com/oscon/open-source-us/public/schedule/detail/48702 + [3]: http://www.mono-project.com/ + diff --git a/translated/talk/20160525 Should distributors disable IPv4-mapped IPv6.md b/translated/talk/20160525 Should distributors disable IPv4-mapped IPv6.md new file mode 100644 index 0000000000..8b4f4b4b4f --- /dev/null +++ b/translated/talk/20160525 Should distributors disable IPv4-mapped IPv6.md @@ -0,0 +1,60 @@ +发行版分发者应该禁用 IPv4 映射的 IPv6 地址吗 +============================================= + +大家都说,互联网向 IPv6 的过渡是件很缓慢的事情。不过在最近几年,可能是由于 IPv4 地址资源的枯竭,IPv6 的使用处于[上升态势][1]。相应的,开发者也有兴趣确保软件能在 IPv4 和 IPv6 下工作。但是,正如近期 OpenBSD 邮件列表的讨论所关注的,一个使得向 IPv6 转换更加轻松的机制设计同时也可能导致网络更不安全——并且 Linux 发行版们的默认配置可能并不安全。 + +### 地址映射 + +IPv6 在很多方面看起来可能很像 IPv4,但它是带有不同地址空间的不同的协议。服务器程序想要接受使用二者之中任意一个协议的连接,必须给两个不同的地址族分别打开一个套接字——IPv4 的 AF_INET 和 IPv6 的 AF_INET6。特别是一个程序希望接受使用任意协议到任意主机接口的连接的话,需要创建一个绑定到全零通配符地址(0.0.0.0)的 AF_INET 套接字和一个绑定到 IPv6 等效地址(写作“::”)的 AF_INET6 套接字。它必须在两个套接字上都监听连接——或者有人会这么认为。 + +多年前,在 [RFC 3493][2],IETF 指定了一个机制,程序可以使用一个单独的 IPv6 套接字工作在两个协议之上。有了一个启用这个行为的套接字,程序只需要绑定到 :: 来接受使用这两个协议到达所有接口的连接。当创建了一个 IPv4 连接到绑定端口,源地址会像 [RFC 2373][3] 中描述的那样映射到 IPv6。所以,举个例子,一个使用了这个模式的程序会将一个 192.168.1.1 的传入连接看作来自 ::ffff:192.168.1.1(这个混合的写法就是这种地址通常的写法)。程序也能通过相同的映射方法打开一个到 IPv4 地址的连接。 + +RFC 要求这个行为要默认实现,所以大多数系统这么做了。不过也有些例外,OpenBSD 就是其中之一;在那里,希望在两种协议下工作的程序能做的只能是创建两个独立的套接字。但一个在 Linux 中打开两个套接字的程序会遇到麻烦:IPv4 和 IPv6 套接字都会尝试绑定到 IPv4 地址,所以不论是哪个后者都会失败。换句话说,一个绑定到 :: 指定端口的套接字的程序会同时绑定到 IPv6 :: 和 IPv4 0.0.0.0 地址的那个端口上。如果程序之后尝试绑定一个 IPv4 套接字到 0.0.0.0 的相同端口上时,这个操作会失败,因为这个端口已经被绑定了。 + +当然有个办法可以解决这个问题;程序可以调用 setsockopt() 来打开 IPV6_V6ONLY 选项。一个打开两个套接字并且设置了 IPV6_V6ONLY 的程序应该可以在所有的系统间移植。 + +读者们可能对不是每个程序都能正确处理这一问题没那么震惊。事实证明,这些程序的其中之一是网络时间协议(Network Time Protocol)的 [OpenNTPD][4] 实现。Brent Cook 最近给上游 OpenNTPD 源码[提交了一个小补丁][5],添加了必要的 setsockopt() 调用,它也被提交到了 OpenBSD 中了。尽管那个补丁看起来不大可能被接受,最可能是因为 OpenBSD 式的理由(LCTT 译注:如前文提到的,OpenBSD 并不受这个问题的影响)。 + +### 安全担忧 + +正如上文所提到,OpenBSD 根本不支持 IPv4 映射的 IPv6 套接字。即使一个程序试着通过将 IPV6_V6ONLY 选项设置为 0 显式地启用地址映射,它的作者会感到沮丧,因为这个设置在 OpenBSD 系统中无效。这个决定背后的原因是这个映射带来了一些安全担忧。攻击打开接口的攻击类型有很多种,但它们最后都会回到规定的两个途径到达相同的端口,每个端口都有它自己的控制规则。 + +任何给定的服务器系统可能都设置了防火墙规则,描述端口的允许访问权限。也许还会有适当的机制,比如 TCP wrappers 或一个基于 BPF 的过滤器,或一个网络上的路由可以做连接状态协议过滤。结果可能是导致防火墙保护和潜在的所有类型的混乱连接之间的缺口导致同一 IPv4 地址可以通过两个不同的协议到达。如果地址映射是在网络边界完成的,情况甚至会变得更加复杂;参看[这个 2003 年的 RFC 草案][6],它描述了如果映射地址在主机之间传送,一些随之而来的其它攻击场景。 + +改变系统和软件合适地处理 IPv4 映射的 IPv6 地址当然可以实现。但那增加了系统的整体复杂度,并且可以确定这个改动没有实际完整实现到它应该实现的范围内。如同 Theo de Raadt [说的][7]: + + **有时候人们将一个坏主意放进了 RFC。之后他们发现不可能将这个主意扔回垃圾箱了。结果就是概念变得如此复杂,每个人都得在管理和编码方面是个全职专家。** + +我们也根本不清楚这些全职专家有多少在实际配置使用 IPv4 映射的 IPv6 地址的系统和网络。 + +有人可能会说,尽管 IPv4 映射的 IPv6 地址造成了安全危险,更改一下程序让它关闭部署实现它的系统上的地址映射应该没什么危害。但 Theo 认为不应该这么做,有两个理由。第一个是有许多破损的程序,它们永远不会被修复。但实际的原因是给发行版分发者压力去默认关闭地址映射。正如他说的:“**最终有人会理解这个危害是系统性的,并更改系统默认行为使之‘secure by default’**。” + +### Linux 上的地址映射 + +在 Linux 系统,地址映射由一个叫做 net.ipv6.bindv6only 的 sysctl 开关控制;它默认设置为 0(启用地址映射)。管理员(或发行版分发者)可以通过将它设置为 1 关闭地址映射,但在部署这样一个系统到生产环境之前最好确认软件都能正常工作。一个快速调查显示没有哪个主要发行版分发者改变这个默认值;Debian 在 2009 年的 “squeeze” 中[改变了这个默认值][9],但这个改动破坏了足够多的软件包(比如[任何包含 Java 的][10]),[在经过了一定数量的 Debian 式讨论之后][11],它恢复到了原来的设置。看上去不少程序依赖于默认启用地址映射。 + +OpenBSD 有自由以“secure by default”的名义打破其核心系统之外的东西;Linux 发行版分发者倾向于更难以作出这样的改变。所以那些一般不愿意收到他们用户的不满的发行版分发者,不太可能很快对 bindv6only 
的默认设置作出改变。好消息是这个功能作为默认已经很多年了,但很难找到利用的例子。但是,正如我们都知道的,谁都无法保证这样的利用不可能发生。 + + +-------------------------------------------------------------------------------- + +via: https://lwn.net/Articles/688462/ + +作者:[Jonathan Corbet][a] +译者:[alim0x](https://github.com/alim0x) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://lwn.net/ +[1]: https://www.google.com/intl/en/ipv6/statistics.html +[2]: https://tools.ietf.org/html/rfc3493#section-3.7 +[3]: https://tools.ietf.org/html/rfc2373#page-10 +[4]: https://github.com/openntpd-portable/ +[5]: https://lwn.net/Articles/688464/ +[6]: https://tools.ietf.org/html/draft-itojun-v6ops-v4mapped-harmful-02 +[7]: https://lwn.net/Articles/688465/ +[8]: https://lwn.net/Articles/688466/ +[9]: https://lists.debian.org/debian-devel/2009/10/msg00541.html +[10]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=560056 +[11]: https://lists.debian.org/debian-devel/2010/04/msg00099.html diff --git a/translated/talk/20160531 The Anatomy of a Linux User.md b/translated/talk/20160531 The Anatomy of a Linux User.md new file mode 100644 index 0000000000..a404697347 --- /dev/null +++ b/translated/talk/20160531 The Anatomy of a Linux User.md @@ -0,0 +1,58 @@ + + +一个 Linux 用户的故事 +================================ + + + +**一些新的 GNU/Linux 用户都很清楚的知道 Linux 不是 Windows .其他很多人都不是很清楚的知道.最好的发行版设计者努力保持新的思想** + +### Linux 的核心 + +不管怎么说,Nicky 都不是那种表面上看起来很值得注意的人.她已经三十岁了,却决定回到学校学习.她在海军待了6年时间直到她的老友给她一份新的工作,而且这份工作比她在军队的工作还好.在过去的军事分支服务期间发生了很多事情.我认识她还是她在军队工作的时候.她是8个州的货车运输业协商区域的管理者.那会我在达拉斯跑肉品包装工具的运输. +![](http://i2.wp.com/fossforce.com/wp-content/uploads/2016/05/anatomy.jpg?w=525) + + +Nicky 和我在 2006 年成为了好朋友.她很外向,并且有很强的好奇心,她几乎走过每个运输商走的路线.一个星期五晚上我们在灯光下有一场很激烈的争论,像这样的 达 30 分钟的争论在我们之间并不少见.或许这并不比油漆一个球场省事,但是气氛还是可以控制的住的,感觉就像是一个很恐怖的游戏.在这次争论的时候她问到我是否可以修复她的电脑. + +她知道我为了一些贫穷的孩子能拥有他们自己的电脑做出的努力,当她抱怨她的电脑很慢的时候,我提到了参加 Bill Gate 的 401k 计划[译注:这个计划是为那些为内部收益代码做过贡献的人们提供由税收定义的养老金账户.].Nicky 说这是了解 Linux 的最佳时间. + +她的电脑相当好,它是一个带有 Dell 19 显示器的华硕电脑.不好的是,当不需要一些东西的时候,这个 Windows 电脑会强制性的显示所有的工具条和文本菜单.我们把电脑上的文件都做了备份之后就开始安装 Linux 了.我们一起完成了安装,并且我确信她知道了如何分区.不到一个小时,她的电脑上就有了一个漂亮的 PCLinuxOS 桌面. + +她会经常谈论她使用新系统的方法,系统看起来多么漂亮.她不曾提及的是,她几乎被她面前的井然有序的漂亮桌面吸引.她说她的桌面带有漂亮的"微光".这是我在安装系统期间特意设置的.我每次在安装 Linux 的时候都会进行这样的配置.我想让每个 Linux 用户的桌面都配置这个漂亮的微光. + +大概第一周左右,她给我打电话或者发邮件问了一些常规的问题,但是最主要的问题还是她想知道怎样保存她打开的 Office 文件(OpenOffice)以至于她的同事也可以读这些文件.教一个人使用 Linux 或者 Open/LibreOffice 的时候最重要的就是教她保存文件.大多数用户仅仅看到第一个提示,只需要手指轻轻一点就可以以打开文件模式( Open Document Format )保存. + + +大约一年前或者更久,一个高中生说他没有通过期末考试,因为教授不能打开他写着论文的文件.这引来不能决定谁对谁错的读者的激烈评论.这个高中生和教授都不知道这件事该怪谁. + +我知道一些大学教授甚至每个人都能够打开一个 ODF 文件.见鬼,很少有像微软这样优秀的公司,我想 Microsoft Office 现在已经能打开 ODT 或者 ODF 文件了.我也不能确保,毕竟我最近一次用 Microsoft Office 是在 2005 年. + +甚至在困难时期,当 Microsoft 很受欢迎并且很热衷于为使用他们系统的厂商的企业桌面上安装他们自己的软件的时候,我和一些使用 Microsoft Office 的用户的产品生意和合作从来没有出现过问题,因为我会提前想到可能出现的问题并且不会有侥幸心理.我会发邮件给他们询问他们正在使用的 Office 版本.这样,我就可以确保以他们能够读写的格式保存文件. + +说到 Nicky ,她花了很多时间学习她的 Linux 系统.我很惊奇于她的热情. + +当人们意识到所有的使用 Windows 的习惯和工具都要被抛弃的时候,学习 Linux 系统也会很容易.甚至在我们谈论第一次用的系统时,我查看这些系统的桌面或者下载文件夹大多都找不到 some_dodgy_file.exe 这样的文件. + +在我们通常讨论这些文件的时候,我们也会提及关于更新的问题.很长时间我都不会在一台电脑上反复设置去完成多种程序的更新和安装.比如 Mint ,它没有带有 Synaptic [译注:一个 Linux 管理图像程序包]的完整更新方法,这让我失去兴趣.但是我们的老成员 dpkg 和 apt 是我们的好朋友,聪明的领导者已经得到肯定并且认识到命令行看起来不是那么舒服,同时欢迎新的用户加入. + +我很生气,强烈反对机器对 Synaptic 的削弱,最后我放弃使用它.你记得你第一次使用的 Linux 发行版吗?记得你什么时候在 Synaptic 中详细查看大量的软件列表吗?记得你怎样开始检查并标记每个你发现的很酷的程序吗?你记得有多少这样的程序开始都是使用"lib"这样的文件吗? 
+ +是的,我也是。我安装并且查看了一些新的安装程序,直到我发现那些库文件是应用程序的螺母和螺栓,而不是应用程序本身.这就是为什么这些聪明的开发者在 Linux Mint 和 Ubuntu 之后创造了聪明、漂亮和易于使用的应用程序的安装程序.Synaptic 仍然是我们的老大,但是对于一些后来者,安装像 lib 文件这样的方式需要打开大量的文件夹,很多这样的事情都会导致他们放弃使用这个系统.在新的安装程序中,这些文件会被放在一个文件夹中甚至不会展示给用户.总之,这也是该有的解决方法. + +除非你要改变应该支持的需求. + +现在的 Linux 发行版中有很多有用的软件,我也很感谢这些开发者,因为他们,我的工作变得容易.不是每一个 Linux 新用户都像 Nicky 这样富有热情.她相当不错的完成了安装过程并且达到了忘我的状态.像她这样极具热情的毕竟是少数.大多数新的 Linux 用户也就是在需要的时候才这样用心. + +很不错,他们都是要教自己的孩子使用 Linux 的人. +-------------------------------------------------------------------------------- + +via: http://fossforce.com/2016/05/anatomy-linux-user/ + +作者:[Ken Starks][a] +译者:[vim-kakali](https://github.com/vim-kakali) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://linuxlock.blogspot.com/ diff --git a/translated/talk/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md b/translated/talk/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md new file mode 100755 index 0000000000..c9a1b243ff --- /dev/null +++ b/translated/talk/20160615 Explanation of “Everything is a File” and Types of Files in Linux.md @@ -0,0 +1,277 @@ +诠释 Linux 中“一切都是文件”概念和相应的文件类型 +==================================================================== + +![](http://www.tecmint.com/wp-content/uploads/2016/05/Everything-is-a-File-in-Linux.png) +>Linux 系统中一切都是文件并有相应的文件类型 + +在 Unix 和它衍生的比如 Linux 系统中,一切都可以看做文件。虽然它仅仅只是一个泛泛的概念,但这是事实。如果有不是文件的,那它一定是正运行的进程。 + +要理解这点,可以举个例子,您的根目录(/) 的空间是由不同类型的 Linux 文件所占据的。当您创建一个文件或向系统传一个文件时,它在物理磁盘上占据的一些空间,可以认为是一个特定的格式(文件类型)。 + +虽然 Linux 系统中文件和目录没有什么不同,但目录还有一个重要的功能,那就是有结构性的分组存储其它文件,以方便查找访问。所有的硬件部件都表示为文件,系统使用这些文件来与硬件通信。 + +这些思想是对伟大的 Linux 财产的重要阐述,因此像文档、目录(Mac OS X 和 Windows 系统下是文件夹)、键盘、监视器、硬盘、可移动媒体设备、打印机、调制解调器、虚拟终端,还有进程间通信(IPC)和网络通信等输入/输出资源都在定义在文件系统空间下的字节流。 + +一切都可看作是文件,其最显著的好处是对于上面所列出的输入/输出资源,只需要相同的一套 Linux 工具、实用程序和 API。 + +虽然在 Linux 中一切都可看作是文件,但也有一些特殊的文件,比如[套接字和命令管道][1]。 + +### Linux 文件类型的不同之处? + +Linux 系统中有三种基本的文件类型: + +- 普通/常规文件 +- 特殊文件 +- 目录文件 + +#### 普通/常规文件 + +它们是包含文本、数据、程序指令等数据的文件,其在 Linux 系统中是最常见的一种。包括如下: + +- 只读文件 +- 二进制文件 +- 图像文件 +- 压缩文件等等 + +#### 特殊文件 + +特殊文件包括以下几种: + +块文件:设备文件,对访问系统硬件部件提供了缓存接口。他们提供了一种使用文件系统与设备驱动通信的方法。 + +有关于块文件一个重要的性能就是它们能在指定时间内传输大块的数据和信息。 + +列出某目录下的块文件: + +``` +# ls -l /dev | grep "^b" +``` + +输出例子 + +``` +brw-rw---- 1 root disk 7, 0 May 18 10:26 loop0 +brw-rw---- 1 root disk 7, 1 May 18 10:26 loop1 +brw-rw---- 1 root disk 7, 2 May 18 10:26 loop2 +brw-rw---- 1 root disk 7, 3 May 18 10:26 loop3 +brw-rw---- 1 root disk 7, 4 May 18 10:26 loop4 +brw-rw---- 1 root disk 7, 5 May 18 10:26 loop5 +brw-rw---- 1 root disk 7, 6 May 18 10:26 loop6 +brw-rw---- 1 root disk 7, 7 May 18 10:26 loop7 +brw-rw---- 1 root disk 1, 0 May 18 10:26 ram0 +brw-rw---- 1 root disk 1, 1 May 18 10:26 ram1 +brw-rw---- 1 root disk 1, 10 May 18 10:26 ram10 +brw-rw---- 1 root disk 1, 11 May 18 10:26 ram11 +brw-rw---- 1 root disk 1, 12 May 18 10:26 ram12 +brw-rw---- 1 root disk 1, 13 May 18 10:26 ram13 +brw-rw---- 1 root disk 1, 14 May 18 10:26 ram14 +brw-rw---- 1 root disk 1, 15 May 18 10:26 ram15 +brw-rw---- 1 root disk 1, 2 May 18 10:26 ram2 +brw-rw---- 1 root disk 1, 3 May 18 10:26 ram3 +brw-rw---- 1 root disk 1, 4 May 18 10:26 ram4 +brw-rw---- 1 root disk 1, 5 May 18 10:26 ram5 +... 
+``` + +字符文件: 也是设备文件,对访问系统硬件组件提供了非缓冲串行接口。它们与设备的通信工作方式是一次只传输一个字符的数据。 + +列出某目录下的字符文件: + +``` +# ls -l /dev | grep "^c" +``` + +输出例子 + +``` +crw------- 1 root root 10, 235 May 18 15:54 autofs +crw------- 1 root root 10, 234 May 18 15:54 btrfs-control +crw------- 1 root root 5, 1 May 18 10:26 console +crw------- 1 root root 10, 60 May 18 10:26 cpu_dma_latency +crw------- 1 root root 10, 203 May 18 15:54 cuse +crw------- 1 root root 10, 61 May 18 10:26 ecryptfs +crw-rw---- 1 root video 29, 0 May 18 10:26 fb0 +crw-rw-rw- 1 root root 1, 7 May 18 10:26 full +crw-rw-rw- 1 root root 10, 229 May 18 10:26 fuse +crw------- 1 root root 251, 0 May 18 10:27 hidraw0 +crw------- 1 root root 10, 228 May 18 10:26 hpet +crw-r--r-- 1 root root 1, 11 May 18 10:26 kmsg +crw-rw----+ 1 root root 10, 232 May 18 10:26 kvm +crw------- 1 root root 10, 237 May 18 10:26 loop-control +crw------- 1 root root 10, 227 May 18 10:26 mcelog +crw------- 1 root root 249, 0 May 18 10:27 media0 +crw------- 1 root root 250, 0 May 18 10:26 mei0 +crw-r----- 1 root kmem 1, 1 May 18 10:26 mem +crw------- 1 root root 10, 57 May 18 10:26 memory_bandwidth +crw------- 1 root root 10, 59 May 18 10:26 network_latency +crw------- 1 root root 10, 58 May 18 10:26 network_throughput +crw-rw-rw- 1 root root 1, 3 May 18 10:26 null +crw-r----- 1 root kmem 1, 4 May 18 10:26 port +crw------- 1 root root 108, 0 May 18 10:26 ppp +crw------- 1 root root 10, 1 May 18 10:26 psaux +crw-rw-rw- 1 root tty 5, 2 May 18 17:40 ptmx +crw-rw-rw- 1 root root 1, 8 May 18 10:26 random +``` + +符号链接文件 : 符号链接是指向系统上其他文件的引用。因此,符号链接文件是指向其它文件的文件,也可以是目录或常规文件。 + +列出某目录下的符号链接文件: + +``` +# ls -l /dev/ | grep "^l" +``` + +输出例子 + +``` +lrwxrwxrwx 1 root root 3 May 18 10:26 cdrom -> sr0 +lrwxrwxrwx 1 root root 11 May 18 15:54 core -> /proc/kcore +lrwxrwxrwx 1 root root 13 May 18 15:54 fd -> /proc/self/fd +lrwxrwxrwx 1 root root 4 May 18 10:26 rtc -> rtc0 +lrwxrwxrwx 1 root root 8 May 18 10:26 shm -> /run/shm +lrwxrwxrwx 1 root root 15 May 18 15:54 stderr -> /proc/self/fd/2 +lrwxrwxrwx 1 root root 15 May 18 15:54 stdin -> /proc/self/fd/0 +lrwxrwxrwx 1 root root 15 May 18 15:54 stdout -> /proc/self/fd/1 +``` + +Linux 中使用 `ln` 工具就可以创建一个符号链接文件,如下所示: + +``` +# touch file1.txt +# ln -s file1.txt /home/tecmint/file1.txt [创建符号链接文件] +# ls -l /home/tecmint/ | grep "^l" [列出符号链接文件] +``` + +在上面的例子中,首先我们在 `/tmp` 目录创建了一个名叫 `file1.txt` 的文件,然后创建符号链接文件,所以 `/home/tecmint/file1.txt` 指向 `/tmp/file1.txt` 文件。 + +套接字和命令管道 : 连接一个进行的输出和另一个进程的输入,允许进程间通信的文件。 + +命名管道实际上是一个文件,用来使两个进程彼此通信,就像一个 Linux pipe(管道) 命令一样。 + +列出某目录下的管道文件: + +``` +# ls -l | grep "^p" +``` + +输出例子 + +``` +prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe1 +prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe2 +prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe3 +prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe4 +prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe5 +``` + +在 Linux 中可以使用 `mkfifo` 工具来创建一个命名管道,如下所示: + +``` +# mkfifo pipe1 +# echo "This is named pipe1" > pipe1 +``` + +在上的例子中,我们创建了一个名叫 `pipe1` 的命名管道,然后使用 [echo 命令][2] 加入一些数据,在这操作后,要使用这些输入数据就要用非交互的 shell 了。 + + + +然后,我们打开另外的 shell 终端,运行另外的命令来打印出刚加入管道的数据。 + +``` +# while read line ;do echo "This was passed-'$line' "; done - -在 "[探索 Sense HAT][9]" 中可以学到更多。 - -### 4. 
红外鸟箱 ### - -![](https://opensource.com/sites/default/files/ir-bird-box.png) -上图由 [Low Voltage Labs][3] 提供。遵循 [CC BY-SA 4.0][1] 协议。 - -让全班所有同学都能够参与进来的一个好的练习是 —— 在一个鸟箱中沿着某些红外线放置一个树莓派和 NoIR 照相模块,这样你就可以在黑暗中观看,然后通过网络或在网络中你可以从树莓派那里获取到视频流。等鸟进入笼子,然后你就可以在不打扰到它们的情况下观察它们。 - -在这期间,你可以学习到所有关于红外和光谱的知识,以及如何用软件来调整摄像头的焦距和控制它。 - -在 "[制作一个红外鸟箱][10]" 中你可以学到更多。 - -### 5. 机器人 ### - -![](https://opensource.com/sites/default/files/edukit3_1500-alex-eames-sm.jpg) - -上图由 Low Voltage Labs 提供。遵循 [CC BY-SA 4.0][1] 协议。 - -拥有一个树莓派,一些感应器和一个感应器控制电路板,你就可以构建你自己的机器人。你可以制作各种类型的机器人,从用透明胶带和自制底盘组合在一起的简易四驱车,一直到由游戏控制器驱动的具有自我意识,带有传感器和摄像头的金属马儿。 - -学习如何直接去控制单个的发动机,例如通过 RTK Motor Controller Board (£8/$12),或者尝试新的 CamJam robotics kit (£17/$25) ,它带有发动机、轮胎和一系列的感应器 — 这些都很有价值并很有学习的潜力。 - -另外,如何你喜欢更为骨灰级别的东西,可以尝试 PiBorg 的 [4Borg][11] (£99/$150) 或 [DiddyBorg][12] (£180/$273) 或者一干到底,享受他们的 DoodleBorg 金属版 (£250/$380) — 并构建一个他们声名远扬的 [DoodleBorg tank][13](很不幸的时,这个没有卖的) 的迷你版。 - -另外请参考 [CamJam robotics kit worksheets][14]。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/education/15/12/5-great-raspberry-pi-projects-classroom - -作者:[Ben Nuttall][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/bennuttall -[1]:https://creativecommons.org/licenses/by-sa/4.0/ -[2]:https://opensource.com/life/15/5/getting-started-minecraft-pi -[3]:http://lowvoltagelabs.com/ -[4]:http://lowvoltagelabs.com/products/pi-traffic/ -[5]:http://4tronix.co.uk/store/index.php?rt=product/product&product_id=390 -[6]:https://ryanteck.uk/hats/1-traffichat-0635648607122.html -[7]:http://pythonhosted.org/gpiozero/recipes/ -[8]:http://camjam.me/?page_id=236 -[9]:https://opensource.com/life/15/10/exploring-raspberry-pi-sense-hat -[10]:https://www.raspberrypi.org/learning/infrared-bird-box/ -[11]:https://www.piborg.org/4borg -[12]:https://www.piborg.org/diddyborg -[13]:https://www.piborg.org/doodleborg -[14]:http://camjam.me/?page_id=1035#worksheets diff --git a/translated/tech/20151210 Getting started with Docker by Dockerizing this Blog.md.md b/translated/tech/20151210 Getting started with Docker by Dockerizing this Blog.md.md deleted file mode 100644 index a74af87b6f..0000000000 --- a/translated/tech/20151210 Getting started with Docker by Dockerizing this Blog.md.md +++ /dev/null @@ -1,464 +0,0 @@ -通过Dockerize这篇博客来开启我们的Docker之旅 -=== ->这篇文章将包含Docker的基本概念,以及如何通过创建一个定制的Dockerfile来Dockerize一个应用 ->作者:Benjamin Cane,2015-12-01 10:00:00 - -Docker是2年前从某个idea中孕育而生的有趣技术,世界各地的公司组织都积极使用它来部署应用。在今天的文章中,我将教你如何通过"Dockerize"一个现有的应用,来开始我们的Docker运用。问题中的应用指的就是这篇博客! - -## 什么是Docker? - -当我们开始学习Docker基本概念时,让我们先去搞清楚什么是Docker以及它为什么这么流行。Docker是一个操作系统容器管理工具,它通过将应用打包在操作系统容器中,来方便我们管理和部署应用。 - -### 容器 vs. 
虚拟机 - -容器虽和虚拟机并不完全相似,但它也是一种提供**操作系统虚拟化**的方式。但是,它和标准的虚拟机还是有不同之处的。 - -标准虚拟机一般会包括一个完整的操作系统,操作系统包,最后还有一至两个应用。这都得益于为虚拟机提供硬件虚拟化的管理程序。这样一来,一个单一的服务器就可以将许多独立的操作系统作为虚拟客户机运行了。 - -容器和虚拟机很相似,它们都支持在单一的服务器上运行多个操作环境,只是,在容器中,这些环境并不是一个个完整的操作系统。容器一般只包含必要的操作系统包和一些应用。它们通常不会包含一个完整的操作系统或者硬件虚拟化程序。这也意味着容器比传统的虚拟机开销更少。 - -容器和虚拟机常被误认为是两种抵触的技术。虚拟机采用同一个物理服务器,来提供全功能的操作环境,该环境会和其余虚拟机一起共享这些物理资源。容器一般用来隔离运行中的应用进程,运行进程将在单独的主机中运行,以保证隔离后的进程之间不能相互影响。事实上,容器和**BSD Jails**以及`chroot`进程的相似度,超过了和完整虚拟机的相似度。 - -### Docker在容器的上层提供了什么 - -Docker不是一个容器运行环境,事实上,只是一个容器技术,并不包含那些帮助Docker支持[Solaris Zones](https://blog.docker.com/2015/08/docker-oracle-solaris-zones/)和[BSD Jails](https://wiki.freebsd.org/Docker)的技术。Docker提供管理,打包和部署容器的方式。虽然一定程度上,虚拟机多多少少拥有这些类似的功能,但虚拟机并没有完整拥有绝大多数的容器功能,即使拥有,这些功能用起来都并没有Docker来的方便。 - -现在,我们应该知道Docker是什么了,然后,我们将从安装Docker,并部署一个公共的预构建好的容器开始,学习Docker是如何工作的。 - -## 从安装开始 - -默认情况下,Docker并不会自动被安装在您的计算机中,所以,第一步就是安装Docker包;我们的教学机器系统是Ubuntu 14.0.4,所以,我们将使用Apt包管理器,来执行安装操作。 - -``` -# apt-get install docker.io -Reading package lists... Done -Building dependency tree -Reading state information... Done -The following extra packages will be installed: - aufs-tools cgroup-lite git git-man liberror-perl -Suggested packages: - btrfs-tools debootstrap lxc rinse git-daemon-run git-daemon-sysvinit git-doc - git-el git-email git-gui gitk gitweb git-arch git-bzr git-cvs git-mediawiki - git-svn -The following NEW packages will be installed: - aufs-tools cgroup-lite docker.io git git-man liberror-perl -0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded. -Need to get 7,553 kB of archives. -After this operation, 46.6 MB of additional disk space will be used. -Do you want to continue? [Y/n] y -``` - -为了检查当前是否有容器运行,我们可以执行`docker`命令,加上`ps`选项 - -``` -# docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -``` - -`docker`命令中的`ps`功能类似于Linux的`ps`命令。它将显示可找到的Docker容器以及各自的状态。由于我们并没有开启任何Docker容器,所以命令没有显示任何正在运行的容器。 - -## 部署一个预构建好的nginx Docker容器 - -我比较喜欢的Docker特性之一就是Docker部署预先构建好的容器的方式,就像`yum`和`apt-get`部署包一样。为了更好地解释,我们来部署一个运行着nginx web服务器的预构建容器。我们可以继续使用`docker`命令,这次选择`run`选项。 - -``` -# docker run -d nginx -Unable to find image 'nginx' locally -Pulling repository nginx -5c82215b03d1: Download complete -e2a4fb18da48: Download complete -58016a5acc80: Download complete -657abfa43d82: Download complete -dcb2fe003d16: Download complete -c79a417d7c6f: Download complete -abb90243122c: Download complete -d6137c9e2964: Download complete -85e566ddc7ef: Download complete -69f100eb42b5: Download complete -cd720b803060: Download complete -7cc81e9a118a: Download complete -``` - -`docker`命令的`run`选项,用来通知Docker去寻找一个指定的Docker镜像,然后开启运行着该镜像的容器。默认情况下,Docker容器在前台运行,这意味着当你运行`docker run`命令的时候,你的shell会被绑定到容器的控制台以及运行在容器中的进程。为了能在后台运行该Docker容器,我们可以使用`-d` (**detach**)标志。 - -再次运行`docker ps`命令,可以看到nginx容器正在运行。 - -``` -# docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -f6d31ab01fc9 nginx:latest nginx -g 'daemon off 4 seconds ago Up 3 seconds 443/tcp, 80/tcp desperate_lalande -``` - -从上面的打印信息中,我们可以看到正在运行的名为`desperate_lalande`的容器,它是由`nginx:latest image`(译者注:nginx最新版本的镜像)构建而来得。 - -### Docker镜像 - -镜像是Docker的核心特征之一,类似于虚拟机镜像。和虚拟机镜像一样,Docker镜像是一个被保存并打包的容器。当然,Docker不只是创建镜像,它还可以通过Docker仓库发布这些镜像,Docker仓库和包仓库的概念差不多,它让Docker能够模仿`yum`部署包的方式来部署镜像。为了更好地理解这是怎么工作的,我们来回顾`docker run`执行后的输出。 - -``` -# docker run -d nginx -Unable to find image 'nginx' locally -``` - -我们可以看到第一条信息是,Docker不能在本地找到名叫nginx的镜像。这是因为当我们执行`docker run`命令时,告诉Docker运行一个基于nginx镜像的容器。既然Docker要启动一个基于特定镜像的容器,那么Docker首先需要知道那个指定镜像。在检查远程仓库之前,Docker首先检查本地是否存在指定名称的本地镜像。 - 
-因为系统是崭新的,不存在nginx镜像,Docker将选择从Docker仓库下载之。 - -``` -Pulling repository nginx -5c82215b03d1: Download complete -e2a4fb18da48: Download complete -58016a5acc80: Download complete -657abfa43d82: Download complete -dcb2fe003d16: Download complete -c79a417d7c6f: Download complete -abb90243122c: Download complete -d6137c9e2964: Download complete -85e566ddc7ef: Download complete -69f100eb42b5: Download complete -cd720b803060: Download complete -7cc81e9a118a: Download complete -``` - -这就是第二部分打印信息显示给我们的内容。默认,Docker会使用[Docker Hub](https://hub.docker.com/)仓库,该仓库由Docker公司维护。 - -和Github一样,在Docker Hub创建公共仓库是免费的,私人仓库就需要缴纳费用了。当然,部署你自己的Docker仓库也是可以实现的,事实上只需要简单地运行`docker run registry`命令就行了。但在这篇文章中,我们的重点将不是讲解如何部署一个定制的注册服务。 - -### 关闭并移除容器 - -在我们继续构建定制容器之前,我们先清理Docker环境,我们将关闭先前的容器,并移除它。 - -我们利用`docker`命令和`run`选项运行一个容器,所以,为了停止该相同的容器,我们简单地在执行`docker`命令时,使用`kill`选项,并指定容器名。 - -``` -# docker kill desperate_lalande -desperate_lalande -``` - -当我们再次执行`docker ps`,就不再有容器运行了 - -``` -# docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -``` - -但是,此时,我们这是停止了容器;虽然它不再运行,但仍然存在。默认情况下,`docker ps`只会显示正在运行的容器,如果我们附加`-a` (all) 标识,它会显示所有运行和未运行的容器。 - -``` -# docker ps -a -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -f6d31ab01fc9 5c82215b03d1 nginx -g 'daemon off 4 weeks ago Exited (-1) About a minute ago desperate_lalande -``` - -为了能完整地移除容器,我们在用`docker`命令时,附加`rm`选项。 - -``` -# docker rm desperate_lalande -desperate_lalande -``` - -虽然容器被移除了;但是我们仍拥有可用的**nginx**镜像(译者注:镜像缓存)。如果我们重新运行`docker run -d nginx`,Docker就无需再次拉取nginx镜像,即可启动容器。这是因为我们本地系统中已经保存了一个副本。 - -为了列出系统中所有的本地镜像,我们运行`docker`命令,附加`images`选项。 - -``` -# docker images -REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE -nginx latest 9fab4090484a 5 days ago 132.8 MB -``` - -## 构建我们自己的镜像 - -截至目前,我们已经使用了一些基础的Docker命令来开启,停止和移除一个预构建好的普通镜像。为了"Dockerize"这篇博客,我们需要构建我们自己的镜像,也就是创建一个**Dockerfile**。 - -在大多数虚拟机环境中,如果你想创建一个机器镜像,首先,你需要建立一个新的虚拟机,安装操作系统,安装应用,最后将其转换为一个模板或者镜像。但在Docker中,所有这些步骤都可以通过Dockerfile实现全自动。Dockerfile是向Docker提供构建指令去构建定制镜像的方式。在这一章节,我们将编写能用来部署这篇博客的定制Dockerfile。 - -### 理解应用 - -我们开始构建Dockerfile之前,第一步要搞明白,我们需要哪些东西来部署这篇博客。 - -博客本质上是由静态站点生成器生成的静态HTML页面,这个静态站点是我编写的,名为**hamerkop**。这个生成器很简单,它所做的就是生成该博客站点。所有的博客源码都被我放在了一个公共的[Github仓库](https://github.com/madflojo/blog)。为了部署这篇博客,我们要先从Github仓库把博客内容拉取下来,然后安装**Python**和一些**Python**模块,最后执行`hamerkop`应用。我们还需要安装**nginx**,来运行生成后的内容。 - -截止目前,这些还是一个简单的Dockerfile,但它却给我们展示了相当多的[Dockerfile语法]((https://docs.docker.com/v1.8/reference/builder/))。我们需要克隆Github仓库,然后使用你最喜欢的编辑器编写Dockerfile;我选择`vi` - -``` -# git clone https://github.com/madflojo/blog.git -Cloning into 'blog'... -remote: Counting objects: 622, done. -remote: Total 622 (delta 0), reused 0 (delta 0), pack-reused 622 -Receiving objects: 100% (622/622), 14.80 MiB | 1.06 MiB/s, done. -Resolving deltas: 100% (242/242), done. -Checking connectivity... done. 
-# cd blog/ -# vi Dockerfile -``` - -### FROM - 继承一个Docker镜像 - -第一条Dockerfile指令是`FROM`指令。这将指定一个现存的镜像作为我们的基础镜像。这也从根本上给我们提供了继承其他Docker镜像的途径。在本例中,我们还是从刚刚我们使用的**nginx**开始,如果我们想重新开始,我们可以通过指定`ubuntu:latest`来使用**Ubuntu** Docker镜像。 - -``` -## Dockerfile that generates an instance of http://bencane.com - -FROM nginx:latest -MAINTAINER Benjamin Cane -``` - -除了`FROM`指令,我还使用了`MAINTAINER`,它用来显示Dockerfile的作者。 - -Docker支持使用`#`作为注释,我将经常使用该语法,来解释Dockerfile的部分内容。 - -### 运行一次测试构建 - -因为我们继承了**nginx** Docker镜像,我们现在的Dockerfile也就包括了用来构建**nginx**镜像的[Dockerfile](https://github.com/nginxinc/docker-nginx/blob/08eeb0e3f0a5ee40cbc2bc01f0004c2aa5b78c15/Dockerfile)中所有指令。这意味着,此时我们可以从该Dockerfile中构建出一个Docker镜像,然后从该镜像中运行一个容器。虽然,最终的镜像和**nginx**镜像本质上是一样的,但是我们这次是通过构建Dockerfile的形式,然后我们将讲解Docker构建镜像的过程。 - -想要从Dockerfile构建镜像,我们只需要在运行`docker`命令的时候,加上**build**选项。 - -``` -# docker build -t blog /root/blog -Sending build context to Docker daemon 23.6 MB -Sending build context to Docker daemon -Step 0 : FROM nginx:latest - ---> 9fab4090484a -Step 1 : MAINTAINER Benjamin Cane - ---> Running in c97f36450343 - ---> 60a44f78d194 -Removing intermediate container c97f36450343 -Successfully built 60a44f78d194 -``` - -上面的例子,我们使用了`-t` (**tag**)标识给镜像添加"blog"的标签。本质上我们只是在给镜像命名,如果我们不指定标签,就只能通过Docker分配的**Image ID**来访问镜像了。本例中,从Docker构建成功的信息可以看出,**Image ID**值为`60a44f78d194`。 - -除了`-t`标识外,我还指定了目录`/root/blog`。该目录被称作"构建目录",它将包含Dockerfile,以及其他需要构建该容器的文件。 - -现在我们构建成功,下面我们开始定制该镜像。 - -### 使用RUN来执行apt-get - -用来生成HTML页面的静态站点生成器是用**Python**语言编写的,所以,在Dockerfile中需要做的第一件定制任务是安装Python。我们将使用Apt包管理器来安装Python包,这意味着在Dockerfile中我们要指定运行`apt-get update`和`apt-get install python-dev`;为了完成这一点,我们可以使用`RUN`指令。 - -``` -## Dockerfile that generates an instance of http://bencane.com - -FROM nginx:latest -MAINTAINER Benjamin Cane - -## Install python and pip -RUN apt-get update -RUN apt-get install -y python-dev python-pip -``` - -如上所示,我们只是简单地告知Docker构建镜像的时候,要去执行指定的`apt-get`命令。比较有趣的是,这些命令只会在该容器的上下文中执行。这意味着,即使容器中安装了`python-dev`和`python-pip`,但主机本身并没有安装这些。说的更简单点,`pip`命令将只在容器中执行,出了容器,`pip`命令不存在。 - -还有一点比较重要的是,Docker构建过程中不接受用户输入。这说明任何被`RUN`指令执行的命令必须在没有用户输入的时候完成。由于很多应用在安装的过程中需要用户的输入信息,所以这增加了一点难度。我们例子,`RUN`命令执行的命令都不需要用户输入。 - -### 安装Python模块 - -**Python**安装完毕后,我们现在需要安装Python模块。如果在Docker外做这些事,我们通常使用`pip`命令,然后参考博客Git仓库中名叫`requirements.txt`的文件。在之前的步骤中,我们已经使用`git`命令成功地将Github仓库"克隆"到了`/root/blog`目录;这个目录碰巧也是我们创建`Dockerfile`的目录。这很重要,因为这意味着Dokcer在构建过程中可以访问Git仓库中的内容。 - -当我们执行构建后,Docker将构建的上下文环境设置为指定的"构建目录"。这意味着目录中的所有文件都可以在构建过程中被使用,目录之外的文件(构建环境之外)是不能访问的。 - -为了能安装需要的Python模块,我们需要将`requirements.txt`从构建目录拷贝到容器中。我们可以在`Dockerfile`中使用`COPY`指令完成这一需求。 - -``` -## Dockerfile that generates an instance of http://bencane.com - -FROM nginx:latest -MAINTAINER Benjamin Cane - -## Install python and pip -RUN apt-get update -RUN apt-get install -y python-dev python-pip - -## Create a directory for required files -RUN mkdir -p /build/ - -## Add requirements file and run pip -COPY requirements.txt /build/ -RUN pip install -r /build/requirements.txt -``` - -在`Dockerfile`中,我们增加了3条指令。第一条指令使用`RUN`在容器中创建了`/build/`目录。该目录用来拷贝生成静态HTML页面需要的一切应用文件。第二条指令是`COPY`指令,它将`requirements.txt`从"构建目录"(`/root/blog`)拷贝到容器中的`/build/`目录。第三条使用`RUN`指令来执行`pip`命令;安装`requirements.txt`文件中指定的所有模块。 - -当构建定制镜像时,`COPY`是条重要的指令。如果在Dockerfile中不指定拷贝文件,Docker镜像将不会包含requirements.txt文件。在Docker容器中,所有东西都是隔离的,除非在Dockerfile中指定执行,否则容器中不会包括需要的依赖。 - -### 重新运行构建 - -现在,我们让Docker执行了一些定制任务,现在我们尝试另一次blog镜像的构建。 - -``` -# docker build -t blog /root/blog -Sending build context to Docker daemon 19.52 MB -Sending build context to Docker daemon -Step 0 : FROM nginx:latest - 
---> 9fab4090484a -Step 1 : MAINTAINER Benjamin Cane - ---> Using cache - ---> 8e0f1899d1eb -Step 2 : RUN apt-get update - ---> Using cache - ---> 78b36ef1a1a2 -Step 3 : RUN apt-get install -y python-dev python-pip - ---> Using cache - ---> ef4f9382658a -Step 4 : RUN mkdir -p /build/ - ---> Running in bde05cf1e8fe - ---> f4b66e09fa61 -Removing intermediate container bde05cf1e8fe -Step 5 : COPY requirements.txt /build/ - ---> cef11c3fb97c -Removing intermediate container 9aa8ff43f4b0 -Step 6 : RUN pip install -r /build/requirements.txt - ---> Running in c50b15ddd8b1 -Downloading/unpacking jinja2 (from -r /build/requirements.txt (line 1)) -Downloading/unpacking PyYaml (from -r /build/requirements.txt (line 2)) - -Successfully installed jinja2 PyYaml mistune markdown MarkupSafe -Cleaning up... - ---> abab55c20962 -Removing intermediate container c50b15ddd8b1 -Successfully built abab55c20962 -``` - -上述输出所示,我们可以看到构建成功了,我们还可以看到另外一个有趣的信息` ---> Using cache`。这条信息告诉我们,Docker在构建该镜像时使用了它的构建缓存。 - -### Docker构建缓存 - -当Docker构建镜像时,它不仅仅构建一个单独的镜像;事实上,在构建过程中,它会构建许多镜像。从上面的输出信息可以看出,在每一"步"执行后,Docker都在创建新的镜像。 - -``` - Step 5 : COPY requirements.txt /build/ - ---> cef11c3fb97c -``` - -上面片段的最后一行可以看出,Docker在告诉我们它在创建一个新镜像,因为它打印了**Image ID**;`cef11c3fb97c`。这种方式有用之处在于,Docker能在随后构建**blog**镜像时将这些镜像作为缓存使用。这很有用处,因为这样,Docker就能加速同一个容器中新构建任务的构建流程。从上面的例子中,我们可以看出,Docker没有重新安装`python-dev`和`python-pip`包,Docker则使用了缓存镜像。但是由于Docker并没有找到执行`mkdir`命令的构建缓存,随后的步骤就被一一执行了。 - -Docker构建缓存一定程度上是福音,但有时也是噩梦。这是因为使用缓存或者重新运行指令的决定在一个很狭窄的范围内执行。比如,如果`requirements.txt`文件发生了修改,Docker会在构建时检测到该变化,然后Docker会重新执行该执行那个点往后的所有指令。这得益于Docker能查看`requirements.txt`的文件内容。但是,`apt-get`命令的执行就是另一回事了。如果提供Python包的**Apt** 仓库包含了一个更新的python-pip包;Docker不会检测到这个变化,转而去使用构建缓存。这会导致之前旧版本的包将被安装。虽然对`python-pip`来说,这不是主要的问题,但对使用了某个致命攻击缺陷的包缓存来说,这是个大问题。 - -出于这个原因,抛弃Docker缓存,定期地重新构建镜像是有好处的。这时,当我们执行Docker构建时,我简单地指定`--no-cache=True`即可。 - -## 部署博客的剩余部分 - -Python包和模块安装后,接下来我们将拷贝需要用到的应用文件,然后运行`hamerkop`应用。我们只需要使用更多的`COPY` and `RUN`指令就可完成。 - -``` -## Dockerfile that generates an instance of http://bencane.com - -FROM nginx:latest -MAINTAINER Benjamin Cane - -## Install python and pip -RUN apt-get update -RUN apt-get install -y python-dev python-pip - -## Create a directory for required files -RUN mkdir -p /build/ - -## Add requirements file and run pip -COPY requirements.txt /build/ -RUN pip install -r /build/requirements.txt - -## Add blog code nd required files -COPY static /build/static -COPY templates /build/templates -COPY hamerkop /build/ -COPY config.yml /build/ -COPY articles /build/articles - -## Run Generator -RUN /build/hamerkop -c /build/config.yml -``` - -现在我们已经写出了剩余的构建指令,我们再次运行另一次构建,并确保镜像构建成功。 - -``` -# docker build -t blog /root/blog/ -Sending build context to Docker daemon 19.52 MB -Sending build context to Docker daemon -Step 0 : FROM nginx:latest - ---> 9fab4090484a -Step 1 : MAINTAINER Benjamin Cane - ---> Using cache - ---> 8e0f1899d1eb -Step 2 : RUN apt-get update - ---> Using cache - ---> 78b36ef1a1a2 -Step 3 : RUN apt-get install -y python-dev python-pip - ---> Using cache - ---> ef4f9382658a -Step 4 : RUN mkdir -p /build/ - ---> Using cache - ---> f4b66e09fa61 -Step 5 : COPY requirements.txt /build/ - ---> Using cache - ---> cef11c3fb97c -Step 6 : RUN pip install -r /build/requirements.txt - ---> Using cache - ---> abab55c20962 -Step 7 : COPY static /build/static - ---> 15cb91531038 -Removing intermediate container d478b42b7906 -Step 8 : COPY templates /build/templates - ---> ecded5d1a52e -Removing intermediate container ac2390607e9f -Step 9 : COPY hamerkop /build/ - 
---> 59efd1ca1771 -Removing intermediate container b5fbf7e817b7 -Step 10 : COPY config.yml /build/ - ---> bfa3db6c05b7 -Removing intermediate container 1aebef300933 -Step 11 : COPY articles /build/articles - ---> 6b61cc9dde27 -Removing intermediate container be78d0eb1213 -Step 12 : RUN /build/hamerkop -c /build/config.yml - ---> Running in fbc0b5e574c5 -Successfully created file /usr/share/nginx/html//2011/06/25/checking-the-number-of-lwp-threads-in-linux -Successfully created file /usr/share/nginx/html//2011/06/checking-the-number-of-lwp-threads-in-linux - -Successfully created file /usr/share/nginx/html//archive.html -Successfully created file /usr/share/nginx/html//sitemap.xml - ---> 3b25263113e1 -Removing intermediate container fbc0b5e574c5 -Successfully built 3b25263113e1 -``` - -### 运行定制的容器 - -成功的一次构建后,我们现在就可以通过运行`docker`命令和`run`选项来运行我们定制的容器,和之前我们启动nginx容器一样。 - -``` -# docker run -d -p 80:80 --name=blog blog -5f6c7a2217dcdc0da8af05225c4d1294e3e6bb28a41ea898a1c63fb821989ba1 -``` - -我们这次又使用了`-d` (**detach**)标识来让Docker在后台运行。但是,我们也可以看到两个新标识。第一个新标识是`--name`,这用来给容器指定一个用户名称。之前的例子,我们没有指定名称,因为Docker随机帮我们生成了一个。第二个新标识是`-p`,这个标识允许用户从主机映射一个端口到容器中的一个端口。 - -之前我们使用的基础**nginx**镜像分配了80端口给HTTP服务。默认情况下,容器内的端口通道并没有绑定到主机系统。为了让外部系统能访问容器内部端口,我们必须使用`-p`标识将主机端口映射到容器内部端口。上面的命令,我们通过`-p 8080:80`语法将主机80端口映射到容器内部的80端口。 - -经过上面的命令,我们的容器似乎成功启动了,我们可以通过执行`docker ps`核实。 - -``` -# docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -d264c7ef92bd blog:latest nginx -g 'daemon off 3 seconds ago Up 3 seconds 443/tcp, 0.0.0.0:80->80/tcp blog -``` - -## 总结 - -截止目前,我们拥有了正在运行的定制Docker容器。虽然在这篇文章中,我们只接触了一些Dockerfile指令用法,但是我们还是要讨论所有的指令。我们可以检查[Docker's reference page](https://docs.docker.com/v1.8/reference/builder/)来获取所有的Dockerfile指令用法,那里对指令的用法说明得很详细。 - -另一个比较好的资源是[Dockerfile Best Practices page](https://docs.docker.com/engine/articles/dockerfile_best-practices/),它有许多构建定制Dockerfile的最佳练习。有些技巧非常有用,比如战略性地组织好Dockerfile中的命令。上面的例子中,我们将`articles`目录的`COPY`指令作为Dockerfile中最后的`COPY`指令。这是因为`articles`目录会经常变动。所以,将那些经常变化的指令尽可能地放在最后面的位置,来最优化那些可以被缓存的步骤。 - -通过这篇文章,我们涉及了如何运行一个预构建的容器,以及如何构建,然后部署定制容器。虽然关于Docker你还有许多需要继续学习的地方,但我想这篇文章给了你如何继续开始的好建议。当然,如果你认为还有一些需要继续补充的内容,在下面评论即可。 - --------------------------------------- -via:http://bencane.com/2015/12/01/getting-started-with-docker-by-dockerizing-this-blog/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+bencane%2FSAUo+%28Benjamin+Cane%29 - -作者:Benjamin Cane - -译者:[su-kaiyao](https://github.com/su-kaiyao) - -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - diff --git a/translated/tech/20160217 How to Enable Multiple PHP-FPM Instances with Nginx or Apache.md b/translated/tech/20160217 How to Enable Multiple PHP-FPM Instances with Nginx or Apache.md deleted file mode 100644 index befa31312f..0000000000 --- a/translated/tech/20160217 How to Enable Multiple PHP-FPM Instances with Nginx or Apache.md +++ /dev/null @@ -1,119 +0,0 @@ -启用Nginx/Apache的PHP-FPM多实例 -================================================================================ - -PHP-FPM作为FastCGI进程管理器而广为熟知,它是PHP FastCGI实现的改进,带有更为有用的功能,用于处理高负载的服务器和网站。下面列出其中一些功能: -### 新功能 ### - - - 拥有具有优雅启动/停止选项的高级进程管理能力。 - - 可以以监听不同端口以及使用不同PHP配置的不同用户身份/组身份来运行进程。 - - 错误日志记录。 - - 支持上传加速。 - - 用于在处理一些耗时任务时结束请求和清空所有数据的特别功能。 - - 同时支持动态和静态子进程重生。 - - 支持IP地址限制。 - -在本文中,我将要讨论的是,在运行EA3下CPanel 11.52的CentOS 7服务器上与Nginx和Apache一起安装PHP-FPM,以及如何来通过CPanel管理这些安装好的多个PHP-FPM实例。 -Before going to the installation procedures, let us take a look on the pre-requisites. 
- -### 先决条件 ### - - 1. 启用 Mod_proxy_fcgi模块 - 2. 启用 MPM_Event - -由于我们要将PHP-FPM安装到一台EA3服务器,我们需要运行EasyApache来编译Apache以启用这些模块。 - -你们可以参考我以前写的,关于如何在Apache服务器上安装Nginx作为反向代理的文档来确认Nginx的安装。 - -这里,我将再次简述那些安装步骤。具体细节,你可以参考我之前写的**(如何在CentOS 7/CPanel服务器上配置Nginx反向代理)**一文。 - - 步骤 1:安装Epel仓库 - 步骤 2:安装nDeploy RPM仓库,这是此次安装中最为**重要**的步骤。 - 步骤 3:使用yum从nDeploy仓库安装nDeploy和Nginx插件。 - 步骤 4:启用/配置Nginx为反向代理。 - -完成这些步骤后,下面为服务器中所有可用PHP版本安装PHP-FPM包,EA3使用remi仓库来安装这些包。你可以运行这个nDeploy脚本来下载所有的包。 - - root@server1 [~]# /opt/nDeploy/scripts/easy_php_setup.sh - Loaded plugins: fastestmirror, tsflags, universal-hooks - EA4 | 2.9 kB 00:00:00 - base | 3.6 kB 00:00:00 - epel/x86_64/metalink | 9.7 kB 00:00:00 - epel | 4.3 kB 00:00:00 - extras | 3.4 kB 00:00:00 - updates | 3.4 kB 00:00:00 - (1/2): epel/x86_64/updateinfo | 460 kB 00:00:00 - (2/2): epel/x86_64/primary_db - -运行该脚本将为PHP 54,PHP 55,PHP 56和PHP 70安装所有这些FPM包。 - - Installed Packages - php54-php-fpm.x86_64 5.4.45-3.el7.remi @remi - php55-php-fpm.x86_64 5.5.31-1.el7.remi @remi - php56-php-fpm.x86_64 5.6.17-1.el7.remi @remi - php70-php-fpm.x86_64 7.0.2-1.el7.remi @remi - -在以上安装完成后,你需要为Apache启用PHP-FPM SAPI。你可以运行下面这个脚本来启用PHP-FPM实例。 - - root@server1 [~]# /opt/nDeploy/scripts/apache_php-fpm_setup.sh enable - mod_proxy_fcgi.c - Please choose one default PHP version from the list below - PHP70 - PHP56 - PHP54 - PHP55 - Provide the exact desired version string here and press ENTER: PHP54 - ConfGen:: lxblogger - ConfGen:: blogr - ConfGen:: saheetha - ConfGen:: satest - which: no cagefsctl in (/usr/local/jdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/local/bin:/usr/X11R6/bin:/root/bin) - info [rebuildhttpdconf] Missing owner for domain server1.centos7-test.com, force lookup to root - Built /usr/local/apache/conf/httpd.conf OK - Waiting for “httpd” to restart gracefully …waiting for “httpd” to initialize …… - …finished. 
- -它会问你需要运行哪个PHP版本作为服务器默认版本,你可以输入那些细节内容,然后继续配置并为现存的域生成虚拟主机文件。 - -我选择了PHP 54作为我服务器上的默认PHP-FPM版本。 - -![confirm-php-fpm](http://blog.linoxide.com/wp-content/uploads/2016/01/confirm-php-fpm-1024x525.png) - -虽然服务器配置了PHP-FPM 54,但是我们可以通过CPanel为各个独立的域修改PHP-FPM实例。 - -下面我将通过一些截图来为你们说明一下,怎样通过CPanel为各个独立域修改PHP-FPM实例。 - -Nginx插件的安装将为你的域的CPanel提供一个Nginx Webstack图标,你可以点击该图标来配置你的Web服务器。我已经登陆进了我其中的一个CPanel来配置相应的Web服务器。 - -请看这些截图。 - -![nginx webstack](http://blog.linoxide.com/wp-content/uploads/2016/01/nginx-webstack.png) - -![nginxicon1](http://blog.linoxide.com/wp-content/uploads/2016/01/nginxicon1-1024x253.png) - -现在,你可以根据需要为选中的主域配置web服务器(这里,我已经选择了主域saheetha.com)。我已经继续通过自动化配置选项来进行了,因为我不需要添加任何手动设置。 - -![nginx_auto_proxy](http://blog.linoxide.com/wp-content/uploads/2016/01/nginx_auto_proxy-1024x408.png) - -当Nginx配置完后,你可以在这里为你的域选择PHP-FPM实例。 - -![php-fpm1](http://blog.linoxide.com/wp-content/uploads/2016/01/php-fpm1-1024x408.png) - -![php54](http://blog.linoxide.com/wp-content/uploads/2016/01/php54-1024x169.png) - -![php55](http://blog.linoxide.com/wp-content/uploads/2016/01/php55.png) - -就像你在截图中所看到的,我服务器上的默认PHP-FPM是**PHP 54**,而我正要将我的域的PHP-FPM实例单独修改成**PHP 55**。当你为你的域修改PHP-FPM后,你可以通过访问**phpinfo**页面来确认。 - -谢谢你们参考本文,我相信这篇文章会给你提供不少信息和帮助。我会为你们推荐关于这个内容的有价值的评论 :)。 - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/enable-multiple-php-fpm-instances-nginx-apache/ - -作者:[Saheetha Shameer][a] -译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/saheethas/ diff --git a/translated/tech/20160218 7 Steps to Start Your Linux SysAdmin Career.md b/translated/tech/20160218 7 Steps to Start Your Linux SysAdmin Career.md deleted file mode 100644 index e6b4fa62ab..0000000000 --- a/translated/tech/20160218 7 Steps to Start Your Linux SysAdmin Career.md +++ /dev/null @@ -1,67 +0,0 @@ -七步开始你的 Linux 系统管理员生涯 -=============================================== - - -Linux 现在是个大热门。每个人都在寻求 Linux 才能。招聘人员对有 Linux 经验的人求贤若渴,还有无数的职位虚位以待。但是如果你是 Linux 新手,又想要赶上这波热潮,该从何开始下手呢? - -1. 安装 Linux - - 这应该是不言而喻的,但学习 Linux 的第一关键就是安装 Linux。LFS101x 和 LFS201 课程都包含第一次安装和配置 Linux 的详细内容。 - -2. 完成 LFS101x 课程 - - 如果你是 Linux 完完全全的新手,最佳的起点是免费的 Linux 课程 [LFS101x Introduction](https://www.edx.org/course/introduction-linux-linuxfoundationx-lfs101x-2)。这个在线课程在 edX.org,探索 Linux 系统管理员和终端用户常用的各种工具和技能以及日常 Linux 工作环境。该课程是为有一定经验但较少或没有接触过 Linux 的电脑用户设计的,不论他们是在个人还是企业环境中工作。这个课程会从图形界面和命令行教会你有用的 Linux 知识,让你能够了解主流的 Linux 发行版。 - -3. 看看 LFS201 课程 - - 在你完成 LFS101x 之后,你就准备好开始进入 Linux 更加复杂的任务了,这是成为一名专业的系统管理员所必须的。为了掌握这些技能,你应该看看 [LFS201 Essentials of Linux System Administration](http://training.linuxfoundation.org/linux-courses/system-administration-training/essentials-of-system-administration) 这个课程。该课程对每个话题进行了深度的解释和介绍,还有大量的练习和实验,帮助你获得相关主题实际的上手经验。 - - 如果你更愿意有个教练或者你的雇主对将你培养成 Linux 系统管理员有兴趣,你可能会对 LFS220 Linux System Administration 有兴趣。这个课程有 LFS201 中所有的主题,但是它是由专家专人教授的,帮助你进行实验以及解答你在课程主题中的问题。 - -4. 练习! - - 熟能生巧,和对任何乐器或运动适用一样,这对 Linux 来说也一样适用。在你安装 Linux 之后,经常使用它。一遍遍地练习关键任务,直到你不需要参考材料也能轻而易举地完成。练习命令行和图形界面的输入输出。这些练习能够保证你掌握成为成功的 Linux 系统管理员所必需的知识和技能。 - -5. 获得认证 - - 在你完成 LFS201 或 LFS220 并且充分练习之后,你现在已经准备好获得系统管理员的认证了。你需要这个证书,因为你需要向雇主证明你拥有一名专业 Linux 系统管理员必需的技能。 - - 现在有一些不同的 Linux 证书,它们有它们的独到之处。但是,它们里大部分不是在特定发行版(如红帽)上认证,就是纯粹的知识测试,没有演示 Linux 的实际技能。Linux 基金会认证系统管理员(Linux Foundation Certified System Administrator)证书对想要一个灵活的,有意义的初级证书的人来说是个优秀的替代。 - -6. 
参与进来 - - 如果你所在的地方有本地 Linux 用户组(Linux Users Group,LUG)的话,这时候你可能还想要考虑加入他们。这些组织通常由各种年龄和经验水平的人组成,所以不管你的 Linux 经验水平如何,你都能找到和你类似技能水平的人互助,或是更高水平的 Linux 用户来解答你的问题以及介绍有用的资源。要想知道你附近有没有 LUG,上 meet.com 看看,或是附近的大学,又或是上网搜索一下。 - - 还有不少在线社区可以在你学习 Linux 的时候帮助你。这些站点和社区向 Linux 新手和有经验的管理员都能够提供帮助和支持: - - - [Linux Admin subreddit](https://www.reddit.com/r/linuxadmin) - - - [Linux.com](http://www.linux.com/) - - - [training.linuxfoundation.org](http://training.linuxfoundation.org/) - - - [http://community.ubuntu.com/help-information/](http://community.ubuntu.com/help-information/) - - - [https://forums.opensuse.org/forum.php](https://forums.opensuse.org/forum.php) - - - [http://wiki.centos.org/Documentation](http://wiki.centos.org/Documentation) - -7. 学会热爱文档 - - 最后但同样重要的是,如果你困在 Linux 的某些地方,别忘了 Linux 包含的文档。使用命令 man(manual,手册),info 和 help,你从系统内就可以找到 Linux 几乎所有方面的信息。这些内置资源的用处再夸大也不为过,你会发现你在生涯中始终会用到,所以你可能最好早点掌握使用它们。 - - 想要了解更多开始你 Linux IT 生涯的信息?查看我们免费的电子书“[开始你 Linux IT 生涯的简短指南](http://training.linuxfoundation.org/sysadmin-it-career-guide)”。 - -[立刻下载](http://training.linuxfoundation.org/sysadmin-it-career-guide) - ------------------------------------------------------------------------------- - -via: http://www.linux.com/news/featured-blogs/191-linux-training/834644-7-steps-to-start-your-linux-sysadmin-career - -作者:[linux.com][a] -译者:[alim0x](https://github.com/alim0x) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:linux.com diff --git a/translated/tech/20160218 9 Key Trends in Hybrid Cloud Computing.md b/translated/tech/20160218 9 Key Trends in Hybrid Cloud Computing.md deleted file mode 100644 index 5efec75ec3..0000000000 --- a/translated/tech/20160218 9 Key Trends in Hybrid Cloud Computing.md +++ /dev/null @@ -1,76 +0,0 @@ -混合云计算的9大关键趋势 -======================================== - -自从几年前云计算的概念受到IT界的关注以来,公有云、私有云和混合云这三种云计算方式都有了可观的演进。其中混合云计算方式是最热门的云计算方式,在接受调查的公司中,有[88%的公司](https://www.greenhousedata.com/blog/hybrid-continues-to-be-most-popular-cloud-option-adoption-accelerating)将混合云计算摆在至关重要的地位。 - -混合云计算的疾速演进意味着一两年前的传统观念已经过时了。为此,我们询问了几个行业分析师,混合云在2016年的走势将会如何,我们得到了几个比较有意思的答案。 - -1. **2016年可能是我们将混合云投入使用的一年。** - - 混合云从本质上来说依赖于私有云,这对企业来说是比较难实现的。事实上,亚马逊,谷歌和微软的公有云已经进行了大量的投资,并且起步也比较早。私有云拖了混合云发展和使用的后腿。 - - 私有云没有得到这么多的投资,这是有私有云的性质决定的。私有云意味着维护和投资你自己的数据中心。许多公有云提供商正在推动企业减少或者消除他们的数据中心。 - - 然而,得益于OpenStack的发展和微软的 Azure Stack ,这两者基本上就是封装在一个盒子里的私有云,我们将会看到私有云慢慢追上公有云的发展步伐。支持混合云的工具、基础设施和架构也变得更加健壮。 - - -2. **容器,微服务和unikernels将会促进混合云的发展。** - - 分析师预言,到2016年底,这些原生云技术会或多或少成为主流的。这些云技术正在快速成熟,将会成为虚拟机的一个替代品,而虚拟机需要更多的资源。 - - 更重要的是,他们既能工作在在线场景,也能工作在离线场景。容器化和编排允许快速的扩大规模,进行公有云和私有云之间的服务迁移,使你能够更容易移动你的服务。 - -3. **数据和相关性占据核心舞台。** - - 所有的云计算方式都处在发展模式。这使得云计算变成了一个技术类的故事。咨询公司[Avoa](http://avoa.com/2016/01/01/2016-is-the-year-of-data-and-relevance/)称,随着云趋于成熟,数据和相关性变得越来越重要。起初,云计算和大数据都是关于怎么吸收尽可能多的数据,然后他们担心如何处理这海量的数据。 - - 2016年,相关组织将会继续锤炼如何进行数据收集和使用的相关技术。在必须处理的技术和文化方面仍然有待提高。但是2016年应该重新将关注点放在从各个方面考虑的数据重要性上,发现最相关的信息,而不只是数据的数量。 - -4. **云服务将超越按需工作负载。** - - AWS(Amazon Web Services)起初是提供给程序员或者是开发人员能够快速启动虚拟机,做一些工作然后离线的一个地方。这就是按需使用的本质。要让这些服务持续运行,几乎需要全天候工作。 - - 然而,IT组织正开始作为服务代理,为内部用户提供各种IT服务。可以是内部IT服务,公有云基础架构提供商,平台服务和软件服务。 - - 他们将越来越多的认识到想云管理平台这样的工具的价值。云管理平台可以提供针对不同服务的基于策略的一致性管理。他们也将看到像提高可移植性的容器等技术的价值。然而,云服务代理,在不同云之间快速移动的工作负载进行价格套利或者相关原因的意义上来讲,仍然是行不通的。 - -5. 
**服务提供商转变成了云服务提供商。**
-
-  到目前为止,购买云服务基本上还是直销模式。AWS EC2 服务的使用者通常也就是购买者,他们或者通过官方认证渠道购买,或者来自影子 IT。但是随着云服务越来越全面、提供的服务菜单越来越复杂,会有越来越多的人转向经销商,服务提供商将转变成为他们的 IT 服务的购买者。
-
-  2nd Watch(一家为企业提供云管理服务的公司,也是 AWS 的首选合作伙伴)最近的一项调查发现,在美国,如果购买过程能变得不那么复杂,将近 85% 的 IT 高管愿意支付一笔小的溢价,从渠道商那里购买公有云服务。根据调查,这 85% 的高管中有五分之四愿意支付额外的 15% 甚至更多。三分之一的受访高管表示,他们在购买、使用和管理公有云服务时需要帮助。
-
-6. **物联网和云对于 2016 年的意义,好比移动和云对于 2012 年的意义。**
-
-  物联网获得了广泛的关注,更重要的是,物联网已经走出测试场景,进入了实际应用。云的分布式特性使得云成为了物联网非常重要的一部分。对于工业物联网,即那些与后端系统交互的机械和重型设备,混合云将会是最自然的载体:连接、数据采集和处理将会发生在混合云环境中,这得益于私有云在安全和隐私方面的好处。
-
-7. **NIST 对云的定义开始瓦解。**
-
-  2011 年,美国国家标准与技术研究院(NIST)发布了“[NIST 对于云计算的定义](http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf)”(PDF),这个定义成为了私有云、公有云、混合云以及各种 aaS(即服务)模式的标准定义。
-
-  然而随着时间的推移,这些定义开始改变。IaaS 变得更加复杂,开始支持 OpenStack、[Swift](https://wiki.openstack.org/wiki/Swift) 对象存储和 Neutron 网络这样的项目。PaaS 似乎正在消退,因为 PaaS 和传统的中间件开发几乎无异。SaaS,即只通过浏览器进行访问的应用,也正在失去发展动力,因为许多应用和服务提供了多种多样的云接口,你可以通过各种手段调用接口,而不仅仅是通过浏览器。
-
-8. **分析变得更加重要。**
-
-  对于混合云计算来说,分析将会成为一个巨大的增长机遇。云计算具有规模大、灵活性高的优势,使得它非常适合需要海量数据的分析工作。对于某些分析方式,比如涉及高度敏感数据的分析,私有云仍占主导地位,但私有云也是混合云的一部分。因此,无论如何,混合云计算都是赢家。
-
-9. **安全仍然是一个非常紧迫的问题。**
-
-  随着混合云计算在 2016 年的发展,以及物联网和容器等新技术的引入,可被攻破的薄弱点也变得更多,从而更容易导致数据泄露。人们往往先急于采用新技术,之后才考虑安全性;再加上缺乏经验的工程师不去考虑系统的安全问题,这样的事情屡见不鲜,而你总有一天会尝到灾难的后果。
-
-  当一项新技术出现时,管理规范总是落后的;往往是安全问题出现之后,我们才会考虑去保护这项技术。容器就是一个很鲜明的例子。你可以从 Docker 下载各种示例容器,但是你知道你下载的东西来自哪里么?在人们对容器内容不知情的情况下下载并运行了容器之后,Docker 不得不重新加上安全验证。
-
-  像 Path 和 Snapchat 这样的移动技术在智能手机市场火起来之后也出现了重大的安全问题。一项新技术被恶意利用无可避免,所以安全研究人员需要通过各种手段来保证新技术的安全性,即便这很可能是在部署之后才能进行的。
-
-
-------------------------------------------------------------------------------
-
-via: http://www.datamation.com/cloud-computing/9-key-trends-in-hybrid-cloud-computing.html
-
-作者:[Andy Patrizio][a]
-译者:[棣琦](https://github.com/sonofelice)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.datamation.com/author/Andy-Patrizio-90720.html
diff --git a/translated/tech/20160218 How to Best Manage Encryption Keys on Linux.md b/translated/tech/20160218 How to Best Manage Encryption Keys on Linux.md
deleted file mode 100644
index b833d94658..0000000000
--- a/translated/tech/20160218 How to Best Manage Encryption Keys on Linux.md
+++ /dev/null
@@ -1,110 +0,0 @@
-Linux 如何最好地管理加密密钥
-=============================================
-
-![](http://www.linux.com/images/stories/41373/key-management-diagram.png)
-
-存储 SSH 加密密钥以及记住密码一直是一个让人头疼的问题。但是不幸的是,在当前这个充满了恶意黑客和攻击的世界里,基本的安全防范是必不可少的。对于大量的普通用户来说,这相当于简单地记住密码,也可能是寻找一个好用的程序去存储密码,正如我们会提醒这些用户不要在每个网站都使用相同的密码一样。但是对于身处各个 IT 领域的我们来说,我们需要把这件事提高一个层次。我们不得不处理加密密钥,比如 SSH 密钥,而不只是密码。
-
-设想一个场景:我有一个运行在云上的服务器,用作我的主 git 库。我有很多台工作电脑,所有电脑都需要登录这台中央服务器去 push 与 pull。我设置 git 使用 SSH。当 git 使用 SSH 时,git 实际上以与通过 SSH 命令登录服务器相同的方式登录,就好像你通过 SSH 打开了一个服务器的命令行。为了配置好所有内容,我在我的 .ssh 目录下创建了一个配置文件,其中包含一个主机项,记录了服务器名字、主机名、登录用户,以及密钥文件的路径。之后我可以通过输入如下命令来测试这个配置。
-
->ssh gitserver
-
-很快我就得到了服务器的 bash shell。现在我可以配置 git 使用相同的主机项与存储的密钥来登录服务器。很简单,只是有一个问题:对于每一台用来登录服务器的电脑,我都需要有一个密钥文件。那意味着需要不止一个密钥文件。当前这台电脑和我的其他电脑都存储有这些密钥文件。同样的,普通用户每天要面对的密码就已经够多了;对于我们 IT 人员来说,最后很容易就积攒下一大堆密钥文件。怎么办呢?
-
-## 清理
-
-在开始使用程序去帮助你管理你的密钥之前,你必须先在两个方面打下一些基础:你的密码应该怎么处理,以及我们提出的问题是否有意义。同时,第一点,也是最重要的,你要明白你的公钥和私钥各自的使用位置。我将假设你知道:
-
-1. 公钥和私钥之间的差异;
-
-2. 为什么你不可以从公钥生成私钥,但是反过来可以?
-
-3. `authorized_keys` 文件的目的以及它的内容;
-
-4. 
你如何使用私钥去登陆服务器,其中服务器上的 `authorized_keys` 文件中存有相应的公钥; - -这里有一个例子。当你在亚马逊的网络服务上创建一个云服务器,你必须提供一个 SSH 密码,用于连接你的服务器。每一个密钥有一个公开的部分,和私密的部分。因为你想保持你的服务器安全,乍看之下你可能要将你的私钥放到服务器上,同时你自己带着公钥。毕竟,你不想你的服务器被公开访问,对吗?但是实际上这是逆向的。 - -你把自己的公钥放到 AWS 服务器,同时你持有你自己的私钥用于登陆服务器。你保护私钥,同时保持私钥在自己一方,而不是在一些远程服务器上,正如上图中所示。 - -原因如下:如果公钥公之于众,他们不可以登陆服务器,因为他们没有私钥。进一步说,如果有人成功攻入你的服务器,他们所能找到的只是公钥。你不可以从公钥生成私钥。同时如果你在其他的服务器上使用相同的密钥,他们不可以使用它去登陆别的电脑。 - -这就是为什么你把你自己的公钥放到你的服务器上以便通过 SSH 登陆这些服务器。你持有这些私钥,不要让这些私钥脱离你的控制。 - -但是还有麻烦。试想一下我 git 服务器的例子。我要做一些决定。有时我登陆架设在别的地方的开发服务器。在开发服务器上,我需要连接我的 git 服务器。如何使我的开发服务器连接 git 服务器?通过使用私钥。同时这里面还有麻烦。这个场景需要我把私钥放置到一个架设在别的地方的服务器上。这相当危险。 - -一个进一步的场景:如果我要使用一个密钥去登陆许多的服务器,怎么办?如果一个入侵者得到这个私钥,他或她将拥有私钥,并且得到服务器的全部虚拟网络的权限,同时准备做一些严重的破坏。这一点也不好。 - -同时那当然会带来一个别的问题,我真的应该在这些其他服务器上使用相同的密钥?因为我刚才描述的,那会非常危险的。 - -最后,这听起来有些混乱,但是有一些简单的解决方案。让我们有条理地组织一下: - -(注意你有很多地方需要密钥登陆服务器,但是我提出这个作为一个场景去向你展示当你处理密钥的时候你面对的问题) - -## 关于口令句 - -当你创建你的密钥时,你可以选择是否包含一个口令字,这个口令字会在使用私钥的时候是必不可少的。有了这个口令字,私钥文件本身会被口令字加密。例如,如果你有一个公钥存储在服务器上,同时你使用私钥去登陆服务器的时候,你会被提示,输入口令字。没有口令字,这个密钥是无法使用的。或者,你可以配置你的密钥不需要口令字。然后所有你需要的只是用于登陆服务器的密钥文件。 - -普遍上,不使用口令字对于用户来说是更容易的,但是我强烈建议在很多情况下使用口令字,原因是,如果私钥文件被偷了,偷密钥的人仍然不可以使用它,除非他或者她可以找到口令字。在理论上,这个将节省你很多时间,因为你可以在攻击者发现口令字之前,从服务器上删除公钥文件,从而保护你的系统。还有一些别的原因去使用口令字,但是这个原因对我来说在很多场合更有价值。(举一个例子,我的 Android 平板上有 VNC 软件。平板上有我的密钥。如果我的平板被偷了之后,我会马上从服务器上删除公钥,使得它的私钥没有作用,无论有没有口令字。)但是在一些情况下我不使用口令字,是因为我正在登陆的服务器上没有什么有价值的数据。它取决于情境。 - -## 服务器基础设施 - -你如何设置自己服务器的基础设置将会影响到你如何管理你的密钥。例如,如果你有很多用户登陆,你将需要决定每个用户是否需要一个单独的密钥。(普遍来说,他们应该;你不会想要用户之间共享私钥。那样当一个用户离开组织或者失去信任时,你可以删除那个用户的公钥,而不需要必须给其他人生成新的密钥。相似地,通过共享密钥,他们能以其他人的身份登录,这就更坏了。)但是另外一个问题,你如何配置你的服务器。你是否使用工具,比如 Puppet,配置大量的服务器?同时你是否基于你自己的镜像创建大量的服务器?当你复制你的服务器,你是否需要为每个人设置相同的密钥?不同的云服务器软件允许你配置这个;你可以让这些服务器使用相同的密钥,或者给每一个生成一个新的密钥。 - -如果你在处理复制的服务器,它可能导致混淆如果用户需要使用不同的密钥登陆两个不同的系统。但是另一方面,服务器共享相同的密钥会有安全风险。或者,第三,如果你的密钥有除了登陆之外的需要(比如挂载加密的驱动),之后你会在很多地方需要相同的密钥。正如你所看到的,你是否需要在不同的服务器上使用相同的密钥不是我为你做的决定;这其中有权衡,而且你需要去决定什么是最好的。 - -最终,你可能会有: - -- 需要登录的多个服务器 - -- 多个用户登陆不同的服务器,每个都有自己的密钥 - -- 每个用户多个密钥当他们登陆不同的服务器的时候 - -(如果你正在别的情况下使用密钥,相同的普遍概念会应用于如何使用密钥,需要多少密钥,他们是否共享,你如何处理密钥的私密部分和公开部分。) - -## 安全方法 - -知道你的基础设施和独一无二的情况,你需要组合一个密钥管理方案,它会引导你去分发和存储你的密钥。比如,正如我之前提到的,如果我的平板被偷了,我会从我服务器上删除公钥,期望在平板在用于访问服务器。同样的,我会在我的整体计划中考虑以下内容: - -1. 移动设备上的私钥没有问题,但是必须包含口令字; - -2. 必须有一个方法可以快速地从服务器上删除公钥。 - -在你的情况中,你可能决定,你不想在自己经常登录的系统上使用口令字;比如,这个系统可能是一个开发者一天登录多次的测试机器。这没有问题,但是你需要调整你的规则。你可能添加一条规则,不可以通过移动设备登录机器。换句话说,你需要根据自己的状况构建你的协议,不要假设某个方案放之四海而皆准。 - -## 软件 - -至于软件,毫不意外,现实世界中并没有很多好的,可用的软件解决方案去存储和管理你的私钥。但是应该有吗?考虑到这个,如果你有一个程序存储你所有服务器的全部密钥,并且这个程序被一个核心密钥锁住,那么你的密钥就真的安全了吗?或者,同样的,如果你的密钥被放置在你的硬盘上,用于 SSH 程序快速访问,那样一个密钥管理软件是否真正提供了任何保护吗? 
- -但是对于整体基础设施和创建,管理公钥,有许多的解决方案。我已经提到了 Puppet。在 Puppet 的世界中,你创建模块来以不同的方式管理你的服务器。这个想法是服务器是动态的,而且不必要准确地复制其他机器。[这里有一个聪明的途径](http://manuel.kiessling.net/2014/03/26/building-manageable-server-infrastructures-with-puppet-part-4/),它在不同的服务器上使用相同的密钥,但是对于每一个用户使用不同的 Puppet 模块。这个方案可能适合你,也可能不适合你。 - -或者,另一个选项就是完全换挡。在 Docker 的世界中,你可以采取一个不同的方式,正如[关于 SSH 和 Docker 博客](http://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/)所描述的。 - -但是怎么样管理私钥?如果你搜索,你无法找到很多的软件选择,原因我之前提到过;密钥存放在你的硬盘上,一个管理软件可能无法提到很多额外的安全。但是我确实使用这种方法来管理我的密钥: - -首先,我的 `.ssh/config` 文件中有很多的主机项。我有一个我登陆的主机项,但是有时我对于一个单独的主机有不止一项。如果我有很多登陆,那种情况就会发生。对于架设我的 git 库的服务器,我有两个不同的登陆项;一个限制于 git,另一个为普遍目的的 bash 访问。这个为 git 设置的登陆选项在机器上有极大的限制。还记得我之前说的关于我存在于远程开发机器上的 git 密钥吗?好了。虽然这些密钥可以登陆到我其中一个服务器,但是使用的账号是被严格限制的。 - -其次,大部分的私钥都包含口令字。(对于处理不得不多次输入口令字的情况,考虑使用 [ssh-agent](http://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/)。) - -再次,我确实有许多服务器,我想要更加小心地防御,并且在我 host 文件中并没有这样的项。这更加接近于社交工程方面,因为密钥文件还存在于那里,但是可能需要攻击者花费更长的时间去定位这个密钥文件,分析出来他们攻击的机器。在这些例子中,我只是手动打出来长的 SSH 命令。(这真不怎么坏。) - -同时你可以看出来我没有使用任何特别的软件去管理这些私钥。 - -## 无放之四海而皆准的方案 - -我们偶然间收到 linux.com 的问题,关于管理密钥的好软件的建议。但是让我们后退一步。这个问题事实上需要重新定制,因为没有一个普适的解决方案。你问的问题基于你自己的状况。你是否简单地尝试找到一个位置去存储你的密钥文件?你是否寻找一个方法去管理多用户问题,其中每个人都需要将他们自己的公钥插入到 `authorized_keys` 文件中? - -通过这篇文章,我已经囊括了基础知识,希望到此你明白如何管理你的密钥,并且,只有当你问了正确的问题,无论你寻找任何软件(甚至你需要另外的软件),它都会出现。 - ------------------------------------------------------------------------------- - -via: http://www.linux.com/learn/tutorials/838235-how-to-best-manage-encryption-keys-on-linux - -作者:[Jeff Cogswell][a] -译者:[mudongliang](https://github.com/mudongliang) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linux.com/community/forums/person/62256 diff --git a/translated/tech/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md b/translated/tech/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md deleted file mode 100644 index 8840e8b973..0000000000 --- a/translated/tech/20160219 How to Setup Lighttpd Web server on Ubuntu 15.04 or CentOS 7.md +++ /dev/null @@ -1,210 +0,0 @@ -[Translated] Haohong Wang -如何在Ubuntu 15.04/CentOS 7中安装Lighttpd Web server -================================================================================= -Lighttpd 是一款开源Web服务器软件。Lighttpd 安全快速,符合行业标准,适配性强并且针对高配置环境进行了优化。Lighttpd因其CPU、内存占用小,针对小型CPU加载的快速适配以及出色的效率和速度而从众多Web服务器中脱颖而出。 而Lighttpd诸如FastCGI,CGI,Auth,Out-Compression,URL-Rewriting等高级功能更是那些低配置的服务器的福音。 - -以下便是在我们运行Ubuntu 15.04 或CentOS 7 Linux发行版的机器上安装Lighttpd Web服务器的简要流程。 - -### 安装Lighttpd - -#### 使用包管理器安装 - -这里我们通过使用包管理器这种最简单的方法来安装Lighttpd。只需以sudo模式在终端或控制台中输入下面的指令即可。 - -**CentOS 7** - -由于CentOS 7.0官方repo中并没有提供Lighttpd,所以我们需要在系统中安装额外的软件源epel repo。使用下面的yum指令来安装epel。 - - # yum install epel-release - -然后,我们需要更新系统及进程为Lighttpd的安装做准备。 - - # yum update - # yum install lighttpd - -![Install Lighttpd Centos](http://blog.linoxide.com/wp-content/uploads/2016/02/install-lighttpd-centos.png) - -**Ubuntu 15.04** - -Ubuntu 15.04官方repo中包含了Lighttpd,所以只需更新本地repo并使用apt-get指令即可安装Lighttpd。 - - # apt-get update - # apt-get install lighttpd - -![Install lighttpd ubuntu](http://blog.linoxide.com/wp-content/uploads/2016/02/install-lighttpd-ubuntu.png) - -#### 从源代码安装Lighttpd - -如果想从Lighttpd源码安装最新版本(例如1.4.39),我们需要在本地编译源码并进行安装。首先我们要安装编译源码所需的依赖包。 - - # cd /tmp/ - # wget http://download.lighttpd.net/lighttpd/releases-1.4.x/lighttpd-1.4.39.tar.gz - -下载完成后,执行下面的指令解压缩。 - - # tar -zxvf lighttpd-1.4.39.tar.gz - -然后使用下面的指令进行编译。 - - # cd 
lighttpd-1.4.39 - # ./configure - # make - -**注:**在这份教程中,我们安装的是默认配置的Lighttpd。其他诸如高级功能或拓展功能,如对SSL的支持,mod_rewrite,mod_redirect等,需自行配置。 - -当编译完成后,我们就可以把它安装到系统中了。 - - # make install - -### 设置Lighttpd - -如果有更高的需求,我们可以通过修改默认设置文件,如`/etc/lighttpd/lighttpd.conf`,来对Lighttpd进行进一步设置。 而在这份教程中我们将使用默认设置,不对设置文件进行修改。如果你曾做过修改并想检查设置文件是否出错,可以执行下面的指令。 - - # lighttpd -t -f /etc/lighttpd/lighttpd.conf - -#### 使用 CentOS 7 - -在CentOS 7中,我们需在Lighttpd默认设置中创设一个例如`/src/www/htdocs`的webroot文件夹。 - - # mkdir -p /srv/www/htdocs/ - -而后将默认欢迎页面从`/var/www/lighttpd`复制至刚刚新建的目录中: - - # cp -r /var/www/lighttpd/* /srv/www/htdocs/ - -### 开启服务 - -现在,通过执行systemctl指令来重启数据库服务。 - - # systemctl start lighttpd - -然后我们将它设置为伴随系统启动自动运行。 - - # systemctl enable lighttpd - -### 设置防火墙 - -如要让我们运行在Lighttpd上的网页和网站能在Internet或相似的网络上被访问,我们需要在防火墙程序中设置打开80端口。由于CentOS 7和Ubuntu15.04都附带Systemd作为默认初始化系统,所以我们安装firewalld作为解决方案。如果要打开80端口或http服务,我们只需执行下面的命令: - - # firewall-cmd --permanent --add-service=http - success - # firewall-cmd --reload - success - -### 连接至Web Server -在将80端口设置为默认端口后,我们就可以默认直接访问Lighttpd的欢迎页了。我们需要根据运行Lighttpd的设备来设置浏览器的IP地址和域名。在本教程中,我们令浏览器指向 [http://lighttpd.linoxide.com/](http://lighttpd.linoxide.com/) 同时将子域名指向它的IP地址。如此一来,我们就可以在浏览器中看到如下的欢迎页面了。 - -![Lighttpd Welcome Page](http://blog.linoxide.com/wp-content/uploads/2016/02/lighttpd-welcome-page.png) - -此外,我们可以将网站的文件添加到webroot目录下,并删除lighttpd的默认索引文件,使我们的静态网站链接至互联网上。 - -如果想在Lighttpd Web Server中运行PHP应用,请参考下面的步骤: - -### 安装PHP5模块 -在Lighttpd成功安装后,我们需要安装PHP及相关模块以在Lighttpd中运行PHP5脚本。 - -#### 使用 Ubuntu 15.04 - - # apt-get install php5 php5-cgi php5-fpm php5-mysql php5-curl php5-gd php5-intl php5-imagick php5-mcrypt php5-memcache php-pear - -#### 使用 CentOS 7 - - # yum install php php-cgi php-fpm php-mysql php-curl php-gd php-intl php-pecl-imagick php-mcrypt php-memcache php-pear lighttpd-fastcgi - -### 设置Lighttpd的PHP服务 - -如要让PHP与Lighttpd协同工作,我们只要根据所使用的发行版执行如下对应的指令即可。 - -#### 使用 CentOS 7 - -首先要做的便是使用文件编辑器编辑php设置文件(例如`/etc/php.ini`)并取消掉对**cgi.fix_pathinfo=1**的注释。 - - # nano /etc/php.ini - -完成上面的步骤之后,我们需要把PHP-FPM进程的所有权从Apache转移至Lighttpd。要完成这些,首先用文件编辑器打开`/etc/php-fpm.d/www.conf`文件。 - - # nano /etc/php-fpm.d/www.conf - -然后在文件中增加下面的语句: - - user = lighttpd - group = lighttpd - -做完这些,我们保存并退出文本编辑器。然后从`/etc/lighttpd/modules.conf`设置文件中添加FastCGI模块。 - - # nano /etc/lighttpd/modules.conf - -然后,去掉下面语句前面的`#`来取消对它的注释。 - - include "conf.d/fastcgi.conf" - -最后我们还需在文本编辑器设置FastCGI的设置文件。 - - # nano /etc/lighttpd/conf.d/fastcgi.conf - -在文件尾部添加以下代码: - - fastcgi.server += ( ".php" => - (( - "host" => "127.0.0.1", - "port" => "9000", - "broken-scriptfilename" => "enable" - )) - ) - -在编辑完成后保存并退出文本编辑器即可。 - -#### 使用 Ubuntu 15.04 - -如需启用Lighttpd的FastCGI,只需执行下列代码: - - # lighttpd-enable-mod fastcgi - - Enabling fastcgi: ok - Run /etc/init.d/lighttpd force-reload to enable changes - - # lighttpd-enable-mod fastcgi-php - - Enabling fastcgi-php: ok - Run `/etc/init.d/lighttpd` force-reload to enable changes - -然后,执行下列命令来重启Lighttpd。 - - # systemctl force-reload lighttpd - -### 检测PHP工作状态 - -如需检测PHP是否按预期工作,我们需在Lighttpd的webroot目录下新建一个php文件。本教程中,在Ubuntu下/var/www/html 目录,CentOS下/src/www/htdocs目录下使用文本编辑器创建并打开info.php。 - -**使用 CentOS 7** - - # nano /var/www/info.php - -**使用 Ubuntu 15.04** - - # nano /srv/www/htdocs/info.php - -然后只需将下面的语句添加到文件里即可。 - - - -在编辑完成后保存并推出文本编辑器即可。 - -现在,我们需根据路径 [http://lighttpd.linoxide.com/info.php](http://lighttpd.linoxide.com/info.php) 下的info.php文件的IP地址或域名,来让我们的网页浏览器指向系统上运行的Lighttpd。如果一切都按照以上说明进行,我们将看到如下图所示的PHP页面信息。 - -![phpinfo lighttpd](http://blog.linoxide.com/wp-content/uploads/2016/02/phpinfo-lighttpd.png) - -### 
总结 - -至此,我们已经在CentOS 7和Ubuntu 15.04 Linux 发行版上成功安装了轻巧快捷并且安全的Lighttpd Web服务器。现在,我们已经可以利用Lighttpd Web服务器来实现上传网站文件到网站根目录,配置虚拟主机,启用SSL,连接数据库,运行Web应用等功能了。 如果你有任何疑问,建议或反馈请在下面的评论区中写下来以让我们更好的改良Lighttpd。谢谢!(译注:评论网址 http://linoxide.com/linux-how-to/setup-lighttpd-web-server-ubuntu-15-04-centos-7/ ) --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/setup-lighttpd-web-server-ubuntu-15-04-centos-7/ - -作者:[Arun Pyasi][a] -译者:[HaohongWANG](https://github.com/HaohongWANG) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ diff --git a/translated/tech/20160223 New Docker Data Center Admin Suite Should Bring Order to Containerization.md b/translated/tech/20160223 New Docker Data Center Admin Suite Should Bring Order to Containerization.md deleted file mode 100644 index 147294c547..0000000000 --- a/translated/tech/20160223 New Docker Data Center Admin Suite Should Bring Order to Containerization.md +++ /dev/null @@ -1,50 +0,0 @@ -新的Docker数据中心管理套件使容器化变得更加井然有序 -=============================================================================== - -![](https://tctechcrunch2011.files.wordpress.com/2016/02/shutterstock_119411227.jpg?w=738) - -[Docker][1]今天宣布了一个新的容器控制中心,称为Docker数据中心(DDC),被设计用于大型和小型企业能够创建、管理和分发容器的一个集成管理控制台。 - -DDC是由包括Docker Universal Control Plane(也是今天发布)和Docker Trusted Registry等不同的商业组件组成。它也包括了开源组件比如Docker Engine。这个主意能让公司能够在一个中心管理界面中就能够管理整个Docker化程序的生命周期。 - -产品SVP Scott Johnston告诉TechCrunch:“客户催使了这个新工具的产生。公司不仅喜欢Docker给他们带来的敏捷性,它们也希望在创建和分发容器的过程中可以进行行政、安全和管理。” - -Johnston说:“公司称这个为容器即服务(Caas),大多是是因为当客户来询问这个管理的类型时,它们是这样描述的。” - -![](https://tctechcrunch2011.files.wordpress.com/2016/02/screen-shot-2016-02-23-at-7-56-54-am.png?w=680&h=401) - ->Docker免费镜像 - -像许多开源项目那样,Docker首先获得了许多开发者的追随,但是它也很快在那些想直接追踪管理它们的开发者的公司中流行。 - -这就是DDC设计的目的。它给开发者创建容器化应用的敏捷性,也让运维变得井井有条。 - -实际中这意味着开发者可以创建一系列容器化的组件,批准部署后就可以获得一个完全认证的镜像。这可以让开发这一系列的程序中拉取他们所需而不必每次重新发明轮子。这可以加速应用的开发和部署(理论上提升了容器提供的灵活性)。 - -这方面吸引了Beta客户ADP。工资服务业巨头特别喜欢让这个中心镜像仓库提供给开发人员。 - -ADP的CTO Keith Fulton在声明中称:“作为我们将关键业务微服务化倡议的一部分,ADP正在研究能够然开发人员可以利用IT审核过的中央库和安全的核心服务进行快速迭代的方案。” - -Docker在2010年由dotcloud的Solomon Hykes发布。他在2013年将公司的重心移到Docker上,并在[8月dotCloud][2],2014年完全聚焦在Docker上。 - -根据CrunchBase的消息,公司几年来在5轮融资后势如破竹般获得了1亿8000万美元融资(自从成为Docker后获得了1亿6千8百万美元)。吸引投资者关注的是Docker提供了一种称为容器的现在分发应用的方式,可以构建、管理何分发分布式应用。 - -容器化可以让开发者创建由多个小的分布在不同服务器上的分布式应用,而不是一个运行在一个单独服务器上的独立应用。 - -DDC每月每节点150美金起。 - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/calico-virtual-private-networking-docker/ - -作者:[ Ron Miller][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://techcrunch.com/author/ron-miller/ -[1]: https://www.docker.com/ -[2]: http://techcrunch.com/2014/08/04/docker-sells-dotcloud-to-cloudcontrol-to-focus-on-core-container-business/ - - diff --git a/translated/tech/20160228 Two Outstanding All-in-One Linux Servers.md b/translated/tech/20160228 Two Outstanding All-in-One Linux Servers.md deleted file mode 100644 index a60b23aea1..0000000000 --- a/translated/tech/20160228 Two Outstanding All-in-One Linux Servers.md +++ /dev/null @@ -1,103 +0,0 @@ -两个杰出的一体化Linux服务器 -================================================ - -关键词:Linux服务器,SMB,clearos,Zentyal - 
-![](http://www.linux.com/images/stories/66866/jack-clear_a.png) - ->图1: ClearOS安装向导。 - -回到2000年,微软发布小型商务服务器。这个产品改变了很多人们对科技在商务领域的看法。你可以部署一个单独的服务器,它能处理邮件,日历,文件共享,目录服务,VPN,以及更多,而不是很多机器处理不同的任务。对很多小型商务来说,这是非常好的恩惠,但是Windows SMB的一些花费是昂贵的。对于其他人,微软设计的依赖于一个服务器的想法,根本不是一个选项。 - -对于最近的用户群,有些替代品。事实上,在Linux和开源领域里,你可以选择许多稳定的平台,它可以作为一站式服务商店服务于你的小型企业。如果你的小型企业有10到50员工,一体化服务器也许是你所需的理想方案。 - -这里,我将要看看两个Linux一体化服务器,所以你可以查看他们哪个能完美适用于你的公司。 - -记住,这些服务器不能,以任何方式,适用于大型商务或企业。大公司无法依靠一体化服务器,仅仅是因为一台服务器不能负荷在企业内所需的企望。除此之外,这就是小型企业可以从Linux一体化服务器期待什么。 - -### ClearOS - -[ClearOS][1]是在2009年在ClarkConnect下发行的,作为一个路由和网关的分支。从那以后,ClearOS已经增加了所有一体化服务器必要的特性。CearOS提供的不仅仅是一个软件。你可以购买一个[ClearBox 100][2] 或[ClearBox 300][3]。这些服务器搭载完整的ClearOS作为一个IT设备被销售。在[这里][4]查看特性比对/价格矩阵。 - -家里已经有这些硬件,你可以下载这些之一: - -- [ClearOS社区][5] — 社区(免费)版的ClearOS - -- [ClearOS家庭][6] — 理想的家庭办公室(详细的功能和订阅费用,见这里) - -- [ClearOS商务][7] — 理想的小型商务(详细的功能和订阅费用,见这里) - -使用ClearOS你得到了什么?你得到了一个单机的业务合作服务器,设计精美的网页。ClearOS独特的是什么?你可以在基础服务中得到很多特性。除了这个,你必须从 [Clear Marketplace][8]增加特性。在市场上,你可以安装免费或付费的应用程序,扩展集的ClearOS服务器的特性。这里你可以找到附加的Windows服务器活动目录,OpenLDAP,Flexshares,Antimalware,云,Web访问控制,内容过滤,还有更多。你甚至可以找到一些第三方组件像谷歌应用同步,Zarafa合作平台,卡巴斯基杀毒。 - -ClearOS的安装像其他Linux发行版(基于红帽的Anaconda安装程序)。安装完成后,系统将提示您设置网络接口就是提供你浏览器访问的地址(与ClearOS服务器在同一个网络里)。地址格式如下: - -[https://IP_OF_CLEAROS_SERVER:81][9] - -IP_OF_CLEAROS_SERVER就是服务器的真实IP地址。注:当你第一次在浏览器访问这个服务器时,你将收到一个“Connection is not private”的警告。继续访问这个地址你才能继续设置。 - -当浏览器连接上,就会提示你root用户认证(在初始化安装中你设置的root用户密码)。一通过认证,你将看到ClearOS的安装向导(上图1) - -点击下一步按钮,开始设置你的ClearOS服务器。这个向导无需加以说明,在最后还会问你想用那个版本的ClearOS。点击社区,家庭,或者商业。一旦选择,你就需要注册一个账户。创建了一个账户注册了服务器后,你可以开始更新服务器,配置服务器,从市场添加模块(图2)。 - -![](http://www.linux.com/images/stories/66866/jack-clear_b.png) - ->图2: 从市场安装模块。 - -此时,你已经准备开始深入挖掘配置你的ClearOS小型商务服务器了。 - -### Zentyal - -[Zentyal][10]是一个基于Ubuntu的小型商务服务器,现在,发布在eBox域名下。Zentyal提供了大量的服务器/服务来适应你的小型商务需求: - -- 电子邮件 — 网页邮件;原生微软邮件协议和活动目录支持;日历和通讯录;手机设备电子邮件同步;反病毒/反垃圾;IMAP,POP,SMTP,CalDAV,和CardDAV支持。 - -- 域和目录 — 核心域目录管理;多个组织单元;单点登录身份验证;文件共享;ACLs,高级域名管理,打印机管理。 - -- 网络和防火墙 — 静态和DHCP接口;对象和服务;包过滤;端口转发。 - -- 基础设施 — DNS;DHCP;NTP;认证中心;VPN。 - -- 防火墙 - -安装Zentyal很像Ubuntu服务器的文本安装而且很简单:启动安装镜像,做一些选择,等待安装完成。一旦初始化,完成基于文本安装,就提供给你桌面GUI,向导程序提供选择包。选择所有你想安装的包,让安装程序完成这些工作。 - -最终,你可以通过网页接口来访问Zentyal服务器(浏览器访问[https://IP_OF_SERVER:8443][11] - IP_OF_SERVER是Zentyal服务器的内网地址)或使用独立的桌面GUI来管理服务器(Zentyal包括快速访问管理员和用户控制台就像Zentyal管理控制台)。当全部系统已经保存开启,你将看到Zentyal面板(图3)。 - -![](http://www.linux.com/images/stories/66866/jack-zentyal_a.png) - ->图3: Zentyal活动面板. - -这个面板允许你控制服务器所有方面,比如更新,管理服务器/服务,获取服务器的敏捷状态更新。您也可以进入组件领域,然后安装部署过程中选择出来的组件或更新当前的软件包列表。点击 软件管理 > 系统更新 并选择你想更新的(图4),然后在屏幕最底端点击更新按钮。 - -![](http://www.linux.com/images/stories/66866/jack-zentyal_b.png) - ->图4: 更新你的Zentyal服务器很简单。 - -### 那个服务器适合你? 
- -回答这个问题要看你有什么需求。Zentyal是一个不可思议的服务器,它很好的胜任于你的小型商务网络中。如果你需要更多,如组合软件,你最好赌在ClearOS上。如果你不需要组合软件,任意的服务器将表现杰出的工作。 - -我强烈建议安装这两个一体化的服务器,看看哪个是你的小公司所需的最好服务。 - ------------------------------------------------------------------------------- - -via: http://www.linux.com/learn/tutorials/882146-two-outstanding-all-in-one-linux-servers - -作者:[Jack Wallen][a] -译者:[wyangsun](https://github.com/wyangsun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.linux.com/community/forums/person/93 -[1]: http://www.linux.com/learn/tutorials/882146-two-outstanding-all-in-one-linux-servers#clearfoundation-overview -[2]: https://www.clearos.com/products/hardware/clearbox-100-series -[3]: https://www.clearos.com/products/hardware/clearbox-300-series -[4]: https://www.clearos.com/products/hardware/clearbox-overview -[5]: http://mirror.clearos.com/clearos/7/iso/x86_64/ClearOS-DVD-x86_64.iso -[6]: http://mirror.clearos.com/clearos/7/iso/x86_64/ClearOS-DVD-x86_64.iso -[7]: http://mirror.clearos.com/clearos/7/iso/x86_64/ClearOS-DVD-x86_64.iso -[8]: https://www.clearos.com/products/purchase/clearos-marketplace-overview -[9]: https://ip_of_clearos_server:81/ -[10]: http://www.zentyal.org/server/ -[11]: https://ip_of_server:8443/ diff --git a/translated/tech/20160301 The Evolving Market for Commercial Software Built On Open Source.md b/translated/tech/20160301 The Evolving Market for Commercial Software Built On Open Source.md deleted file mode 100644 index c45b3eb760..0000000000 --- a/translated/tech/20160301 The Evolving Market for Commercial Software Built On Open Source.md +++ /dev/null @@ -1,36 +0,0 @@ -构建在开源之上的商业软件市场持续成长 -===================================================================== - -![](https://www.linux.com/images/stories/41373/Structure-event-photo.jpg) -> 与会者在 Structure 上听取演讲,Structure Data 2016 也将在 UCSF Mission Bay 会议中心举办。图片来源:Structure Events。 - -如今真的很难低估开源项目对于企业软件市场的影响;开源集成如此快速地形成了规范,我们没能捕捉到转折点也情有可原。 - -举个例子,Hadoop,改变的不止是数据分析的世界。它引领了新一代数据公司,它们围绕开源项目创造自己的软件,按需调整和支持那些代码,更像红帽在 90 年代和 21 世纪早期拥抱 Linux 那样。软件越来越多地通过公有云交付,而不是购买者自己的服务器,拥有了令人惊奇的操作灵活性,但同时也带来了一些关于授权,支持以及价格之类的新问题。 - -我们多年来持续追踪这个趋势,它们组成了我们的 Structure Data 会议,而 Structure Data 2016 也不例外。三家围绕 Hadoop 最重要的大数据公司——Hortonworks,Cloudera 和 MapR——的 CEO 将会共同讨论它们是如何销售他们围绕开源项目的企业软件和服务,获利的同时回报那个社区项目。 - -以前在企业软件上获利是很容易的事情。一个客户购买了之后,企业供应商的一系列大型软件就变成了它自己的收银机,从维护合同和阶段性升级中获得近乎终生的收入,软件也越来越难以被替代,因为它已经成为了客户的业务核心。客户抱怨这种绑定,但如果它们想提高工作队伍的生产力也确实没有多少选择。 - -而现在的情况不再是这样了。尽管无数的公司还陷于在他们的基础设施上运行至关重要的巨大软件包,新的项目被使用开源技术部署到云服务器上。这让升级功能不再需要去掉大量软件包再重新安装别的,同时也让公司按需付费,而不是为一堆永远用不到的特性买单。 - -有很多客户想要利用开源项目的优势,而又不想建立和支持一支工程师队伍来调整开源项目以满足自己的需求。这些客户愿意为开源项目和在这之上的专有特性之间的差异付费。 - -这对于基础设施相关的软件来说格外正确。当然,你的客户们可以安装他们自己对项目的调整,比如 Hadoop,Spark 或 Node.js,但付费可以帮助他们自定义包部署如今重要的开源技术而不用自己干这些活儿。只需看看 Structure Data 2016 的发言者就明白了,比如 Confluent(Kafka),Databricks(Spark),以及 Cloudera-Hortonworks-MapR(Hadoop)三人组。 - -当然还有一个值得提到的是在出错的时候有个供应商给你指责。如果你的工程师弄糟了开源项目的实现,那你只能怪你自己了。但是如果你和一个愿意保证在服务级别的特定性能和正常运行时间指标的公司签订了合同,你就是愿意为支持,指导,以及在突然出现不可避免的问题时朝你公司外的人发火的机会买单。 - -构建在开源之上的商业软件市场的持续成长是我们在 Structure Data 上追踪多年的内容,如果这个话题正合你意,我们鼓励你加入我们,在旧金山,3 月 9 日和 10 日。 - - --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/enterprise/cloud-computing/889564-the-evolving-market-for-commercial-software-built-on-open-source- - -作者:[Tom Krazit ][a] -译者:[alim0x](https://github.com/alim0x) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 
原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.linux.com/community/forums/person/70513
diff --git a/translated/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md b/translated/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md
new file mode 100644
index 0000000000..25353ef462
--- /dev/null
+++ b/translated/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md
@@ -0,0 +1,64 @@
+给学习 OpenStack 基础设施的新手的入门指南
+===========================================================
+
+任何一个为 OpenStack 贡献源码的人都会受到社区的欢迎,但是,对于一个发展日趋成熟并且快速迭代的开源社区而言,拥有一份新手指南并不是件坏事。在奥斯汀举办的 OpenStack 峰会上,[Paul Belanger][1](红帽公司)、[Elizabeth K. Joseph][2](HPE 公司)和 [Christopher Aedo][3](IBM 公司)将会做一场专门针对新人的 OpenStack 基础设施讲座。在这次采访中,他们提供了一些建议和资源,来帮助新人成为 OpenStack 贡献者中的一员。
+
+![](https://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png)
+
+**你们在讲座介绍中表示将“全面深入基础设施,并解释维持 OpenStack 正常运转的各个系统中你需要知道的每一件事情”。对一场 40 分钟的讲座来说,这是一个艰巨的任务。那么,对于学习 OpenStack 基础设施的新手来说,最需要知道哪些事情呢?**
+
+**Elizabeth K. Joseph(EKJ)**:我们没有使用 GitHub 这种方式来提交 OpenStack 补丁,这是因为这样做会对新手造成巨大的困扰,尽管由于历史原因,我们还是在 GitHub 上保留了所有代码库的镜像。相反,我们使用了一种完全开源的代码审查方式,以及由 OpenStack 基础设施团队维护的持续集成(CI)系统。正因为有 CI 系统,每一个提交给 OpenStack 的更改都会在合并之前经过测试。
+
+**Paul Belanger(PB)**:这个项目中的大多数人都是富有激情的人,因此当你提交的补丁被某个人否决时,不要感到沮丧。
+
+**Christopher Aedo(CA)**:社区很想帮助你取得成功,因此不要害怕提问,也不要害怕寻求更多的指导来帮助你理解某些事物。
+
+### 对于讲座中无法涉及到的方面,你们会向新手推荐哪些在线资源,让他们更容易入门?
+
+**PB**:当然是我们的 [OpenStack 项目基础设施文档][5]。我们已经付出了很大的努力,让这些文档尽可能保持最新。我们对用于运行 OpenStack 项目的每个系统都编写了专门的页面来进行说明。甚至连 OpenStack 云基础设施团队的相关内容也即将上线。
+
+**EKJ**:对于把基础设施文档作为新手入门资料这件事,我的观点和 Paul 是一致的。另外,我们十分乐意看到来自那些 fork 了我们项目的学习者提交上来的补丁。我们通常意识不到文档中遗漏了什么内容,除非恰好有人问起。因此,阅读、学习,然后帮助我们修补这些知识上的漏洞。你可以在 [OpenStack 基础设施邮件列表][6]提出你的问题,或者在我们位于 FreeNode 上的 #openstack-infra IRC 频道里提问。
+
+**CA**:我喜欢 Ian Wienand 写的[这篇详细的文章][7],它介绍的是镜像构建。
+
+### 对于新的 OpenStack 代码贡献者来说,有哪些需要留意的“坑”?
+
+**EKJ**:向项目作贡献并不仅仅是提交新的代码和新的特性;OpenStack 社区高度重视代码审查。如果你想要别人审查你的补丁,那你最好先看看其他人是如何做的,然后参考他们的风格,一步步做到你也能像其他人一样提交清晰且结构分明的代码补丁。你越是让你的同伴了解你的工作、知道你正在做的审查,他们就越有可能及时审查你的代码。
+
+**CA**:我看到过大量的新手在面对 Gerrit 时受挫,所以阅读开发者指南中的[开发者工作流程][9]时,不要只是通读一遍,而要多看几遍。如果你没有经常使用 Gerrit,那你最初对它的感觉可能是困惑和无力的。但是,只要你随后做一些代码审查的工作,很快就能轻松应对它。同样,我是 IRC 的忠实粉丝,它是一个获得帮助的好地方。但是,你最好能保持长期在线的状态,这样,即使有人在你不在线的时候回答了你的问题,你也能看到答案。(请阅读 [IRC,开源界的成功秘诀][10]。)你不必总是坐在电脑前,但是你最好能够方便地回看频道中的聊天记录,以跟上最新的动态,这种能力非常重要。
+
+**PB**:我同意 Elizabeth 和 Chris 的观点,Gerrit 是需要花心思学习的重点。每个开发人员在它上面付出的努力都将让社区变得更加美好。你不仅仅要提交代码给别人去审查,同时,你也要能够审查其他人的代码。Gerrit 的用户界面可能会让你一时觉得很疑惑。我推荐新手去尝试 [Gertty][11],它是一个用于 Gerrit 代码审查系统的、基于控制台的终端界面,而它恰好也是 OpenStack 基础设施团队主导的一个项目。
+
+### 对于 OpenStack 新手如何通过网络与其他贡献者交流,你们有什么好的建议?
+
+**PB**:对我来说,是通过 IRC,待在 FreeNode 上的 #openstack-infra 频道里([IRC 日志][12])。这个频道里有很多对新手来说很有价值的资源。你可以看到 OpenStack 项目日复一日的运作情况;同时,一旦你知道了 OpenStack 项目的工作原理,你就能更好地知道如何为 OpenStack 的未来发展作出贡献。
+
+**CA**:我想再为 IRC 说明一点:在 IRC 上保持一整天在线,对我来说有非常重大的意义,因为我会感觉到自己被重视,并且时刻保持着联系。这也是一种非常好的获得帮助的方式,特别是当你被项目中出现的某一个难题卡住的时候,频道中总会有一些人很乐意为你解决问题。
+
+**EKJ**:[OpenStack 开发邮件列表][13]对于时刻跟进你所从事的 OpenStack 项目的最新情况非常重要,因此我推荐你一定要订阅它。邮件列表使用主题标签来区分项目,因此你可以设置你的邮件客户端利用这些标签,把精力集中于你所关心的项目。除了在线资源之外,全世界范围内也成立了许多 OpenStack 小组,它们服务于 OpenStack 的用户和贡献者。这些小组可能会定期举办讲座,以及面向 OpenStack 主要贡献者的各种活动。你可以在 MeetUp.com 上搜索你所在地区的相关聚会,或者在 [groups.openstack.org][14] 上查看你所在的地区是否存在 OpenStack 小组。最后,还有每六个月举办一次的 OpenStack 峰会,峰会上会有一些关于基础设施的讲座。当前,这个峰会包含了用户会议和开发者会议,会议内容都是与 OpenStack 相关的东西,包括它的过去、现在和未来。
+
+### OpenStack 需要在哪些方面得到提升,才能让新手更容易学会并掌握?
+**PB**:我认为我们的[账户设置][16]流程对于新的贡献者来说已经做得比较容易了,特别是其中教他们如何提交第一个补丁的部分。真正参与到 OpenStack 的开发者工作流程中是需要花费很大努力的,对开发者来说,要学的东西可能显得非常多;然而,一旦融入进去了,这个流程就会运转得十分高效,令人满意。
+
+**CA**:我们拥有一个由专业开发者组成的社区,而且我们的关注点都是发展 OpenStack 本身;同时,我们致力于让用户以更小的代价去使用 OpenStack 云基础设施平台。我们需要发掘更多的应用开发者,鼓励更多的人去开发能在 OpenStack 云上完美运行的云应用程序,我们还鼓励他们在[社区 App 目录][17]上贡献那些由他们开发的应用。我们可以通过提升我们的 API 标准,并保证我们不同的库(比如 libcloud、phpopencloud 以及其他一些库)能持续地为开发者提供可信赖的支持,来实现这一目标。还有一点,就是通过倡导,吸引更多的人参加 OpenStack 黑客活动。所有这些事情都可以降低新人的学习门槛,也能让他们与这个社区之间的关系更加紧密。
+
+**EKJ**:我已经致力于开源软件很多年了。但是,对于大量的 OpenStack 开发者而言,这是他们从事的第一个开源项目。我发现,他们之前使用私有软件的背景并没有塑造出开源的观念和方法论,以及在开源项目中需要具备的协作技巧。我乐于看到,我们能够让那些曾经一直在使用私有软件工作的人真正明白,他们在开源软件社区所从事的事情有着巨大的价值。
+
+### 我们认为 2016 年会是开源俳句(haiku)得到长足发展的一年。请用俳句的形式向新手解释 OpenStack。
+
+**PB**:如果你喜欢自由软件,你可以向 OpenStack 提交你的第一个补丁。
+
+**CA**:在不久的未来,OpenStack 将以统治世界的姿态,让这个世界变得更好。
+
+**EKJ**:OpenStack 是一个可以免费部署在你的服务器上面、用来运行你自己的云的软件。
+
+*Paul、Elizabeth 和 Christopher 将于 4 月 25 号星期一上午 11:15 在奥斯汀举办的 OpenStack 峰会上进行演讲。*
+
+------------------------------------------------------------------------------
+
+via: https://opensource.com/business/16/4/interview-openstack-infrastructure-beginners
+
+作者:[linux.com][a]
+译者:[kylepeng93](https://github.com/kylepeng93)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://rikkiendsley.com/
+[1]: https://twitter.com/pabelanger
+[2]: https://twitter.com/pleia2
+[3]: https://twitter.com/docaedo
+[4]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337
+[5]: http://docs.openstack.org/infra/system-config/
+[6]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
+[7]: https://www.technovelty.org/openstack/image-building-in-openstack-ci.html
+[8]: https://code.google.com/p/gerrit/
+[9]: http://docs.openstack.org/infra/manual/developers.html#development-workflow
+[10]: https://developer.ibm.com/opentech/2015/12/20/irc-the-secret-to-success-in-open-source/
+[11]: https://pypi.python.org/pypi/gertty
+[12]: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/
+[13]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
+[14]: https://groups.openstack.org/
+[15]: https://www.openstack.org/summit/
+[16]: http://docs.openstack.org/infra/manual/developers.html#account-setup
+[17]: https://apps.openstack.org/
+[18]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337
diff --git a/sources/tech/20160512 Rapid prototyping with docker-compose.md b/translated/tech/20160512 Rapid prototyping with docker-compose.md
similarity index 66%
rename from sources/tech/20160512 Rapid prototyping with docker-compose.md
rename to translated/tech/20160512 Rapid prototyping with docker-compose.md
index 0c67223697..a32f894d4b 100644
--- a/sources/tech/20160512 Rapid prototyping with docker-compose.md
+++ b/translated/tech/20160512 Rapid prototyping with docker-compose.md
@@ -1,36 +1,36 @@
-
-Rapid prototyping with docker-compose
+使用 docker-compose 快速构建原型
 ========================================
 
-In this write-up we'll look at a Node.js prototype for **finding stock of the Raspberry PI Zero** from three major outlets in the UK.
+在本文中,我们将看看一个 Node.js 原型程序,它用于**查询英国三大主要经销商的树莓派 Zero 库存**。
 
-I wrote the code and deployed it to an Ubuntu VM in Azure within a single evening of hacking. Docker and the docker-compose tool made the deployment and update process extremely quick.
+我只花了一个晚上折腾,就写好了代码并把它部署到了 Azure 上的 Ubuntu 虚拟机中。Docker 和 docker-compose 工具让部署和更新的过程变得极其快捷。
 
-### Remember linking?
+### 还记得链接(linking)吗?
 
-If you've already been through the [Hands-On Docker tutorial][1] then you will have experience linking Docker containers on the command line. Linking a Node hit counter to a Redis server on the command line may look like this:
+如果你已经学习过[动手实践 Docker 教程][1],那么你应该已经有了在命令行中链接 Docker 容器的经验。在命令行中把一个 Node.js 计数器链接到 Redis 服务器,可能是这样的:
 
 ```
 $ docker run -d -P --name redis1
 $ docker run -d hit_counter -p 3000:3000 --link redis1:redis
 ```
 
-Now imagine your application has three tiers
+现在,假设你的应用程序分为三层
 
-- Web front-end
-- Batch tier for processing long running tasks
-- Redis or mongo database
+- Web 前端
+- 处理长时间运行任务的批处理层
+- Redis 或 MongoDB 数据库
 
-Explicit linking through `--link` is just about manageable with a couple of containers, but can get out of hand as we add more tiers or containers to the application.
+在只有几个容器时,通过 `--link` 做显式链接还勉强可以管理,但随着我们往应用中加入更多的层和容器,这种方式很快就会失控。
 
-### Enter docker-compose
+### 有请 docker-compose
 
 ![](http://blog.alexellis.io/content/images/2016/05/docker-compose-logo-01.png)
->Docker Compose logo
+>Docker Compose 标志
 
-The docker-compose tool is part of the standard Docker Toolbox and can also be downloaded separately. It provides a rich set of features to configure all of an application's parts through a plain-text YAML file.
+docker-compose 工具是标准 Docker Toolbox 的一部分,也可以单独下载。它提供了丰富的功能,可以通过一个纯文本的 YAML 文件来配置应用程序的所有组成部分。
 
-The above example would look like this:
+上面的例子写成 docker-compose 文件是这样的:
 
 ```
 version: "2.0"
 services:
   redis1:
    image: redis
   hit_counter:
    build: ./hit_counter
   ports:
    - 3000:3000
 ```
 
-From Docker 1.10 onwards we can take advantage of network overlays to help us scale out across multiple hosts. Prior to this linking only worked across a single host. The `docker-compose scale` command can be used to bring on more computing power as the need arises.
+从 Docker 1.10 开始,我们可以利用覆盖网络(network overlay)帮助我们在多个主机上横向扩展。在此之前,链接只能在单个主机上工作。`docker-compose scale` 命令可以在需要的时候为我们提供更多的计算能力。
 
->View the [docker-compose][2] reference on docker.com
+>可以在 docker.com 上查看 [docker-compose][2] 的参考文档
 
-### Real-world example: Raspberry PI Stock Alert
+### 真实例子:树莓派 Zero 到货通知
 
 ![](http://blog.alexellis.io/content/images/2016/05/Raspberry_Pi_Zero_ver_1-3_1_of_3_large.JPG)
->The new Raspberry PI Zero v1.3 image courtesy of Pimoroni
+>新版树莓派 Zero V1.3,图片由 Pimoroni 提供
 
-There is a huge buzz around the Raspberry PI Zero - a tiny microcomputer with a 1GHz CPU and 512MB RAM capable of running full Linux, Docker, Node.js, Ruby and many other popular open-source tools. One of the best things about the PI Zero is that costs only 5 USD. That also means that stock gets snapped up really quickly.
+树莓派 Zero 引起了巨大的轰动。它是一个微型计算机,具有 1GHz 处理器和 512MB 内存,能够运行完整的 Linux、Docker、Node.js、Ruby 以及许多其他流行的开源工具。关于树莓派 Zero 最棒的一点是它只卖 5 美元。这也意味着它很快就会被抢购一空。
+*如果你想在树莓派上尝试 Docker 或 Swarm 集群,请看下面的教程。*
 
 >[Docker Swarm on the PI Zero][3]
 
@@ -127,7 +127,7 @@ Preview as of 16th of May 2016
 via: http://blog.alexellis.io/rapid-prototype-docker-compose/
 
 作者:[Alex Ellis][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[erlinux](https://github.com/erlinux)
 校对:[校对者ID](https://github.com/校对者ID)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
 
 [a]: http://blog.alexellis.io/author/alex/
 [1]: http://blog.alexellis.io/handsondocker
 [2]: https://docs.docker.com/compose/compose-file/
 [3]: http://blog.alexellis.io/dockerswarm-pizero/
 [4]: https://github.com/alexellis/pi_zero_stock
 [5]: https://github.com/alexellis/pi_zero_stock
 [6]: http://stockalert.alexellis.io/
-
diff --git a/translated/tech/20160512 Bitmap in Linux Kernel.md b/translated/tech/20160512 Bitmap in Linux Kernel.md
new file mode 100644
index 0000000000..6475b9260e
--- /dev/null
+++ b/translated/tech/20160512 Bitmap in Linux Kernel.md
+Linux 内核里的数据结构
+================================================================================
+
+Linux 内核中的位数组和位操作
+--------------------------------------------------------------------------------
+
+除了不同的基于[链式](https://en.wikipedia.org/wiki/Linked_data_structure)和[树](https://en.wikipedia.org/wiki/Tree_%28data_structure%29)的数据结构以外,Linux 内核也为[位数组](https://en.wikipedia.org/wiki/Bit_array)(或称`位图`)提供了 [API](https://en.wikipedia.org/wiki/Application_programming_interface)。位数组在 Linux 内核里被广泛使用,以下源代码文件中包含了与这种结构搭配使用的通用 `API`:
+
+* [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c)
+* [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h)
+
+除了这两个文件之外,还有体系结构特定的头文件,它们为特定的体系结构提供优化的位操作。我们将探讨 [x86_64](https://en.wikipedia.org/wiki/X86-64) 体系结构,因此在我们的例子里,它会是
+
+* [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h)
+
+头文件。正如我上面所写的,`位图`在 Linux 内核中被广泛地使用。例如,`位数组`常常用于保存一组在线/离线处理器,以便系统支持 CPU 的[热插拔](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt)(你可以在 [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) 部分阅读更多相关知识);在 Linux 内核初始化等过程中,`位数组`还可以保存一组已分配的中断号。
+
+因此,本部分的主要目的是了解位数组是如何在 Linux 内核中实现的。让我们现在开始吧。
+
+位数组声明
+================================================================================
+
+在我们开始查看位图操作的 `API` 之前,我们必须知道如何在 Linux 内核中声明它。有两种通用的方法声明位数组。第一种简单的声明一个位数组的方法是,定义一个 unsigned long 的数组,例如:
+
+```C
+unsigned long my_bitmap[8]
+```
+
+第二种方法,是使用 `DECLARE_BITMAP` 宏,它定义于 [include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h) 头文件:
+
+```C
+#define DECLARE_BITMAP(name,bits) \
+    unsigned long name[BITS_TO_LONGS(bits)]
+```
+
+我们可以看到 `DECLARE_BITMAP` 宏使用两个参数:
+
+* `name` - 位图名称;
+* `bits` - 位图中的位数;
+
+它只是把定义展开为一个包含 `BITS_TO_LONGS(bits)` 个元素的 `unsigned long` 数组。`BITS_TO_LONGS` 宏将一个给定的位数转换为 `long` 的个数,换言之,就是计算 `bits` 中有多少个 `8` 字节元素:
+
+```C
+#define BITS_PER_BYTE 8
+#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
+#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
+```
+
+因此,例如 `DECLARE_BITMAP(my_bitmap, 64)` 将产生:
+
+```python
+>>> (((64) + (64) - 1) / (64))
+1
+```
+
+与:
+
+```C
+unsigned long my_bitmap[1];
+```
+
+在能够声明一个位数组之后,我们便可以使用它了。
+
+体系结构特定的位操作
+================================================================================
+
+我们已经看了以上一对源文件和头文件,它们提供了位数组操作的 [API](https://en.wikipedia.org/wiki/Application_programming_interface)。其中重要且广泛使用的位数组 API 是体系结构特定的且位于已提及的头文件中 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h)。 + +首先让我们查看两个最重要的函数: + +* `set_bit`; +* `clear_bit`. + +我认为没有必要解释这些函数的作用。从它们的名字来看,这已经很清楚了。让我们直接查看它们的实现。如果你浏览 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,你将会注意到这些函数中的每一个都有[原子性](https://en.wikipedia.org/wiki/Linearizability)和非原子性两种变体。在我们开始深入这些函数的实现之前,首先,我们必须了解一些有关原子操作的知识。 + +简而言之,原子操作保证两个或以上的操作不会并发地执行同一数据。`x86` 体系结构提供了一系列原子指令,例如, [xchg](http://x86.renejeschke.de/html/file_module_x86_id_328.html)、[cmpxchg](http://x86.renejeschke.de/html/file_module_x86_id_41.html) 等指令。除了原子指令,一些非原子指令可以在 [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) 指令的帮助下具有原子性。目前已经对原子操作有了充分的理解,我们可以接着探讨 `set_bit` 和 `clear_bit` 函数的实现。 + +我们先考虑函数的非原子性变体。非原子性的 `set_bit` 和 `clear_bit` 的名字以双下划线开始。正如我们所知道的,所有这些函数都定义于 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,并且第一个函数就是 `__set_bit`: + +```C +static inline void __set_bit(long nr, volatile unsigned long *addr) +{ + asm volatile("bts %1,%0" : ADDR : "Ir" (nr) : "memory"); +} +``` + +正如我们所看到的,它使用了两个参数: + +* `nr` - 位数组中的位号(从0开始,译者注) +* `addr` - 我们需要置位的位数组地址 + +注意,`addr` 参数使用 `volatile` 关键字定义,以告诉编译器给定地址指向的变量可能会被修改。 `__set_bit` 的实现相当简单。正如我们所看到的,它仅包含一行[内联汇编代码](https://en.wikipedia.org/wiki/Inline_assembler)。在我们的例子中,我们使用 [bts](http://x86.renejeschke.de/html/file_module_x86_id_25.html) 指令,从位数组中选出一个第一操作数(我们的例子中的 `nr`),存储选出的位的值到 [CF](https://en.wikipedia.org/wiki/FLAGS_register) 标志寄存器并设置该位(即 `nr` 指定的位置为1,译者注)。 + +注意,我们了解了 `nr` 的用法,但这里还有一个参数 `addr` 呢!你或许已经猜到秘密就在 `ADDR`。 `ADDR` 是一个定义在同一头文件的宏,它展开为一个包含给定地址和 `+m` 约束的字符串: + +```C +#define ADDR BITOP_ADDR(addr) +#define BITOP_ADDR(x) "+m" (*(volatile long *) (x)) +``` + +除了 `+m` 之外,在 `__set_bit` 函数中我们可以看到其他约束。让我们查看并试图理解它们所表示的意义: + +* `+m` - 表示内存操作数,这里的 `+` 表明给定的操作数为输入输出操作数; +* `I` - 表示整型常量; +* `r` - 表示寄存器操作数 + +除了这些约束之外,我们也能看到 `memory` 关键字,其告诉编译器这段代码会修改内存中的变量。到此为止,现在我们看看相同的原子性变体函数。它看起来比非原子性变体更加复杂: + +```C +static __always_inline void +set_bit(long nr, volatile unsigned long *addr) +{ + if (IS_IMMEDIATE(nr)) { + asm volatile(LOCK_PREFIX "orb %1,%0" + : CONST_MASK_ADDR(nr, addr) + : "iq" ((u8)CONST_MASK(nr)) + : "memory"); + } else { + asm volatile(LOCK_PREFIX "bts %1,%0" + : BITOP_ADDR(addr) : "Ir" (nr) : "memory"); + } +} +``` + +(BITOP_ADDR 的定义为:`#define BITOP_ADDR(x) "=m" (*(volatile long *) (x))`,ORB 为字节按位或,译者注) + +首先注意,这个函数使用了与 `__set_bit` 相同的参数集合,但额外地使用了 `__always_inline` 属性标记。 `__always_inline` 是一个定义于 [include/linux/compiler-gcc.h](https://github.com/torvalds/linux/blob/master/include/linux/compiler-gcc.h) 的宏,并且只是展开为 `always_inline` 属性: + +```C +#define __always_inline inline __attribute__((always_inline)) +``` + +其意味着这个函数总是内联的,以减少 Linux 内核映像的大小。现在我们试着了解 `set_bit` 函数的实现。首先我们在 `set_bit` 函数的开头检查给定的位数量。`IS_IMMEDIATE` 宏定义于相同[头文件](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h),并展开为 gcc 内置函数的调用: + +```C +#define IS_IMMEDIATE(nr) (__builtin_constant_p(nr)) +``` + +如果给定的参数是编译期已知的常量,`__builtin_constant_p` 内置函数则返回 `1`,其他情况返回 `0`。假若给定的位数是编译期已知的常量,我们便无须使用效率低下的 `bts` 指令去设置位。我们可以只需在给定地址指向的字节和和掩码上执行 [按位或](https://en.wikipedia.org/wiki/Bitwise_operation#OR) 操作,其字节包含给定的位,而掩码为位号高位 `1`,其他位为 0。在其他情况下,如果给定的位号不是编译期已知常量,我们便做和 `__set_bit` 函数一样的事。`CONST_MASK_ADDR` 宏: + +```C +#define CONST_MASK_ADDR(nr, addr) BITOP_ADDR((void *)(addr) + 
((nr)>>3)) +``` + +展开为带有到包含给定位的字节偏移的给定地址,例如,我们拥有地址 `0x1000` 和 位号是 `0x9`。因为 `0x9` 是 `一个字节 + 一位`,所以我们的地址是 `addr + 1`: + +```python +>>> hex(0x1000 + (0x9 >> 3)) +'0x1001' +``` + +`CONST_MASK` 宏将我们给定的位号表示为字节,位号对应位为高位 `1`,其他位为 `0`: + +```C +#define CONST_MASK(nr) (1 << ((nr) & 7)) +``` + +```python +>>> bin(1 << (0x9 & 7)) +'0b10' +``` + +最后,我们应用 `按位或` 运算到这些变量上面,因此,假如我们的地址是 `0x4097` ,并且我们需要置位号为 `9` 的位 为 1: + +```python +>>> bin(0x4097) +'0b100000010010111' +>>> bin((0x4097 >> 0x9) | (1 << (0x9 & 7))) +'0b100010' +``` + +`第 9 位` 将会被置位。(这里的 9 是从 0 开始计数的,比如0010,按照作者的意思,其中的 1 是第 1 位,译者注) + +注意,所有这些操作使用 `LOCK_PREFIX` 标记,其展开为 [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) 指令,保证该操作的原子性。 + +正如我们所知,除了 `set_bit` 和 `__set_bit` 操作之外,Linux 内核还提供了两个功能相反的函数,在原子性和非原子性的上下文中清位。它们为 `clear_bit` 和 `__clear_bit`。这两个函数都定义于同一个[头文件](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 并且使用相同的参数集合。不仅参数相似,一般而言,这些函数与 `set_bit` 和 `__set_bit` 也非常相似。让我们查看非原子性 `__clear_bit` 的实现吧: + +```C +static inline void __clear_bit(long nr, volatile unsigned long *addr) +{ + asm volatile("btr %1,%0" : ADDR : "Ir" (nr)); +} +``` + +没错,正如我们所见,`__clear_bit` 使用相同的参数集合,并包含极其相似的内联汇编代码块。它仅仅使用 [btr](http://x86.renejeschke.de/html/file_module_x86_id_24.html) 指令替换 `bts`。正如我们从函数名所理解的一样,通过给定地址,它清除了给定的位。`btr` 指令表现得像 `bts`(原文这里为 btr,可能为笔误,修正为 bts,译者注)。该指令选出第一操作数指定的位,存储它的值到 `CF` 标志寄存器,并且清楚第二操作数指定的位数组中的对应位。 + +`__clear_bit` 的原子性变体为 `clear_bit`: + +```C +static __always_inline void +clear_bit(long nr, volatile unsigned long *addr) +{ + if (IS_IMMEDIATE(nr)) { + asm volatile(LOCK_PREFIX "andb %1,%0" + : CONST_MASK_ADDR(nr, addr) + : "iq" ((u8)~CONST_MASK(nr))); + } else { + asm volatile(LOCK_PREFIX "btr %1,%0" + : BITOP_ADDR(addr) + : "Ir" (nr)); + } +} +``` + +并且正如我们所看到的,它与 `set_bit` 非常相似,同时只包含了两处差异。第一处差异为 `clear_bit` 使用 `btr` 指令来清位,而 `set_bit` 使用 `bts` 指令来置位。第二处差异为 `clear_bit` 使用否定的位掩码和 `按位与` 在给定的字节上置位,而 `set_bit` 使用 `按位或` 指令。 + +到此为止,我们可以在任何位数组置位和清位了,并且能够转到位掩码上的其他操作。 + +在 Linux 内核位数组上最广泛使用的操作是设置和清除位,但是除了这两个操作外,位数组上其他操作也是非常有用的。Linux 内核里另一种广泛使用的操作是知晓位数组中一个给定的位是否被置位。我们能够通过 `test_bit` 宏的帮助实现这一功能。这个宏定义于 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 头文件,并展开为 `constant_test_bit` 或 `variable_test_bit` 的调用,这要取决于位号。 + +```C +#define test_bit(nr, addr) \ + (__builtin_constant_p((nr)) \ + ? 
constant_test_bit((nr), (addr)) \ + : variable_test_bit((nr), (addr))) +``` + +因此,如果 `nr` 是编译期已知常量,`test_bit` 将展开为 `constant_test_bit` 函数的调用,而其他情况则为 `variable_test_bit`。现在让我们看看这些函数的实现,我们从 `variable_test_bit` 开始看起: + +```C +static inline int variable_test_bit(long nr, volatile const unsigned long *addr) +{ + int oldbit; + + asm volatile("bt %2,%1\n\t" + "sbb %0,%0" + : "=r" (oldbit) + : "m" (*(unsigned long *)addr), "Ir" (nr)); + + return oldbit; +} +``` + +`variable_test_bit` 函数调用了与 `set_bit` 及其他函数使用的相似的参数集合。我们也可以看到执行 [bt](http://x86.renejeschke.de/html/file_module_x86_id_22.html) 和 [sbb](http://x86.renejeschke.de/html/file_module_x86_id_286.html) 指令的内联汇编代码。`bt` 或 `bit test` 指令从第二操作数指定的位数组选出第一操作数指定的一个指定位,并且将该位的值存进标志寄存器的 [CF](https://en.wikipedia.org/wiki/FLAGS_register) 位。第二个指令 `sbb` 从第二操作数中减去第一操作数,再减去 `CF` 的值。因此,这里将一个从给定位数组中的给定位号的值写进标志寄存器的 `CF` 位,并且执行 `sbb` 指令计算: `00000000 - CF`,并将结果写进 `oldbit` 变量。 + +`constant_test_bit` 函数做了和我们在 `set_bit` 所看到的一样的事: + +```C +static __always_inline int constant_test_bit(long nr, const volatile unsigned long *addr) +{ + return ((1UL << (nr & (BITS_PER_LONG-1))) & + (addr[nr >> _BITOPS_LONG_SHIFT])) != 0; +} +``` + +它生成了一个位号对应位为高位 `1`,而其他位为 `0` 的字节(正如我们在 `CONST_MASK` 所看到的),并将 [按位与](https://en.wikipedia.org/wiki/Bitwise_operation#AND) 应用于包含给定位号的字节。 + +下一广泛使用的位数组相关操作是改变一个位数组中的位。为此,Linux 内核提供了两个辅助函数: + +* `__change_bit`; +* `change_bit`. + +你可能已经猜测到,就拿 `set_bit` 和 `__set_bit` 例子说,这两个变体分别是原子和非原子版本。首先,让我们看看 `__change_bit` 函数的实现: + +```C +static inline void __change_bit(long nr, volatile unsigned long *addr) +{ + asm volatile("btc %1,%0" : ADDR : "Ir" (nr)); +} +``` + +相当简单,不是吗? `__change_bit` 的实现和 `__set_bit` 一样,只是我们使用 [btc](http://x86.renejeschke.de/html/file_module_x86_id_23.html) 替换 `bts` 指令而已。 该指令从一个给定位数组中选出一个给定位,将该为位的值存进 `CF` 并使用求反操作改变它的值,因此值为 `1` 的位将变为 `0`,反之亦然: + +```python +>>> int(not 1) +0 +>>> int(not 0) +1 +``` + + `__change_bit` 的原子版本为 `change_bit` 函数: + +```C +static inline void change_bit(long nr, volatile unsigned long *addr) +{ + if (IS_IMMEDIATE(nr)) { + asm volatile(LOCK_PREFIX "xorb %1,%0" + : CONST_MASK_ADDR(nr, addr) + : "iq" ((u8)CONST_MASK(nr))); + } else { + asm volatile(LOCK_PREFIX "btc %1,%0" + : BITOP_ADDR(addr) + : "Ir" (nr)); + } +} +``` + +它和 `set_bit` 函数很相似,但也存在两点差异。第一处差异为 `xor` 操作而不是 `or`。第二处差异为 `btc`(原文为 `bts`,为作者笔误,译者注) 而不是 `bts`。 + +目前,我们了解了最重要的体系特定的位数组操作,是时候看看一般的位图 API 了。 + +通用位操作 +================================================================================ + +除了 [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 中体系特定的 API 外,Linux 内核提供了操作位数组的通用 API。正如我们本部分开头所了解的一样,我们可以在 [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 头文件和* [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c) 源文件中找到它。但在查看这些源文件之前,我们先看看 [include/linux/bitops.h](https://github.com/torvalds/linux/blob/master/include/linux/bitops.h) 头文件,其提供了一系列有用的宏,让我们看看它们当中一部分。 + +首先我们看看以下 4 个 宏: + +* `for_each_set_bit` +* `for_each_set_bit_from` +* `for_each_clear_bit` +* `for_each_clear_bit_from` + +所有这些宏都提供了遍历位数组中某些位集合的迭代器。第一个红迭代那些被置位的位。第二个宏也是一样,但它是从某一确定位开始。最后两个宏做的一样,但是迭代那些被清位的位。让我们看看 `for_each_set_bit` 宏: + +```C +#define for_each_set_bit(bit, addr, size) \ + for ((bit) = find_first_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_bit((addr), (size), (bit) + 1)) +``` + +正如我们所看到的,它使用了三个参数,并展开为一个循环,该循环从作为 `find_first_bit` 函数返回结果的第一个置位开始到最后一个置位且小于给定大小为止。 + +除了这四个宏, 
[arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) 也提供了 `64-bit` 或 `32-bit` 变量循环的 API 等等。 + +下一个 [头文件](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 提供了操作位数组的 API。例如,它提供了以下两个函数: + +* `bitmap_zero`; +* `bitmap_fill`. + +它们分别可以清除一个位数组和用 `1` 填充位数组。让我们看看 `bitmap_zero` 函数的实现: + +```C +static inline void bitmap_zero(unsigned long *dst, unsigned int nbits) +{ + if (small_const_nbits(nbits)) + *dst = 0UL; + else { + unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memset(dst, 0, len); + } +} +``` + +首先我们可以看到对 `nbits` 的检查。 `small_const_nbits` 是一个定义在同一[头文件](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 的宏: + +```C +#define small_const_nbits(nbits) \ + (__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG) +``` + +正如我们可以看到的,它检查 `nbits` 是否为编译期已知常量,并且其值不超过 `BITS_PER_LONG` 或 `64`。如果位数目没有超过一个 `long` 变量的位数,我们可以仅仅设置为 0。在其他情况,我们需要计算有多少个需要填充位数组的 `long` 变量并且使用 [memset](http://man7.org/linux/man-pages/man3/memset.3.html) 进行填充。 + +`bitmap_fill` 函数的实现和 `biramp_zero` 函数很相似,除了我们需要在给定的位数组中填写 `0xff` 或 `0b11111111`: + +```C +static inline void bitmap_fill(unsigned long *dst, unsigned int nbits) +{ + unsigned int nlongs = BITS_TO_LONGS(nbits); + if (!small_const_nbits(nbits)) { + unsigned int len = (nlongs - 1) * sizeof(unsigned long); + memset(dst, 0xff, len); + } + dst[nlongs - 1] = BITMAP_LAST_WORD_MASK(nbits); +} +``` + +除了 `bitmap_fill` 和 `bitmap_zero`,[include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 头文件也提供了和 `bitmap_zero` 很相似的 `bitmap_copy`,只是仅仅使用 [memcpy](http://man7.org/linux/man-pages/man3/memcpy.3.html) 而不是 [memset](http://man7.org/linux/man-pages/man3/memset.3.html) 这点差异而已。它也提供了位数组的按位操作,像 `bitmap_and`, `bitmap_or`, `bitamp_xor`等等。我们不会探讨这些函数的实现了,因为如果你理解了本部分的所有内容,这些函数的实现是很容易理解的。无论如何,如果你对这些函数是如何实现的感兴趣,你可以打开并研究 [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) 头文件。 + +本部分到此为止。 + +链接 +================================================================================ + +* [bitmap](https://en.wikipedia.org/wiki/Bit_array) +* [linked data structures](https://en.wikipedia.org/wiki/Linked_data_structure) +* [tree data structures](https://en.wikipedia.org/wiki/Tree_%28data_structure%29) +* [hot-plug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt) +* [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) +* [IRQs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29) +* [API](https://en.wikipedia.org/wiki/Application_programming_interface) +* [atomic operations](https://en.wikipedia.org/wiki/Linearizability) +* [xchg instruction](http://x86.renejeschke.de/html/file_module_x86_id_328.html) +* [cmpxchg instruction](http://x86.renejeschke.de/html/file_module_x86_id_41.html) +* [lock instruction](http://x86.renejeschke.de/html/file_module_x86_id_159.html) +* [bts instruction](http://x86.renejeschke.de/html/file_module_x86_id_25.html) +* [btr instruction](http://x86.renejeschke.de/html/file_module_x86_id_24.html) +* [bt instruction](http://x86.renejeschke.de/html/file_module_x86_id_22.html) +* [sbb instruction](http://x86.renejeschke.de/html/file_module_x86_id_286.html) +* [btc instruction](http://x86.renejeschke.de/html/file_module_x86_id_23.html) +* [man memcpy](http://man7.org/linux/man-pages/man3/memcpy.3.html) +* [man memset](http://man7.org/linux/man-pages/man3/memset.3.html) +* [CF](https://en.wikipedia.org/wiki/FLAGS_register) +* [inline 
assembler](https://en.wikipedia.org/wiki/Inline_assembler)
+* [gcc](https://en.wikipedia.org/wiki/GNU_Compiler_Collection)
+
+
+------------------------------------------------------------------------------
+
+via: https://github.com/0xAX/linux-insides/blob/master/DataStructures/bitmap.md
+
+作者:[0xAX][a]
+译者:[cposture](https://github.com/cposture)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://twitter.com/0xAX
\ No newline at end of file
diff --git a/translated/tech/20160514 NODEOS - LINUX DISTRIBUTION FOR NODE LOVERS.md b/translated/tech/20160514 NODEOS - LINUX DISTRIBUTION FOR NODE LOVERS.md
deleted file mode 100644
index d189753fd5..0000000000
--- a/translated/tech/20160514 NODEOS - LINUX DISTRIBUTION FOR NODE LOVERS.md
+++ /dev/null
@@ -1,53 +0,0 @@
-NodeOS:Node 爱好者的 Linux 发行版
-================================================
-
-![](http://itsfoss.com/wp-content/uploads/2016/05/node-os-linux.jpg)
-
-[NodeOS][1] 是一个基于 [Node.js][2] 的操作系统,自去年发布首个[候选版本][3]之后,正朝着它的 1.0 版本进发。
-
-如果你是第一次听说它:NodeOS 是首个在 [Linux][5] 内核之上由 Node.js 和 [npm][4] 驱动的操作系统。[Jacob Groundwater][6] 在 2013 年中介绍了这个项目。操作系统中用到的主要技术是:
-
-- **Linux 内核**:这个系统内置了 Linux 内核
-- **Node.js 运行时**:Node 作为主要的运行时
-- **npm 包管理**:npm 作为包管理器
-
-NodeOS 源码托管在 [GitHub][7] 上,因此,任何感兴趣的人都可以轻松贡献或者报告 bug。用户可以从源码构建,或者使用[预编译镜像][8]。构建过程及使用说明都可以在项目仓库中找到。
-
-NodeOS 背后的思想是,只提供足够 npm 运行的环境,剩余的功能就可以都来自 npm 包管理器。因此,用户可以使用大约 250,000 个软件包,并且这个数目每天都还在增长。而且所有的东西都是开源的,你可以根据你的需要很容易地打补丁或者增加更多的包。
-
-NodeOS 核心开发被分离成了不同的层,基本的结构包含:
-
-- **barebones** – 带有可以启动到 Node.js REPL 的 initramfs 的自定义内核
-- **initramfs** – 用于挂载用户分区以及启动系统的 initram 文件系统
-- **rootfs** – 托管 Linux 内核及 initramfs 文件的只读分区
-- **usersfs** – 多用户文件系统(如传统系统一样)
-
-NodeOS 的目标是可以在任何平台上运行,包括 **真实的硬件(用户计算机或者 SoC)**、**云平台、虚拟机、PaaS 提供商、容器**(Docker 和 Vagga)等等。如今看来,它做得似乎不错。在 3 月 3 号,NodeOS 的成员 [Jesús Leganés Combarro][9] 在 GitHub 上[宣布][10]:
-
->**NodeOS 不再是一个玩具系统了**,它现在开始可以用在有实际需求的生产环境中了。
-
-因此,如果你是 Node.js 的死忠,或者乐于尝试新鲜事物,这或许值得你一试。在相关的文章中,你可以了解这些[奇特的 Linux 发行版的具体用法][11]。
-
-
--------------------------------------------------------------------------------
-
-via: http://itsfoss.com/nodeos-operating-system/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29
-
-作者:[Munif Tanjim][a]
-译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: http://itsfoss.com/author/munif/
-[1]: http://node-os.com/
-[2]: https://nodejs.org/en/
-[3]: https://github.com/NodeOS/NodeOS/releases/tag/v1.0-RC1
-[4]: https://www.npmjs.com/
-[5]: http://itsfoss.com/tag/linux/
-[6]: https://github.com/groundwater
-[7]: https://github.com/nodeos/nodeos
-[8]: https://github.com/NodeOS/NodeOS/releases
-[9]: https://github.com/piranna
-[10]: https://github.com/NodeOS/NodeOS/issues/216
-[11]: http://itsfoss.com/weird-ubuntu-based-linux-distributions/
diff --git a/translated/tech/20160516 Scaling Collaboration in DevOps.md b/translated/tech/20160516 Scaling Collaboration in DevOps.md
new file mode 100644
index 0000000000..8ef18c4592
--- /dev/null
+++ b/translated/tech/20160516 Scaling Collaboration in DevOps.md
@@ -0,0 +1,70 @@
+DevOps 中的规模化协作
+=================================
+
+![](http://devops.com/wp-content/uploads/2016/05/ScalingCollaboration.jpg)
+
+熟悉 DevOps 的人普遍认同这样一个观点:DevOps 的文化与技术同等重要。当然,工具和流程对于有效实施 DevOps 是必要的,但 DevOps 成功的基础是[团队协作][1],它让企业能够把事情做得更快、更有效。
+
+
+大多数 DevOps 平台和工具在设计时就考虑了可扩展性。DevOps 环境通常运行在云端,而且变化频繁。对 DevOps 来说,软件能够实时伸缩以应对需求激增非常重要。人的因素也是一样,只不过规模化协作是一个完全不同的故事。
+
+跨企业的协作是 DevOps 成功的关键。好的代码和好的开发固然必要,但真正的挑战是,如何在不牺牲质量或性能的前提下,尽可能无缝、快速、自动化地完成这一切。企业如何才能在简化代码开发和部署的同时,保持可见性、治理与合规?
+
+### 新兴趋势
+
+首先,我想提供一些背景,分享一些 451 研究公司(451 Research)关于 DevOps 的调研数据。云、敏捷和 DevOps 在今天都非常重要,无论是在理念上还是在现实中。451 公司看到,企业对这些技术以及容器技术的采用都在增长,并且越来越多地被用在生产环境中。
+
+拥抱这些技术和方法有许多优点,比如提高灵活性和速度、降低成本、提高弹性和可靠性,以及适应新的或新兴的应用。根据 451 公司的研究,组织也面临着一些障碍,包括缺乏熟悉程度和所需的技能、新兴技术的不成熟,以及成本和安全问题。
+
+在“[Voice of the Enterprise: SDI Q4 2015 survey][2]”中,451 公司发现,超过一半的受访者(确切地说是 57.1%)认为自己在采用新技术或新兴技术方面属于较晚的跟随者;另一方面,近半受访者(48.3%)认为自己是最早或早期的采用者。
+
+这些总体倾向也表现在对其他问题的回答中。当被问及容器的实施情况时,50.3% 的人表示这完全不在他们的计划之中,另外 49.7% 的人则处于规划、试点或积极使用容器技术的不同阶段。近三分之二(65.1%)的人表示他们在使用敏捷方法进行应用开发,但只有 39.6% 的受访者表示他们正在积极拥抱 DevOps。尽管敏捷软件开发已经在行业内存在了多年,451 公司仍注意到容器和 DevOps 的采用增长强劲,这是一个实实在在的趋势。
+
+当被问及首要的三个痛点时,被提及最多的是成本或预算、人员以及遗留软件的问题。随着企业向云、DevOps 和容器转型,这些问题都需要加以解决,同时还要解决如何有效地规模化技术与协作的问题。
+
+### 当前状况
+
+DevOps 革命在很大程度上带动了行业的巨大变化,使软件开发与整个业务更高度地集成。软件的创造不再是孤立割裂的活动,而更多是一种协作性和社会化的活动。
+
+几年前还属于前沿的概念和方法已经成熟,迅速成为今天推动价值的主流技术和框架。企业依靠敏捷、精益、虚拟化、云计算、自动化等概念来简化开发,同时让工作更加高效。
+
+为了适应和发展,企业需要完成一系列的关键任务。当今面临的挑战是如何在加快发展的同时降低成本。组织需要消除 IT 与其他业务部门之间存在的障碍,并在一个由技术驱动的竞争环境中开展更有效的战略协作。
+
+敏捷、云计算、DevOps 和容器在这个过程中都起着重要的作用,而把它们联系在一起的正是有效的协作。每一种技术和方法都提供了独特的优势,但真正的价值来自于组织整体上规模化协作的能力,以及组织所使用的工具和平台。成功的 DevOps 实现还需要开发和 IT 运维团队之外的其他利益相关者的参与,包括安全、数据库、存储和业务线的团队。
+
+### 协作即平台
+
+像 GitHub 这样的在线服务和平台促进了流畅的协作。它的功能是一个在线代码仓库,但所产生的价值远远超出了存储代码本身。
+
+
+这样一个[协作平台][4]之所以有助于开发人员和团队更好地合作,是因为它提供了一个可以共享和讨论代码与流程的社区。管理者可以监视进度、跟踪将要交付的代码。开发人员可以在一个安全的环境中进行实验,再把这些实验成果推向生产环境,新的想法和实验也可以有效地传达给相应的团队。
+
+更敏捷的开发和 DevOps 的关键之一,是让开发人员能够进行测试并快速收集相关的反馈。目标是生产高质量的代码和功能,而不是把时间浪费在搭建和管理基础设施上,或者安排更多的会议来讨论问题。比如 GitHub 平台使得协作更有效、更可扩展,因为参与者可以在自己最方便的时候进行代码审查。没有必要去协调和安排代码审查会议,开发人员因此可以不间断地继续工作,从而带来更高的生产力和工作满意度。
+
+Sendachi 的 Steven Anderson 指出,GitHub 不仅是一个协作平台,也是一个和你一起工作的工具。这意味着它不仅可以帮助协作和持续集成,还能提升代码质量。
+
+协作平台的好处之一是,大型开发团队可以分解成更小的团队,更有效地专注于特定的组件。它还支持诸如文件共享这样的功能,模糊了技术性贡献与非技术性贡献的界限,增加了协作和可见性。
+
+### 协作是关键
+
+协作的重要性怎么强调都不为过。协作是 DevOps 文化的关键,也是在当今世界能够进行敏捷开发和保持竞争优势的关键。来自高层或管理层的支持以及内部的宣传推动很重要。组织还需要拥抱文化的转变:在跨职能领域朝着共同的目标融合彼此的技能。
+
+当这样的文化建立起来后,有效的协作是至关重要的,而一个协作平台则是规模化协作的必要组件,因为它简化了生产活动,减少了冗余和重复尝试,同时还产出更高质量的结果。
+
+
+--------------------------------------------------------------------------------
+
+via: http://devops.com/2016/05/16/scaling-collaboration-devops/
+
+作者:[TONY BRADLEY][a]
+译者:[Bestony](https://github.com/Bestony)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://devops.com/author/tonybsg/
+[1]: http://devops.com/2014/12/15/four-strategies-supporting-devops-collaboration/
+[2]: https://451research.com/
+[3]: https://451research.com/customer-insight-voice-of-the-enterprise-overview
+[4]: http://devops.com/events/analytics-of-collaboration-on-github/
diff --git a/translated/tech/20160518 Python 3 - An Intro to Encryption.md b/translated/tech/20160518 Python 3 - An Intro to Encryption.md
new file mode 100644
index 0000000000..5c57264aa7
--- /dev/null
+++ b/translated/tech/20160518 Python 3 - An Intro to Encryption.md
@@ -0,0 +1,280 @@
+Python 3: 加密简介
+===================================
+
+Python 3 的标准库中没有多少用来处理加密的模块,不过却有用于处理哈希的库。在这里我们会对哈希做一个简单的介绍,但重点会放在两个第三方软件包:PyCrypto 和 cryptography 上。我们将学习如何使用这两个库来加密和解密字符串。
+
+---
+
+### 哈希
+
+如果需要用到安全哈希算法或是消息摘要算法,那么你可以使用标准库中的 **hashlib** 模块。这个模块包含了符合标准的安全哈希算法,包括 SHA1、SHA224、SHA256、SHA384、SHA512 以及 RSA 的 MD5 算法。Python 的 **zlib** 模块也提供 adler32 以及 crc32 哈希函数。
+
+一个哈希最常见的用法是,存储密码的哈希值而非密码本身。当然了,使用的哈希函数需要稳健一点,否则容易被破解。另一个常见的用法是,计算一个文件的哈希值,然后将这个文件和它的哈希值分别发送。接收到文件的人可以计算文件的哈希值,检验是否与接收到的哈希值相符。如果两者相符,就说明文件在传送的过程中未被篡改。
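+
+(译注:为了说明上面这段关于文件校验的用法,这里补充一个简单的示例,演示如何用 hashlib 分块计算文件的 SHA-256 值,并与发送方提供的哈希值比对。示例中的文件名 `demo.bin` 和变量 `expected` 的取值都是假设的,仅供参考。)
+
+```
+import hashlib
+
+def sha256_of_file(path, chunk_size=65536):
+    """分块读取文件并计算其 SHA-256 值,避免一次性读入大文件。"""
+    sha = hashlib.sha256()
+    with open(path, 'rb') as f:
+        for chunk in iter(lambda: f.read(chunk_size), b''):
+            sha.update(chunk)
+    return sha.hexdigest()
+
+# 假设这是发送方随文件一并提供的哈希值(此处取值仅为演示)
+expected = 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'
+if sha256_of_file('demo.bin') == expected:
+    print('哈希相符,文件未被篡改')
+else:
+    print('哈希不匹配,文件可能已被篡改')
+```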
+让我们试着创建一个 md5 哈希:
+
+```
+>>> import hashlib
+>>> md5 = hashlib.md5()
+>>> md5.update('Python rocks!')
+Traceback (most recent call last):
+  File "", line 1, in 
+    md5.update('Python rocks!')
+TypeError: Unicode-objects must be encoded before hashing
+>>> md5.update(b'Python rocks!')
+>>> md5.digest()
+b'\x14\x82\xec\x1b#d\xf6N}\x16*+[\x16\xf4w'
+```
+
+让我们花点时间一行一行来讲解。首先,我们导入 **hashlib**,然后创建一个 md5 哈希对象的实例。接着,我们向这个实例中添加一个字符串,却得到了报错信息。原来,计算 md5 哈希时,需要传入字节串而非普通字符串。正确添加字节串后,我们调用它的 **digest** 方法来得到哈希值。如果你想要十六进制的哈希值,也可以用以下方法:
+
+```
+>>> md5.hexdigest()
+'1482ec1b2364f64e7d162a2b5b16f477'
+```
+
+实际上,有一种更精简的方法来创建哈希,下面我们看一下用这种方法创建一个 sha1 哈希:
+
+```
+>>> sha = hashlib.sha1(b'Hello Python').hexdigest()
+>>> sha
+'422fbfbc67fe17c86642c5eaaa48f8b670cbed1b'
+```
+
+可以看到,我们可以同时创建一个哈希实例并且调用其 digest 方法。然后,我们打印出这个哈希值看一下。这里我使用 sha1 哈希函数作为例子,但它不是特别安全,读者可以随意尝试其他的哈希函数。
+
+---
+
+### 密钥导出
+
+Python 的标准库对密钥导出的支持较弱。实际上,hashlib 库提供的唯一方法就是 **pbkdf2_hmac** 函数。它是 PKCS#5 中基于口令的密钥导出函数,使用 HMAC 作为伪随机函数。因为它支持加盐和迭代操作,你可以使用类似的方法来哈希你的密码。例如,如果你打算使用 SHA-256 哈希方法,你将需要至少 16 个字节的盐,以及最少 100000 次的迭代操作。
+
+简单来说,盐就是随机的数据,被加入到哈希的过程中,以加大破解的难度。这基本可以保护你的密码免受字典攻击和彩虹表攻击。
+
+让我们看一个简单的例子:
+
+```
+>>> import binascii
+>>> dk = hashlib.pbkdf2_hmac(hash_name='sha256',
+        password=b'bad_password34',
+        salt=b'bad_salt',
+        iterations=100000)
+>>> binascii.hexlify(dk)
+b'6e97bad21f6200f9087036a71e7ca9fa01a59e1d697f7e0284cd7f9b897d7c02'
+```
+
+这里,我们用 SHA256 对一个密码进行哈希,使用了一个糟糕的盐,但经过了 100000 次迭代操作。当然,SHA 实际上并不被推荐用来创建密码的密钥。你应该使用类似 **scrypt** 的算法来替代。另一个不错的选择是使用一个叫 **bcrypt** 的第三方库,它是被专门设计出来哈希密码的。
+
+---
+
+### PyCryptodome
+
+PyCrypto 可能是 Python 中密码学方面最有名的第三方软件包。可惜的是,它的开发工作于 2012 年就已停止。其他人还在继续发布最新版本的 PyCrypto,如果你不介意使用第三方的二进制包,仍可以取得 Python 3.5 的相应版本。比如,我在 Github (https://github.com/sfbahr/PyCrypto-Wheels) 上找到了对应 Python 3.5 的 PyCrypto 二进制包。
+
+幸运的是,有一个该项目的分支 PyCryptodome 取代了 PyCrypto。为了在 Linux 上安装它,你可以使用以下 pip 命令:
+
+```
+pip install pycryptodome
+```
+
+在 Windows 系统上安装则稍有不同:
+
+```
+pip install pycryptodomex
+```
+
+如果你遇到了问题,可能是因为你没有安装正确的依赖包(译者注:如 python-devel),或者你的 Windows 系统需要一个编译器。如果你需要安装上的帮助或技术支持,可以访问 PyCryptodome 的[网站][1]。
+
+还值得注意的是,PyCryptodome 在 PyCrypto 最后版本的基础上有很多改进。非常值得去访问它们的主页,看看有什么新的特性。
+
+### 加密字符串
+
+访问了他们的主页之后,我们可以看一些例子。在第一个例子中,我们将使用 DES 算法来加密一个字符串:
+
+```
+>>> from Crypto.Cipher import DES
+>>> key = 'abcdefgh'
+>>> def pad(text):
+        while len(text) % 8 != 0:
+            text += ' '
+        return text
+>>> des = DES.new(key, DES.MODE_ECB)
+>>> text = 'Python rocks!'
+>>> padded_text = pad(text)
+>>> encrypted_text = des.encrypt(text)
+Traceback (most recent call last):
+  File "", line 1, in 
+    encrypted_text = des.encrypt(text)
+  File "C:\Programs\Python\Python35-32\lib\site-packages\Crypto\Cipher\blockalgo.py", line 244, in encrypt
+    return self._cipher.encrypt(plaintext)
+ValueError: Input strings must be a multiple of 8 in length
+>>> encrypted_text = des.encrypt(padded_text)
+>>> encrypted_text
+b'>\xfc\x1f\x16x\x87\xb2\x93\x0e\xfcH\x02\xd59VQ'
+```
+
+这段代码稍有些复杂,让我们一点点来看。首先需要注意的是,DES 加密使用的密钥长度为 8 个字节,这也是我们将密钥变量设置为 8 个字符的原因。而我们需要加密的字符串的长度必须是 8 的倍数,所以我们创建了一个名为 **pad** 的函数,来给一个字符串末尾添加空格,直到它的长度是 8 的倍数。然后,我们创建了一个 DES 的实例,以及我们需要加密的文本。我们还创建了一个经过填充处理的文本。我们尝试着对未经填充处理的文本进行加密,啊欧,报错了!我们需要对经过填充处理的文本进行加密,然后得到加密的字符串。
+(译者注:encrypt 函数的参数应为字节串,即代码应写作 `encrypted_text = des.encrypt(padded_text.encode('utf-8'))`;在较新版本的 PyCryptodome 中,密钥同样应该是字节串,如 `key = b'abcdefgh'`。)
+
+知道了如何加密,还要知道如何解密:
+
+```
+>>> des.decrypt(encrypted_text)
+b'Python rocks!   '
+```
+
+幸运的是,解密非常容易,我们只需要调用 des 对象的 **decrypt** 方法,就可以得到我们原来的 byte 类型字符串了。下一个任务是学习如何用 RSA 算法加密和解密一个文件。首先,我们需要创建一些 RSA 密钥。
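+
+(译注:上面例子中的 pad 函数是用空格补齐的,解密后需要自行去掉这些填充;而且如果原文本身以空格结尾,这种填充就无法无歧义地还原。下面是一个常见的 PKCS#7 风格填充的最小示意,假设块大小为 8 字节,仅供参考,并非原文作者的实现:)
+
+```
+def pkcs7_pad(data: bytes, block_size: int = 8) -> bytes:
+    """按 PKCS#7 规则填充:缺几个字节,就补几个值等于该数目的字节。"""
+    pad_len = block_size - len(data) % block_size
+    return data + bytes([pad_len]) * pad_len
+
+def pkcs7_unpad(padded: bytes) -> bytes:
+    """去掉 PKCS#7 填充:最后一个字节的值就是填充的长度。"""
+    pad_len = padded[-1]
+    return padded[:-pad_len]
+
+# 简单验证一下往返是否无损
+padded = pkcs7_pad(b'Python rocks!')
+assert pkcs7_unpad(padded) == b'Python rocks!'
+```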
+
+---
+
+### PyCryptodome
+
+PyCrypto 可能是 Python 中密码学方面最有名的第三方软件包。可惜的是,它的开发工作于 2012 年就已停止。其他人还在继续发布最新版本的 PyCrypto,如果你不介意使用第三方的二进制包,仍可以取得 Python 3.5 的相应版本。比如,我在 Github (https://github.com/sfbahr/PyCrypto-Wheels) 上找到了对应 Python 3.5 的 PyCrypto 二进制包。
+
+幸运的是,有一个该项目的分支 PyCryptodome 取代了 PyCrypto 。为了在 Linux 上安装它,你可以使用以下 pip 命令:
+
+```
+pip install pycryptodome
+```
+
+在 Windows 系统上安装则稍有不同:
+
+```
+pip install pycryptodomex
+```
+
+如果你遇到了问题,可能是因为你没有安装正确的依赖包(译者注:如 python-devel),或者你的 Windows 系统需要一个编译器。如果你需要安装上的帮助或技术支持,可以访问 PyCryptodome 的[网站][1]。
+
+还值得注意的是,PyCryptodome 在 PyCrypto 最后版本的基础上有很多改进。非常值得去访问它们的主页,看看有什么新的特性。
+
+### 加密字符串
+
+访问了他们的主页之后,我们可以看一些例子。在第一个例子中,我们将使用 DES 算法来加密一个字符串:
+
+```
+>>> from Crypto.Cipher import DES
+>>> key = 'abcdefgh'
+>>> def pad(text):
+        while len(text) % 8 != 0:
+            text += ' '
+        return text
+>>> des = DES.new(key, DES.MODE_ECB)
+>>> text = 'Python rocks!'
+>>> padded_text = pad(text)
+>>> encrypted_text = des.encrypt(text)
+Traceback (most recent call last):
+  File "", line 1, in
+    encrypted_text = des.encrypt(text)
+  File "C:\Programs\Python\Python35-32\lib\site-packages\Crypto\Cipher\blockalgo.py", line 244, in encrypt
+    return self._cipher.encrypt(plaintext)
+ValueError: Input strings must be a multiple of 8 in length
+>>> encrypted_text = des.encrypt(padded_text)
+>>> encrypted_text
+b'>\xfc\x1f\x16x\x87\xb2\x93\x0e\xfcH\x02\xd59VQ'
+```
+
+这段代码稍有些复杂,让我们一点点来看。首先需要注意的是,DES 加密使用的密钥长度为 8 个字节,这也是我们将密钥变量设置为 8 个字符的原因。而我们需要加密的字符串的长度必须是 8 的倍数,所以我们创建了一个名为 **pad** 的函数,来给一个字符串末尾添加空格,直到它的长度是 8 的倍数。然后,我们创建了一个 DES 的实例,以及我们需要加密的文本。我们还创建了一个经过填充处理的文本。我们尝试着对未经填充处理的文本进行加密,啊欧,报错了!我们需要对经过填充处理的文本进行加密,然后得到加密的字符串。
+(译者注:在 Python 3 下,encrypt 函数的参数应为 byte 类型字符串,代码为:`encrypted_text = des.encrypt(padded_text.encode('utf-8'))`)
+
+知道了如何加密,还要知道如何解密:
+
+```
+>>> des.decrypt(encrypted_text)
+b'Python rocks!   '
+```
+
+幸运的是,解密非常容易,我们只需要调用 des 对象的 **decrypt** 方法就可以得到我们原来的 byte 类型字符串了。下一个任务是学习如何用 RSA 算法加密和解密一个文件。首先,我们需要创建一些 RSA 密钥。
+
+### 创建 RSA 密钥
+
+如果你希望使用 RSA 算法加密数据,那么你需要拥有访问 RSA 公钥和私钥的权限,否则你需要生成一组自己的密钥对。在这个例子中,我们将生成自己的密钥对。创建 RSA 密钥非常容易,所以我们将在 Python 解释器中完成。
+
+```
+>>> from Crypto.PublicKey import RSA
+>>> code = 'nooneknows'
+>>> key = RSA.generate(2048)
+>>> encrypted_key = key.exportKey(passphrase=code, pkcs=8,
+        protection="scryptAndAES128-CBC")
+>>> with open('/path_to_private_key/my_private_rsa_key.bin', 'wb') as f:
+        f.write(encrypted_key)
+>>> with open('/path_to_public_key/my_rsa_public.pem', 'wb') as f:
+        f.write(key.publickey().exportKey())
+```
+
+首先我们从 **Crypto.PublicKey** 包中导入 **RSA**,然后创建一个傻傻的密码。接着我们生成 2048 位的 RSA 密钥。现在我们到了关键的部分。为了生成私钥,我们需要调用 RSA 密钥实例的 **exportKey** 方法,然后传入密码,使用的 PKCS 标准,以及加密方案这三个参数。之后,我们把私钥写入磁盘的文件中。
+
+接下来,我们通过 RSA 密钥实例的 **publickey** 方法创建我们的公钥。我们使用方法链调用 publickey 和 exportKey 方法生成公钥,同样将它写入磁盘上的文件。
+
+### 加密文件
+
+有了私钥和公钥之后,我们就可以加密一些数据,并写入文件了。这儿有个比较标准的例子:
+
+```
+from Crypto.PublicKey import RSA
+from Crypto.Random import get_random_bytes
+from Crypto.Cipher import AES, PKCS1_OAEP
+
+with open('/path/to/encrypted_data.bin', 'wb') as out_file:
+    recipient_key = RSA.import_key(
+        open('/path_to_public_key/my_rsa_public.pem').read())
+    session_key = get_random_bytes(16)
+
+    cipher_rsa = PKCS1_OAEP.new(recipient_key)
+    out_file.write(cipher_rsa.encrypt(session_key))
+
+    cipher_aes = AES.new(session_key, AES.MODE_EAX)
+    data = b'blah blah blah Python blah blah'
+    ciphertext, tag = cipher_aes.encrypt_and_digest(data)
+
+    out_file.write(cipher_aes.nonce)
+    out_file.write(tag)
+    out_file.write(ciphertext)
+```
+
+代码的前三行导入 PyCryptodome 包。然后我们打开一个文件用于写入数据。接着我们导入公钥赋给一个变量,创建一个 16 字节的会话密钥。在这个例子中,我们将使用混合加密方法,即 PKCS#1 OAEP ,也就是最优非对称加密填充。这允许我们向文件中写入任意长度的数据。接着我们创建 AES 加密,要加密的数据,然后加密数据。我们将得到加密的文本和消息认证码。最后,我们将随机数,消息认证码和加密的文本写入文件。
+
+顺便提一下,随机数通常是真随机或伪随机数,只是用来进行密码通信的。对于 AES 加密,其密钥长度最少是 16 个字节。随意用一个你喜欢的编辑器试着打开这个被加密的文件,你应该只能看到乱码。
+
+现在让我们学习如何解密我们的数据。
+
+```
+from Crypto.PublicKey import RSA
+from Crypto.Cipher import AES, PKCS1_OAEP
+
+code = 'nooneknows'
+
+with open('/path/to/encrypted_data.bin', 'rb') as fobj:
+    private_key = RSA.import_key(
+        open('/path_to_private_key/my_private_rsa_key.bin').read(),
+        passphrase=code)
+
+    enc_session_key, nonce, tag, ciphertext = [ fobj.read(x)
+                                                for x in (private_key.size_in_bytes(),
+                                                16, 16, -1) ]
+
+    cipher_rsa = PKCS1_OAEP.new(private_key)
+    session_key = cipher_rsa.decrypt(enc_session_key)
+
+    cipher_aes = AES.new(session_key, AES.MODE_EAX, nonce)
+    data = cipher_aes.decrypt_and_verify(ciphertext, tag)
+
+print(data)
+```
+
+如果你认真看了上一个例子,这段代码应该很容易理解。在这里,我们先读取二进制的加密文件,然后导入私钥。注意,当你导入私钥时,需要提供一个密码,否则会出现错误。然后,我们从文件中读取数据,首先是加密的会话密钥,然后是 16 字节的随机数和 16 字节的消息认证码,最后是剩下的加密的数据。
+
+接下来我们需要解密出会话密钥,重新创建 AES 密钥,然后解密出数据。
+
+你还可以用 PyCryptodome 库做更多的事。不过我们要接着讨论,在 Python 中还可以用什么来满足我们加密解密的需求。
+
+---
+
+### cryptography 包
+
+**cryptography** 的目标是成为人类易于使用的密码学包,就像 **requests** 是人类易于使用的 HTTP 库一样。这个想法使你能够创建简单安全,易于使用的加密方案。如果有需要的话,你也可以使用一些底层的密码学基元,但这也需要你知道更多的细节,否则创建的东西将是不安全的。
+
+如果你使用的 Python 版本是 3.5,你可以使用 pip 安装,如下:
+
+```
+pip install cryptography
+```
+
+你会看到 cryptography 包还安装了一些依赖包(译者注:如 libopenssl-devel)。如果安装都顺利,我们就可以试着加密一些文本了。让我们使用 **Fernet** 对称加密算法,它保证了你加密的任何信息在不知道密码的情况下不能被篡改或读取。Fernet 还通过 **MultiFernet** 支持密钥轮换。下面让我们看一个简单的例子:
+
+```
+>>> from cryptography.fernet import Fernet
+>>> cipher_key = Fernet.generate_key()
+>>> cipher_key
+b'APM1JDVgT8WDGOWBgQv6EIhvxl4vDYvUnVdg-Vjdt0o='
+>>> cipher = Fernet(cipher_key)
+>>> text = b'My super secret message'
+>>> encrypted_text = cipher.encrypt(text)
+>>> encrypted_text
+(b'gAAAAABXOnV86aeUGADA6mTe9xEL92y_m0_TlC9vcqaF6NzHqRKkjEqh4d21PInEP3C9HuiUkS9f'
+ b'6bdHsSlRiCNWbSkPuRd_62zfEv3eaZjJvLAm3omnya8=')
+>>> decrypted_text = cipher.decrypt(encrypted_text)
+>>> decrypted_text
+b'My super secret message'
+```
+
+首先我们需要导入 Fernet,然后生成一个密钥。我们输出密钥看看它是什么样儿。如你所见,它是一个随机的字节串。如果你愿意的话,可以试着多运行 **generate_key** 方法几次,生成的密钥会是不同的。然后我们使用这个密钥生成 Fernet 密码实例。
+
+现在我们有了用来加密和解密消息的密码。下一步是创建一个需要加密的消息,然后使用 **encrypt** 方法对它加密。我打印出加密的文本,然后你可以看到你再也读不懂它了。为了**解密**出我们的秘密消息,我们只需调用 decrypt 方法,并传入加密的文本作为参数。结果就是我们得到了消息字节串形式的纯文本。
+
+---
+
+### 小结
+
+这一章仅仅浅显地介绍了 PyCryptodome 和 cryptography 这两个包的使用。不过这也确实给了你一个关于如何加密解密字符串和文件的简述。请务必阅读文档,做做实验,看看还能做些什么!
+
+---
+
+### 相关阅读
+
+[Github][2] 上 Python 3 的 PyCrypto Wheels
+
+PyCryptodome 的[文档][3]
+
+Python 的加密[服务][4]
+
+Cryptography 包的[官网][5]
+
+------------------------------------------------------------------------------
+
+via: http://www.blog.pythonlibrary.org/2016/05/18/python-3-an-intro-to-encryption/
+
+作者:[Mike][a]
+译者:[Cathon](https://github.com/Cathon)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.blog.pythonlibrary.org/author/mld/
+[1]: http://pycryptodome.readthedocs.io/en/latest/
+[2]: https://github.com/sfbahr/PyCrypto-Wheels
+[3]: http://pycryptodome.readthedocs.io/en/latest/src/introduction.html
+[4]: https://docs.python.org/3/library/crypto.html
+[5]: https://cryptography.io/en/latest/
diff --git a/translated/tech/20160519 The future of sharing integrating Pydio and ownCloud.md b/translated/tech/20160519 The future of sharing integrating Pydio and ownCloud.md
new file mode 100644
index 0000000000..bf5a25c77e
--- /dev/null
+++ b/translated/tech/20160519 The future of sharing integrating Pydio and ownCloud.md
@@ -0,0 +1,65 @@
+分享的未来:整合 Pydio 与 ownCloud
+=========================================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BIZ_darwincloud_520x292_0311LL.png?itok=5yWIaEDe)
+>图片来源:opensource.com
+
+开源共享生态圈中容纳了许多各异的项目,每一个都能提供自己的解决方案,且每一个都各有特色。有很多原因导致你选择开源的解决方案,而非 Dropbox、Google Drive、iCloud 或 OneDrive 这些商业的解决方案。这些商业的解决方案虽然能让你不必为如何管理数据担心,但也不可避免地带着种种限制,其中就包含对原有基础结构的控制和整合不足。
+
+对于用户而言,仍有相当一部分文件分享和同步的替代品可供选择,其中就包括了 Pydio 和 ownCloud。
+
+### Pydio
+
+Pydio(Put your data in orbit,把你的数据放上轨道)项目由一位作曲家 Charles du Jeu 发起,起初他只是需要一种与乐队成员分享大型音频文件的方法。[Pydio][1] 是一种文件分享与同步的解决方案,支持多种存储后端,设计时还同时考虑了开发者和系统管理员两方面的需求。它在世界各地有逾百万的下载量,已被翻译成 27 种语言。
+
+项目在很开始的时候便开源了,先是在 [SourceForge][2] 上茁壮地成长,现在已在 [GitHub][3]
上安了家。
+
+用户界面基于 Google 的 [Material 设计][4]。用户可以使用现有的传统文件基础结构,也可以本地部署 Pydio,并通过 web、桌面和移动端应用随时随地地管理自己的东西。对于管理员来说,细粒度的访问权限绝对是配置访问时的利器。
+
+在 [Pydio 社区][5],你可以找到许多帮助你快速上手的资源。Pydio 网站[对于如何为 Pydio GitHub 仓库贡献][6]给出了明确的指导方案。[论坛][7]中也包含了开发者板块和社区板块。
+
+### ownCloud
+
+[ownCloud][8] 在世界各地拥有逾 800 万用户,它是开源的,支持自托管的文件同步与共享技术。同步客户端支持所有主流平台,也可以通过 Web 界面以 WebDAV 方式访问。ownCloud 拥有简单的使用界面、强大的管理工具,以及大规模的共享及协作功能——以满足用户管理数据时的需求。
+
+ownCloud 的开放式架构是通过 API 和为应用提供平台来实现可扩展性的。迄今已有逾 300 款应用,功能包括处理像日历、联系人、邮件、音乐、密码、笔记等诸多数据类型。ownCloud 由一个数百位贡献者组成的国际化社区开发,它十分安全,并且能够做到小到为一个树莓派、大到为有数百万用户的 PB 级存储集群量身定制。
+
+### 联合共享 (Federated sharing)
+
+文件共享开始转向团队合作时代,而标准化为合作提供了坚实的土壤。
+
+联合共享——一个由 [OpenCloudMesh][9] 项目提供的新开放标准,就是在这个方向迈出的一步。先不说别的,在支持该标准的服务端上,可以像 Pydio 和 ownCloud 那样分享文件和文件夹。
+
+这种服务端到服务端的分享方式由 ownCloud 7 率先引入,它可以让你挂载远程服务端上共享的文件,实际上就是把你所有的云组成一朵“云中云”。你可以直接创建共享链接,让用户在其他支持联合云共享的服务端上使用。
+
+实现这个新的 API 允许存储解决方案之间更深层次的集成,同时保留了原有平台的安全、控制和属性。
+
+“交换和共享文件是当下和未来不可或缺的东西。”ownCloud 的创始人 Frank Karlitschek 说道:“正因如此,采用联合和分布的方式而非集中的数据孤岛就显得至关重要。[联合共享]的设计初衷便是在保证安全和用户隐私的同时,追求分享的无缝、至简之道。”
+
+### 下一步是什么呢?
+
+正如 OpenCloudMesh 做的那样,这一文件共享的新开放标准将会通过像 Pydio 和 ownCloud 这样的机构和公司合作推广。ownCloud 9 已经引入联合服务端间交换用户列表的功能,让你的用户在你的服务器上享有同样的无缝体验。将来,用一组(同样是联合式的!)中央地址簿服务来检索其他联合云 ID 的想法,可能会把云间合作推向一个新的高度。
+
+这一举措无疑有助于日益开放的技术社区中的那些成员方便地讨论、开发,并推动“OCM 分享 API”作为一个厂商中立协议。所有领导 OCM 项目的合作伙伴都全心致力于开放 API 的设计原理,并欢迎其他开源的文件分享和同步社区参与并加入其中。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/business/16/5/sharing-files-pydio-owncloud
+
+作者:[ben van 't ende][a]
+译者:[martin2011qi](https://github.com/martin2011qi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/benvantende
+[1]: https://pydio.com/
+[2]: https://sourceforge.net/projects/ajaxplorer/
+[3]: https://github.com/pydio/
+[4]: https://www.google.com/design/spec/material-design/introduction.html
+[5]: https://pydio.com/en/community
+[6]: https://pydio.com/en/community/contribute
+[7]: https://pydio.com/forum/f
+[8]: https://owncloud.org/
+[9]: https://wiki.geant.org/display/OCM/Open+Cloud+Mesh
diff --git a/translated/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md b/translated/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md
new file mode 100644
index 0000000000..d7b6bf6c2e
--- /dev/null
+++ b/translated/tech/20160601 Learn Python Control Flow and Loops to Write and Tune Shell Scripts – Part 2.md
@@ -0,0 +1,285 @@
+学习使用 Python 控制流和循环来编写和调优 Shell 脚本 —— Part 2
+======================================================================================
+
+在 [Python 系列][1]之前的文章里,我们分享了 Python 的简介,它的命令行 shell 和 IDLE(译者注:Python 自带的一个 IDE)。我们也演示了如何进行数值运算,如何用变量存储值,还有如何打印那些值到屏幕上。最后,我们通过一个练习示例讲解了面向对象编程中方法和属性的概念。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Write-Shell-Scripts-in-Python-Programming.png)
+>在 Python 编程中写 Linux Shell 脚本
+
+本篇中,我们会讨论控制流(根据用户输入的信息、计算的结果,或者一个变量的当前值选择不同的动作)和循环(自动重复执行任务),接着应用我们目前所学的东西,编写一个简单的 shell 脚本,这个脚本会显示操作系统类型、主机名、内核发行版、版本号和机器硬件名字。
+
+这个例子尽管很基础,但是会帮助我们证明:借助 Python 面向对象的特性来编写 shell 脚本,会比使用一些 bash 工具更简单些。
+
+换句话说,我们想从这里出发:
+
+```
+# uname -snrvm
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Hostname-of-Linux.png)
+>检查 Linux 的主机名
+
+到
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Linux-Hostname-Using-Python-Script.png)
+>用 Python 脚本来检查 Linux 的主机名
+
+或者
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Script-to-Check-Linux-System-Information.png)
+>用脚本检查 Linux 系统信息
+
+看着不错,不是吗?那我们就挽起袖子,开干吧。
+
+### Python 中的控制流
+
+如我们刚说的那样,控制流允许我们根据一个给定的条件,选择不同的输出结果。在 Python 中最简单的实现就是一个 if/else 语句。
+
+基本语法是这样的:
+
+```
+if condition:
+    # action 1
+else:
+    # action 2
+```
+
+当 condition 求值为真(true),下面的代码块就会被执行(`# action 1`代表的部分)。否则,else 下面的代码就会运行。
+condition 可以是任何表达式,只要可以求得值为真或者假。
+
+举个例子:
+
+1. 1 < 3 # 真
+
+2. firstName == "Gabriel" # 对 firstName 为 Gabriel 的人是真,对其他不叫 Gabriel 的人为假
+
+ - 在第一个例子中,我们比较了两个值,判断 1 是否小于 3。
+ - 在第二个例子中,我们比较了 firstName(一个变量)与字符串 “Gabriel”,看在当前执行的位置,firstName 的值是否等于该字符串。
+ - 条件和 else 表达式都必须带着一个冒号(:)。
+ - 缩进在 Python 中非常重要。同样缩进下的行被认为是相同的代码块。完整的例子见下面的代码。
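+
+把上面的语法和条件放到一起,就是一个完整的小例子(这是本文补充的示意代码):
+
+```
+firstName = "Gabriel"
+
+if firstName == "Gabriel":
+    # 条件为真时执行这里
+    print("Hello, Gabriel!")
+else:
+    # 其他情况执行这里
+    print("Hello, stranger!")
+```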
+
+请注意,if/else 表达式只是 Python 中许多控制流工具中的一个。我们先在这里了解一下,后面会用在我们的脚本中。你可以在[官方文档][2]中学到更多工具。
+
+### Python 中的循环
+
+简单来说,循环就是一组可以重复执行的指令或表达式序列——只要某个条件为真就一直执行,或者对一个列表里的条目逐个执行一次。
+
+Python 中最简单的循环,就是用 for 循环迭代一个给定列表的元素,或者迭代一个字符串,从第一个字符开始到最后一个字符结束。
+
+基本语句:
+
+```
+for x in example:
+	# do this
+```
+
+这里的 example 可以是一个列表或者一个字符串。如果是列表,变量 x 就代表列表中的每个元素;如果是字符串,x 就代表字符串中的每个字符。
+
+```
+>>> rockBands = []
+>>> rockBands.append("Roxette")
+>>> rockBands.append("Guns N' Roses")
+>>> rockBands.append("U2")
+>>> for x in rockBands:
+		print(x)
+or
+>>> firstName = "Gabriel"
+>>> for x in firstName:
+		print(x)
+```
+
+上面例子的输出如下图所示:
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Loops-in-Python.png)
+>学习 Python 中的循环
+
+### Python 模块
+
+很明显,必须有个途径可以把一系列的 Python 指令和表达式保存到文件里,然后在需要的时候再取出来。
+
+这正是模块的作用。特别地,os 模块提供了一个通往操作系统底层的接口,允许我们完成许多通常在命令行下进行的操作。
+
+没错,os 模块包含了许多可以调用的方法和属性,就如我们之前文章里讲解的那样。不过,我们需要先使用 import 关键词导入(或者叫包含)模块到开发环境里来:
+
+```
+>>> import os
+```
+
+我们来打印出当前的工作目录:
+
+```
+>>> os.getcwd()
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Python-Modules.png)
+>学习 Python 模块
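+
+稍后脚本里要用到的 os.uname() 也来自这个模块,它返回一个包含系统信息的五元组(sysname、nodename、release、version、machine),可以按下标访问(这是本文补充的演示,输出内容因机器而异):
+
+```
+>>> import os
+>>> systemInfo = os.uname()
+>>> systemInfo[0]
+'Linux'
+```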
+
+现在,让我们把所有这些(包括之前文章里讨论的概念)结合在一起,编写需要的脚本。
+
+### Python 脚本
+
+以一个声明开始一个脚本是个不错的想法,它可以表明脚本的目的、发布所依据的许可证,以及一个列出所做修改的修订历史。尽管这主要是个人喜好,但这会让我们的工作看起来比较专业。
+
+这里有个脚本,它的输出正如本文最前面展示的那样。脚本中做了大量的注释,以便让大家可以理解发生了什么。
+
+在进行下一步之前,花点时间来理解它。注意,我们是如何使用一个 if/else 结构,判断每个字段标题的长度是否比字段本身的值还大的。
+
+基于这个结果,我们用空格去填充字段标题与下一个字段之间的间隔。同时,我们使用一定数量的短线作为字段标题与其值之间的分隔符。
+
+```
+#!/usr/bin/python3
+# Change the above line to #!/usr/bin/python if you don't have Python 3 installed
+
+# Script name: uname.py
+# Purpose: Illustrate Python's OOP capabilities to write shell scripts more easily
+# License: GPL v3 (http://www.gnu.org/licenses/gpl.html)
+
+# Copyright (C) 2016 Gabriel Alejandro Cánepa
+# Facebook / Skype / G+ / Twitter / Github: gacanepa
+# Email: gacanepa (at) gmail (dot) com
+
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+# REVISION HISTORY
+# DATE        VERSION AUTHOR         CHANGE DESCRIPTION
+# ---------- ------- --------------
+# 2016-05-28 1.0     Gabriel Cánepa Initial version
+
+# Import the os module
+import os
+
+# Assign the output of os.uname() to the systemInfo variable
+# os.uname() returns a 5-string tuple (sysname, nodename, release, version, machine)
+# Documentation: https://docs.python.org/3.2/library/os.html#module-os
+systemInfo = os.uname()
+
+# This is a fixed array with the desired captions in the script output
+headers = ["Operating system","Hostname","Release","Version","Machine"]
+
+# Initial value of the index variable. It is used to define the
+# index of both systemInfo and headers in each step of the iteration.
+index = 0
+
+# Initial value of the caption variable.
+caption = ""
+
+# Initial value of the values variable
+values = ""
+
+# Initial value of the separators variable
+separators = ""
+
+# Start of the loop
+for item in systemInfo:
+    if len(item) < len(headers[index]):
+        # A string containing dashes to the length of item[index] or headers[index]
+        # To repeat a character(s), enclose it within quotes followed
+        # by the star sign (*) and the desired number of times.
+        separators = separators + "-" * len(headers[index]) + " "
+        caption = caption + headers[index] + " "
+        values = values + systemInfo[index] + " " * (len(headers[index]) - len(item)) + " "
+    else:
+        separators = separators + "-" * len(item) + " "
+        caption = caption + headers[index] + " " * (len(item) - len(headers[index]) + 1)
+        values = values + item + " "
+    # Increment the value of index by 1
+    index = index + 1
+# End of the loop
+
+# Print the variable named caption converted to uppercase
+print(caption.upper())
+
+# Print separators
+print(separators)
+
+# Print values (items in systemInfo)
+print(values)
+
+# INSTRUCTIONS:
+# 1) Save the script as uname.py (or another name of your choosing) and give it execute permissions:
+# chmod +x uname.py
+# 2) Execute it:
+# ./uname.py
+```
+
+如果你已经保存上面的脚本到一个文件里,给文件执行权限,并且运行它,像代码底部描述的那样:
+
+```
+# chmod +x uname.py
+# ./uname.py
+```
+
+如果试图运行脚本时,你得到了如下的错误:
+
+```
+-bash: ./uname.py: /usr/bin/python3: bad interpreter: No such file or directory
+```
+
+这意味着你没有安装 Python 3。如果是那样的话,你要么安装 Python 3 的包,要么替换解释器那行(如果你按照之前文章里概述的步骤更新了 Python 可执行文件的软链接,要特别注意并且非常小心):
+
+```
+#!/usr/bin/python3
+```
+
+为
+
+```
+#!/usr/bin/python
+```
+
+这样会使用已安装的 Python 2 版本去执行该脚本。
+
+**注意**:该脚本在 Python 2.x 与 Python 3.x 上都测试成功过了。
+
+尽管比较粗糙,你可以认为该脚本就是一个 Python 模块。这意味着你可以在 IDLE 中打开它(File → Open… → Select file):
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Open-Python-in-IDLE.png)
+>在 IDLE 中打开 Python
+
+一个包含有文件内容的新窗口就会打开。然后执行 Run → Run module(或者按 F5)。脚本的输出就会在原 Shell 里显示出来:
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Run-Python-Script.png)
+>执行 Python 脚本
+
+如果你想纯粹用 bash 写一个脚本,也获得同样的结果,你可能需要结合使用 [awk][3]、[sed][4],并且借助复杂的方法来存储与获取列表中的元素(更不用说还得用 tr 命令将小写字母转为大写)。
+
+另外,所有的 Linux 发行版都集成了至少一个 Python 版本(2.x 或者 3.x,或者两者都有)。你还需要依赖 shell 去完成同样的目标吗?那样的话,你可能还得为不同的 shell 编写不同的版本。
+
+这里演示的面向对象编程的特性,会成为一个系统管理员的得力助手。
+
+**注意**:你可以在我的 Github 仓库里获得[这个 python 脚本][5](或者其他的)。
+
+### 总结
+
+这篇文章里,我们讲解了 Python 中控制流、循环/迭代和模块的概念。我们也演示了如何利用 Python 中 OOP 的方法和属性,来简化复杂的 shell 脚本。
+
+你有任何其他希望去验证的想法吗?开始吧,写出自己的 Python 脚本,如果有任何问题可以咨询我们。不必犹豫,在分割线下面留下评论,我们会尽快回复你。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/learn-python-programming-to-write-linux-shell-scripts/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
+
+作者:[Gabriel Cánepa][a]
+译者:[wi-cuckoo](https://github.com/wi-cuckoo)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/gacanepa/
+[1]: http://www.tecmint.com/learn-python-programming-and-scripting-in-linux/
+[2]: https://docs.python.org/3/tutorial/controlflow.html
+[3]: http://www.tecmint.com/use-linux-awk-command-to-filter-text-string-in-files/
+[4]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
+[5]: https://github.com/gacanepa/scripts/blob/master/python/uname.py
+
diff --git a/translated/tech/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md b/translated/tech/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md
new file mode 100644
index 0000000000..b64fa51ab0
--- /dev/null
+++ b/translated/tech/20160602 How to mount your Google Drive on Linux with google-drive-ocamlfuse.md
@@ -0,0 +1,68 @@
+教你用 google-drive-ocamlfuse 在 Linux 上挂载 Google Drive
+=====================
+
+>如果你在找一个方便的方式在 Linux 机器上挂载你的 Google Drive 文件夹, Jack Wallen 将教你怎么使用 google-drive-ocamlfuse 来挂载 Google Drive。
+
+![](http://tr4.cbsistatic.com/hub/i/2016/05/18/ee5d7b81-e5be-4b24-843d-d3ca99230a63/651be96ac8714698f8100afa6883e64d/linuxcloudhero.jpg)
+>图片来源: Jack Wallen
+
+Google 还没有发行 Linux 版本的 Google Drive 应用,尽管现在有很多方法可以从 Linux 中访问你的 Drive 文件。
+
+如果你喜欢界面化的工具,你可以选择 Insync。如果你喜欢用命令行,也有很多工具,比如 Grive2,以及这个用 Ocaml 语言编写的、非常容易使用的、基于 FUSE 的系统。我将会用后面这种方式演示如何在 Linux 桌面上挂载你的 Google Drive。尽管这是通过命令行完成的,但是它的用法会简单到让你吃惊。它太简单了,以至于谁都能做到。
+
+系统特点:
+
+- 对普通文件/文件夹有完全的读写权限
+- 对于 Google Docs,sheets,slides 这三个应用只读
+- 能够访问 Drive 回收站(.trash)
+- 处理重复文件功能
+- 支持多个帐号
+
+接下来完成 google-drive-ocamlfuse 在 Ubuntu 16.04 桌面的安装,然后你就能够访问云盘上的文件了。
+
+### 安装
+
+1. 打开终端。
+2. 用`sudo add-apt-repository ppa:alessandro-strada/ppa`命令添加必要的 PPA。
+3. 出现提示的时候,输入密码并按下回车。
+4. 用`sudo apt-get update`命令更新软件源。
+5. 输入`sudo apt-get install google-drive-ocamlfuse`命令安装软件。
+
+### 授权
+
+接下来就是授权 google-drive-ocamlfuse,让它有权限访问你的 Google 账户。先回到终端窗口敲下命令 google-drive-ocamlfuse,这个命令将会打开一个浏览器窗口,它会提示你登陆你的 Google 帐号;如果你已经登陆了 Google 帐号,它会询问是否允许 google-drive-ocamlfuse 访问 Google 账户。如果你还没有登陆,先登陆然后点击“允许”。接下来的窗口(在 Ubuntu 16.04 桌面上会出现,但在 Elementary OS Freya 上不会出现)将会询问你是否授权给 gdfuse 和 OAuth2 Endpoint 访问你的 Google 账户的权限,再次点击“允许”。然后出现的窗口就会告诉你等待授权令牌下载完成,这个时候就能最小化浏览器了。当你的终端出现像图 A 一样的提示,你就能知道令牌下载完了,并且你已经可以挂载 Google Drive 了。
+
+**图 A**
+
+![](http://tr4.cbsistatic.com/hub/i/r/2016/05/18/a493122b-445f-4aca-8974-5ec41192eede/resize/620x/6ae5907ad2c08dc7620b7afaaa9e389c/googledriveocamlfuse3.png)
+>图片来源: Jack Wallen
+
+**应用已经得到授权,你可以进行后面的工作。**
+
+### 挂载 Google Drive
+
+在挂载 Google Drive 之前,你得先创建一个文件夹,作为挂载点。在终端里,敲下`mkdir ~/google-drive`命令在你的家目录下创建一个新的文件夹。最后敲下命令`google-drive-ocamlfuse ~/google-drive`将你的 Google Drive 挂载到 google-drive 文件夹中。
+
+这时你可以查看本地 google-drive 文件夹中包含的 Google Drive 文件/文件夹。你能够把 Google Drive 当作本地文件系统来使用。
+
+当你想卸载 google-drive 文件夹时,输入命令 `fusermount -u ~/google-drive`。
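+
+把上面几步用到的命令合在一起就是(本文补充的小结,挂载点路径可以按自己的喜好修改):
+
+```
+$ mkdir ~/google-drive
+$ google-drive-ocamlfuse ~/google-drive
+$ ls ~/google-drive
+$ fusermount -u ~/google-drive
+```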
+
+### 没有 GUI,但它特别好用
+
+我发现这个特别的系统非常容易使用,在同步 Google Drive 时它出奇的快,并且这可以作为一种巧妙的方式备份你的 Google Drive 账户。
+
+试试 google-drive-ocamlfuse,看看你能用它做出什么有趣的事。
+
+--------------------------------------------------------------------------------
+
+via: http://www.techrepublic.com/article/how-to-mount-your-google-drive-on-linux-with-google-drive-ocamlfuse/
+
+作者:[Jack Wallen][a]
+译者:[GitFuture](https://github.com/GitFuture)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.techrepublic.com/search/?a=jack+wallen
diff --git a/translated/tech/20160604 Smem – Reports Memory Consumption Per-Process and Per-User Basis in Linux.md b/translated/tech/20160604 Smem – Reports Memory Consumption Per-Process and Per-User Basis in Linux.md
new file mode 100644
index 0000000000..befcdb5d31
--- /dev/null
+++ b/translated/tech/20160604 Smem – Reports Memory Consumption Per-Process and Per-User Basis in Linux.md
@@ -0,0 +1,435 @@
+Smem – Linux 下基于进程和用户的内存占用报告程序
+===========================================================================
+
+
+Linux 系统的内存管理工作中,对内存使用情况的监控是十分重要的,不同的 Linux 发行版可能会提供不同的工具。但是它们的工作方式多种多样,这里,我们将会介绍如何安装和使用这样的一个名为 smem 的工具软件。
+
+smem 是一款命令行下的内存使用情况报告工具。和其它传统的内存报告工具不同的是,它仅做这一件事情——报告 PSS(实际使用的物理内存,按比例分摊共享库占用的内存),对于那些使用了共享库的应用和库而言,这种内存使用量的表示方法更有意义。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Smem-Linux-Memory-Reporting-Tool.png)
+>Smem – Linux 内存报告工具
+
+已有的传统工具会将目光主要集中于读取 RSS(实际使用物理内存,包含共享库占用的内存)。用 RSS 来衡量那些只使用自身物理内存的程序是标准方法,但对于使用了共享库的应用程序,这往往会高估内存的使用情况。
+
+而 PSS 则通过计算共享内存的“公平分摊”,为使用虚拟内存方案的应用和库给出了更合理的内存度量。
+
+你可以[阅读此指南][1],了解 Linux 系统中内存占用的 RSS 和 PSS 概念。
+
+### Smem 这一工具的特点
+
+- 系统概览列表
+- 以进程,映射和用户来显示或者是过滤
+- 从 /proc 文件系统中得到数据
+- 从多个数据源配置显示条目
+- 可配置输出单位和百分比
+- 易于配置列表标题和汇总
+- 从镜像文件夹或者是压缩的 tar 文件中获得数据快照
+- 内置的图表生成机制
+- 在嵌入式系统中使用轻量级的捕获工具
+
+### 如何安装 Smem - Linux 下的内存使用情况报告工具
+
+安装之前,需要确保满足以下的条件:
+
+- 较新的内核 (版本号高于 2.6.27)
+- 较新的 Python 版本 (2.4 及以后版本)
+- 可选的 [matplotlib][2] 库用于生成图表
+
+对于当今的大多数 Linux 发行版而言,内核版本和 Python 的版本都能够满足需要,所以仅需要为生成良好的图表安装 matplotlib 库。
+
+#### RHEL, CentOS 和 Fedora
+
+首先启用 [EPEL (Extra Packages for Enterprise Linux)][3] 软件源然后按照下列步骤操作:
+
+```
+# yum install smem python-matplotlib python-tk
+```
+
+#### Debian 和 Ubuntu
+
+```
+$ sudo apt-get install smem
+```
+
+#### Linux Mint
+
+```
+$ sudo apt-get install smem python-matplotlib python-tk
+```
+
+#### Arch Linux
+
+使用此 [AUR repository][4]。
+
+### 如何使用 Smem – Linux 下的内存使用情况报告工具
+
+为了查看整个系统所有用户的内存使用情况,运行以下的命令: + +``` +$ sudo smem +``` + +监视 Linux 系统中的内存使用情况 + +``` + PID User Command Swap USS PSS RSS + 6367 tecmint cat 0 100 145 1784 + 6368 tecmint cat 0 100 147 1676 + 2864 tecmint /usr/bin/ck-launch-session 0 144 165 1780 + 7656 tecmint gnome-pty-helper 0 156 178 1832 + 5758 tecmint gnome-pty-helper 0 156 179 1916 + 1441 root /sbin/getty -8 38400 tty2 0 152 184 2052 + 1434 root /sbin/getty -8 38400 tty5 0 156 187 2060 + 1444 root /sbin/getty -8 38400 tty3 0 156 187 2060 + 1432 root /sbin/getty -8 38400 tty4 0 156 188 2124 + 1452 root /sbin/getty -8 38400 tty6 0 164 196 2064 + 2619 root /sbin/getty -8 38400 tty1 0 164 196 2136 + 3544 tecmint sh -c /usr/lib/linuxmint/mi 0 212 224 1540 + 1504 root acpid -c /etc/acpi/events - 0 220 236 1604 + 3311 tecmint syndaemon -i 0.5 -K -R 0 252 292 2556 + 3143 rtkit /usr/lib/rtkit/rtkit-daemon 0 300 326 2548 + 1588 root cron 0 292 333 2344 + 1589 avahi avahi-daemon: chroot helpe 0 124 334 1632 + 1523 root /usr/sbin/irqbalance 0 316 343 2096 + 585 root upstart-socket-bridge --dae 0 328 351 1820 + 3033 tecmint /usr/bin/dbus-launch --exit 0 328 360 2160 + 1346 root upstart-file-bridge --daemo 0 348 371 1776 + 2607 root /usr/bin/xdm 0 188 378 2368 + 1635 kernoops /usr/sbin/kerneloops 0 352 386 2684 + 344 root upstart-udev-bridge --daemo 0 400 427 2132 + 2960 tecmint /usr/bin/ssh-agent /usr/bin 0 480 485 992 + 3468 tecmint /bin/dbus-daemon --config-f 0 344 515 3284 + 1559 avahi avahi-daemon: running [tecm 0 284 517 3108 + 7289 postfix pickup -l -t unix -u -c 0 288 534 2808 + 2135 root /usr/lib/postfix/master 0 352 576 2872 + 2436 postfix qmgr -l -t unix -u 0 360 606 2884 + 1521 root /lib/systemd/systemd-logind 0 600 650 3276 + 2222 nobody /usr/sbin/dnsmasq --no-reso 0 604 669 3288 +.... +``` + +当常规用户运行 smem,将会显示由用户启用的进程的占用情况,其中进程按照 PSS 的值升序排列。 + +下面的输出为用户 “aaronkilik” 启用的进程的使用情况: + +``` +$ smem +``` + +监视 Linux 系统中的内存使用情况 + +``` + PID User Command Swap USS PSS RSS + 6367 tecmint cat 0 100 145 1784 + 6368 tecmint cat 0 100 147 1676 + 2864 tecmint /usr/bin/ck-launch-session 0 144 166 1780 + 3544 tecmint sh -c /usr/lib/linuxmint/mi 0 212 224 1540 + 3311 tecmint syndaemon -i 0.5 -K -R 0 252 292 2556 + 3033 tecmint /usr/bin/dbus-launch --exit 0 328 360 2160 + 3468 tecmint /bin/dbus-daemon --config-f 0 344 515 3284 + 3122 tecmint /usr/lib/gvfs/gvfsd 0 656 801 5552 + 3471 tecmint /usr/lib/at-spi2-core/at-sp 0 708 864 5992 + 3396 tecmint /usr/lib/gvfs/gvfs-mtp-volu 0 804 914 6204 + 3208 tecmint /usr/lib/x86_64-linux-gnu/i 0 892 1012 6188 + 3380 tecmint /usr/lib/gvfs/gvfs-afc-volu 0 820 1024 6396 + 3034 tecmint //bin/dbus-daemon --fork -- 0 920 1081 3040 + 3365 tecmint /usr/lib/gvfs/gvfs-gphoto2- 0 972 1099 6052 + 3228 tecmint /usr/lib/gvfs/gvfsd-trash - 0 980 1153 6648 + 3107 tecmint /usr/lib/dconf/dconf-servic 0 1212 1283 5376 + 6399 tecmint /opt/google/chrome/chrome - 0 144 1409 10732 + 3478 tecmint /usr/lib/x86_64-linux-gnu/g 0 1724 1820 6320 + 7365 tecmint /usr/lib/gvfs/gvfsd-http -- 0 1352 1884 8704 + 6937 tecmint /opt/libreoffice5.0/program 0 1140 2328 5040 + 3194 tecmint /usr/lib/x86_64-linux-gnu/p 0 1956 2405 14228 + 6373 tecmint /opt/google/chrome/nacl_hel 0 2324 2541 8908 + 3313 tecmint /usr/lib/gvfs/gvfs-udisks2- 0 2460 2754 8736 + 3464 tecmint /usr/lib/at-spi2-core/at-sp 0 2684 2823 7920 + 5771 tecmint ssh -p 4521 tecmnt765@212.7 0 2544 2864 6540 + 5759 tecmint /bin/bash 0 2416 2923 5640 + 3541 tecmint /usr/bin/python /usr/bin/mi 0 2584 3008 7248 + 7657 tecmint bash 0 2516 3055 6028 + 3127 tecmint /usr/lib/gvfs/gvfsd-fuse /r 0 3024 
3126 8032 + 3205 tecmint mate-screensaver 0 2520 3331 18072 + 3171 tecmint /usr/lib/mate-panel/notific 0 2860 3495 17140 + 3030 tecmint x-session-manager 0 4400 4879 17500 + 3197 tecmint mate-volume-control-applet 0 3860 5226 23736 +... +``` + +使用 smem 是还有一些参数可以选用,例如当参看整个系统的内存占用情况,运行以下的命令: + +``` +$ sudo smem -w +``` +监视 Linux 系统中的内存使用情况 + +``` +Area Used Cache Noncache +firmware/hardware 0 0 0 +kernel image 0 0 0 +kernel dynamic memory 1425320 1291412 133908 +userspace memory 2215368 451608 1763760 +free memory 4424936 4424936 0 +``` + +如果想要查看每一个用户的内存使用情况,运行以下的命令: + +``` +$ sudo smem -u +``` + +Linux 下以用户为单位监控内存占用情况 + +``` +User Count Swap USS PSS RSS +rtkit 1 0 300 326 2548 +kernoops 1 0 352 385 2684 +avahi 2 0 408 851 4740 +postfix 2 0 648 1140 5692 +messagebus 1 0 1012 1173 3320 +syslog 1 0 1396 1419 3232 +www-data 2 0 5100 6572 13580 +mpd 1 0 7416 8302 12896 +nobody 2 0 4024 11305 24728 +root 39 0 323876 353418 496520 +tecmint 64 0 1652888 1815699 2763112 +``` + +你也可以按照映射显示内存使用情况: + +``` +$ sudo smem -m +``` + +Linux 下以映射为单位监控内存占用情况 + +``` +Map PIDs AVGPSS PSS +/dev/fb0 1 0 0 +/home/tecmint/.cache/fontconfig/7ef2298f 18 0 0 +/home/tecmint/.cache/fontconfig/c57959a1 18 0 0 +/home/tecmint/.local/share/mime/mime.cac 15 0 0 +/opt/google/chrome/chrome_material_100_p 9 0 0 +/opt/google/chrome/chrome_material_200_p 9 0 0 +/usr/lib/x86_64-linux-gnu/gconv/gconv-mo 41 0 0 +/usr/share/icons/Mint-X-Teal/icon-theme. 15 0 0 +/var/cache/fontconfig/0c9eb80ebd1c36541e 20 0 0 +/var/cache/fontconfig/0d8c3b2ac0904cb8a5 20 0 0 +/var/cache/fontconfig/1ac9eb803944fde146 20 0 0 +/var/cache/fontconfig/3830d5c3ddfd5cd38a 20 0 0 +/var/cache/fontconfig/385c0604a188198f04 20 0 0 +/var/cache/fontconfig/4794a0821666d79190 20 0 0 +/var/cache/fontconfig/56cf4f4769d0f4abc8 20 0 0 +/var/cache/fontconfig/767a8244fc0220cfb5 20 0 0 +/var/cache/fontconfig/8801497958630a81b7 20 0 0 +/var/cache/fontconfig/99e8ed0e538f840c56 20 0 0 +/var/cache/fontconfig/b9d506c9ac06c20b43 20 0 0 +/var/cache/fontconfig/c05880de57d1f5e948 20 0 0 +/var/cache/fontconfig/dc05db6664285cc2f1 20 0 0 +/var/cache/fontconfig/e13b20fdb08344e0e6 20 0 0 +/var/cache/fontconfig/e7071f4a29fa870f43 20 0 0 +.... 
+``` + +还有其它的选项用于 smem 的输出,下面将会举两个例子。 + +要按照用户名筛选输出的信息,调用 -u 或者是 --userfilter="regex" 选项,就像下面的命令这样: + +``` +$ sudo smem -u +``` + +按照用户报告内存使用情况 + +``` +User Count Swap USS PSS RSS +rtkit 1 0 300 326 2548 +kernoops 1 0 352 385 2684 +avahi 2 0 408 851 4740 +postfix 2 0 648 1140 5692 +messagebus 1 0 1012 1173 3320 +syslog 1 0 1400 1423 3236 +www-data 2 0 5100 6572 13580 +mpd 1 0 7416 8302 12896 +nobody 2 0 4024 11305 24728 +root 39 0 323804 353374 496552 +tecmint 64 0 1708900 1871766 2819212 +``` + +要按照进程名称筛选输出信息,调用 -P 或者是 --processfilter="regex" 选项,就像下面的命令这样: + +``` +$ sudo smem --processfilter="firefox" +``` + +按照进程名称报告内存使用情况 + +``` +PID User Command Swap USS PSS RSS + 9212 root sudo smem --processfilter=f 0 1172 1434 4856 + 9213 root /usr/bin/python /usr/bin/sm 0 7368 7793 11984 + 4424 tecmint /usr/lib/firefox/firefox 0 931732 937590 961504 +``` + +输出的格式有时候也很重要,smem 提供了一些参数帮助您格式化内存使用报告,我们将举出几个例子。 + +设置哪些列在报告中,使用 -c 或者是 --columns选项,就像下面的命令这样: + +``` +$ sudo smem -c "name user pss rss" +``` + +按列报告内存使用情况 + +``` +Name User PSS RSS +cat tecmint 145 1784 +cat tecmint 147 1676 +ck-launch-sessi tecmint 165 1780 +gnome-pty-helpe tecmint 178 1832 +gnome-pty-helpe tecmint 179 1916 +getty root 184 2052 +getty root 187 2060 +getty root 187 2060 +getty root 188 2124 +getty root 196 2064 +getty root 196 2136 +sh tecmint 224 1540 +acpid root 236 1604 +syndaemon tecmint 296 2560 +rtkit-daemon rtkit 326 2548 +cron root 333 2344 +avahi-daemon avahi 334 1632 +irqbalance root 343 2096 +upstart-socket- root 351 1820 +dbus-launch tecmint 360 2160 +upstart-file-br root 371 1776 +xdm root 378 2368 +kerneloops kernoops 386 2684 +upstart-udev-br root 427 2132 +ssh-agent tecmint 485 992 +... +``` + +也可以调用 -p 选项以百分比的形式报告内存使用情况,就像下面的命令这样: + +``` +$ sudo smem -p +``` + +按百分比报告内存使用情况 + +``` + PID User Command Swap USS PSS RSS + 6367 tecmint cat 0.00% 0.00% 0.00% 0.02% + 6368 tecmint cat 0.00% 0.00% 0.00% 0.02% + 9307 tecmint sh -c { sudo /usr/lib/linux 0.00% 0.00% 0.00% 0.02% + 2864 tecmint /usr/bin/ck-launch-session 0.00% 0.00% 0.00% 0.02% + 3544 tecmint sh -c /usr/lib/linuxmint/mi 0.00% 0.00% 0.00% 0.02% + 5758 tecmint gnome-pty-helper 0.00% 0.00% 0.00% 0.02% + 7656 tecmint gnome-pty-helper 0.00% 0.00% 0.00% 0.02% + 1441 root /sbin/getty -8 38400 tty2 0.00% 0.00% 0.00% 0.03% + 1434 root /sbin/getty -8 38400 tty5 0.00% 0.00% 0.00% 0.03% + 1444 root /sbin/getty -8 38400 tty3 0.00% 0.00% 0.00% 0.03% + 1432 root /sbin/getty -8 38400 tty4 0.00% 0.00% 0.00% 0.03% + 1452 root /sbin/getty -8 38400 tty6 0.00% 0.00% 0.00% 0.03% + 2619 root /sbin/getty -8 38400 tty1 0.00% 0.00% 0.00% 0.03% + 1504 root acpid -c /etc/acpi/events - 0.00% 0.00% 0.00% 0.02% + 3311 tecmint syndaemon -i 0.5 -K -R 0.00% 0.00% 0.00% 0.03% + 3143 rtkit /usr/lib/rtkit/rtkit-daemon 0.00% 0.00% 0.00% 0.03% + 1588 root cron 0.00% 0.00% 0.00% 0.03% + 1589 avahi avahi-daemon: chroot helpe 0.00% 0.00% 0.00% 0.02% + 1523 root /usr/sbin/irqbalance 0.00% 0.00% 0.00% 0.03% + 585 root upstart-socket-bridge --dae 0.00% 0.00% 0.00% 0.02% + 3033 tecmint /usr/bin/dbus-launch --exit 0.00% 0.00% 0.00% 0.03% +.... 
+``` + +下面的额命令将会在输出的最后输出一行汇总信息: + +``` +$ sudo smem -t +``` + +报告内存占用合计 + +``` +PID User Command Swap USS PSS RSS + 6367 tecmint cat 0 100 139 1784 + 6368 tecmint cat 0 100 141 1676 + 9307 tecmint sh -c { sudo /usr/lib/linux 0 96 158 1508 + 2864 tecmint /usr/bin/ck-launch-session 0 144 163 1780 + 3544 tecmint sh -c /usr/lib/linuxmint/mi 0 108 170 1540 + 5758 tecmint gnome-pty-helper 0 156 176 1916 + 7656 tecmint gnome-pty-helper 0 156 176 1832 + 1441 root /sbin/getty -8 38400 tty2 0 152 181 2052 + 1434 root /sbin/getty -8 38400 tty5 0 156 184 2060 + 1444 root /sbin/getty -8 38400 tty3 0 156 184 2060 + 1432 root /sbin/getty -8 38400 tty4 0 156 185 2124 + 1452 root /sbin/getty -8 38400 tty6 0 164 193 2064 + 2619 root /sbin/getty -8 38400 tty1 0 164 193 2136 + 1504 root acpid -c /etc/acpi/events - 0 220 232 1604 + 3311 tecmint syndaemon -i 0.5 -K -R 0 260 298 2564 + 3143 rtkit /usr/lib/rtkit/rtkit-daemon 0 300 324 2548 + 1588 root cron 0 292 326 2344 + 1589 avahi avahi-daemon: chroot helpe 0 124 332 1632 + 1523 root /usr/sbin/irqbalance 0 316 340 2096 + 585 root upstart-socket-bridge --dae 0 328 349 1820 + 3033 tecmint /usr/bin/dbus-launch --exit 0 328 359 2160 + 1346 root upstart-file-bridge --daemo 0 348 370 1776 + 2607 root /usr/bin/xdm 0 188 375 2368 + 1635 kernoops /usr/sbin/kerneloops 0 352 384 2684 + 344 root upstart-udev-bridge --daemo 0 400 426 2132 +..... +------------------------------------------------------------------------------- + 134 11 0 2171428 2376266 3587972 +``` + +另外,smem 也提供了选项以图形的形式报告内存的使用情况,我们将会在下一小节深入介绍。 + +比如,你可以生成一张进程的 PSS 和 RSS 值的条状图。在下面的例子中,我们会生成属于 root 用户的进程的内存占用图。 + +纵坐标为每一个进程的 PSS 和 RSS 值,横坐标为 root 用户的所有进程: + +``` +$ sudo smem --userfilter="root" --bar pid -c"pss rss" +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Linux-Memory-Usage-in-PSS-and-RSS-Values.png) +>Linux Memory Usage in PSS and RSS Values + +也可以生成进程及其 PSS 和 RSS 占用量的饼状图。以下的命令将会输出一张 root 用户的所有进程的饼状。 + +`--pie` name 意思为以各个进程名字为标签,`-s` 选项帮助以 PSS 的值排序。 + +``` +$ sudo smem --userfilter="root" --pie name -s pss +``` + +![](http://www.tecmint.com/wp-content/uploads/2016/06/Linux-Memory-Consumption-by-Processes.png) +>Linux Memory Consumption by Processes + +它们还提供了一些其它与 PSS 和 RSS 相关的字段用于图表的标签: + +假如需要获得帮助,非常简单,仅需要输入 `smem -h` 或者是浏览帮助页面。 + +关于 smem 的介绍到底为止,不过想要更好的了解它,可以通过 man 手册获得更多的选项,然后一一实践。有什么想法或者疑惑,都可以跟帖评价。 + +参考链接: + + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/smem-linux-memory-usage-per-process-per-user/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29 + +作者:[Aaron Kili][a] +译者:[dongfengweixiao](https://github.com/dongfengweixiao) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.tecmint.com/author/aaronkili/ +[1]: https://emilics.com/notebook/enblog/p871.html +[2]: http://matplotlib.org/index.html +[3]: http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ +[4]: https://www.archlinux.org/packages/community/i686/smem/ diff --git a/translated/tech/20160620 Detecting cats in images with OpenCV.md b/translated/tech/20160620 Detecting cats in images with OpenCV.md new file mode 100644 index 0000000000..9b1e19fef7 --- /dev/null +++ b/translated/tech/20160620 Detecting cats in images with OpenCV.md @@ -0,0 +1,228 @@ +使用 OpenCV 识别图片中的猫 +======================================= + 
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_04.jpg)
+
+你知道吗?OpenCV 可以开箱即用地识别图片中的猫脸,无需任何额外的附加组件。
+
+我之前也不知道。
+
+但是在看完 [Kendrick Tan 披露这个功能的文章][1]之后,我需要亲自体验一下……去看看 OpenCV 是如何在我没有察觉到的情况下,将这一功能添加进它的软件库的。
+
+在这篇博客中,我将会展示如何使用 OpenCV 的猫检测器在图片中识别猫脸。同样的,你也可以在视频流中使用该技术。
+
+> 想找这篇博客的源码?[请点这][2]。
+
+
+### 使用 OpenCV 在图片中检测猫
+
+如果你看一眼 [OpenCV 的代码库][3],尤其是 [haarcascades 目录][4](OpenCV 在这里存放预先训练好的、用于检测各种目标的 Haar 级联分类器),你将会注意到这两个文件:
+
+- haarcascade_frontalcatface.xml
+- haarcascade_frontalcatface_extended.xml
+
+这两个 Haar 级联文件都可以用来在图片中检测猫脸。实际上,我就是使用它们生成了这篇博客顶端的图片。
+
+在做了一些调查工作之后,我发现训练这些级联分类器并将其提供给 OpenCV 仓库的,是鼎鼎大名的 [Joseph Howse][5],他在计算机视觉领域有着很高的声望。
+
+在博客的剩余部分,我将会展示给你如何使用 Howse 的 Haar 级联模型来检测猫。
+
+让我们开工。新建一个叫 cat_detector.py 的文件,并且输入如下的代码:
+
+### 使用 OpenCV 和 Python 检测猫
+
+```
+# import the necessary packages
+import argparse
+import cv2
+
+# construct the argument parse and parse the arguments
+ap = argparse.ArgumentParser()
+ap.add_argument("-i", "--image", required=True,
+	help="path to the input image")
+ap.add_argument("-c", "--cascade",
+	default="haarcascade_frontalcatface.xml",
+	help="path to cat detector haar cascade")
+args = vars(ap.parse_args())
+```
+
+第 2、3 行导入了必要的 Python 包,第 6 到 12 行解析命令行参数。我们在这只需要一个必选参数 `--image`。
+
+我们还可以通过 `--cascade` 参数指定 Haar 级联文件的路径,默认使用 `haarcascade_frontalcatface.xml`,同时需要保证这个文件和你的 `cat_detector.py` 在同一目录下。
+
+注意:我已经打包了猫检测的代码,还有在这个教程里的样本图片。你可以在博客的 'Downloads' 部分下载到。如果你是刚刚接触 Python+OpenCV(或者 Haar 级联模型),我会建议你下载 zip 压缩包,这个会方便你进行操作。
+
+接下来,就是检测猫的时刻了:
+
+```
+# load the input image and convert it to grayscale
+image = cv2.imread(args["image"])
+gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
+
+# load the cat detector Haar cascade, then detect cat faces
+# in the input image
+detector = cv2.CascadeClassifier(args["cascade"])
+rects = detector.detectMultiScale(gray, scaleFactor=1.3,
+	minNeighbors=10, minSize=(75, 75))
+```
+
+第 15、16 行,我们从硬盘上读取了图片,并且进行灰度化(这是一个常用的图片预处理步骤,便于 Haar 级联分类器工作,但并非必须)。
+
+第 20 行,我们加载了 Haar 级联文件,即猫检测器,并且初始化了 cv2.CascadeClassifier 对象。
+
+第 21、22 行通过调用 detectMultiScale 方法,使用 OpenCV 完成猫脸检测。我们使用四个参数来调用,包括:
+
+1. 灰度化的图片,即我们的输入图片。
+2. scaleFactor 参数:检测猫脸时[图片金字塔][6]使用的缩放因子。这个值越大,检测速度越快,但会影响准确性;相反,值越小,检测越慢,但准确性会提升。不过,过细的粒度也会增加误检的数量。你可以看博客的 'Haar 级联模型注意事项' 部分来获得更多的信息。
+3. minNeighbors 参数控制了一个候选区域至少需要有多少个相邻的检测结果,才会被认定为猫脸。这个参数能很好地排除错误的检测结果。
+4. 最后,minSize 参数很好地自我说明了用途,即被检测目标的最小尺寸,这个例子中就是 75\*75。
+
+detectMultiScale 函数返回 rects,这是一个由四元组组成的列表。每个元素包含了猫脸的 (x, y) 坐标值,还有宽度和高度。
+
+最后,让我们在图片上画出这些矩形来标识猫脸:
+
+```
+# loop over the cat faces and draw a rectangle surrounding each
+for (i, (x, y, w, h)) in enumerate(rects):
+	cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
+	cv2.putText(image, "Cat #{}".format(i + 1), (x, y - 10),
+		cv2.FONT_HERSHEY_SIMPLEX, 0.55, (0, 0, 255), 2)
+
+# show the detected cat faces
+cv2.imshow("Cat Faces", image)
+cv2.waitKey(0)
+```
+
+第 25 行,我们依次遍历检测到的矩形区域(即 rects)。
+
+第 26 行,我们在每张猫脸的周围画上一个矩形;第 27、28 行则在每张猫脸上方标出编号,即图片中的第几只猫。
+
+最后,第 31、32 行在屏幕上展示输出的图片。
+
+### 猫检测结果
+
+为了测试我们的 OpenCV 猫检测器,可以在文章的最后,下载教程的源码。
+
+然后,在你解压缩之后,你将会得到如下的三个文件/目录:
+
+1. cat_detector.py:我们的主程序
+2. haarcascade_frontalcatface.xml:Haar 级联猫检测模型
+3. images:我们将会使用的检测图片目录。
+
+到这一步,在 Shell 里执行以下的命令,用 OpenCV 检测猫:
+
+```
+$ python cat_detector.py --image images/cat_01.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_01.jpg)
+>1. 即使猫的身体其余部分被遮挡,也检测到了猫脸。
+
+注意,我们已经可以检测猫脸了,即使它的其余部分是被隐藏的。
+
+试下另外的一张图片:
+
+```
+python cat_detector.py --image images/cat_02.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_02.jpg)
+>2. 第二个例子:检测一张略微不同的猫脸。
+
+这次的猫脸和第一次的明显不同,因为它正处在“喵”的中间。这种情况下,我们依旧能检测到正确的猫脸。
+
+这张图片的结果也是正确的:
+
+```
+$ python cat_detector.py --image images/cat_03.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_03.jpg)
+>3. 使用 OpenCV 和 Python 检测猫脸
+
+我们最后的一个样例就是在一张图中检测多张猫脸:
+
+```
+$ python cat_detector.py --image images/cat_04.jpg
+```
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/cat_face_detector_result_04.jpg)
+>4. 在同一张图片中使用 OpenCV 检测多只猫
+
+注意,Haar 级联模型的返回结果并不是有序的。这种情况下,中间的那只猫会被标记成第三只。你可以通过判断它们的 (x, y) 坐标来自己排序。
+
+#### 精度的 Tips
+
+在 xml 文件的注释中,Joseph Howse 提到了很重要的一点:猫脸检测器有可能会将人脸误识别成猫脸。
+
+这种情况下,他推荐同时使用两种检测器(人脸和猫脸),然后将那些与人脸检测结果重合的猫脸结果剔除掉。
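+
+一个简单的做法(这是本文补充的示意代码,假设 `cat_rects` 和 `face_rects` 分别是猫脸和人脸检测器返回的矩形列表)是剔除与人脸矩形有重叠的猫脸矩形:
+
+```
+def overlaps(a, b):
+    # 判断两个 (x, y, w, h) 矩形是否有重叠区域
+    ax, ay, aw, ah = a
+    bx, by, bw, bh = b
+    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
+
+# 只保留不与任何人脸矩形重叠的猫脸矩形
+cat_rects = [r for r in cat_rects if not any(overlaps(r, f) for f in face_rects)]
+```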
+
+#### Haar 级联模型注意事项
+
+这个方法首先出现在 Paul Viola 和 Michael Jones 2001 年发布的论文 [Rapid Object Detection using a Boosted Cascade of Simple Features][7] 中。现在它已经成为计算机视觉领域被引用最多的成果之一。
+
+这个算法能够识别图片中的对象,无论对象的位置和大小。并且,它也能在现有的硬件条件下实现实时计算。
+
+在他们的论文中,Viola 和 Jones 主要关注训练人脸检测器;但是,这个框架也能用来检测各类事物,如汽车、香蕉、路标等等。
+
+#### 有问题?
+
+Haar 级联模型最大的问题,就是如何为 detectMultiScale 方法确定正确的参数,特别是 scaleFactor 和 minNeighbors 参数。你很容易陷入一张一张图片调参数的坑,这也是该模型很难被实用化的原因。
+
+scaleFactor 变量控制了用来检测图片中各种对象的[图像金字塔][8]。如果参数过大,金字塔的图层会变少,这可能导致你错过一些尺度上的目标。
+
+换句话说,如果参数过小,金字塔的图层会变多。这虽然可能帮助你检测到更多的对象,但是会造成计算速度的降低,还会提高误检率。
+
+为了避免这个,我们通常使用 [Histogram of Oriented Gradients + Linear SVM detection][9]。
+
+HOG + 线性 SVM 框架的参数更加容易进行调优,而且也有更低的误检率,但是最大的缺点就是无法实时运算。
+
+### 对对象识别感兴趣?并且希望了解更多?
+
+![](http://www.pyimagesearch.com/wp-content/uploads/2016/05/custom_object_detector_example.jpg)
+>5. 在 PyImageSearch Gurus 课程中学习如何构建自定义的对象识别器。
+
+如果你想学习如何训练自己的自定义对象识别器,请务必要去学习 PyImageSearch Gurus 的课程。
+
+在这个课程中,我提供了 15 节课,还有超过 168 页的教程,来教你如何从零开始构建自定义的对象识别器。你会掌握如何应用 HOG + 线性 SVM 框架来构建自己的对象识别器。
+
+### 总结
+
+在这篇博客里,我们学习了如何使用 OpenCV 默认提供的 Haar 级联模型来识别图片中的猫脸。这些 Haar 级联模型是 [Joseph Howse][5] 贡献给 OpenCV 项目的。我是在[这篇文章][10]中开始注意到这个的。
+
+尽管 Haar 级联模型相当有用,但是我们也经常用 HOG + 线性 SVM 替代。因为后者相对而言更容易使用,并且可以有效地降低误检的概率。
+
+我也会在 [PyImageSearch Gurus 的课程中][11]详细讲述如何使用 HOG 和线性 SVM 对象识别器,来识别包括汽车、路标在内的各种事物。
+
+不管怎样,我希望你喜欢这篇博客。
+
+在你离开之前,记得使用下面的表单注册 PyImageSearch Newsletter,这样你就能收到最新的消息。
+
+--------------------------------------------------------------------------------
+
+via: http://www.pyimagesearch.com/2016/06/20/detecting-cats-in-images-with-opencv/
+
+作者:[Adrian Rosebrock][a]
+译者:[MikeCoder](https://github.com/MikeCoder)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.pyimagesearch.com/author/adrian/
+[1]: http://kendricktan.github.io/find-cats-in-photos-using-computer-vision.html
+[2]: http://www.pyimagesearch.com/2016/06/20/detecting-cats-in-images-with-opencv/#
+[3]: https://github.com/Itseez/opencv
+[4]: https://github.com/Itseez/opencv/tree/master/data/haarcascades
+[5]: http://nummist.com/
+[6]: http://www.pyimagesearch.com/2015/03/16/image-pyramids-with-python-and-opencv/
+[7]: https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf
+[8]: http://www.pyimagesearch.com/2015/03/16/image-pyramids-with-python-and-opencv/
+[9]: http://www.pyimagesearch.com/2014/11/10/histogram-oriented-gradients-object-detection/
+[10]: http://kendricktan.github.io/find-cats-in-photos-using-computer-vision.html
+[11]: https://www.pyimagesearch.com/pyimagesearch-gurus/
+
+
+
diff --git a/translated/tech/20160705 Create Your Own Shell in Python - Part I.md b/translated/tech/20160705 Create Your Own Shell in Python - Part I.md
new file mode 100644
index 0000000000..b54d0bff29
--- /dev/null
+++ b/translated/tech/20160705 Create Your Own Shell in Python - Part I.md
@@ -0,0 +1,228 @@
+使用 Python 创建你自己的 Shell:Part I
+==========================================
+
+我很想知道一个 shell(像 bash、csh 等)内部是如何工作的。为了满足自己的好奇心,我使用 Python 实现了一个名为 **yosh**(Your Own Shell)的 Shell。本文章所介绍的概念也可以应用于其他编程语言。
+
+(提示:你可以在[这里](https://github.com/supasate/yosh)查找本博文使用的源代码,代码以 MIT 许可证发布。在 Mac OS X 10.11.5 上,我使用 Python 2.7.10 和 3.4.3 进行了测试。它应该可以运行在其他类 Unix 环境,比如 Linux 和 Windows 上的 Cygwin。)
+
+让我们开始吧。
+
+### 步骤 0:项目结构
+
+对于此项目,我使用了以下的项目结构。
+
+```
+yosh_project
+|-- yosh
+   |-- __init__.py
+   |-- shell.py
+```
+
+`yosh_project` 为项目根目录(你也可以把它简单命名为 `yosh`)。
+
+`yosh` 为包目录,且 `__init__.py` 可以使它成为与包目录名字相同的包(如果你不写 Python,可以忽略它。)
+
+`shell.py` 是我们主要的脚本文件。
+
+### 步骤 1:Shell 循环
+
+当启动一个 shell,它会显示一个命令提示符并等待你的命令输入。在接收了输入的命令并执行它之后(稍后文章会进行详细解释),你的 shell 会重新回到循环,等待下一条指令。
+
+在 `shell.py`,我们会以一个简单的 main 函数开始,该函数调用了 shell_loop() 函数,如下:
+
+```
+def shell_loop():
+    # Start the loop here
+
+
+def main():
+    shell_loop()
+
+
+if __name__ == "__main__":
+    main()
+```
+
+接着,在 `shell_loop()`,为了指示循环是否继续或停止,我们使用了一个状态标志。在循环的开始,我们的 shell 将显示一个命令提示符,并等待读取命令输入。
+
+```
+import sys
+
+SHELL_STATUS_RUN = 1
+SHELL_STATUS_STOP = 0
+
+
+def shell_loop():
+    status = SHELL_STATUS_RUN
+
+    while status == SHELL_STATUS_RUN:
+        # Display a command prompt
+        sys.stdout.write('> ')
+        sys.stdout.flush()
+
+        # Read command input
+        cmd = sys.stdin.readline()
+```
+
+之后,我们切分命令输入并进行执行(我们即将实现`命令切分`和`执行`函数)。
+
+因此,我们的 shell_loop() 会是如下这样:
+
+```
+import sys
+
+SHELL_STATUS_RUN = 1
+SHELL_STATUS_STOP = 0
+
+
+def shell_loop():
+    status = SHELL_STATUS_RUN
+
+    while status == SHELL_STATUS_RUN:
+        # Display a command prompt
+        sys.stdout.write('> ')
+        sys.stdout.flush()
+
+        # Read command input
+        cmd = sys.stdin.readline()
+
+        # Tokenize the command input
+        cmd_tokens = tokenize(cmd)
+
+        # Execute the command and retrieve new status
+        status = execute(cmd_tokens)
+```
+
+这就是我们整个 shell 循环。如果我们使用 `python shell.py` 启动我们的 shell,它会显示命令提示符。然而如果我们输入命令并按回车,它会抛出错误,因为我们还没定义`命令切分`函数。
+
+为了退出 shell,可以尝试输入 ctrl-c。稍后我将解释如何以优雅的形式退出 shell。
+
+### 步骤 2:命令切分
+
+当用户在我们的 shell 中输入命令并按下回车键,该命令将会是一个包含命令名称及其参数的很长的字符串。因此,我们必须切分该字符串(分割一个字符串为多个标记)。
+
+乍一看似乎很简单。我们或许可以使用 `cmd.split()`,以空格分割输入。它对类似 `ls -a my_folder` 的命令起作用,因为它能够将命令分割为一个列表 `['ls', '-a', 'my_folder']`,这样我们便能轻易处理它们了。
+
+然而,也有一些类似 `echo "Hello World"` 或 `echo 'Hello World'` 以单引号或双引号引用参数的情况。如果我们使用 cmd.split,我们将会得到一个存有 3 个标记的列表 `['echo', '"Hello', 'World"']` 而不是 2 个标记的列表 `['echo', 'Hello World']`。
+
+幸运的是,Python 提供了一个名为 `shlex` 的库,它能像变魔术一样帮我们正确地分割命令。(提示:我们也可以使用正则表达式,但它不是本文的重点。)
+
+
+```
+import sys
+import shlex
+
+...
+
+def tokenize(string):
+    return shlex.split(string)
+
+...
+```
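+
+为了直观感受 shlex 的效果,可以在 Python 解释器里试一下(这是本文补充的演示):
+
+```
+>>> import shlex
+>>> shlex.split('ls -a my_folder')
+['ls', '-a', 'my_folder']
+>>> shlex.split('echo "Hello World"')
+['echo', 'Hello World']
+```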
+
+然后我们将这些标记发送到执行进程。
+
+### 步骤 3:执行
+
+这是 shell 中核心和有趣的一部分。当 shell 执行 `mkdir test_dir` 时,到底发生了什么?(提示: `mkdir` 是一个带有 `test_dir` 参数的可执行程序,用于创建一个名为 `test_dir` 的目录。)
+
+`execvp` 是涉及这一步的首个函数。在我们解释 `execvp` 所做的事之前,让我们看看它的实际效果。
+
+```
+import os
+...
+
+def execute(cmd_tokens):
+    # Execute command
+    os.execvp(cmd_tokens[0], cmd_tokens)
+
+    # Return status indicating to wait for next command in shell_loop
+    return SHELL_STATUS_RUN
+
+...
+```
+
+再次尝试运行我们的 shell,并输入 `mkdir test_dir` 命令,接着按下回车键。
+
+在我们敲下回车键之后,问题是我们的 shell 会直接退出而不是等待下一个命令。然而,目录确实被正确地创建了。
+
+因此,`execvp` 实际上做了什么?
+
+`execvp` 是系统调用 `exec` 的一个变体。第一个参数是程序名字。`v` 表示第二个参数是一个程序参数列表(可变参数)。`p` 表示环境变量 `PATH` 会被用于搜索给定的程序名字。在我们上一次的尝试中,它将会基于我们的 `PATH` 环境变量查找 `mkdir` 程序。
+
+(还有其他 `exec` 变体,比如 execv、execvpe、execl、execlp、execlpe;你可以 google 它们获取更多的信息。)
+
+`exec` 会用即将运行的新进程替换调用进程的当前内存。在我们的例子中,我们的 shell 进程内存会被替换为 `mkdir` 程序。接着,`mkdir` 成为主进程并创建 `test_dir` 目录。最后该进程退出。
+
+这里的重点在于**我们的 shell 进程已经被 `mkdir` 进程所替换**。这就是我们的 shell 消失且不会等待下一条命令的原因。
+
+因此,我们需要其他的系统调用来解决问题:`fork`。
+
+`fork` 会开辟新的内存并拷贝当前进程到一个新的进程。我们称这个新的进程为**子进程**,调用者进程为**父进程**。然后,子进程内存会被替换为被执行的程序。因此,我们的 shell,也就是父进程,可以免受内存替换的危险。
+
+让我们看看修改后的代码。
+
+```
+...
+
+def execute(cmd_tokens):
+    # Fork a child shell process
+    # If the current process is a child process, its `pid` is set to `0`
+    # else the current process is a parent process and the value of `pid`
+    # is the process id of its child process.
+    pid = os.fork()
+
+    if pid == 0:
+        # Child process
+        # Replace the child shell process with the program called with exec
+        os.execvp(cmd_tokens[0], cmd_tokens)
+    elif pid > 0:
+        # Parent process
+        while True:
+            # Wait response status from its child process (identified with pid)
+            wpid, status = os.waitpid(pid, 0)
+
+            # Finish waiting if its child process exits normally
+            # or is terminated by a signal
+            if os.WIFEXITED(status) or os.WIFSIGNALED(status):
+                break
+
+    # Return status indicating to wait for next command in shell_loop
+    return SHELL_STATUS_RUN
+
+...
+```
+
+当我们的父进程调用 `os.fork()`时,你可以想象所有的源代码被拷贝到了新的子进程。此时此刻,父进程和子进程看到的是相同的代码,且并行运行着。
+
+如果运行的代码属于子进程,`pid` 将为 `0`。否则,如果运行的代码属于父进程,`pid` 将会是子进程的进程 id。
+
+当 `os.execvp` 在子进程中被调用时,你可以想象子进程的所有源代码被替换为正被调用程序的代码。然而父进程的代码不会被改变。
+
+当父进程完成等待子进程退出或终止时,它会返回一个状态,指示继续 shell 循环。
+
+### 运行
+
+现在,你可以尝试运行我们的 shell 并输入 `mkdir test_dir2`。它应该可以正确执行。我们的主 shell 进程仍然存在并等待下一条命令。尝试执行 `ls`,你可以看到已创建的目录。
+
+但是,这里仍有许多问题。
+
+第一,尝试执行 `cd test_dir2`,接着执行 `ls`。它应该会进入到一个空的 `test_dir2` 目录。然而,你将会看到目录并没有变为 `test_dir2`。
+
+第二,我们仍然没有办法优雅地退出我们的 shell。
+
+我们将会在 [Part 2][1] 解决诸如此类的问题。
+
+
+--------------------------------------------------------------------------------
+
+via: https://hackercollider.com/articles/2016/07/05/create-your-own-shell-in-python-part-1/
+
+作者:[Supasate Choochaisri][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://disqus.com/by/supasate_choochaisri/
+[1]: https://hackercollider.com/articles/2016/07/06/create-your-own-shell-in-python-part-2/
diff --git a/translated/tech/20160706 Create Your Own Shell in Python - Part II.md b/translated/tech/20160706 Create Your Own Shell in Python - Part II.md
new file mode 100644
index 0000000000..0f0cd6a878
--- /dev/null
+++ b/translated/tech/20160706 Create Your Own Shell in Python - Part II.md
@@ -0,0 +1,216 @@
+使用 Python 创建你自己的 Shell:Part II
+===========================================
+
+在 [Part 1][1] 中,我们已经创建了一个主要的 shell 循环,能切分命令输入,并通过 `fork` 和 `exec` 执行命令。在这部分,我们将会解决剩下的问题。首先,`cd test_dir2` 命令无法修改我们的当前目录。其次,我们仍无法优雅地从 shell 中退出。
+
+### 步骤 4:内置命令
+
+“cd test_dir2 无法修改我们的当前目录” 这句话是对的,但在某种意义上也是错的。在执行完该命令之后,我们仍然处在同一目录,从这个意义上讲,它是对的。然而,目录实际上已经被修改,只不过它是在子进程中被修改。
+
+还记得我们 fork 了一个子进程,然后执行命令,执行命令的过程并没有发生在父进程上。结果是我们只是改变了子进程的当前目录,而不是父进程的目录。
+
+然后子进程退出,而父进程在原封不动的目录下继续运行。
+
+因此,这类与 shell 自身相关的命令必须是内置命令。它必须在 shell 进程中执行而不进行分叉(forking)。
+
+#### cd
+
+让我们从 `cd` 命令开始。
+
+我们首先创建一个 `builtins` 目录。每一个内置命令都会被放进这个目录中。
+
+```shell
+yosh_project
+|-- yosh
+    |-- builtins
+    |   |-- __init__.py
+    |   |-- cd.py
+    |-- __init__.py
+    |-- shell.py
+```
+
+在 `cd.py` 中,我们通过使用系统调用 `os.chdir` 实现自己的 `cd` 命令。
+
+```python
+import os
+from yosh.constants import *
+
+
+def cd(args):
+    os.chdir(args[0])
+
+    return SHELL_STATUS_RUN
+```
+
+注意,我们会从内置函数返回 shell 的运行状态。所以,为了能够在项目中继续使用常量,我们将它们移至 `yosh/constants.py`。
+
+```shell
+yosh_project
+|-- yosh
+    |-- builtins
+    |   |-- __init__.py
+    |   |-- cd.py
+    |-- __init__.py
+    |-- constants.py
+    |-- shell.py
+```
+
+在 `constants.py` 中,我们将状态常量都放在这里。
+
+```python
+SHELL_STATUS_STOP = 0
+SHELL_STATUS_RUN = 1
+```
+
+现在,我们的内置 `cd` 已经准备好了。让我们修改 `shell.py` 来处理这些内置函数。
+
+```python
+...
+# Import constants
+from yosh.constants import *
+
+# Hash map to store built-in function name and reference as key and value
+built_in_cmds = {}
+
+
+def tokenize(string):
+    return shlex.split(string)
+
+
+def execute(cmd_tokens):
+    # Extract command name and arguments from tokens
+    cmd_name = cmd_tokens[0]
+    cmd_args = cmd_tokens[1:]
+
+    # If the command is a built-in command, invoke its function with arguments
+    if cmd_name in built_in_cmds:
+        return built_in_cmds[cmd_name](cmd_args)
+
+    ...
+```
+
+我们使用一个 python 字典变量 `built_in_cmds` 作为哈希映射(hash map),以存储我们的内置函数。我们在 `execute` 函数中提取命令的名字和参数。如果该命令在我们的哈希映射中,则调用对应的内置函数。
+
+(提示:`built_in_cmds[cmd_name]` 返回的是一个函数引用,可以直接用参数调用它。)
+
+我们差不多准备好使用内置的 `cd` 函数了。最后一步是将 `cd` 函数添加到 `built_in_cmds` 映射中。
+
+```
+...
+# Import all built-in function references
+from yosh.builtins import *
+
+...
+ +# Register a built-in function to built-in command hash map +def register_command(name, func): + built_in_cmds[name] = func + + +# Register all built-in commands here +def init(): + register_command("cd", cd) + + +def main(): + # Init shell before starting the main loop + init() + shell_loop() +``` + +我们定义了 `register_command` 函数,以添加一个内置函数到我们内置的命令哈希映射。接着,我们定义 `init` 函数并且在这里注册内置的 `cd` 函数。 + +注意这行 `register_command("cd", cd)` 。第一个参数为命令的名字。第二个参数为一个函数引用。为了能够让第二个参数 `cd` 引用到 `yosh/builtins/cd.py` 中的 `cd` 函数引用,我们必须将以下这行代码放在 `yosh/builtins/__init__.py` 文件中。 + +``` +from yosh.builtins.cd import * +``` + +因此,在 `yosh/shell.py` 中,当我们从 `yosh.builtins` 导入 `*` 时,我们可以得到已经通过 `yosh.builtins` 导入的 `cd` 函数引用。 + +我们已经准备好了代码。让我们尝试在 `yosh` 同级目录下以模块形式运行我们的 shell,`python -m yosh.shell`。 + +现在,`cd` 命令可以正确修改我们的 shell 目录了,同时非内置命令仍然可以工作。非常好! + +#### exit + +最后一块终于来了:优雅地退出。 + +我们需要一个可以修改 shell 状态为 `SHELL_STATUS_STOP` 的函数。这样,shell 循环可以自然地结束,shell 将到达终点而退出。 + +和 `cd` 一样,如果我们在子进程中 fork 和执行 `exit` 函数,其对父进程是不起作用的。因此,`exit` 函数需要成为一个 shell 内置函数。 + +让我们从这开始:在 `builtins` 目录下创建一个名为 `exit.py` 的新文件。 + +``` +yosh_project +|-- yosh + |-- builtins + | |-- __init__.py + | |-- cd.py + | |-- exit.py + |-- __init__.py + |-- constants.py + |-- shell.py +``` + +`exit.py` 定义了一个 `exit` 函数,该函数仅仅返回一个可以退出主循环的状态。 + +``` +from yosh.constants import * + + +def exit(args): + return SHELL_STATUS_STOP +``` + +然后,我们导入位于 `yosh/builtins/__init__.py` 文件的 `exit` 函数引用。 + +``` +from yosh.builtins.cd import * +from yosh.builtins.exit import * +``` + +最后,我们在 `shell.py` 中的 `init()` 函数注册 `exit` 命令。 + + +``` +... + +# Register all built-in commands here +def init(): + register_command("cd", cd) + register_command("exit", exit) + +... +``` + +到此为止! + +尝试执行 `python -m yosh.shell`。现在你可以输入 `exit` 优雅地退出程序了。 + +### 最后的想法 + +我希望你能像我一样享受创建 `yosh` (**y**our **o**wn **sh**ell)的过程。但我的 `yosh` 版本仍处于早期阶段。我没有处理一些会使 shell 崩溃的极端状况。还有很多我没有覆盖的内置命令。为了提高性能,一些非内置命令也可以实现为内置命令(避免新进程创建时间)。同时,大量的功能还没有实现(请看 [公共特性](http://tldp.org/LDP/Bash-Beginners-Guide/html/x7243.html) 和 [不同特性](http://www.tldp.org/LDP/intro-linux/html/x12249.html)) + +我已经在 github.com/supasate/yosh 中提供了源代码。请随意 fork 和尝试。 + +现在该是创建你真正自己拥有的 Shell 的时候了。 + +Happy Coding! 
+
+--------------------------------------------------------------------------------
+
+via: https://hackercollider.com/articles/2016/07/06/create-your-own-shell-in-python-part-2/
+
+作者:[Supasate Choochaisri][a]
+译者:[cposture](https://github.com/cposture)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://disqus.com/by/supasate_choochaisri/
+[1]: https://hackercollider.com/articles/2016/07/05/create-your-own-shell-in-python-part-1/
+[2]: http://tldp.org/LDP/Bash-Beginners-Guide/html/x7243.html
+[3]: http://www.tldp.org/LDP/intro-linux/html/x12249.html
+[4]: https://github.com/supasate/yosh
diff --git a/translated/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md b/translated/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md
new file mode 100644
index 0000000000..b06c56b46e
--- /dev/null
+++ b/translated/tech/20160706 Getting started with Docker Swarm and deploying a replicated Python 3 Application.md
@@ -0,0 +1,282 @@
+教程:开始学习如何使用 Docker Swarm 部署可扩展的 Python3 应用
+==============
+
+[Ben Firshman][2] 最近在 [Dockercon][1] 做了一个关于使用 Docker 构建无服务应用的演讲,你可以[在这里查看详情][3](还有视频)。之后,我写了[一篇文章][4],介绍如何使用 [AWS Lambda][5] 构建微服务系统。
+
+今天,我想展示给你的,是如何使用 [Docker Swarm][6] 部署一个简单的 Python Falcon REST 应用,不过这次我不会使用 [dockerrun][7] 或者其他无服务特性。你可能会惊讶:使用 Docker Swarm 部署(多副本运行)一个 Python(Java、Go 都一样)应用是如此简单。
+
+注意:这里展示的部分步骤截取自 [Swarm Tutorial][8]。我已经修改了部分内容,并添加了一个 [Vagrant 配置][9],用于构建本地测试环境。请确保你使用的是 1.12 或以上版本的 Docker 引擎。我写这篇文章的时候,使用的是 1.12RC2 版本的 Docker。注意,这还只是一个候选版本,后续可能还会有修改。
+
+如果你想在本地运行,你要做的第一件事就是保证正确地安装了 [Vagrant][10]。你也可以按照类似的步骤,在你喜欢的云服务提供商那里部署这套 Docker Swarm 虚拟机。
+
+我们将会使用三台 VM:一台简单的 Docker Swarm 管理节点(manager)和两台 worker。
+
+安全注意事项:Vagrantfile 中会执行一段托管在 Docker 测试服务器上的 shell 脚本。如果你不信任它,这就是一个潜在的安全隐患,请确保在运行之前[审查这部分的脚本][11]。
+
+```
+$ git clone https://github.com/chadlung/vagrant-docker-swarm
+$ cd vagrant-docker-swarm
+$ vagrant plugin install vagrant-vbguest
+$ vagrant up
+```
+
+vagrant up 命令可能会花很长的时间来执行。
+
+SSH 登陆进入 manager1 虚拟机:
+
+```
+$ vagrant ssh manager1
+```
+
+在 manager1 的终端中执行如下命令:
+
+```
+$ sudo docker swarm init --listen-addr 192.168.99.100:2377
+```
+
+现在还没有 worker 注册上来:
+
+```
+$ sudo docker node ls
+```
+
+我们来注册这两个 worker。打开两个新的终端会话(保持 manager1 的会话继续运行):
+
+```
+$ vagrant ssh worker1
+```
+
+在 worker1 上执行如下命令:
+
+```
+$ sudo docker swarm join 192.168.99.100:2377
+```
+
+在 worker2 上重复这些命令。
+
+在 manager1 上执行这个命令:
+
+```
+$ docker node ls
+```
+
+你将会看到:
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-3.15.25-PM.png)
+
+现在,在 manager1 的终端里部署一个简单的服务:
+
+```
+sudo docker service create --replicas 1 --name pinger alpine ping google.com
+```
+
+这个命令将会部署一个服务,它会在其中一台 worker 机器上 ping google.com。(manager 节点也可以运行服务,不过如果你只想让 worker 运行容器,[也可以禁止这一点][12]。)可以使用如下命令,查看服务运行在哪些节点上:
+
+```
+$ sudo docker service tasks pinger
+```
+
+结果会和这个比较类似:
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.23.05-PM.png)
+
+所以,我们知道了服务正跑在 worker1 上。我们可以回到 worker1 的会话里,然后接入正在运行的容器:
+
+```
+$ sudo docker ps
+```
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.25.02-PM.png)
+
+你可以看到容器的 id 是:ae56769b9d4d
+
+在我的例子中,我运行了如下的命令:
+
+```
+$ sudo docker attach ae56769b9d4d
+```
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-5.26.49-PM.png)
+
+直接按 CTRL-C 即可退出。
+
+回到 manager1,并且移除 pinger 服务:
+
+```
+$ sudo docker service rm pinger
+```
+
+现在,我们将会部署可多副本运行的 Python 应用。请记住,为了保持文章的简洁、容易复现,部署的是一个很简单的应用。
+
+你需要做的第一件事就是将镜像放到 [Docker Hub][13] 上,或者使用我[已经上传的一个][14]。这是一个简单的 Python 3 Falcon REST 应用,它提供一个简单的接口:/hello,接受一个 value 参数。
+
+[chadlung/hello-app][15] 的 Python 代码看起来像这样:
+
+```
+import json
+from wsgiref import simple_server
+
+import falcon
+
+
+class HelloResource(object):
+    def on_get(self, req, resp):
+        try:
+            value = req.get_param('value')
+
+            resp.content_type = 'application/json'
+            resp.status = falcon.HTTP_200
+            resp.body = json.dumps({'message': str(value)})
+        except Exception as ex:
+            resp.status = falcon.HTTP_500
+            resp.body = str(ex)
+
+
+if __name__ == '__main__':
+    app = falcon.API()
+    hello_resource = HelloResource()
+    app.add_route('/hello', hello_resource)
+    httpd = simple_server.make_server('0.0.0.0', 8080, app)
+    httpd.serve_forever()
+```
+
+Dockerfile 很简单:
+
+```
+FROM python:3.4.4
+
+RUN pip install -U pip
+RUN pip install -U falcon
+
+EXPOSE 8080
+
+COPY . /hello-app
+WORKDIR /hello-app
+
+CMD ["python", "app.py"]
+```
+
+再说明一次,这个应用非常简单直接。如果你愿意,也可以在本地运行它,然后访问这个接口:http://127.0.0.1:8080/hello?value=Fred
+
+这将返回如下结果:
+
+```
+{"message": "Fred"}
+```
+
+构建这个 hello-app 并推送到 Docker Hub(修改成你自己的 Docker Hub 仓库,或者使用[这个][15]):
+
+```
+$ sudo docker build . -t chadlung/hello-app:2
+$ sudo docker push chadlung/hello-app:2
+```
+
+现在,我们可以将应用部署到之前的 Docker Swarm 了。登陆 manager1 终端,并且执行:
+
+```
+$ sudo docker service create -p 8080:8080 --replicas 2 --name hello-app chadlung/hello-app:2
+$ sudo docker service inspect --pretty hello-app
+$ sudo docker service tasks hello-app
+```
+
+现在,我们已经可以测试了。使用 Swarm 中任意一个节点的 IP 来访问 /hello 接口。在本例中,我在 manager1 的终端里使用 curl 命令:
+
+注意,Swarm 中所有节点的 IP 都可以使用,即使这个服务只运行在其中一台或者更多的节点上。
+
+```
+$ curl -v -X GET "http://192.168.99.100:8080/hello?value=Chad"
+$ curl -v -X GET "http://192.168.99.101:8080/hello?value=Test"
+$ curl -v -X GET "http://192.168.99.102:8080/hello?value=Docker"
+```
+
+结果就是:
+
+```
+* Hostname was NOT found in DNS cache
+*   Trying 192.168.99.101...
+* Connected to 192.168.99.101 (192.168.99.101) port 8080 (#0)
+> GET /hello?value=Chad HTTP/1.1
+> User-Agent: curl/7.35.0
+> Host: 192.168.99.101:8080
+> Accept: */*
+>
+* HTTP 1.0, assume close after body
+< HTTP/1.0 200 OK
+< Date: Tue, 28 Jun 2016 23:52:55 GMT
+< Server: WSGIServer/0.2 CPython/3.4.4
+< content-type: application/json
+< content-length: 19
+<
+{"message": "Chad"}
+```
+
+从浏览器中访问其他节点:
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-6.54.31-PM.png)
+
+如果你想看所有正在运行的服务,你可以在 manager1 节点上运行如下命令:
+
+```
+$ sudo docker service ls
+```
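+
+如果想调整服务的副本数量(原文没有提到这一步,这里补充一个示意,Docker 1.12 的 swarm 模式提供了 docker service scale 命令):
+
+```
+$ sudo docker service scale hello-app=3
+```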
+
+从浏览器中访问其他节点:
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-28-at-6.54.31-PM.png)
+
+如果你想查看正在运行的所有服务,可以在 manager1 节点上运行如下命令:
+
+```
+$ sudo docker service ls
+```
+
+如果你想添加可视化控制平台,可以安装 [Docker Swarm Visualizer][16](它非常简单易上手)。在 manager1 的终端中执行如下命令:
+
+```
+$ sudo docker run -it -d -p 5000:5000 -e HOST=192.168.99.100 -e PORT=5000 -v /var/run/docker.sock:/var/run/docker.sock manomarks/visualizer
+```
+
+打开你的浏览器,并且访问:(LCTT 译注:原文此处的链接丢失了,根据上面命令中的 HOST 和 PORT 参数推测,应该是 http://192.168.99.100:5000)
+
+结果如下(假设已经运行了两个 Docker Swarm 服务):
+
+![](http://www.giantflyingsaucer.com/blog/wp-content/uploads/2016/06/Screen-Shot-2016-06-30-at-2.37.28-PM.png)
+
+要停止运行 hello-app 服务(它已经在两个节点上运行了),可以在 manager1 上执行:
+
+```
+$ sudo docker service rm hello-app
+```
+
+如果还想停掉刚才运行的可视化工具,那么在 manager1 的终端中执行:
+
+```
+$ sudo docker ps
+```
+
+获得容器的 ID,这里是:f71fec0d3ce1,然后在 manager1 的终端会话中执行:
+
+```
+$ sudo docker stop f71fec0d3ce1
+```
+
+祝你玩得开心,Docker Swarm!本文主要是基于 1.12 版本的 Docker 来描述的。
+
+--------------------------------------------------------------------------------
+
+via: http://www.giantflyingsaucer.com/blog/?p=5923
+
+作者:[Chad Lung][a]
+译者:[MikeCoder](https://github.com/MikeCoder)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.giantflyingsaucer.com/blog/?author=2
+[1]: http://dockercon.com/
+[2]: https://blog.docker.com/author/bfirshman/
+[3]: https://blog.docker.com/author/bfirshman/
+[4]: http://www.giantflyingsaucer.com/blog/?p=5730
+[5]: https://aws.amazon.com/lambda/
+[6]: https://docs.docker.com/swarm/
+[7]: https://github.com/bfirsh/dockerrun
+[8]: https://docs.docker.com/engine/swarm/swarm-tutorial/
+[9]: https://github.com/chadlung/vagrant-docker-swarm
+[10]: https://www.vagrantup.com/
+[11]: https://test.docker.com/
+[12]: https://docs.docker.com/engine/reference/commandline/swarm_init/
+[13]: https://hub.docker.com/
+[14]: https://hub.docker.com/r/chadlung/hello-app/
+[15]: https://hub.docker.com/r/chadlung/hello-app/
+[16]: https://github.com/ManoMarks/docker-swarm-visualizer
diff --git a/translated/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md b/translated/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md
new file mode 100644
index 0000000000..84b9db7968
--- /dev/null
+++ b/translated/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md
@@ -0,0 +1,206 @@
+LFCS 系列第十一讲:如何使用命令 vgcreate、lvcreate 和 lvextend 管理和创建 LVM
+============================================================================================
+
+由于 LFCS 考试中的一些改变已在 2016 年 2 月 2 日生效,我们添加了一些必要的专题到 [LFCS 系列][1]。我们也非常推荐备考的同学,同时阅读 [LFCE 系列][2]。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Manage-LVM-and-Create-LVM-Partition-in-Linux.png)
+>LFCS:管理 LVM 和创建 LVM 分区
+
+在安装 Linux 系统的时候,要做的最重要的决定之一便是给系统文件、home 目录等分配空间。如果在这个地方犯了错,要再扩大空间不足的分区,那就既麻烦又有风险。
+
+**逻辑卷管理**(即 **LVM**)相较于传统的分区管理有许多优点,已经成为大多数(如果不能说全部的话)Linux 发行版安装时的默认选择。LVM 最大的优点应该是能方便地按照你的意愿调整(减小或增大)逻辑分区的大小。
+
+LVM 的组成结构:
+
+* 把一块或多块硬盘,或者一个或多个分区,配置成物理卷(**PV**)。
+* 用一个或多个物理卷创建出卷组(**VG**)。可以把一个卷组想象成一个单独的存储单元。
+* 在一个卷组上可以创建多个逻辑卷(**LV**)。每个逻辑卷相当于一个传统意义上的分区 —— 优点是它的大小可以根据需求重新调整,正如之前提到的那样。
+
+本文,我们将使用三块 **8 GB** 的磁盘(**/dev/sdb**、**/dev/sdc** 和 **/dev/sdd**)分别创建三个物理卷。你既可以直接在设备上创建 PV,也可以先分区再创建。
+
+在这里我们选择第一种方式;如果你决定使用第二种(可以参考本系列[第四讲:创建分区和文件系统][3]),请确保每个分区的类型都是 `8e`。
+
+### 创建物理卷,卷组和逻辑卷
+
+要在 **/dev/sdb**、**/dev/sdc** 和 **/dev/sdd** 上创建物理卷,运行:
+
+```
+# pvcreate /dev/sdb /dev/sdc /dev/sdd
+```
+
+你可以通过下面的命令列出新创建的 PV:
+
+```
+# pvs
+```
+
+并通过下面的命令得到每个 PV 的详细信息:
+
+```
+# pvdisplay /dev/sdX
+```
+
+(**X** 即 b、c 或 d)
+
+如果没有输入 `/dev/sdX`,那么你将得到所有 PV 的信息。
+
+使用 `/dev/sdb` 和 `/dev/sdc` 创建卷组,命名为 `vg00`(在需要时可以通过添加其他设备来扩展空间,我们等到说明这点的时候再用,所以暂时先保留 `/dev/sdd`):
+
+```
+# vgcreate vg00 /dev/sdb /dev/sdc
+```
+
+就像物理卷那样,你也可以查看卷组的信息,通过:
+
+```
+# vgdisplay vg00
+```
+
+由于 `vg00` 是由两个 **8 GB** 的磁盘组成的,所以它将会显示成一个 **16 GB** 的硬盘:
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/List-LVM-Volume-Groups.png)
+>LVM 卷组列表
+
+在创建逻辑卷时,空间的分配必须考虑到当下和以后的需求。根据每个逻辑卷的用途来命名是一个好的做法。
+
+举个例子,让我们创建两个 LV,命名为 `vol_projects`(**10 GB**)和 `vol_backups`(剩下的空间),在日后分别用于部署项目文件和系统备份。
+
+参数 `-n` 用于为 LV 指定名称,而 `-L` 用于设定固定的大小,还有 `-l`(小写的 L)用于按 VG 剩余空间的百分比来指定大小。
+
+```
+# lvcreate -n vol_projects -L 10G vg00
+# lvcreate -n vol_backups -l 100%FREE vg00
+```
+
+和之前一样,你可以查看 LV 的列表和基础信息,通过:
+
+```
+# lvs
+```
+
+或是详细信息,通过:
+
+```
+# lvdisplay
+```
+
+若要查看单个 **LV** 的信息,使用 **lvdisplay** 加上 **VG** 和 **LV** 作为参数,如下:
+
+```
+# lvdisplay vg00/vol_projects
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Logical-Volume.png)
+>逻辑卷列表
+
+如上图,我们看到 LV 已经被创建成存储设备了(参考 LV Path 一行)。在使用每个逻辑卷之前,需要先在上面创建文件系统。
+
+这里我们拿 ext4 来做例子,因为对于每个 LV 的大小,ext4 既可以增大又可以减小(相对的,xfs 就只允许增大):
+
+```
+# mkfs.ext4 /dev/vg00/vol_projects
+# mkfs.ext4 /dev/vg00/vol_backups
+```
+
+我们将在下一节向大家说明,如何调整逻辑卷的大小,并在需要的时候添加额外的外部存储空间。
+
+### 调整逻辑卷大小和扩充卷组
+
+现在设想以下场景:`vol_backups` 中的空间即将用完,而 `vol_projects` 中还有富余的空间。由于 LVM 的特性,我们可以轻易地减小后者的大小(比方说 **2.5 GB**),并将其分配给前者,与此同时调整每个文件系统的大小。
+
+幸运的是这很简单,只需:
+
+```
+# lvreduce -L -2.5G -r /dev/vg00/vol_projects
+# lvextend -l +100%FREE -r /dev/vg00/vol_backups
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Resize-Reduce-Logical-Volume-and-Volume-Group.png)
+>减小逻辑卷和卷组
+
+在调整逻辑卷的时候,命令中的减号 `(-)` 或加号 `(+)` 是十分重要的,它们表示在现有大小的基础上增减。否则,LV 会被直接设置成指定的大小,而不是在现有大小上调整。
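+
+(LCTT 译注:下面的例子是译者补充的,用来说明这个区别;假设 vg00 中有足够的剩余空间:)
+
+```
+# lvextend -L +2G -r /dev/vg00/vol_projects   # 在现有大小的基础上增加 2 GB
+# lvextend -L 12G -r /dev/vg00/vol_projects   # 直接把逻辑卷的大小设置为 12 GB
+```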
+
+有些时候,你可能会遭遇无法仅靠调整逻辑卷的大小就能解决的问题,那时你就需要购置额外的存储设备了,比如再加一块硬盘。这里我们将通过添加之前配置时预留的 PV(`/dev/sdd`)来模拟这种情况。
+
+想把 `/dev/sdd` 加到 `vg00`,执行:
+
+```
+# vgextend vg00 /dev/sdd
+```
+
+如果你在运行上条命令的前后分别执行 `vgdisplay vg00`,你就会看出 VG 的大小增加了。
+
+```
+# vgdisplay vg00
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Volume-Group-Size.png)
+>查看卷组磁盘大小
+
+现在,你可以使用新加的空间,按照你的需求调整现有 LV 的大小,或者创建一个新的 LV。
+
+### 在启动和需要时挂载逻辑卷
+
+当然,如果我们不打算实际使用逻辑卷,那么创建它们就毫无意义了。为了更好地识别逻辑卷,我们需要找出它的 `UUID`(用于识别一个格式化存储设备的唯一且不变的属性)。
+
+要做到这点,可使用 `blkid` 加上每个设备的路径:
+
+```
+# blkid /dev/vg00/vol_projects
+# blkid /dev/vg00/vol_backups
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-Logical-Volume-UUID.png)
+>寻找逻辑卷的 UUID
+
+为每个 LV 创建挂载点:
+
+```
+# mkdir /home/projects
+# mkdir /home/backups
+```
+
+并在 `/etc/fstab` 中插入相应的条目(确保使用之前获得的 UUID):
+
+```
+UUID=b85df913-580f-461c-844f-546d8cde4646 /home/projects ext4 defaults 0 0
+UUID=e1929239-5087-44b1-9396-53e09db6eb9e /home/backups ext4 defaults 0 0
+```
+
+保存并挂载 LV:
+
+```
+# mount -a
+# mount | grep home
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Mount-Logical-Volumes-on-Linux-1.png)
+>挂载逻辑卷
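+
+(LCTT 译注:此步骤为译者补充。挂载之后,还可以用 df 命令确认每个挂载点的大小和可用空间,输出会因环境而异:)
+
+```
+# df -h /home/projects /home/backups
+```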
+
+在实际使用 LV 时,你还需要按照曾在本系列[第八讲:管理用户和用户组][4]中讲解的那样,为其设置合适的 `ugo+rwx` 权限。
+
+### 总结
+
+本文介绍了[逻辑卷管理][5],一个用于管理可扩展存储设备的多功能工具。与 RAID(曾在本系列[第六讲:组装分区为RAID设备——创建和管理系统备份][6]中讲解过)结合使用,你将同时体验到(LVM 带来的)可扩展性和(RAID 提供的)冗余。
+
+在这类部署中,你通常会在 RAID 之上配置 LVM,这就是说,先配置好 RAID,然后在它之上配置 LVM。
+
+如果你对本文有任何的疑问和建议,可以直接在下方的评论区告诉我们。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/manage-and-create-lvm-parition-using-vgcreate-lvcreate-and-lvextend/
+
+作者:[Gabriel Cánepa][a]
+译者:[martin2011qi](https://github.com/martin2011qi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/gacanepa/
+[1]: https://linux.cn/article-7161-1.html
+[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/
+[3]: https://linux.cn/article-7187-1.html
+[4]: https://linux.cn/article-7418-1.html
+[5]: http://www.tecmint.com/create-lvm-storage-in-linux/
+[6]: https://linux.cn/article-7229-1.html
diff --git a/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md b/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md
new file mode 100644
index 0000000000..2d0238becf
--- /dev/null
+++ b/translated/tech/LFCS/Part 12 - How to Explore Linux with Installed Help Documentations and Tools.md
@@ -0,0 +1,178 @@
+LFCS 系列第十二讲:如何使用 Linux 的帮助文档和工具
+==================================================================================
+
+由于 2016 年 2 月 2 号开始启用了新的 LFCS 考试要求,我们在 [LFCS 系列][1]中添加了一些必要的内容。为了考试的需要,我们强烈建议你同时阅读 [LFCE 系列][2]。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Explore-Linux-with-Documentation-and-Tools.png)
+>LFCS:了解 Linux 的帮助文档和工具
+
+当你习惯了在命令行下工作,你会发现 Linux 自带了许多可以帮助你使用和配置系统的文档。
+
+另一个你必须熟悉命令行帮助工具的理由是:在 [LFCS][3] 和 [LFCE][4] 考试中,你只能靠自己和命令行工具,没有互联网也没有百度。
+
+基于上面的理由,在这一讲里我们将给你一些建议,帮助你通过 **Linux Foundation Certification** 考试。
+
+### Linux 帮助手册
+
+man 命令,大体上来说就是一个工具手册。它包含选项列表(和解释),甚至还提供一些使用例子。
+
+我们用 **man** 加上工具名称来打开它的帮助手册,以便获取更多内容。例如:
+
+```
+# man diff
+```
+
+这将打开 `diff` 的手册页,这个工具用于逐行对比文本文档(如果想退出,只需要按一下 Q 键)。
+
+下面我来比较两个文本文件 `file1` 和 `file2`,这两个文件包含了相同版本的 Linux 发行版中的安装包列表。
+
+输入 `diff` 命令,它将告诉我们 `file1` 和 `file2` 有什么不同:
+
+```
+# diff file1 file2
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-Two-Text-Files-in-Linux.png)
+>在 Linux 中比较两个文本文件
+
+`<` 这个符号表示该行在 `file2` 中缺失;如果是 `file1` 中缺失的行,则用 `>` 符号来表示。
+
+接下来,**7d6** 的意思是:`file1` 中的第 **7** 行需要删除,才能与 `file2` 匹配(**24d22** 和 **41d38** 是同样的意思),而 **65,67d61** 则告诉我们需要删除第 **65** 到 **67** 行。把以上这些修改都做完,两个文件就完全一致了。
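+
+如果对这些符号还不熟悉,可以自己动手构造一个最小的例子来体会(LCTT 译注:此例为译者补充):
+
+```
+$ printf 'a\nb\nc\n' > f1
+$ printf 'a\nc\n' > f2
+$ diff f1 f2
+2d1
+< b
+```
+
+**2d1** 表示删除 f1 的第 2 行(也就是 `< b` 所指的那一行)之后,两个文件就一致了。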
+
+你还可以通过 `-y` 选项来并排对比两个文件:
+
+```
+# diff -y file1 file2
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Compare-and-List-Difference-of-Two-Files.png)
+>通过列表来列出两个文件的不同
+
+当然,你也可以用 `diff` 来比较两个二进制文件。如果它们完全一样,`diff` 将什么也不会输出;否则,它将会返回如下信息:“**Binary files X and Y differ**”。
+
+### --help 选项
+
+`--help` 选项,大多数命令都可以用它(并不是所有),它可以理解为一个命令的简短介绍。尽管它不提供工具的详细介绍,但确实是一个能够快速列出程序使用信息的不错的方法。
+
+例如,
+
+```
+# sed --help
+```
+
+将显示 sed(流编辑器)每个选项的用法。
+
+sed 的一个经典用法是替换文件中的字符。使用 `-i` 选项(说明为“**原地编辑文件**”),你可以直接编辑一个文件而不需要打开它。如果你还想保留原始文件的内容,可以在 `-i` 后面加上一个后缀,用来创建原始文件的一个副本。
+
+例如,要把 `lorem.txt` 中的 `Lorem` 替换为 `Tecmint`(忽略大小写),并且保留一份原始文件的副本,可以执行如下命令:
+
+```
+# less lorem.txt | grep -i lorem
+# sed -i.orig 's/Lorem/Tecmint/gI' lorem.txt
+# less lorem.txt | grep -i lorem
+# less lorem.txt.orig | grep -i lorem
+```
+
+请注意,`lorem.txt` 文件中的 `Lorem` 都已经被替换为 `Tecmint`,并且原始的 `lorem.txt` 保存为了 `lorem.txt.orig`。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Replace-A-String-in-File.png)
+>替换文件文本
+
+### /usr/share/doc 内的文档
+
+这可能是我最喜欢的方法。如果你进入 `/usr/share/doc` 目录,可以看到许多以已安装工具的名称命名的文件夹。
+
+根据[文件系统层次标准][5],这些文件夹包含了许多帮助手册中没有的信息,还有一些可以让配置更轻松的模板和配置文件。
+
+例如,让我们来看一下 `squid-3.3.8`(版本可能会不同)的目录,它对应非常受欢迎的 HTTP 代理和 [squid 缓存服务器][6]。
+
+让我们用 `cd` 命令进入目录:
+
+```
+# cd /usr/share/doc/squid-3.3.8
+```
+
+列出当前文件夹列表:
+
+```
+# ls
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Files-in-Linux.png)
+>ls Linux 列表命令
+
+你应该特别注意 `QUICKSTART` 和 `squid.conf.documented`。这两个文件包含了 Squid 的详尽文档和带有详细注释的配置文件。对于别的安装包来说,具体的文件名可能不同(比如 **QuickRef** 或者 **00QUICKSTART**),但原理是一样的。
+
+对于其他安装包,比如 Apache web 服务器,`/usr/share/doc` 目录下还提供了配置模板,当你配置独立服务器或者虚拟主机的时候会非常有用。
+
+### GNU info 文档
+
+你可以把 info 文档想象为帮助手册的超链接形式。正如上面所说,它不仅提供工具的帮助信息,而且还是超链接的形式(是的!在命令行中的超链接),你可以通过箭头按钮和回车按钮来浏览你需要的内容。
+
+一个典型的例子是:
+
+```
+# info coreutils
+```
+
+它将列出 coreutils,即当前系统的[基本文件、shell 和文本处理工具][7],你可以得到它们每一个的详细介绍。
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Info-Coreutils.png)
+>Info Coreutils
+
+和帮助手册一样,你可以按 Q 键退出。
+
+此外,GNU info 还可以用来显示帮助手册。例如:
+
+```
+# info tune2fs
+```
+
+它将显示 **tune2fs** 的帮助手册,tune2fs 是 ext2/3/4 文件系统管理工具。
+
+让我们来看看怎么用 **tune2fs**:
+
+显示 **/dev/mapper/vg00-vol_backups** 文件系统的信息:
+
+```
+# tune2fs -l /dev/mapper/vg00-vol_backups
+```
+
+修改文件系统标签(修改为 Backups):
+
+```
+# tune2fs -L Backups /dev/mapper/vg00-vol_backups
+```
+
+设置文件系统的自检条件(用 `-c` 选项设置每挂载多少次后自检,或者用 `-i` 选项设置自检的时间间隔,其中 **d 表示天,w 表示周,m 表示月**)。
+
+```
+# tune2fs -c 150 /dev/mapper/vg00-vol_backups # Check every 150 mounts
+# tune2fs -i 6w /dev/mapper/vg00-vol_backups # Check every 6 weeks
+```
+
+以上这些内容也都可以通过 `--help` 选项找到,或者查看帮助手册。
+
+### 总结
+
+不管你选择哪种方法,知道并且会使用它们,在考试中对你都是非常有用的。你还知道其它的一些方法吗?欢迎给我们留言。
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/
+
+作者:[Gabriel Cánepa][a]
+译者:[kokialoves](https://github.com/kokialoves)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/gacanepa/
+[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
+[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/
+[3]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
+[4]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/
+[5]: http://www.tecmint.com/linux-directory-structure-and-important-files-paths-explained/
+[6]: http://www.tecmint.com/configure-squid-server-in-linux/
+[7]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
+[8]:
diff --git a/translated/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md b/translated/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md
deleted file mode 100644
index 2781dde63d..0000000000
--- a/translated/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md
+++ /dev/null
@@ -1,230 +0,0 @@
-Flowsnow translating... 
-LFCS系列第九讲: 使用Yum, RPM, Apt, Dpkg, Aptitude, Zypper进行Linux包管理 -================================================================================ -去年八月, Linux基金会宣布了一个全新的LFCS(Linux Foundation Certified Sysadmin,Linux基金会认证系统管理员)认证计划,这对广大系统管理员来说是一个很好的机会,管理员们可以通过绩效考试来表明自己可以成功支持Linux系统的整体运营。 当需要的时候一个Linux基金会认证的系统管理员有足够的专业知识来确保系统高效运行,提供第一手的故障诊断和监视,并且为工程师团队在问题升级时提供智能决策。 - -![Linux Package Management](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-9.png) - -Linux基金会认证系统管理员 – 第九讲 - -请观看下面关于Linux基金会认证计划的演示。 - -注:youtube 视频 - - -本文是本系列十套教程中的第九讲,今天在这篇文章中我们会引导你学习Linux包管理,这也是LFCS认证考试所需要的。 - -### 包管理 ### - -简单的说,包管理是系统中安装和维护软件的一种方法,其中维护也包含更新和卸载。 - -在Linux早期,程序只以源代码的方式发行,还带有所需的用户使用手册和必备的配置文件,甚至更多。现如今,大多数发行商使用默认的预装程序或者被称为包的程序集合。用户使用这些预装程序或者包来安装该发行版本。然而,Linux最伟大的一点是我们仍然能够获得程序的源代码用来学习、改进和编译。 - -**包管理系统是如何工作的** - -如果某一个包需要一定的资源,如共享库,或者需要另一个包,据说就会存在依赖性问题。所有现在的包管理系统提供了一些解决依赖性的方法,以确保当安装一个包时,相关的依赖包也安装好了 - -**打包系统** - -几乎所有安装在现代Linux系统上的软件都会在互联网上找到。它要么能够通过中央库(中央库能包含几千个包,每个包都已经构建、测试并且维护好了)发行商得到,要么能够直接得到可以下载和手动安装的源代码。 - -由于不同的发行版使用不同的打包系统(Debian的*.deb文件/ CentOS的*.rpm文件/ openSUSE的专门为openSUSE构建的*.rpm文件),因此为一个发行版本开发的包会与其他发行版本不兼容。然而,大多数发行版本都可能是LFCS认证的三个发行版本之一。 - -**高级和低级打包工具** - -为了有效地进行包管理的任务,你需要知道,你将有两种类型的实用工具:低级工具(能在后端实际安装,升级,卸载包文件),以及高级工具(负责确保能很好的执行依赖性解决和元数据检索的任务,元数据也称为关于数据的数据)。 - -注:表格 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
发行版低级工具高级工具
 Debian版及其衍生版 dpkg apt-get / aptitude
 CentOS版 rpm yum
 openSUSE版 rpm zypper
- -让我们来看下低级工具和高级工具的描述。 - -dpkg的是基于Debian系统中的一个低级包管理器。它可以安装,删除,提供有关资料,并建立*.deb包,但它不能自动下载并安装它们相应的依赖包。 - -- 阅读更多: [15个dpkg命令实例][1] - -apt-get是Debian和衍生版的高级包管理器,并提供命令行方式从多个来源检索和安装软件包,其中包括解决依赖性。和dpkg不同的是,apt-get不是直接基于.deb文件工作,而是基于包的正确名称。 - -- 阅读更多: [25个apt-get命令实力][2] - -Aptitude是基于Debian的系统的另一个高级包管理器,它可用于快速简便的执行管理任务(安装,升级和删除软件包,还可以自动处理解决依赖性)。与atp-get和额外的包管理器相比,它提供了相同的功能,例如提供对包的几个版本的访问。 - -rpm是Linux标准基础(LSB)兼容发布版使用的一种包管理器,用来对包进行低级处理。就像dpkg,rpm可以查询,安装,检验,升级和卸载软件包,并能被基于Fedora的系统频繁地使用,比如RHEL和CentOS。 - -- 阅读更多: [20个rpm命令实例][3] - -相对于基于RPM的系统,yum增加了系统自动更新的功能和带依赖性管理的包管理功能。作为一个高级工具,和apt-get或者aptitude相似,yum基于库工作。 - -- 阅读更多: [20个yum命令实例][4] -- -### 低级工具的常见用法 ### - -你用低级工具处理最常见的任务如下。 - -**1. 从已编译(*.deb或*.rpm)的文件安装一个包** - -这种安装方法的缺点是没有提供解决依赖性的方案。当你在发行版本库中无法获得某个包并且又不能通过高级工具下载安装时,你很可能会从一个已编译文件安装该包。因为低级工具不需要解决依赖性问题,所以当安装一个没有解决依赖性的包时会出现出错并且退出。 - - # dpkg -i file.deb [Debian版和衍生版] - # rpm -i file.rpm [CentOS版 / openSUSE版] - -**注意**: 不要试图在CentOS中安装一个为openSUSE构建的.rpm文件,反之亦然! - -**2. 从已编译文件中更新一个包** - -同样,当中央库中没有某安装包时,你只能手动升级该包。 - - # dpkg -i file.deb [Debian版和衍生版] - # rpm -U file.rpm [CentOS版 / openSUSE版] - -**3. 列举安装的包** - -当你第一次接触一个已经在工作中的系统时,很可能你会想知道安装了哪些包。 - - # dpkg -l [Debian版和衍生版] - # rpm -qa [CentOS版 / openSUSE版] - -如果你想知道一个特定的包安装在哪儿, 你可以使用管道命令从以上命令的输出中去搜索,这在这个系列的[操作Linux文件 – 第一讲][5] 中有介绍。假定我们需要验证mysql-common这个包是否安装在Ubuntu系统中。 - - # dpkg -l | grep mysql-common - -![Check Installed Packages in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Installed-Package.png) - -检查安装的包 - -另外一种方式来判断一个包是否已安装。 - - # dpkg --status package_name [Debian版和衍生版] - # rpm -q package_name [CentOS版 / openSUSE版] - -例如,让我们找出sysdig包是否安装在我们的系统。 - - # rpm -qa | grep sysdig - -![Check sysdig Package](http://www.tecmint.com/wp-content/uploads/2014/11/Check-sysdig-Package.png) - -检查sysdig包 - -**4. 查询一个文件是由那个包安装的** - - # dpkg --search file_name - # rpm -qf file_name - -例如,pw_dict.hwm文件是由那个包安装的? - - # rpm -qf /usr/share/cracklib/pw_dict.hwm - -![Query File in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Query-File-in-Linux.png) - -Linux中查询文件 - -### 高级工具的常见用法 ### - -你用高级工具处理最常见的任务如下。 - -**1. 搜索包** - -aptitude更新将会更新可用的软件包列表,并且aptitude搜索会根据包名进行实际性的搜索。 - - # aptitude update && aptitude search package_name - -在搜索所有选项中,yum不仅可以通过包名还可以通过包的描述搜索程序包。 - - # yum search package_name - # yum search all package_name - # yum whatprovides “*/package_name” - -假定我们需要一个名为sysdig的包,要知道的是我们需要先安装然后才能运行。 - - # yum whatprovides “*/sysdig” - -![Check Package Description in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Description.png) - -检查包描述 - -whatprovides告诉yum搜索一个含有能够匹配上述正则表达式的文件的包。 - - # zypper refresh && zypper search package_name [在openSUSE上] - -**2. 从仓库安装一个包** - -当安装一个包时,在包管理器解决了所有依赖性问题后,可能会提醒你确认安装。需要注意的是运行更新或刷新(根据所使用的软件包管理器)不是绝对必要,但是考虑到安全性和依赖性的原因,保持安装的软件包是最新的是一个好的系统管理员的做法。 - - # aptitude update && aptitude install package_name [Debian版和衍生版] - # yum update && yum install package_name [CentOS版] - # zypper refresh && zypper install package_name [openSUSE版] - -**3. 卸载包** - -按选项卸载将会卸载软件包,但把配置文件保留完好,然而清除包从系统中完全删去该程序。 -# aptitude remove / purge package_name -# yum erase package_name - - ---注意要卸载的openSUSE包前面的减号 --- - - # zypper remove -package_name - -在默认情况下,大部分(如果不是全部)的包管理器会提示你,在你实际卸载之前你是否确定要继续卸载。所以,请仔细阅读屏幕上的信息,以避免陷入不必要的麻烦! - -**4. 
显示包的信息**
-
-下面的命令将会显示birthday这个包的信息。
-
- # aptitude show birthday
- # yum info birthday
- # zypper info birthday
-
-![Check Package Information in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Information.png)
-
-检查包信息
-
-### 总结 ###
-
-作为一个系统管理员,包管理器是你不能回避的东西。你应该立即准备使用本文中介绍的这些工具。希望你在准备LFCS考试和日常工作中会觉得这些工具好用。欢迎在下面留下您的意见或问题,我们将尽可能快的回复你。
-
---------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/linux-package-management/
-
-作者:[Gabriel Cánepa][a]
-译者:[Flowsnow](https://github.com/Flowsnow)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.tecmint.com/author/gacanepa/
-[1]:http://www.tecmint.com/dpkg-command-examples/
-[2]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/
-[3]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/
-[4]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
-[5]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
\ No newline at end of file
diff --git a/translated/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md b/translated/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md
new file mode 100644
index 0000000000..ef91e93575
--- /dev/null
+++ b/translated/tech/awk/Part 3 - How to Use Awk to Filter Text or Strings Using Pattern Specific Actions.md
@@ -0,0 +1,82 @@
+如何使用 Awk 来筛选文本或字符串
+=========================================================================
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Filter-Text-or-Strings-Using-Pattern.png)
+
+作为 Awk 命令系列的第三部分,这次我们将看一看如何基于用户定义的特定模式来筛选文本或字符串。
+
+在筛选文本时,有时你可能想根据某个给定的条件,或使用一个特定的可被匹配的模式,去标记某个文件或数行字符串中的某几行。使用 Awk 来完成这个任务是非常容易的,这也正是 Awk 中可能对你有所帮助的几个功能之一。
+
+让我们看一看下面这个例子。比方说你有一个名为 food_prices.list 的购物清单,上面写着你想要购买的食物,其所含有的食物名称及相应的价格如下所示:
+
+```
+$ cat food_prices.list
+No Item_Name Quantity Price
+1 Mangoes 10 $2.45
+2 Apples 20 $1.50
+3 Bananas 5 $0.90
+4 Pineapples 10 $3.46
+5 Oranges 10 $0.78
+6 Tomatoes 5 $0.55
+7 Onions 5 $0.45
+```
+
+然后,你想使用一个 `(*)` 符号去标记那些单价大于 $2 的食物,那么你可以通过运行下面的命令来达到此目的:
+
+```
+$ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $1, $2, $3, $4, "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' food_prices.list
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Text-Using-Awk.gif)
+>打印出单价大于 $2 的项目
+
+从上面的输出你可以看到,在含有芒果(mangoes)和菠萝(pineapples)的那两行末尾都已经有了一个 `(*)` 标记。假如你检查它们的单价,可以看到它们的单价的确超过了 $2。
+
+在这个例子中,我们已经使用了两个模式:
+
+- 第一个模式:`/ *\$[2-9]\.[0-9][0-9] */` 将会匹配那些食物单价大于 $2 的行;
+- 第二个模式:`/ *\$[0-1]\.[0-9][0-9] */` 将会匹配那些食物单价小于 $2 的行。
+
+上面的命令具体做了什么呢?这个文件有四个字段,当第一个模式匹配到含有食物单价大于 $2 的行时,它便会输出所有的四个字段,并在该行末尾加上一个 `(*)` 符号作为标记。
+
+第二个模式只是简单地将食物单价小于 $2 的其他行,按照它们在输入文件 food_prices.list 中的样子原样输出。
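+
+如果想单独验证一下第一个模式中的正则表达式,可以用下面这个小测试(LCTT 译注:此例为译者补充):
+
+```
+$ echo '$2.45' | awk '/ *\$[2-9]\.[0-9][0-9] */ { print "matched" }'
+matched
+```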
+
+这样,你就可以使用模式来筛选出那些价格超过 $2 的食物项目了。不过上面的输出还有个问题:带有 `(*)` 符号的那些行没有像其他行那样被格式化输出,这使得输出显得不够清晰。
+
+我们在 Awk 系列的第二部分中也看到了同样的问题,但我们可以使用下面的两种方式来解决:
+
+1. 可以像下面这样使用 printf 命令,但这样写又长又乏味:
+
+```
+$ awk '/ *\$[2-9]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4; }' food_prices.list
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Items-Using-Awk-and-Printf.gif)
+>使用 Awk 和 Printf 来筛选和输出项目
+
+2. 使用 `$0` 字段。Awk 使用变量 **$0** 来存储整个输入行。对于上面的问题,这种方式非常方便,并且它还简单、快速:
+
+```
+$ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $0 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' food_prices.list
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Items-Using-Awk-and-Variable.gif)
+>使用 Awk 和变量来筛选和输出项目
+
+### 结论
+
+这就是全部内容了。使用 Awk 命令,你便可以通过几种简单的方法利用模式匹配来筛选文本,帮助你在一个文件中标记出文本或字符串的某些行。
+
+希望这篇文章对你有所帮助。记得阅读这个系列的下一部分,我们将关注在 awk 工具中使用比较运算符。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/awk-filter-text-or-string-using-patterns/
+
+作者:[Aaron Kili][a]
+译者:[FSSlc](https://github.com/FSSlc)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/aaronkili/
\ No newline at end of file
diff --git a/translated/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md b/translated/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md
new file mode 100644
index 0000000000..86649a52d5
--- /dev/null
+++ b/translated/tech/awk/Part 4 - How to Use Comparison Operators with Awk in Linux.md
@@ -0,0 +1,95 @@
+在 Linux 下如何使用 Awk 比较操作符
+===================================================
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Use-Comparison-Operators-with-AWK.png)
+
+对于 Awk 命令的用户来说,在处理一行文本中的数字或者字符串时,使用比较运算符来过滤文本和字符串是十分方便的。
+
+在 Awk 系列的这一部分中,我们将探讨一下如何使用比较运算符来过滤文本或者字符串。如果你是程序员,那么你应该已经熟悉比较运算符;对于其他人,下面的部分将介绍比较运算符。
+
+### Awk 中的比较运算符是什么?
+
+Awk 中的比较运算符用于比较字符串或者数值,包括以下类型:
+
+- `>` – 大于
+- `<` – 小于
+- `>=` – 大于等于
+- `<=` – 小于等于
+- `==` – 等于
+- `!=` – 不等于
+- `some_value ~ /pattern/` – 如果 some_value 匹配模式 pattern,则返回 true
+- `some_value !~ /pattern/` – 如果 some_value 不匹配模式 pattern,则返回 true
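+
+先用一个最简单的例子来感受一下 `~` 的用法(LCTT 译注:此例为译者补充):
+
+```
+$ echo "apple 20" | awk '$1 ~ /pp/ { print "matched:", $0 }'
+matched: apple 20
+```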
+
+现在我们通过例子来熟悉 Awk 中各种不同的比较运算符。
+
+在这个例子中,我们有一个名为 food_list.txt 的文件,里面是不同食物的购买列表。我想给食物数量小于或等于 30 的物品所在行的末尾加上 `(**)`:
+
+```
+File – food_list.txt
+No Item_Name Quantity Price
+1 Mangoes 45 $3.45
+2 Apples 25 $2.45
+3 Pineapples 5 $4.45
+4 Tomatoes 25 $3.45
+5 Onions 15 $1.45
+6 Bananas 30 $3.45
+```
+
+Awk 中使用比较运算符的通用语法如下:
+
+```
+# expression { actions; }
+```
+
+为了实现刚才的目的,执行下面的命令:
+
+```
+# awk '$3 <= 30 { printf "%s\t%s\n", $0,"**" ; } $3 > 30 { print $0 ;}' food_list.txt
+
+No Item_Name Quantity Price
+1 Mangoes 45 $3.45
+2 Apples 25 $2.45 **
+3 Pineapples 5 $4.45 **
+4 Tomatoes 25 $3.45 **
+5 Onions 15 $1.45 **
+6 Bananas 30 $3.45 **
+```
+
+在刚才的例子中,发生了如下两件重要的事情:
+
+- 第一个“表达式 `{ 动作 ; }`”组合,即 `$3 <= 30 { printf "%s\t%s\n", $0,"**" ; }`,打印出数量小于等于 30 的行,并且在末尾加上 `(**)`。物品的数量是通过域变量 `$3` 获得的。
+- 第二个“表达式 `{ 动作 ; }`”组合,即 `$3 > 30 { print $0 ;}`,原样输出数量大于 `30` 的行。
+
+再举一个例子:
+
+```
+# awk '$3 <= 20 { printf "%s\t%s\n", $0,"TRUE" ; } $3 > 20 { print $0 ;} ' food_list.txt
+
+No Item_Name Quantity Price
+1 Mangoes 45 $3.45
+2 Apples 25 $2.45
+3 Pineapples 5 $4.45 TRUE
+4 Tomatoes 25 $3.45
+5 Onions 15 $1.45 TRUE
+6 Bananas 30 $3.45
+```
+
+在这个例子中,我们想通过在行的末尾增加 (TRUE) 来标记数量小于等于 20 的行。
+
+### 总结
+
+这是一篇对 Awk 中的比较运算符的介绍性指引,因此你需要尝试其他选项,发现更多使用方法。
+
+如果你遇到或者想到任何问题,请在下面评论区留下评论。请记得阅读 Awk 系列下一部分的文章,那里我将介绍复合表达式。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/comparison-operators-in-awk/
+
+作者:[Aaron Kili][a]
+译者:[chunyang-wen](https://github.com/chunyang-wen)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/aaronkili/
diff --git a/translated/tech/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md b/translated/tech/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md
new file mode 100644
index 0000000000..ed1ba4aa7c
--- /dev/null
+++ b/translated/tech/awk/Part 5 - How to Use Compound Expressions with Awk in Linux.md
@@ -0,0 +1,79 @@
+如何使用 Awk 复合表达式
+====================================================
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Use-Compound-Expressions-with-Awk.png)
+
+一直以来,在检查条件是否匹配时,我们使用的都是单个表达式。那如果你想用多个表达式来检查某个特定的条件呢?
+
+本文中,我们将看看如何在过滤文本和字符串时,把多个表达式结合起来使用,即使用复合表达式来检查条件。
+
+Awk 的复合表达式可由表示“与”的复合操作符 `&&` 和表示“或”的 `||` 构成。
+
+复合表达式的常规写法如下:
+
+```
+( first_expression ) && ( second_expression )
+```
+
+要使整个表达式为真,`first_expression` 和 `second_expression` 必须都为真。
+
+```
+( first_expression ) || ( second_expression)
+```
+
+要使整个表达式为真,`first_expression` 或 `second_expression` 两者之一为真即可。
+
+**注意**:切记要加括号。
+
+表达式可以由比较操作符构成,具体可查看 awk 系列的第四部分。
+
+现在让我们通过一个例子来加深理解:
+
+此例中,有一个文本文件 `tecmint_deals.txt`,里面包含着一张随机的 Tecmint 交易清单,其中有名称、价格和种类三项内容。
+
+```
+TecMint Deal List
+No Name Price Type
+1 Mac_OS_X_Cleanup_Suite $9.99 Software
+2 Basics_Notebook $14.99 Lifestyle
+3 Tactical_Pen $25.99 Lifestyle
+4 Scapple $19.00 Unknown
+5 Nano_Tool_Pack $11.99 Unknown
+6 Ditto_Bluetooth_Altering_Device $33.00 Tech
+7 Nano_Prowler_Mini_Drone $36.99 Tech
+```
+
+我们只想打印出价格超过 $20 且种类为 “Tech” 的物品,并在它们的行末打上 (*) 标记。
+
+我们将要执行以下命令。
+
+```
+# awk '($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/) && ($4=="Tech") { printf "%s\t%s\n",$0,"*"; } ' tecmint_deals.txt
+
+6 Ditto_Bluetooth_Altering_Device $33.00 Tech *
+7 Nano_Prowler_Mini_Drone $36.99 Tech *
+```
+
+此例中,我们在复合表达式中使用了两个表达式:
+
+- 表达式 1:`($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/)`;查找交易价格超过 `$20` 的行,即只有当 `$3`(也就是价格)满足 `/^\$[2-9][0-9]*\.[0-9][0-9]$/` 时,值才为 true。
- 表达式 2:`($4 == "Tech")`;查找种类为 “`Tech`” 的交易,即只有当 `$4` 等于 “`Tech`” 时,值才为 true。
+
+切记,只有当 `&&` 操作符两端的两个表达式都为 true 时,这一行才会被打上 `(*)` 标志。
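+
+类似地,也可以用 `||` 把两个条件组合起来,两者满足其一即可(LCTT 译注:此例为译者补充)。比如打印出所有种类为 “Tech” 或 “Lifestyle” 的交易的名称和种类:
+
+```
+# awk '($4 == "Tech") || ($4 == "Lifestyle") { print $2, $4 ; }' tecmint_deals.txt
+
+Basics_Notebook Lifestyle
+Tactical_Pen Lifestyle
+Ditto_Bluetooth_Altering_Device Tech
+Nano_Prowler_Mini_Drone Tech
+```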
+
+### 总结
+
+有些时候,为了真正匹配你的想法,就不得不用到复合表达式。当你掌握了比较操作符和复合表达式的用法之后,再难的文本或字符串过滤条件也能轻松解决。
+
+希望本向导对你有所帮助。如果你有任何问题或者补充,可以在下方发表评论,你的问题将会得到相应的解释。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/combine-multiple-expressions-in-awk/
+
+作者:[Aaron Kili][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/aaronkili/
diff --git a/translated/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md b/translated/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md
new file mode 100644
index 0000000000..8ead60bd44
--- /dev/null
+++ b/translated/tech/awk/Part 6 - How to Use ‘next’ Command with Awk in Linux.md
@@ -0,0 +1,76 @@
+如何使用 Awk 的 ‘next’ 命令
+=============================================
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Use-next-Command-with-Awk-in-Linux.png)
+
+在 Awk 系列的第六章,我们来看一下 `next` 命令,它告诉 Awk 跳过其后你所提供的所有表达式,直接读取下一个输入行。`next` 命令可以帮助你避免执行多余的步骤。
+
+要明白它是如何工作的,让我们来分析一下 food_list.txt 文件,它看起来像这样:
+
+```
+Food List Items
+No Item_Name Price Quantity
+1 Mangoes $3.45 5
+2 Apples $2.45 25
+3 Pineapples $4.45 55
+4 Tomatoes $3.45 25
+5 Onions $1.45 15
+6 Bananas $3.45 30
+```
+
+运行下面的命令,它将在每个食物数量小于或者等于 20 的行末尾标一个星号:
+
+```
+# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; } $4 > 20 { print $0 ;} ' food_list.txt
+
+No Item_Name Price Quantity
+1 Mangoes $3.45 5 *
+2 Apples $2.45 25
+3 Pineapples $4.45 55
+4 Tomatoes $3.45 25
+5 Onions $1.45 15 *
+6 Bananas $3.45 30
+```
+
+上面的命令实际运行过程如下:
+
+- 首先,它用 `$4 <= 20` 表达式检查每个输入行的第四列(数量)是否小于或者等于 20,如果满足条件,它将在行末打一个星号 `(*)`。
+- 接着,它用 `$4 > 20` 表达式检查每个输入行的第四列是否大于 20,如果满足条件,则把该行原样显示出来。
+
+但是这里有一个问题:当第一个表达式用 `{ printf "%s\t%s\n", $0,"*" ; }` 命令对某一行进行标注之后,第二个表达式还会对同一行再做一次判断,这样就浪费了时间。
+
+因此,当我们已经用第一个表达式打印了标注过的行之后,就不再需要用第二个表达式 `$4 > 20` 再次判断并打印了。
+
+要处理这个问题,我们需要用到 `next` 命令:
+
+```
+# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; next; } $4 > 20 { print $0 ;} ' food_list.txt
+
+No Item_Name Price Quantity
+1 Mangoes $3.45 5 *
+2 Apples $2.45 25
+3 Pineapples $4.45 55
+4 Tomatoes $3.45 25
+5 Onions $1.45 15 *
+6 Bananas $3.45 30
+```
+
+当输入行被 `$4 <= 20 { printf "%s\t%s\n", $0,"*" ; next ; }` 命令打印以后,`next` 命令将跳过第二个表达式 `$4 > 20 { print $0 ;}`,直接继续处理下一个输入行,而不是浪费时间继续判断当前输入行的数量是不是大于 20。
+
+next 命令在编写高效的命令脚本时是非常重要的,它可以在很大程度上提高脚本速度。下面我们准备来学习 Awk 系列的下一部分了。
+
+希望这篇文章对你有帮助,欢迎给我们留言。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/use-next-command-with-awk-in-linux/
+
+作者:[Aaron Kili][a]
+译者:[kokialoves](https://github.com/kokialoves)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/aaronkili/
diff --git a/translated/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md b/translated/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md
new file mode 100644
index 0000000000..0d0d499d00
--- /dev/null
+++ b/translated/tech/awk/Part 7 - How to Read Awk Input from STDIN in Linux.md
@@ -0,0 +1,76 @@
+在 Linux 上怎么读取标准输入(STDIN)作为 Awk 的输入
+============================================
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Read-Awk-Input-from-STDIN.png)
+
+在 Awk 工具系列的前几节中,我们看到大多数操作都是从一个或多个文件读取输入的,但如果你想把标准输入作为 Awk 的输入呢?在 Awk 系列的第 7 节中,我们将会看到几个这样的例子:你可以把其他命令的输出,而不是某个文件,作为 awk 的输入来进行筛选。
+
+我们从 [dir 命令][1]开始,dir 命令和 [ls 命令][2]相似。在下面第一个例子中,我们把 `dir -l` 命令的输出作为 Awk 命令的输入,这样就可以打印出文件拥有者的用户名、所属组的组名,以及他/她在当前路径下拥有的文件:
+
+```
+# dir -l | awk '{print $3, $4, $9;}'
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/List-Files-Owned-By-User-in-Directory.png)
+>列出当前路径下的用户文件
+
+再看另一个[使用 awk 表达式][3]的例子。在这里,我们想在 awk 命令里使用一个表达式来筛选字符串,以此打印出属于 root 用户的文件。命令如下:
+
+```
+# dir -l | awk '$3=="root" {print $1,$3,$4, $9;} '
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/List-Files-Owned-by-Root-User.png)
+>列出 root 用户的文件
+
+上面的命令使用了 `(==)` 进行比较操作,即通过 `$3=="root"` 这个表达式,帮助我们在当前路径下筛选出属于 root 用户的文件。
+
+让我们再看另一个例子,我们使用 [awk 比较运算符][4]来匹配一个确定的字符串。
+
+这里,我们用 [cat 命令][5]查看了文件 tecmint_deals.txt 的内容,然后我们想要仅仅查看含有字符串 Tech 的部分,所以我们会运行下列命令:
+
+```
+# cat tecmint_deals.txt
+# cat tecmint_deals.txt | awk '$4 ~ /tech/{print}'
+# cat tecmint_deals.txt | awk '$4 ~ /Tech/{print}'
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/06/Use-Comparison-Operator-to-Match-String.png)
+>用 Awk 比较运算符匹配字符串
+
+在上面的例子中,我们使用了形如 `~ /匹配模式/` 的比较操作,但是上面的两个命令也给我们展示了一个很重要的问题。
+
+当你运行带有字符串 tech 的命令时,终端没有任何输出,因为文件中并没有 tech 这样的字符串;而运行带有字符串 Tech 的命令,你却会得到含有 Tech 的输出。
+
+所以你在进行这种比较操作的时候应该时刻注意:正如我们在上面看到的那样,awk 对大小写很敏感。
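+
+一个常见的规避办法(LCTT 译注:此例为译者补充)是先用 awk 内置的 tolower() 函数把字段转换成小写再进行匹配,这样无论文件里写的是 tech 还是 Tech 都能匹配上:
+
+```
+# cat tecmint_deals.txt | awk 'tolower($4) ~ /tech/ {print}'
+```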
+
+你可以一直使用其他命令的输出作为 awk 命令的输入,来代替从文件中读取输入,这就像我们在上面看到的那样简单。
+
+希望这些例子足够简单,能帮助你理解 awk 的用法。如果你有任何问题,可以在下面的评论区提问。记得查看 awk 系列接下来的章节内容,我们将关注 awk 的一些功能,比如变量、数字表达式以及赋值运算符。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/read-awk-input-from-stdin-in-linux/
+
+作者:[Aaron Kili][a]
+译者:[vim-kakali](https://github.com/vim-kakali)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/aaronkili/
+[1]: http://www.tecmint.com/linux-dir-command-usage-with-examples/
+[2]: http://www.tecmint.com/15-basic-ls-command-examples-in-linux/
+[3]: http://www.tecmint.com/combine-multiple-expressions-in-awk
+[4]: http://www.tecmint.com/comparison-operators-in-awk
+[5]: http://www.tecmint.com/13-basic-cat-command-examples-in-linux/
diff --git a/选题模板.txt b/选题模板.txt
index fabd1f1dba..a7cd92e614 100644
--- a/选题模板.txt
+++ b/选题模板.txt
@@ -20,8 +20,7 @@
 ### 子一级标题
 正文内容 : I have a [dream][1]。
- 
-
+ 
--------------------------------------------------------------------------------
 via: 原文地址
 
@@ -38,4 +37,7 @@
 说明:
 1. 标题层级很多时从 “##” 开始
 2. 引文链接地址在下方集中写
-
+3. 因为 Windows 系统文件名有限制,所以文章名不要有特殊符号,如 `\/:*"<>|`,同时也不推荐全大写,或者其它不利阅读的格式
+4. 正文格式参照中文排版指北(https://github.com/LCTT/TranslateProject/blob/master/%E4%B8%AD%E6%96%87%E6%8E%92%E7%89%88%E6%8C%87%E5%8C%97.md)
+5. 我们使用的 markdown 语法和 github 一致,具体语法可参见 https://github.com/guodongxiaren/README 。而实际中使用的都是基本语法,比如链接、包含图片、标题、列表、字体控制和代码高亮。
+6. 选题的内容分为两类: 干货和湿货。干货就是技术文章,比如针对某种技术、工具的介绍、讲解和讨论。湿货则是和技术、开发、计算机文化有关的文章。选题时主要就是根据这两条来选择文章,文章需要对大家有益处,篇幅不宜太短,可以是系列文章,也可以是长篇大论,但是文章要有内容,不能有严重的错误,最好不要选择已经有翻译的原文。